\section{Introduction}
\par Moral decision-making in communication often concerns the choice of whether to tell the truth or to deceive. Whilst it is generally agreed that it is bad to tell lies that increase one's own benefit at the expense of another person (black lies), moral philosophers have long argued about whether and when telling a lie that increases the benefit of another person (a white lie) is morally acceptable.
We find Socrates pointing out to one of his interlocutors in Plato's Republic that, ``when any of our so-called friends are attempting, through madness or ignorance, to do something bad, isn't it a useful drug for preventing them?'' - suggesting that, given the circumstances, deception might be the `good' thing to do (Plato, 1997). At the other end of the spectrum we find Immanuel Kant, for whom good intentions or consequences cannot justify an act of lying. For Kant, telling even a white lie is ``by its mere form a crime of a human being against his own person and a worthlessness that must make him contemptible in his own eyes.'' (Kant, 1996).
\par This raises the question of whether prosocial agents would tell such `useful' lies, or condemn them, as Kant did. Prosocial behaviour, that is, behaviour intended to benefit other people or society as a whole, is widely considered the right course of action in situations in which there is a conflict between one's own benefit and that of others. The Golden Rule, which encapsulates the essence of prosociality, is indeed ``found in some form in almost every ethical tradition'' (Blackburn, 2003). Thus, a prosocial person, when facing the decision of whether or not to tell a white lie, may experience a conflict between two diverging moral dictates, one pushing towards lying for the benefit of others and the other pushing towards telling the truth regardless of circumstance.
\par Since most human interaction revolves around communication and involves some degree of prosociality, understanding how this conflict is resolved is not only interesting from the theoretical point of view of moral philosophy, but also from a more practical point of view. For instance, taking verbatim an example from Erat and Gneezy (2012): ``should a supervisor give truthful feedback to a poorly performing employee, even when such truthful feedback has the potential to reduce the employee's confidence and future performance?'' What does telling a white lie signal about the supervisor's prosocial tendencies?
\par The focus of the present paper is on this type of question and, more generally, on the moral conflict between lying aversion and prosocial behaviour.
\par To measure prosocial tendencies and aversion to telling white lies we build on previous studies in behavioural economics, which place economic games into experimental settings. Specifically, the Dictator Game (DG), due to its setup, has proven useful in measuring altruistic proclivities in recruited subjects. In a standard DG, one player (the dictator) is given an initial endowment and is asked to decide how much of it, if any, to transfer to a passive player (the recipient), who is given nothing. The anonymity and confidentiality of decisions are ensured in order to rule out extraneous incentives (such as reputation) for dictators to share their endowment with the recipient. Although the theory of homo \oe conomicus predicts that dictators keep the whole endowment for themselves, research has shown that a significant proportion of dictators allocate a non-trivial share to recipients (Camerer, 2003; Engel, 2011; Forsythe, Horowitz, Savin \& Sefton, 1994; Kahneman, Knetsch \& Thaler, 1986).
\commentout{
\par In the standard formulation of \textit{pure altruism}, an agent's utility is a monotonic function, not only of his or her own allocation, but also of the utility or allocation of other agents. Thus, with the allocation remaining constant, a purely altruistic agent experiences greater utility as the allocation of other agents increases. Moreover, under standard assumptions of positive but diminishing marginal utility of money, a purely altruistic dictator is expected to transfer a fraction $m$ of their endowment to the recipient; as long as the marginal utility of $m$ is greater for the recipient, sharing maximises the dictator's utility. Varying degrees of altruism can be expressed by the relative weight of the utility or allocation of other agents in the utility calculation for the well-being of the agent concerned.
\par\textit{Impure altruism} theory, on the other hand, hypothesises that donors, besides experiencing utility from the well-being of the recipient, also incur a warm-glow utility from the act of giving itself. Accordingly, Andreoni (1989, 1990) models the utility of an impure altruist dictator as a function of his own well-being, the well-being of the recipient, and a warm-glow component (that reflects the joy of giving) based on the size of the transfer. More generally, since an impure altruist cares about the recipient but also derives utility from giving per se, she experiences greater utility in situations in which her contribution to others' well-being has been greater, other things being equal.
\par Finally, inequality aversion theories hypothesise that agents experience disutility from inequality and this, in turn, motivates altruism. Inequality-averse dictators are thus expected to share their endowment with the recipient, in order to reduce inequality between them. \par
} \par
Akin to the manner in which the DG is used in research as the paradigmatic game with which to
investigate altruism, extensive use has been made of the Prisoner's
Dilemma (PD) in experimental settings in order to investigate
cooperative behaviour in agents. In the standard one-shot two-player
PD, both players can either cooperate or defect. If a player
cooperates, he pays $c$ and bestows $b>c$ on the other player while,
if he defects, he pays $0$ and gives $0$. Clearly, homo \oe conomicus
would defect in any case since, irrespective of the strategy of the other,
the optimal strategy is to give $0$. Yet in day-to-day life people often do
cooperate and, perhaps unsurprisingly, research has shown that even in
anonymous one-shot PD experiments a significant percentage of people choose to
cooperate (see, e.g., Rapoport, 1965).
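The dominance argument above can be made concrete with a minimal sketch (our illustration, not part of the experiment), using hypothetical stakes $b > c$:

```python
# Hypothetical stakes for a one-shot PD with benefit b greater than cost c.
b, c = 0.20, 0.10

def payoff(my_move, their_move):
    """My material payoff: cooperating ("C") costs me c; I gain b if the other cooperates."""
    return (b if their_move == "C" else 0.0) - (c if my_move == "C" else 0.0)

# Whatever the other player does, defecting ("D") pays strictly more,
# which is why homo oeconomicus defects in any case.
for their_move in ("C", "D"):
    assert payoff("D", their_move) > payoff("C", their_move)
```

Mutual cooperation still yields $b - c > 0$ for each player, which is why it is collectively, though not individually, optimal.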
\par More recently, behavioural scientists have delved into the choices people make regarding deception in different circumstances and under different conditions. Unlike cooperation and altruism, lying aversion is not measured by a unique and standard economic game, and (at least) three different models have been put forward (Gneezy, Rockenbach \& Serra-Garcia, 2013). However, irrespective of the model used, the findings all point in the same direction: while the classic approach in economics assumes that people are selfish and that lying in itself does not involve any cost, accumulating evidence suggests that a significant proportion of people are lie-averse in economic and social interactions (Abeler, Becker \& Falk, 2014; Cappelen, S\o rensen \& Tungodden, 2013; Erat \& Gneezy, 2012; Gneezy, 2005; Gneezy, Rockenbach \& Serra-Garcia, 2013; Hurkens \& Kartik, 2009; Lundquist, Ellingsen, Gribbe \& Johannesson, 2009; Weisel \& Shalvi, 2015).
\par Recent research has shed light also on \emph{when} people are more likely to use deception. Shalvi, Dana, Handgraaf and de Dreu (2011) find that ``observing desired counterfactuals attenuates the degree to which people perceive lies as unethical''. Wiltermuth (2011) finds that people are more likely to cheat when the benefit of doing so is split between themselves and another person, even when the other beneficiary is a stranger with whom they had no interaction.
Gino, Ayal \& Ariely (2013) distinguish among the mechanisms that may drive this increased willingness to cheat when the spoils are split with others. They suggest that the ability to justify self-serving actions as appropriate when others benefit is a stronger driver for unethical behaviour than pure concern for others. They also find that people cheat more when the number of beneficiaries increases and that individuals feel less guilty about their dishonest behaviour when others benefit from it.
Conrads, Irlenbusch, Rilke \& Walkowitz (2013) examine the impact of two prevalent compensation schemes, individual piece-rates (under which each individual gets one compensation unit for each unit they produce) and team incentives (under which the production of the team is pooled and each individual receives one half of a compensation per unit of the joint production output). They find that lying is more prevalent under team incentives than under the individual piece-rates scheme. Thus, their results add to the evidence in Wiltermuth (2011) and Gino et al. (2013) suggesting that individuals are more willing to lie when the benefits of doing so are shared with others.
Cohen, Gunia, Kim-Jun and Murnighan (2009) test whether groups lie more than individuals. They find that groups are more inclined to lie than individuals when deception is guaranteed to best serve their economic interest, but lie relatively less than individuals when honesty can be used strategically. Their results suggest that groups are more strategic than individuals in that they will use or avoid deception in order to maximize their economic outcome.
\par
However, with few exceptions, no previous studies have investigated the relation between prosocial behaviour and aversion to telling white lies. Shalvi and de Dreu (2014) show that oxytocin, a neuropeptide known to promote affiliation and cooperation with others, promotes group-serving dishonesty. Levine and Schweitzer (2014) report that people who tell weakly altruistic white lies (lies that benefit the other person at a small or even null cost for the liar) are perceived as more moral than those who tell the truth. In a subsequent work, Levine and Schweitzer (2015) show that trustors in a trust game allocate more money to people who have told a weakly altruistic white lie in a previous game than to people who have told the truth. This result provides evidence that telling a weakly altruistic white lie signals prosocial behaviour to observers. However, Levine and Schweitzer (2015) do not measure trustees' behaviour, and thus it remains unclear whether those who tell a weakly altruistic white lie are really more prosocial than those who tell the truth. One corollary of the results of the current paper is a positive answer to this question.
\par More closely related to our work is that of Cappelen et al. (2013), which explores the correlation between altruism in the DG and aversion to telling a Pareto white lie (PWL), that is, a lie that increases the benefits of both the liar and the listener. They provide evidence that those who tell a Pareto white lie give significantly \emph{less} in the Dictator Game. Our work builds on and extends this paper.
\par
Indeed, although its results represent a good starting point, more research is needed to develop a better understanding of the relation between aversion to telling a white lie and prosocial behaviour. First of all, most everyday situations are better modelled by a PD, rather than a DG. Since altruism in the DG and cooperation in the PD are different behaviours (virtually all altruistic people cooperate, but the converse does not hold - see Capraro, Jordan \& Rand, 2014), it is also important to investigate the correlation between cooperation in the PD and aversion to telling a white lie. Second, many white lies are not Pareto optimal, but involve a cost for the liar (altruistic white lies, AWL). Thus, it is important to go beyond Pareto white lies and explore also the correlation between prosocial behaviour and altruistic white lies.
\par To fill this gap, we implemented a $2\times 2$ experiment in which subjects play a two-stage game. In the first stage they play one of two possible treatments in a variation of the Deception Game introduced by Gneezy et al. (2013). In these treatments they have the opportunity to tell either a Pareto or an altruistic white lie. In the second stage participants are assigned to either the PD or the DG. We refer the reader to the next section for more details about the experimental design and to the Results section for a detailed description of the results. Here we anticipate our three major results: (i) both altruism and cooperation are positively correlated with aversion to telling a Pareto white lie; (ii) both altruism and cooperation are negatively correlated with aversion to telling an altruistic white lie; (iii) men are more likely than women to tell an altruistic white lie, but not to tell a Pareto white lie.
\commentout{
While the classic approach in economics assumes that people are selfish and that lying in itself does not involve any cost, accumulating evidence suggests that a significant amount of people are lie-averse in economic and social interactions. Since the radical assumptions that people are \textit{fully honest} (experiencing infinite disutility from lying) or \textit{fully dishonest} (their utility function is unaffected by lying) have been shown to be problematic, experiments have been conducted attempting to understand when and why people choose to lie. A number of experiments have shown that the decision to lie depends on the incentives involved. Findings from Gneezy 2005 suggest that people are sensitive to their own gain when choosing to lie; when the payoff from lying increases, people are more likely to lie. Moreover, agents are sensitive to the disutility that lying may cause for others. In particular, as Gneezy 2005 indicates, ``the average person prefers not to lie, when doing so only increases her payoff a little but reduces the others payoff a great deal''. As mentioned above, Erat and Gneezy 2012 found that some people are reluctant to tell a lie even when doing so would make all parties materially better off. Since lying in such circumstances benefits both the decision maker and the other individuals involved, selfish and altruistic individuals alike have an incentive to lie. Thus, Erat and Gneezy's findings suggest that there is a intrinsic cost of lying, independent of social preferences regarding outcomes, that is sometimes high enough to trump consequentialist considerations in decision-making. Lopez-Perez and Spiegelman 2013 suggest the possibility of a pure aversion to lying, independent of any other motives. (Lundquist et al., 2009; Abelar et al., 2014; Cappelen, Sorensen and Tungodden 2012; Gneezy, 2005; Hurkens and Kartik, 2009; Erat and Gneezy, 2012; Gneezy, Rockenbach and Serra-Garcia, 2013).
\par Another possible dimension to lie aversion may involve what Charness and Dufwenberg 2006 term guilt aversion - it is possible that feelings of guilt are brought about by the perception of having engaged in morally transgressive behaviour, and that people therefore avoid this.
\par\bigskip\textbf{The experiment}
\par\medskip In this paper, we present the results of experiments concerning cooperation, altruism and lie-aversion, in an effort to gain a deeper understanding of whether pro-social people are more lie-averse than others. The purpose of this experiment was to determine whether there is a correlation between pro-social behaviours such as cooperation and altruism, and aversion to lying, regardless of the consequences of lying. We set up experiments involving what Erat and Gneezy 2012 term white lies – lies that benefit only the other player(s) (altruistic white lies) or both liar and the other player(s) (Pareto white lies), and, crucially, cause no harm to others. As noted by Cappelen, Sorensen and Tungodden 2012, Erat and Gneezy's experimental design appears to identify a purely moral dimension to decisions to avoid deception, even in situations where both parties would benefit. We build on this, seeking to determine whether lie-aversion is more pronounced in those who behave in a pro-social way in games standardly taken to provide measures of these behaviours, the PD and DG.
\par Findings in the literature have suggested that the negative utility associated with dishonesty may be significant enough to deter people from lying, even where a lie would produce the optimal situation for all players. The present study also points in this direction.
\par After an initial round of games assigning participants randomly to the DG or PD, two permutations of the Deception Game (developed by Erat and Gneezy 2012) were set up, one involving the possibility to tell an altruistic white lie, whereby another person would benefit significantly at a small cost to the liar; one involving a Pareto white lie, whereby both parties would benefit significantly. Given that the lies in question would not do other parties any harm; we speculated that if people did not lie, it would be because they had some moral aversion to lying, independent of the consequences of doing so.
}
\section{Experimental design and procedure}
\par We set up a two-stage experiment in which we first collect data on participants' lying aversion, followed by data regarding their pro-social preferences. In the first stage, participants were directed to one of two variations on the Deception Game, in the spirit of Gneezy et al. (2013). One variation serves to measure aversion to telling an altruistic white lie; the other serves to measure aversion to telling a Pareto white lie. In the second stage of the experiment, the players were randomly assigned to either the DG or the PD. Comprehension questions were asked for each of the four games before any decision could be made. Participants failing any of the comprehension questions were automatically excluded from the survey. In the next subsections we describe the experimental design. Full experimental instructions are reported in Appendix A.
\subsection{Stage 1: Measure of lying-aversion}
\par In the first stage of the experiment, participants played a Deception Game akin to that of Gneezy et al. (2013), with Pareto White Lies (PWL) and Altruistic White Lies (AWL) treatments. As in Gneezy et al. (2013), two players are paired and the first player has the opportunity to tell a white lie. However, unlike Gneezy et al. (2013), in our Deception Game the payoffs of both players depend only on Player 1's choice and not on whether Player 2 believes that Player 1 is telling the truth or telling a lie. In our Deception Game, Player 2 is passive and does not make any decision. We use this variant because we are interested in the relation between Player 1's lying aversion and their pro-social tendencies. Our design allows us to avoid confounding effects due to the beliefs that Player 1 may have about the beliefs of Player 2. Since in our design Player 2 does not make any decision, the beliefs that Player 1 may have about Player 2 play no role, and a pro-social Player 1 will always tell a white lie, regardless of their beliefs.
\par Specifically, in our Deception Game, Player 1 is assigned to group $i$, where $i\in\{1, 2\}$. The group allocation is communicated only to Player 1. Player 1 can choose between two possible strategies. Option A: telling the number of the group they were assigned to; or Option B: telling the number of the other group. Players in the AWL condition were told that the payoff for each player would be determined as follows:
\begin{itemize}
\item Option A: Player 1 and Player 2 earn $\$0.10$ each.
\item Option B: Player 1 earns $\$0.09$ and Player 2 earns $\$ 0.30$.
\end{itemize}
\par Players in the PWL condition, on the other hand, were told that the payoff for each player would be determined as follows:
\begin{itemize}
\item Option A: Player 1 and Player 2 earn $\$ 0.10$ each.
\item Option B: Player 1 and Player 2 earn $\$ 0.15$ each.
\end{itemize}
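The two payoff tables above can be encoded directly; a minimal sketch (our illustration) in which Option B corresponds to the white lie:

```python
# Payoffs as (Player 1, Player 2), in dollars, taken from the treatment tables above.
AWL = {"A": (0.10, 0.10), "B": (0.09, 0.30)}  # altruistic white lie: lying costs Player 1 $0.01
PWL = {"A": (0.10, 0.10), "B": (0.15, 0.15)}  # Pareto white lie: lying benefits both players

# In both treatments the lie (Option B) raises total welfare...
assert sum(AWL["B"]) > sum(AWL["A"]) and sum(PWL["B"]) > sum(PWL["A"])
# ...but only the Pareto white lie also raises Player 1's own payoff;
# the altruistic white lie requires Player 1 to sacrifice one cent.
assert PWL["B"][0] > PWL["A"][0] and AWL["B"][0] < AWL["A"][0]
```

This asymmetry is what lets the two treatments separate a purely consequentialist motive for honesty from an intrinsic one.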
\subsection{Stage 2: Measure of pro-sociality}
In the second stage of the game, all participants were randomly assigned to either a one-shot anonymous Dictator Game (DG) or a one-shot anonymous Prisoner's Dilemma (PD) to assess the extent of their altruism toward, or cooperation with, unrelated individuals.
\par In the DG, dictators were given an initial endowment of $\$ 0.20$ and were asked to decide how much money, if any, to \emph{transfer} to a recipient, who was given nothing. Each dictator was informed that the recipient they were matched with would have no active role and would only receive the amount of money the dictator decides to give. In the PD, participants were given an initial endowment of $\$ 0.10$ and were asked to decide whether to \emph{transfer} the $\$ 0.10$ to the other participant (cooperate) or not (defect). Each time a participant transfers their $\$0.10$, the other participant earns $\$0.20$. Each participant was informed that the participant they were matched with would be facing the same decision problem.
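The PD earnings described above can be tabulated as a quick check (a sketch using the stated amounts; the function name is ours) that mutual cooperation beats mutual defection while defection remains individually optimal:

```python
ENDOWMENT = 0.10  # each PD participant's initial endowment, from the text

def pd_earnings(i_transfer, other_transfers):
    """My earnings: transferring costs my $0.10 endowment; the other's transfer pays me $0.20."""
    earned = ENDOWMENT - (ENDOWMENT if i_transfer else 0.0)
    earned += 0.20 if other_transfers else 0.0
    return round(earned, 2)

assert pd_earnings(True, True) == 0.20    # mutual cooperation
assert pd_earnings(False, False) == 0.10  # mutual defection
assert pd_earnings(False, True) == 0.30   # temptation payoff
assert pd_earnings(True, False) == 0.00   # sucker payoff
```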
\par We deliberately chose to use the word ``transfer'', rather than ``give'', ``cooperate'', or similar words, in order to minimise possible framing effects caused by the moral weight associated with the names of the strategies.
\section{Results}
Participants living in the United States were recruited via the crowd-sourcing internet marketplace Amazon Mechanical Turk (Paolacci, Chandler \& Ipeirotis, 2010; Horton, Rand \& Zeckhauser, 2011; Bartneck, Duenser, Moltchanova \& Zawieska, 2015). A total of 1212 subjects (59\% males, mean age = 33.83) passed the comprehension questions and participated in our experiment.
\par In the first stage of the experiment, 614 subjects played in the AWL treatment while 598 subjects were assigned to the PWL treatment. Pareto white lies were told far more frequently than altruistic white lies: whilst only 23\% of the participants chose to tell an altruistic white lie, 83\% of the subjects lied in the PWL treatment (Wilcoxon rank-sum, $p < .0001$). These results are qualitatively in line with those reported by Erat and Gneezy (2012), who found that 43\% of people lie in their AWL treatment and 76\% of participants lie in their PWL treatment. The effect of demographic variables on lying aversion will be discussed separately.
\par In the second stage of the experiment, 697 participants were assigned to the DG, while 515 played the PD. Dictators on average transferred 22\% of their endowment, whilst in the PD cooperation was chosen 35\% of the time. Linear regression predicting DG donation as a function of the three main demographic variables (sex, age, and education level) shows that women donated more than men (coeff $= 1.74$, $p < .0001$), that older participants donated slightly more than younger ones (coeff $= 0.03$, $p = 0.048$), and that education level has no significant effect on DG donations (coeff $= 0.04$, $p = 0.78$)\footnote{The fact that women give more than men in the DG is reasonably well established, as the majority of studies report either this effect (Eckel \& Grossman, 1997; Andreoni \& Vesterlund, 2001; Dufwenberg \& Muren, 2006; Houser \& Schunk, 2009; Dreber, Ellingsen, Johannesson \& Rand, 2013; Dreber, von Essen \& Ranehill, 2014; Capraro \& Marcelletti, 2014; Capraro, 2015) or a null effect (e.g., Dreber et al., 2013; Bolton \& Katok, 1995). The fact that older people donate more than younger people is also relatively well established (see Engel (2011) for a meta-analysis reporting a marginally significant effect and Capraro \& Marcelletti (2014) for a replication of this effect on an AMT sample).}. On the other hand, logit regression predicting cooperation as a function of the three main demographic variables shows that none of them has a significant effect on cooperative behaviour (all $p$'s $> 0.15$).
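The linear specification used above can be sketched in pure NumPy; the data below are simulated for illustration only (the seed, coding of the variables, and simulated effect sizes are our own assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 697  # number of DG participants, from the text
sex = rng.integers(0, 2, n).astype(float)   # hypothetical coding: 1 = female
age = rng.integers(18, 70, n).astype(float)
edu = rng.integers(1, 8, n).astype(float)   # hypothetical ordinal education level
# Simulated donations (in cents) with a positive female effect, for illustration.
donation = 3.0 + 1.74 * sex + 0.03 * age + rng.normal(0.0, 3.0, n)

X = np.column_stack([np.ones(n), sex, age, edu])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, donation, rcond=None)  # OLS estimates
coeffs = dict(zip(["const", "sex", "age", "edu"], beta))
```

With this sample size the estimates land close to the simulated coefficients; the study's logit regression for the PD would follow the same design-matrix pattern with a different link function.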
\par \vspace{\baselineskip}
\hspace{0.38 cm} \textbf{Altruism and lying-aversion}
\par Figure 1 reports the average DG donation of liars and honests in both the AWL and the PWL treatments and suggests that honest people were more altruistic than liars in the PWL treatment, but less altruistic than liars in the AWL treatment. To confirm this, we use linear regression predicting DG donation using a dummy variable, which takes value 1 (resp. 0) if the participant has told the truth (resp. a lie). Results show that, in the AWL treatment, honest people were marginally significantly less altruistic than liars (coeff $= -1.14$, $p = 0.063$), and that, in the PWL treatment, honest people were significantly more altruistic than liars (coeff $= 1.43$, $p = 0.035$). Next we examine whether these differences are driven by individual differences. To do this, we repeat the linear regressions including controls for the three main demographic variables (sex, age, and level of education). Results show that, in the AWL treatment, honest people were significantly less altruistic than liars (coeff $= -1.33$, $p = 0.03$), and that, in the PWL treatment, honest people were marginally significantly more altruistic than liars (coeff $= 1.26$, $p = 0.06$). In both cases, the only significant demographic variable is the gender of the participant (AWL: coeff $= 1.74$, $p = 0.0008$; PWL: coeff $= 1.79$, $p = 0.0008$). The full regression table is reported in Appendix B, Table 1. Thus, although the difference in altruism between liars and honest people is partly driven by the gender of the participant, it remains significant or close to significant after controlling for this variable, suggesting the existence of a true effect of aversion to telling a white lie on altruistic behaviour in the Dictator Game.
\begin{figure}[h]
\centering
\includegraphics[scale=0.70]{DG.jpg}
\caption{\emph{Average DG donation of liars and honests in both the AWL and the PWL treatments. Error bars represent the standard errors of the means. In the Pareto White Lies treatment, honest people tend to be more altruistic than liars (linear regression with no controls for socio-demographic variables: coeff $= 1.43$, $p = 0.035$; with controls: coeff $= 1.26$, $p = 0.06$). In the Altruistic White Lies treatment, honest people tend to be less altruistic than liars (linear regression with no controls for socio-demographic variables: coeff $= -1.14$, $p = 0.063$; with controls: coeff $= -1.33$, $p = 0.03$).}}
\label{fig:DGlie}
\end{figure}
\par \vspace{\baselineskip}
\hspace{0.38 cm}\textbf{Cooperation and lying-aversion}
\par Figure 2 reports the average PD cooperation of liars and honests in both the AWL and the PWL treatments and suggests that, as in the DG case, honest people were more cooperative than liars in the PWL treatment, but less cooperative than liars in the AWL treatment. To confirm this, we use logit regression predicting PD cooperation using a dummy variable, which takes value 1 (resp. 0) if the participant has told the truth (resp. a lie). Results show that, in the AWL treatment, honest people were significantly less cooperative than liars (coeff $= -1.25$, $p < .0001$), and that, in the PWL treatment, honest people were significantly more cooperative than liars (coeff $= 0.71$, $p = 0.04$). Next we examine whether these differences are driven by individual differences. To do this, we repeat the logit regressions including controls for the three main demographic variables. Results show that, in the AWL treatment, honest people were less cooperative than liars (coeff $= -1.31$, $p < .0001$), and that, in the PWL treatment, honest people were more cooperative than liars (coeff $= 0.79$, $p = 0.02$). In both cases, none of the demographic variables has a significant effect (gender has only a marginally significant effect ($p = 0.08$), and only in the PWL treatment). The full regression table is reported in Appendix B, Table 2.
\begin{figure}[h]
\centering
\includegraphics[scale=0.70]{PD.jpg}
\caption{\emph{Average PD cooperation of liars and honests in both the AWL and the PWL treatments. Error bars represent the standard errors of the means. In the Pareto White Lies treatment, honest people tend to be more cooperative than liars (logit regression with no controls for socio-demographic variables: coeff $= 0.71$, $p = 0.04$; with controls: coeff $= 0.79$, $p = 0.02$). In the Altruistic White Lies treatment, honest people tend to be less cooperative than liars (logit regression with no controls for socio-demographic variables: coeff $= -1.25$, $p < .0001$; with controls: coeff $= -1.31$, $p < .0001$).}}
\label{fig:PDlie}
\end{figure}
\par \vspace{\baselineskip}
\hspace{0.38 cm}\textbf{Gender differences in deception}
\par Gender differences in deceptive behaviour have attracted considerable attention since the work of Dreber and Johannesson (2008), who found that men are more likely than women to tell a black lie, that is, a lie that benefits the liar at the expense of the listener. In the context of white lies, Erat and Gneezy (2012) found that women are more likely than men to tell an altruistic lie, but men are more likely than women to tell a Pareto white lie. Interestingly, the latter result was not replicated by Cappelen et al. (2013), who found no gender differences in lying aversion in the context of Pareto white lies. In line with this latter result, we also find no gender differences in the PWL treatment. Indeed, logit regression predicting the probability of telling a Pareto white lie as a function of sex, age, and level of education shows no significant effect of gender (coeff $= 0.25$, $p = 0.25$) or age (coeff $= -0.01$, $p = 0.47$) and, if anything, shows a significant negative effect of the level of education (coeff $= -0.18$, $p = 0.04$). Interestingly, in the context of altruistic white lies, we even find the reverse of the correlation reported in Erat and Gneezy (2012). In our sample, men are slightly more likely than women to tell an altruistic white lie (26\% vs 18\%). The difference is statistically significant, as shown by logit regression predicting the probability of telling an altruistic white lie as a function of sex, age, and level of education (gender: coeff $= 0.50$, $p = 0.02$; age: coeff $= -0.00$, $p = 0.85$; education: coeff $= -0.02$, $p = 0.73$). We refer the reader to Figure 3 for a visual representation of gender differences in deceptive behaviour.
\begin{figure}[h]
\centering
\includegraphics[scale=0.70]{genderdifferences.jpg}
\caption{\emph{Proportion of females across treatments divided between liars and honests. In the Pareto White lie treatment, there is no statistically significant gender difference in deceptive behaviour. On the other hand, in the Altruistic White lie treatment, we found that women are significantly more likely than men to tell the truth.}}
\label{fig:genderlie}
\end{figure}
\section{Discussion}
\par We conducted this experiment to explore the relation between cooperation, altruism, and aversion to telling white lies among participants. Cooperative tendencies were measured through the Prisoner's Dilemma (PD); altruistic tendencies were measured through the Dictator Game (DG); and lying-aversion was measured using a Deception Game. The setup of our Deception Game was such that if Player 1 chose to lie, then there would be an increase in monetary outcome for both players (the Pareto white lie variant, PWL) or an increase in monetary outcome for Player 2 at a small cost to Player 1 (the altruistic white lie variant, AWL). Our design differed from previous versions of the Deception Game in that the payoffs of both players depend only on the decision of the first player. This design allows us to study the correlation between Player 1's lying-aversion and pro-social tendencies without adding the potentially confounding factor that Player 1 may have beliefs about the behaviour of Player 2.
\par Our results provide evidence of three major findings: (i) both altruism and cooperation are positively correlated with aversion to telling a Pareto white lie; (ii) both altruism and cooperation are negatively correlated with aversion to telling an altruistic white lie; (iii) men are more likely than women to tell an altruistic white lie, but not to tell a Pareto white lie.
\par These results make several contributions to the literature. The positive correlation between altruism and aversion to telling a Pareto white lie was also found by Cappelen et al. (2013). Our results replicate and extend this finding, as they also show that the same correlation holds true when considering cooperative behaviour (as opposed to altruistic behaviour), and that these correlations disappear and are actually even reversed when considering altruistic white lies (as opposed to Pareto white lies). These are not trivial extensions. Indeed, a positive correlation between altruism in the DG and aversion to telling a Pareto white lie can, in principle, be explained by assuming that there are two types of agents: (i) \emph{non-purely} utilitarian agents, who aim at maximising the social welfare and choose the strategy that maximises their own payoff in case multiple strategies give rise to the same social welfare (e.g., Charness \& Rabin, 2002; Capraro, 2013); and (ii) \emph{purely} egalitarian agents, who minimise payoff differences, irrespective of their own payoff (e.g., Fehr \& Schmidt, 1999; Bolton \& Ockenfels, 2000 - with suitable values for the parameters of the models). Under this type distribution, non-purely utilitarian agents always tell Pareto white lies (because they increase the social welfare) and give nothing in the DG (because giving does not increase the social welfare); purely egalitarian agents give half in the DG, yet are indifferent between telling a Pareto white lie or not in the Deception Game (since both choices minimise payoff differences). Thus, if the proportion of utilitarian agents is large enough, this would generate a positive correlation between altruism in the DG and aversion to telling a Pareto white lie, which would be consistent with the findings in Cappelen et al. (2013).
\par On the other hand, explaining our results using distributional preferences is much harder. Indeed, to explain the negative correlation between cooperation in the PD and telling a PWL, one must assume that the majority of utilitarian people actually defect in the PD. But this assumption clashes with the very nature of utilitarian agents, who aim to maximise total welfare and should thus cooperate in the PD.
\par One potential explanation for our findings is that subjects have two possible degrees of moral motivation (low or high) towards either of two moral principles (utilitarianism and deontology). Utilitarian people follow distributional preferences for maximising the social welfare; deontological people follow non-distributional preferences for doing what they think is the right thing to do. We assume that these types of individuals act as follows:
\begin{itemize}
\item \emph{High utilitarian} people give half in the DG, cooperate in the PD, and lie in the AWL and in the PWL.
\item \emph{High deontological} people give half in the DG, cooperate in the PD, and tell the truth in the AWL and in the PWL.
\item \emph{Low utilitarian} people keep in the DG, defect in the PD, tell the truth in the AWL, and are indifferent in the PWL.
\item \emph{Low deontological} people keep in the DG, defect in the PD, tell the truth in the AWL, and are indifferent in the PWL.
\end{itemize}
According to this partition, the positive correlation between truth-telling and DG-donation/PD-cooperation in the PWL treatment would be driven by \emph{high deontological} subjects; and the negative correlation between truth-telling and DG-donation/PD-cooperation in the AWL treatment would be driven by \emph{low utilitarian} and \emph{low deontological} subjects.
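The sign predictions of this partition can be checked with a small back-of-the-envelope computation. The following Python sketch is only an illustration: the population shares (30 high deontological, 10 high utilitarian, 60 "low" subjects) are hypothetical numbers of our choosing, the behavioural rules follow the bullet list above, and indifference in the PWL is modelled by letting half of the low types lie.

```python
# Illustrative check of the four-type partition above. The population shares
# are hypothetical; behaviour follows the bullet list, with PWL indifference
# modelled as half of the low types lying and half telling the truth.
def population():
    people = []
    people += [dict(dg=1, pwl_truth=1, awl_truth=1)] * 30  # high deontological
    people += [dict(dg=1, pwl_truth=0, awl_truth=0)] * 10  # high utilitarian
    people += [dict(dg=0, pwl_truth=1, awl_truth=1)] * 30  # low, truthful in PWL
    people += [dict(dg=0, pwl_truth=0, awl_truth=1)] * 30  # low, lying in PWL
    return people

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

people = population()
dg = [p['dg'] for p in people]
print(corr(dg, [p['pwl_truth'] for p in people]))  # positive, as in the PWL data
print(corr(dg, [p['awl_truth'] for p in people]))  # negative, as in the AWL data
```

Under these assumed shares, DG giving correlates positively with truth-telling in the PWL model and negatively in the AWL model, reproducing the signs of findings (i) and (ii).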
\par There might be \emph{high utilitarian} subjects as well, but they do not show up because we have only one observation per subject. An interesting direction for future research is therefore a within-subject design with many trials, using different payoffs, aimed at establishing the position of each subject in a two-dimensional space.
\par Of course, more research is also needed to support the existence of a possibly non-distributional ``deontological domain'', containing all those actions that a particular individual considers to be morally right independently of their economic consequences, and to classify the actions belonging to this domain. For instance, here we have focussed on altruism and cooperation, as they are the most studied pro-social behaviours. However, they are not the only ones. Future research may be devoted to understanding whether the same correlations with truth-telling hold for other pro-social behaviours, such as benevolence (i.e., acting in such a way as to increase the other's payoff beyond one's own; Capraro, Smyth, Mylona, \& Niblo, 2014) and hyper-altruism (i.e., weighting the other's payoff more than one's own; Crockett, Kurth-Nelson, Siegel, Dayan, \& Dolan, 2014; Capraro, 2015).
\par Additionally, our results connect to the work of Levine and Schweitzer (2015), which reported that telling an altruistic white lie signals pro-social tendencies to observers: third parties, when playing in the role of trustors in the Trust Game, allocate more money to people who have told an altruistic white lie in a previous Deception Game than to those who have told the truth. However, Levine and Schweitzer (2015) did not measure trustees' behaviour, and so it remained unclear whether people telling an altruistic white lie were really more prosocial than those telling the truth, or whether this was a false belief on the part of observers. Our results provide evidence that it is not a false belief, as they show that subjects telling a white lie are both more altruistic and more cooperative than those telling the truth.
\par Finally, our results add to the literature on gender differences in deceptive behaviour. Dreber and Johannesson (2008) found that men were more likely than women to tell black lies (i.e., lies that increase the liar's benefit at the expense of the listener). A similar result was shown by Friesen and Gangadharan (2012), who found that men are more likely than women to behave dishonestly for their own benefit. Yet, Childs (2012) failed to replicate this gender effect using a very similar design to that of Dreber and Johannesson (2008). In the context of white lies, Erat and Gneezy (2012) reported that women are more likely than men to tell an altruistic white lie, but men are more likely than women to tell a Pareto white lie. This latter result was not replicated by Cappelen et al. (2013), who found no gender differences in telling a Pareto white lie. In line with the latter result, our results also show no gender difference in telling a Pareto white lie. Interestingly, however, we found gender differences in telling an altruistic white lie, in the opposite direction to that reported in Erat and Gneezy (2012): men in our sample are more likely than women to tell an altruistic white lie. Taken together, these results suggest that it may be premature to draw general conclusions about gender differences in lying, and they call for further studies.
\section{Introduction}
Let $X$ be a topological space. We consider the configuration space of $n$ ordered distinct points in $X$:
$$F(X,n)=\{(x_1,\ldots,x_n)\in X^n| x_i\neq x_j,\quad \forall \quad i\neq j \}$$
If $X$ admits a group action $G\times X \rightarrow X$, we consider the orbit configuration space:
$$F_G(X,n)=\{(x_1,\ldots,x_n)\in X^n| G(x_i)\cap G(x_j)=\emptyset,\quad \forall \quad i\neq j \}$$
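For intuition, the two definitions can be checked mechanically on finite sets of points. The sketch below is purely illustrative (the function names and the toy example $G=\Z_2$ acting on $\R$ by $x\mapsto -x$ are ours, not part of the construction above); points are modelled as hashable values and the group as a finite list of functions.

```python
# Finite sketch of F(X,n) and F_G(X,n) membership; names and the toy
# Z_2-action are hypothetical choices for illustration only.
def in_F(points):
    """(x_1,...,x_n) lies in F(X,n) iff the coordinates are pairwise distinct."""
    return len(set(points)) == len(points)

def in_F_G(points, group):
    """(x_1,...,x_n) lies in F_G(X,n) iff the G-orbits of the x_i are disjoint."""
    orbits = [frozenset(g(x) for g in group) for x in points]
    return all(orbits[i].isdisjoint(orbits[j])
               for i in range(len(points)) for j in range(i + 1, len(points)))

G = [lambda x: x, lambda x: -x]   # Z_2 acting on R by the sign change
print(in_F((1, -1)))              # True: the two points are distinct
print(in_F_G((1, -1), G))         # False: -1 lies in the Z_2-orbit of 1
print(in_F_G((1, 2), G))          # True: orbits {1,-1} and {2,-2} are disjoint
```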
The notion of configuration space was introduced in physics in the 1940s. In mathematics, configuration spaces were first introduced by Fadell and Neuwirth \cite{[FN]} in 1962.
The classical configuration space is $F(\R^2, n)\cong F(\C,n)$; it is exactly the complement of a union of finitely many hyperplanes in $\C^n$, and its fundamental group is the classical pure braid group. In 1969, Arnol'd \cite{[VA]} computed the cohomology ring of $F(\C,n)$; it has the form of an Orlik-Solomon algebra in arrangement theory. $F(\R^k,n)$ is the complement of a finite union of linear subspaces of codimension $k$ in $\R^{nk}$; F.R. Cohen \cite{[CF]} calculated its integral cohomology ring in terms of free Lie algebras, with each generator corresponding to a codimension-$k$ subspace. In 2000, Feichtner and Ziegler \cite{[FZ00]} determined $H^*(F(S^k,n);\Z)$; in 2001 \cite{[FZ02]}, they computed $H^*(F_{\Z_2}(S^k,n);\Z)$, where the group action is the antipodal map.
If $X$ is a smooth complex projective variety, Fulton and MacPherson \cite{[FM]} proved in 1994 that the rational cohomology ring of $F(X,n)$ can be computed from the rational cohomology ring of $X$ and the Chern classes of $X$. Totaro \cite{[To]} improved their work by proving that the Chern classes are actually irrelevant.
M.A. Xicoténcatl \cite{MA} studied orbit configuration spaces where $G$ acts freely on $M$ extensively in his Ph.D. thesis; he computed the cohomology and loop space homology for several free-action spaces, such as complements of arrangements, spaces of polynomials and spaces of type $K(\pi, 1)$.
For topological spaces with a non-free group action, however, it becomes much harder to compute homotopy groups and cohomology rings: the tools used in the above examples no longer apply.
There is a very typical class of spaces with non-free group actions. In 1991, Davis and Januszkiewicz \cite{[DJ]} introduced four classes of nicely behaved manifolds over simple convex polytopes---small covers, quasi-toric manifolds, and (real) moment-angle manifolds---which have become important objects in toric topology. We are interested in studying the orbit configuration spaces $F_{G_d^m}(M,n)$ for a $dm$-dimensional $G_d^m$-manifold $M$ over a simple convex $m$-polytope $P$, where $M$ is a small cover and $G_d^m=\Z_2^m$ when $d=1$, and a quasi-toric manifold and $G_d^m=T^m$ when $d=2$. We expect to find relations between the algebraic topology of $F_{G_d^m}(M,n)$ and the combinatorial information of the polytope $P$. In 2008, Junda Chen \cite{[CLW]} gave an explicit formula for the Euler characteristic of $F_{G_d^m}(M,n)$ in terms of the $h$-vector of $P$ and gave a description of the homotopy type when $n=2$. But there is still some distance between this work and our expectation.
In this paper, we focus on $F_{\mathbb{Z}_2^m}(\mathbb{R}^m,n)$. Since it is the local representation of $F_{\mathbb{Z}_2^m}(M,n)$, the results in this paper should help advance the study of $F_{\mathbb{Z}_2^m}(M,n)$. Besides, $\Z_2^m \curvearrowright \R^m$ is a typical non-free group action on Euclidean space, so it is an interesting example for the computation of orbit configuration spaces.
From the point of view of arrangements, $F_{\mathbb{Z}_2^m}(\mathbb{R}^m,n)$ can be regarded as the complement of a collection of subspaces in Euclidean space. The theory of complex arrangements is well developed. In 1982, Richard Randell \cite{Ran} gave a nice description of the fundamental group of the complement of the complexification of a real arrangement. In 1995, De Concini and Procesi \cite{[DC]} constructed a rational model using only the labeled intersection lattice, proving that the rational cohomology ring is determined by this lattice. In 1999, Sergey Yuzvinsky \cite{[Yu]} constructed a rational model on the atomic complex, simplifying De Concini and Procesi's result.
Fewer results are available for real arrangements. Goresky and MacPherson described the integral homology groups. In 2000, Mark de Longueville and Carsten A. Schultz \cite{[LS]} computed the integral cohomology ring of geometric $(\geq 2)$-arrangements. For general real arrangements, there is no good tool for obtaining the cohomology ring.
Our main result is a description of $H^*(F_{\mathbb{Z}_2^m}(\mathbb{R}^m,n);\Z_2)$. Let $\mathcal{A}$ be a real arrangement in Euclidean space, and let $M(\mathcal{A})$ denote its complement. A differential graded algebra $\widetilde{D}$ is constructed from the intersection poset of the arrangement $\mathcal{A}$, and $H^*(\widetilde{D};\Z_2)$ is computable.
\begin{thm}
There is a ring isomorphism $H^*(F_{\mathbb{Z}_2^m}(\mathbb{R}^m,n);\Z_2)\cong H^*(\widetilde{D};\Z_2)$.
\end{thm}
This method can be applied to the calculation of the mod 2 cohomology ring of any real arrangement.
This paper is organised as follows. In Sections 2 and 3, we give a brief introduction to the notions of small covers and quasi-toric manifolds, subspace arrangements, the Goresky-MacPherson isomorphism and the intersection product in arrangement theory; in Section 4, we construct a differential graded algebra describing the cohomology ring $H^*(F_{\mathbb{Z}_2^m}(\mathbb{R}^m,n);\Z_2)$; in Section 5, we give a simple example.
\section{Preliminary}
\subsection{Small covers and quasi-toric manifolds}
By the definitions in \cite{[DJ]}, let $P^m$ be an $m$-dimensional simple convex polytope.
Let $G_d^m$ be $\Z_2^m$ with $\F_d=\R$ if $d=1$, and the torus $T^m$ with $\F_d=\C$ if $d=2$.
The natural action of $G_d^m$ on $\F_d^m$ is called the \textit{standard representation}, and the orbit space is $\R^m_+$.
A $dm$-dimensional $G_d^m$-manifold $M^{dm}$ over $P^m$, is a smooth closed $dm$-dimensional manifold $M^{dm}$ with a locally standard $G_d^m$-action such that the orbit space is $P^m$.
A $G_d^m$-manifold $M^{dm}$ is called \textit{small cover} if $d=1$ and \textit{quasi-toric manifold} if $d=2$.
\subsection{Local representation}
Since it is difficult to describe the topology of $F_{G_d^m}(M^{dm},n)$, we first consider its local representation.
For $d=1$,the orbit configuration space is $F_{\Z_2^m}(M^m,n)$, its local representation is $F_{\Z_2^m}(\R^m,n)$.
For $d=2$, the orbit configuration space is $F_{T^m}(M^{2m},n)$, its local representation is $F_{T^m}(\R^{2m},n)$.
In this paper, we only consider the case $d=1$.
We can observe that
\begin{align*}
F_{\Z_2^m}(\R^m,n)&=\{(x_1,x_2,...,x_n)\in (\R^m)^n|x_i \in \R^m,\ \Z_2^m(x_i)\cap \Z_2^m(x_j)=\emptyset ,\ \forall i \neq j\}\\
&=(\R^m)^n\setminus \underset{1\leq i < j\leq n}{\bigcup} \{(x_1,x_2,...,x_n)\in (\R^m)^n|\Z_2^m(x_i)=\Z_2^m(x_j)\}
\end{align*}
Let $A_{ij}\triangleq \{(x_1,x_2,...,x_n)\in (\mathbb{R}^m)^n|x_i\in \mathbb{Z}_2^m(x_j)\}$; it is the union of $2^m$ subspaces of $(\mathbb{R}^m)^n$, each of the form $$A_{ij}^g\triangleq \{(x_1,x_2,...,x_n)\in (\mathbb{R}^m)^n|x_i=g(x_j)\},\quad g \in \mathbb{Z}_2^m,$$ of codimension $m$.
Thus $F_{\mathbb{Z}_2^m}(\mathbb{R}^m,n)$ can be regarded as the complement of the subspace arrangement $\mathcal{A}=\{A_{ij}^g|1\leq i<j\leq n,\ g \in \mathbb{Z}_2^m\}$ in $(\mathbb{R}^m)^{\times n}$, which consists of $\binom{n}{2} \times 2^m$ subspaces of codimension $m$.
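The arrangement can be enumerated directly for small parameters. In the sketch below (an illustration; the encoding is ours), an element $g\in\Z_2^m$ acts on $\R^m$ by sign changes and is recorded as a vector of signs, so a subspace $A_{ij}^g$ is a triple $(i,j,g)$.

```python
from itertools import combinations, product

# Enumerate the subspaces A_{ij}^g of the arrangement describing
# F_{Z_2^m}(R^m, n); each g in Z_2^m is a vector of +/-1 sign changes.
def arrangement(n, m):
    signs = list(product((1, -1), repeat=m))   # the 2^m elements of Z_2^m
    return [(i, j, g) for i, j in combinations(range(n), 2) for g in signs]

A = arrangement(3, 2)
print(len(A))   # binom(3,2) * 2^2 = 12 subspaces of codimension 2 in (R^2)^3
```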
\subsection{Arrangement theory}
In this part, we review some useful concepts about subspace arrangements from \cite{[LS]}.
\subsubsection{\textbf{Notations}}
Let $\mathcal{A}$ be a linear subspace arrangement in a finite-dimensional $\mathbb{R}$-vector space W, let $\mathit{u}\subseteq \mathit{v}\subseteq W$ be linear subspaces.
\begin{itemize}
\item$\pi^\mathit{u}$: the quotient map $W \rightarrow W/\mathit{u}$
\item$\pi^{\mathit{u},\mathit{v}}$: the quotient map $W/\mathit{u} \rightarrow W/\mathit{v}$
\item$\mathcal{A}_\mathit{u}\triangleq\{\mathit{z}\in \mathcal{A}|\mathit{u}\subseteq \mathit{z}\}$ the subarrangement in W
\item$\mathcal{A}/\mathit{u}\triangleq \{\pi^\mathit{u}(\mathit{z})|\mathit{z}\in \mathcal{A}_\mathit{u}\}$ the arrangement in $W/\mathit{u}$
\item$M(\mathcal{A})$ denotes the complement space $W \setminus \bigcup \mathcal{A}$
\item $P$: the intersection poset, i.e., the set of all intersections of subsets of $\mathcal{A}$.\\
The intersection poset $P$ is partially ordered by the reverse inclusion;\\
It has maximal element $\top \triangleq \bigcap \mathcal{A}$ and minimal element $W \triangleq \bigcap \emptyset$; \\
The join operation $\vee$ in $P$ is given by intersection;\\
$P$ is furnished with a dimension function $d:P \rightarrow \mathbb{N}$; \\
For $\mathit{u},\mathit{v} \in P$, we denote by $[\mathit{u},\mathit{v}],(\mathit{u},\mathit{v}],[\mathit{u},\mathit{v})$ the respective intervals in $P$.
\end{itemize}
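For small arrangements, the intersection poset and its dimension function can be computed by brute force. The following Python sketch is only an illustration (the helper names are ours): each subspace is encoded by the rows of its defining linear equations over $\mathbb{Q}$, and an intersection is identified with the reduced row echelon form of the stacked equations, so that $d(\mathit{u})=\dim W-\operatorname{rank}$.

```python
from itertools import combinations
from fractions import Fraction

# Brute-force intersection poset of a small linear arrangement; each subspace
# is given by its defining equation rows, and each poset element is identified
# by the reduced row echelon form (a canonical form) of stacked equations.
def rref(rows):
    M = [list(map(Fraction, r)) for r in rows]
    lead = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((r for r in range(lead, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[lead], M[piv] = M[piv], M[lead]
        p = M[lead][c]
        M[lead] = [x / p for x in M[lead]]
        for r in range(len(M)):
            if r != lead and M[r][c]:
                f = M[r][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[lead])]
        lead += 1
    return tuple(tuple(r) for r in M[:lead])   # nonzero rows only

def intersection_poset(arrangement, dim):
    poset = {rref([]): dim}                    # the ambient space W itself
    for k in range(1, len(arrangement) + 1):
        for subset in combinations(arrangement, k):
            form = rref([row for sub in subset for row in sub])
            poset[form] = dim - len(form)      # d(u) = dim W - rank
    return poset

# Example: the three lines x=0, y=0 and x=y in R^2.
lines = [[(1, 0)], [(0, 1)], [(1, -1)]]
P = intersection_poset(lines, 2)
print(sorted(P.values()))   # [0, 1, 1, 1, 2]: origin, three lines, and W
```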
For any partially ordered set Q, denote by $\triangle(Q)$ the order complex of Q whose simplices are given by chains in Q.
\subsubsection{\textbf{generic points}}
To establish a map between the intersection poset $P$ and the subspace arrangement, we introduce the concept of generic points.
For $\mathit{u}\in P$, a generic point $x^\mathit{u}$ means either a point in $\mathit{u}\setminus \underset{\mathit{z}\in (\mathit{u},\top]}\bigcup \mathit{z}$ or a map
\\ \begin{tabular}{rcc}
$x^{\mathit{u}}: [W,\mathit{u}]$&$\rightarrow$&$W/\mathit{u}$\\
$\mathit{v}$&$\mapsto$&$x_\mathit{v}^\mathit{u}$
\end{tabular}
with $x_\mathit{v}^\mathit{u}\in \pi^\mathit{u}(v)\setminus \underset{\mathit{v}^\prime \in (\mathit{v},\mathit{u}]}\bigcup \pi^\mathit{u}(\mathit{v}^\prime)$
Let $\mathit{u}\in P$ with generic points $x^\mathit{u}$, and define an affine map
$\phi^{x^\mathit{u}}: \triangle[W,\mathit{u}]\rightarrow W/\mathit{u}$ which is affine on simplices and satisfies\\
$\phi^{x^\mathit{u}}(\mathit{w})=x_\mathit{w}^\mathit{u}$ for $\mathit{w}\in [W, \mathit{u}]$.
Note that we can identify the abstract simplicial complex with its geometric realization.
\section{Goresky-MacPherson isomorphism and products}
This chapter mainly states the results of \cite{[LS]}.
\subsection{Goresky-MacPherson isomorphism}
Given an arrangement $\mathcal{A}$ in $W$ and a real number $\epsilon>0$, let $B_\epsilon$ denote the open $\epsilon$-ball in $W$. Then
\begin{equation}
H^k(W \setminus \bigcup \mathcal{A})\xrightarrow[\cong]{i^*}H^k(B_\epsilon \setminus\bigcup\mathcal{A})\xrightarrow[\cong]{\cap[W]}H_{d(W)-k}(W, \bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)
\end{equation}
here $\mathit{i}$ denotes the inclusion, $[W]$ is the orientation class of $W$, and $\mathcal{C}B_\epsilon$ is the complement of $B_\epsilon$.
The first isomorphism is induced by the inclusion map; the second follows from Alexander duality.
Now if we want to describe the cohomology ring of $M(\mathcal{A})$, we will work mainly in $H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$ with intersection product $\bullet$ given by $(\alpha \cap [W])\bullet (\beta \cap[W])=(\alpha \cup \beta)\cap [W]$, $\quad\alpha,\beta \in H^*(B_\epsilon \setminus\bigcup\mathcal{A})$.
Recall the map $\phi^{x^\mathit{u}}:\triangle[W,\mathit{u}]\rightarrow W/\mathit{u}$
For a simplex $\sigma \in \triangle[W,\mathit{u}]$. $\sigma=<v_0,\dots,v_k>,v_0<\dots<v_k$, $ \phi^{x^\mathit{u}}(\sigma) \subseteq \pi^\mathit{u}({v_0})$, $\phi^{x^\mathit{u}}(\sigma)\cap \pi^\mathit{u}(v_k)=\{x_{v_k}^\mathit{u}\}$.
$ \phi^{x^\mathit{u}}(\triangle(W,\mathit{u}])\subseteq \bigcup\mathcal{A}/\mathit{u}$, $\phi^{x^\mathit{u}}(\triangle[W,\mathit{u}))\subseteq W/\mathit{u}\setminus B_\epsilon^{W/\mathit{u}}$ (for small enough $\epsilon$).
Then we have map of pairs
$$\phi^{x^\mathit{u}}: (\triangle[W,\mathit{u}],\triangle(W,\mathit{u}]\cup\triangle[W,\mathit{u}))\rightarrow(W/\mathit{u},\bigcup\mathcal{A}/\mathit{u}\cup\mathcal{C}B_\epsilon^{W/\mathit{u}})$$
We abbreviate $\Delta\Delta[W,\mathit{u}]\triangleq(\triangle[W,\mathit{u}],\triangle(W,\mathit{u}]\cup\triangle[W,\mathit{u}))$.
Since $(\pi^\mathit{u})^{-1}(\bigcup\mathcal{A}/\mathit{u})=\bigcup\mathcal{A}_\mathit{u}\subseteq\bigcup\mathcal{A}$ and $(\pi^\mathit{u})^{-1}(\mathcal{C}B_\epsilon^{W/\mathit{u}})\subset \mathcal{C}B_\epsilon$,
we can consider the following maps:
\begin{equation}
H_k(\Delta\Delta[W,\mathit{u}])\xrightarrow{\phi_*^{x^\mathit{u}}}H_k(W/\mathit{u},\bigcup\mathcal{A}/\mathit{u}\cup\mathcal{C}B_\epsilon^{W/\mathit{u}})\xrightarrow{\pi_!^\mathit{u}}H_{d(\mathit{u})+k}(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)
\end{equation}
$\pi_!^\mathit{u}$ is given by $\alpha \cap [W/\mathit{u}]\mapsto (\pi^\mathit{u})^*\alpha\cap[W] \qquad \alpha \in H^*(W/\mathit{u})$
We can now state the well-known Goresky-MacPherson isomorphism.
\begin{thm}[\textbf{Goresky-MacPherson isomorphism}]
Let $\mathcal{A}$ be an arrangement in $W$ and $x^\mathit{u}$ a choice of generic points. Then the map
$\sum_{\mathit{u}\in[W,\top]}\pi_!^\mathit{u}\circ \phi_*^{x^\mathit{u}}:\bigoplus_{\mathit{u}\in[W,\top]}H_*(\Delta\Delta[W,\mathit{u}])\longrightarrow H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$
\\is an isomorphism as group.
\end{thm}
This proposition was originally proved in \cite{[MR]} by means of stratified Morse theory; an elementary proof is given in \cite{[LS]}.
\subsubsection{\textbf{Products}}
Let $P,Q$ be two intersection posets. $\triangle(P\times Q)=\triangle(P)\times \triangle(Q)$
If $C_*$ denotes the ordered chain complex, there is the well known map
$\quad C_*(\triangle P)\bigotimes C_*(\triangle Q)\xrightarrow{\times}C_*(\triangle(P\times Q))$ given by
\\$\langle u_0,\dots,u_k \rangle \bigotimes\langle v_0,\dots,v_l \rangle \longmapsto \underset{\substack{0=i_0\leq\dots\leq i_{k+l}=k\\0=j_0\leq \dots \leq j_{k+l}=l\\ \forall r \quad (i_{r-1},j_{r-1})\neq(i_r,j_r)}}{\sum}\sigma_{i,j}\langle(u_{i_0},v_{j_0}),\dots,(u_{i_{k+l}},v_{j_{k+l}})\rangle$
\\where the $\sigma_{i,j}$ are signs determined by $\sigma_{i,j}=1$ if k=0 or l=0 and by $\partial(a\times b)=\partial a \times b +(-1)^k a\times \partial b$.
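Since we later work with $\Z_2$ coefficients, the signs $\sigma_{i,j}$ drop out and the product is simply a sum over shuffles, equivalently over monotone lattice paths from $(0,0)$ to $(k,l)$. The following sketch enumerates these terms (the vertex labels are hypothetical placeholders, for illustration only):

```python
from itertools import combinations
from math import comb

# Enumerate the simplices in the shuffle product of an ordered k-simplex u
# and an ordered l-simplex v; over Z_2 only the shuffles matter, not signs.
def shuffle_product(u, v):
    k, l = len(u) - 1, len(v) - 1
    terms = []
    for up_steps in combinations(range(k + l), l):  # steps where the j-index moves
        i = j = 0
        simplex = [(u[0], v[0])]
        for step in range(k + l):
            if step in up_steps:
                j += 1
            else:
                i += 1
            simplex.append((u[i], v[j]))
        terms.append(tuple(simplex))
    return terms

terms = shuffle_product(('u0', 'u1', 'u2'), ('v0', 'v1'))
print(len(terms))   # comb(3,1) = 3 monotone lattice paths from (0,0) to (2,1)
```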
Since $\triangle\triangle P\times \triangle\triangle Q=\triangle\triangle (P\times Q)$, this induces a product
$$\times : H_*(\triangle\triangle P)\otimes H_*(\triangle\triangle Q)\longrightarrow H_*(\triangle\triangle (P\times Q))$$
Let $\mathcal{A}$ be an arrangement in W and $\mathit{u},\mathit{v}\in P$.
To describe the products on $H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$, we have to know the products on $H_*(\triangle\triangle[W,\mathit{u}])$ and $H_*(\triangle\triangle[W,\mathit{v}]) \quad\forall \quad\mathit{u},\mathit{v}\in P$
When $\mathit{u}+\mathit{v}=W$, the map $(\pi^{\mathit{u}\cap\mathit{v},\mathit{v}},\pi^{\mathit{u}\cap\mathit{v},\mathit{u}}):W/(\mathit{u}\cap\mathit{v})\rightarrow W/\mathit{v}\times W/\mathit{u}$ is an isomorphism. Let $\epsilon_{\mathit{u},\mathit{v}}$ be the degree of this linear isomorphism.
The join operator \begin{center}
$\vee :$ $\begin{tabular}{rclcc}
$[W,\mathit{v}]$&$\times$&$[W,\mathit{u}]$&$\longrightarrow$&$[W,\mathit{v}\cap \mathit{u}]$\\
$(z$&,&$w)$&$\longmapsto$&$z\cap w$
\end{tabular}$
\end{center} induces a simplicial map of pairs
$\vee:\Delta\Delta[W,\mathit{v}]\times \Delta\Delta[W,\mathit{u}]\rightarrow\Delta\Delta[W,\mathit{u}\cap\mathit{v}]$.
The product is given in the same way as that for intersection posets defined above. We obtain the following proposition.
\begin{prop}
Let $\mathcal{A}$ be an arrangement in W and $\mathit{u},\mathit{v}$ intersections in $\mathcal{A}$, such that $\mathit{u}+\mathit{v}=W$. Given generic points $x^\mathit{u}$ and $x^\mathit{v}$ we have for $a \in H_k(\Delta\Delta[W,\mathit{u}])$, $b \in H_l(\Delta\Delta[W,\mathit{v}])$, and the generic points $x^{\mathit{u}\cap\mathit{v}}$ constructed above, that
\\$\pi_!^\mathit{u}(\phi_*^{x^\mathit{u}}(a))\bullet\pi_!^\mathit{v}(\phi_*^{x^\mathit{v}}(b))=\epsilon_{\mathit{u},\mathit{v}}(-1)^{l(d(W)-d(\mathit{u}))}\pi_!^{\mathit{u}\cap\mathit{v}}(\phi_*^{x^{\mathit{u}\cap\mathit{v}}}(\vee_*(a\times b)))$.
\end{prop}
When $\mathit{u}+\mathit{v}\neq W$, there exists a non-trivial linear functional $\Lambda
:W\rightarrow \mathbb{R}$ whose kernel contains $\mathit{u}+\mathit{v}$. This induces functionals $\Lambda_\mathit{u},\Lambda_\mathit{v}$ on $W/\mathit{u},W/\mathit{v}$ respectively. Then we can choose generic points $x^\mathit{u}$ and $y^\mathit{v}$ such that
\begin{tabular}{ccc}
$\Lambda_\mathit{u}(x_\mathit{w}^\mathit{u})\geq 0$&$\forall \mathit{w}\in (W,\mathit{u}],$&$\Lambda_\mathit{u}(x_\mathit{W}^\mathit{u})>0$,\\
$\Lambda_\mathit{v}(y_\mathit{w}^\mathit{v})\leq 0 $&$ \forall \mathit{w}\in (W,\mathit{v}],$&$\Lambda_\mathit{v}(y_\mathit{W}^\mathit{v})<0.$
\end{tabular}
\begin{prop}
Let $\mathcal{A}$ be an arrangement in $W$ and $\mathit{u},\mathit{v}$ intersections in $\mathcal{A}$ with $\mathit{u}+\mathit{v}\neq W$. Then for the generic points $x^\mathit{u}$ and $y^\mathit{v}$ constructed above, the composition
\\$H_*(\Delta\Delta[W,\mathit{u}])\bigotimes H_*(\Delta\Delta[W,\mathit{v}])
\\ \hspace*{1cm} \xrightarrow{(\pi_!^\mathit{u}\circ \phi_*^{x^\mathit{u}})\bigotimes(\pi_!^\mathit{v}\circ \phi_*^{y^\mathit{v}})} H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)\bigotimes H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)\\
\hspace*{8cm}\xrightarrow{\quad\bullet\quad}H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$
\\is the zero map.
\end{prop}
For the proofs of Prop 3.2 and Prop 3.3, see \cite{[LS]}.
\section{The cohomology of the local representation}
In Prop 3.1, Prop 3.2 and Prop 3.3, the Goresky-MacPherson isomorphism and the intersection product on homology depend on the choice of generic points $x^\mathit{u}$. If the arrangement is a $(\geq 2)$-arrangement (that is, for all $\mathit{u},\mathit{v}\in P$ with $\mathit{u}<\mathit{v}$ we have $d(\mathit{u})-d(\mathit{v})\geq 2$), then by Lemma 5.1 in \cite{[LS]} the Goresky-MacPherson isomorphism is independent of the choice of generic points; in that case one can obtain a complete combinatorial description of the intersection product and, furthermore, describe the integral cohomology ring of $W \setminus \bigcup \mathcal{A}$.
However, real arrangements rarely satisfy the $(\geq 2)$ condition. In many cases the intersection of two subspaces drops by only one dimension, and this is the key reason why real arrangements are much more difficult to handle than complex arrangements.
Unfortunately, in our case the arrangement underlying $F_{\mathbb{Z}_2^m}(\mathbb{R}^m,n)$ does not satisfy the $(\geq 2)$ condition, so we cannot get rid of the influence of the choice of generic points. But if we consider the mod 2 cohomology ring instead of the integral cohomology ring, we can overcome this obstacle. Thus we easily obtain the following lemma.
\begin{lem}
Under $\mathbb{Z}_2$-coefficient, Goresky-MacPherson isomorphism and intersection products depend only on the intersection poset $P$.
\end{lem}
\begin{proof}
For arbitrary generic points $x^\mathit{u}, \tilde{x}^\mathit{u}$ and $a\in C_k(\Delta\Delta[W,\mathit{u}])$, the chains $\phi^{x^\mathit{u}}(a)$ and $\phi^{\tilde{x}^\mathit{u}}(a)$ differ merely by an orientation. Thus $\phi^{x^\mathit{u}}_*=\phi^{\tilde{x}^\mathit{u}}_* \pmod 2$. Hence the Goresky-MacPherson isomorphism and the intersection product on homology are independent of the choice of generic points over $\Z_2$ coefficients.
\end{proof}
Furthermore, the intersection product can be described as follows:
\begin{thm}
For an arrangement $\mathcal{A}$ with intersection poset $P$, under $\mathbb{Z}_2$-coefficients, the intersection product is given by the combinatorial data as follows.
\begin{tabular}{ccc}
$H_k(\Delta\Delta[W,\mathit{u}])\bigotimes H_l(\Delta\Delta[W,\mathit{v}])$&$\longrightarrow$&$ H_{k+l}(\Delta\Delta[W,\mathit{u}\vee\mathit{v}])$
\\$a\bigotimes b $&$ \longmapsto $&$\Big\{\begin{matrix}
\vee_*(a\times b) & \mathit{u}+\mathit{v}=W
\\0 & otherwise
\end{matrix}$
\end{tabular}
\end{thm}
\begin{proof}
This follows immediately from Prop 3.1, Prop 3.2 and Prop 3.3 via the sequence of maps below:
$H_*(\Delta\Delta[W,\mathit{u}])\bigotimes H_*(\Delta\Delta[W,\mathit{v}])\longrightarrow H_*(W, \bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)\otimes H_*(W, \bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$\\
\hspace*{5.4cm}$\overset{\bullet}{\longrightarrow}H_*(W, \bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$\\
\hspace*{5.4cm}$\overset{\cong}{\longleftarrow}\underset{\mathit{w}\in [W,\top]}{\bigoplus}H_*(\Delta\Delta[W,\mathit{w}])$
\end{proof}
In \cite{[Yu]}, Yuzvinsky constructed a differential graded algebra based on the atomic complex to describe the rational cohomology ring of the complement of a complex subspace arrangement. In this paper, we adapt his construction with a different grading of the elements.
Since $H_k(\Delta\Delta[W,\mathit{u}])\triangleq H_k([W,\mathit{u}],(W,\mathit{u}]\cup[W,\mathit{u}))$
Let $\sigma\in C_k(\Delta\Delta[W,\mathit{u}]), \sigma\sim (W<\mathit{u}_1<\mathit{u})$, $\mathit{u}_1\in C_{k-2}(\triangle(W,\mathit{u}))$
and $(W<\mathit{u}_1<\mathit{u})\sim (W<\mathit{u}_2<\mathit{u}) \Longleftrightarrow \mathit{u}_1\sim \mathit{u}_2 $
Therefore $H_k(\Delta\Delta[W,\mathit{u}])\cong \widetilde{H}_{k-2}(\triangle(W,\mathit{u}))$ for $k\geq 2$.
$H_1(\Delta\Delta[W,\mathit{u}])=\bigg \{ \begin{matrix}
\mathbb{Z}_2 & \text{if $\mathit{u}$ is an atom}
\\0 & \text{otherwise}
\end{matrix}$
~\\
Now we determine the homology of $\triangle(W,\mathit{u})$.
Let $\mathcal{A}=\{A_1,\dots,A_p\}$ be a subspace arrangement; its elements are called atoms. Let $P$ be the intersection poset.
For $\sigma \subset \mathcal{A}$, $\sigma=\{A_{i_1},\dots,A_{i_k}\}$, write $\vee(\sigma)=A_{i_1}\cap\dots\cap A_{i_k}$.
Construct the atomic complex $A(P)=\{\sigma\subset \mathcal{A}|\vee(\sigma)<\top \}$ (where $\top \triangleq \bigcap \mathcal{A}$).
By Lemma 2.1 in \cite{[Yu]}, $A(P)$ is homotopy equivalent to the order complex $\triangle(W,\top)$.
For $\mathit{u}\in P$, define $\mathcal{A}_\mathit{u}=\{A_i\in \mathcal{A}|\mathit{u}\subset A_i\}$; its intersection poset is denoted by $P_\mathit{u}$. Then $A(P_\mathit{u})\simeq \triangle(W,\mathit{u})$.
\begin{defn}
The relative atomic (chain) complex $D=D(P)$ is the free abelian group on all subsets $\sigma=\{A_{i_1},\ldots,A_{i_p} \} \subset \mathcal{A}$, with $\dim(\sigma)=|\sigma|$, equipped with the differential
\begin{center}
$\partial:$
\begin{tabular}{rcl}
$ C_n$&$ \rightarrow $&$ C_{n-1}$
\\$\sigma$&$\mapsto $&$\sum (-1)^j \sigma \setminus{A_{i_j}}$
\end{tabular}.
\end{center}
where the summation is taken over index j such that $\vee(\sigma\setminus{A_{i_j}})=\vee(\sigma)$.
\end{defn}
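As a sanity check, the differential can be computed explicitly for a coordinate subspace arrangement, where each atom is recorded by the set of coordinates it sets to zero and the join $\vee$ is simply the union of these sets (a special case and helper names chosen purely for illustration; chains are taken mod 2):

```python
# Differential of the relative atomic complex in the coordinate-subspace
# model: atoms are frozensets of killed coordinates, and v(sigma) is their
# union. Coefficients are taken mod 2.
def join(sigma):
    return frozenset().union(*sigma) if sigma else frozenset()

def boundary(sigma):
    """Drop A_{i_j} exactly when the join is unchanged, as in the definition."""
    return [sigma - {A} for A in sigma if join(sigma - {A}) == join(sigma)]

def boundary_twice(sigma):
    faces = {}
    for tau in boundary(sigma):
        for rho in boundary(tau):
            faces[rho] = faces.get(rho, 0) ^ 1   # mod-2 coefficients
    return {rho for rho, c in faces.items() if c}

# Atoms: x=0, y=0 and the line x=y=0 in R^3, recorded by killed coordinates.
A1, A2, A3 = frozenset({0}), frozenset({1}), frozenset({0, 1})
sigma = frozenset({A1, A2, A3})
print(len(boundary(sigma)))   # 3: every facet has the same join {0,1}
print(boundary_twice(sigma))  # set(): the differential squares to zero mod 2
```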
In fact, the relative atomic complex $D$ can be represented as a direct sum of complexes. Let $\sum(\mathit{u})$ denote the simplicial complex whose simplices are all the subsets of $\mathcal{A}_\mathit{u}$, set $\overline{A(\mathit{u})}=\sum(\mathit{u})/A(P_\mathit{u})=\{\sigma\subset{\mathcal{A}_\mathit{u}}| \vee(\sigma)=\mathit{u}\}$, and let $D(\mathit{u})=C(\overline{A(\mathit{u})})$. Obviously, $\sum(\mathit{u})$ is acyclic. The following lemma is immediate by an easy calculation of simplicial homology groups.
\begin{lem}
\begin{enumerate}
\item $\widetilde{H}_p(D(\mathit{u}))\simeq \widetilde{H}_{p-2}(A(P_\mathit{u}))$ \item $D=\underset{\mathit{u}\in P}{\bigoplus}D(\mathit{u})$
\end{enumerate}
\end{lem}
So, for $k\geq 2$,
$$\underset{\mathit{u}\in P}{\bigoplus}H_k(\Delta\Delta[W,\mathit{u}])\cong \underset{\mathit{u}\in P}{\bigoplus}\widetilde{H}_{k-2}(\triangle(W,\mathit{u}))\cong \underset{\mathit{u}\in P}{\bigoplus}\widetilde{H}_{k-2}(A(P_\mathit{u}))\cong \underset{\mathit{u}\in P}{\bigoplus} \widetilde{H}_k(D(\mathit{u}))\cong \widetilde{H}_k(D)\cong H_k(D).$$
When $k=1$, $\underset{\mathit{u}\in P}{\bigoplus}H_1(\Delta\Delta[W,\mathit{u}])\cong H_1(D)$.
Combining these, $\underset{\mathit{u}\in P}{\bigoplus}H_k(\Delta\Delta[W,\mathit{u}])\cong H_k(D)$ for $k\geq 1$.
~\\
Now we turn to the product. Observe that $D(\mathit{u})$ is generated by the subsets $\sigma\subset \mathcal{A}_\mathit{u}$ such that $\vee(\sigma)=\mathit{u}$.
Recall two maps (2.1) and (2.2).
$H_k(\Delta\Delta[W,\mathit{u}])\xrightarrow{\pi_!^\mathit{u}\circ \phi_*^{x^\mathit{u}}}H_{d(\mathit{u})+k}(W, \bigcup \mathcal{A}\cup\mathcal{C}B_\epsilon)\xleftarrow[\cong]{\cap[W]\circ \mathit{i}^*}H^{d(W)-d(\mathit{u})-k}(W\setminus \bigcup \mathcal{A})$
To describe the cup product, we define a new differential graded algebra $\widetilde{D}$.
\begin{defn}
The differential graded algebra $\widetilde{D}$ is the free abelian group on all subsets $\sigma=\{A_{i_1},\ldots,A_{i_p} \} \subset \mathcal{A}$, graded by $deg(\sigma)=d(W)-\abs{\sigma}-d(\vee(\sigma))$, with the differential
\begin{center}
$\delta:$
\begin{tabular}{rcl}
$ \widetilde{D}^n$&$ \rightarrow $&$ \widetilde{D}^{n+1}$
\\$\sigma$&$\mapsto $&$\sum_j (-1)^j \sigma \setminus\{A_{i_j}\}$
\end{tabular},
\end{center}
where the summation is taken over the indices $j$ such that $\vee(\sigma\setminus\{A_{i_j}\})=\vee(\sigma)$.
The multiplication on $\widetilde{D}$ is defined as follows:
$$\sigma \circ \tau=\begin{cases}
\sigma\cup\tau & \text{if } \vee(\sigma)+\vee(\tau)=W,\\
0 & \text{otherwise}.
\end{cases}$$
\end{defn}
Thus, by the Goresky-MacPherson isomorphism and Alexander duality, we obtain our result.
\begin{thm}
There is a ring isomorphism $H^*(M(\mathcal{A});\Z_2)\cong H^*(\widetilde{D};\Z_2)$.
\end{thm}
\begin{proof}
We have proved $\underset{\mathit{u}\in P}{\bigoplus}H_k(\Delta\Delta[W,\mathit{u}])\cong H_k(D)$ as groups;
together with the Goresky-MacPherson isomorphism
$$\sum_{\mathit{u}\in[W,\top]}\pi_!^\mathit{u}\circ \phi_*^{x^\mathit{u}}:\bigoplus_{\mathit{u}\in[W,\top]}H_*(\Delta\Delta[W,\mathit{u}])\longrightarrow H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon),$$
we get $H_*(W,\bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)\cong H_*(D)$ as groups.
Since we have
$$H^k(W \setminus \bigcup \mathcal{A})\xrightarrow[\cong]{i^*}H^k(B_\epsilon \setminus\bigcup\mathcal{A})\xrightarrow[\cong]{\cap[W]}H_{d(W)-k}(W, \bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$$
and $H^*(\widetilde{D})$ is dual to $H_*(D)$, it follows that $H^*(W \setminus \bigcup \mathcal{A})\cong H^*(\widetilde{D})$ as groups.
Now we turn to the product. The intersection product of Theorem 4.2 corresponds to the cup product, and the product defined on $H^*(\widetilde{D})$ is dual to the intersection product on $H_*(W, \bigcup\mathcal{A}\cup\mathcal{C}B_\epsilon)$; it therefore agrees with the cup product. Hence there is a ring isomorphism $H^*(W \setminus \bigcup \mathcal{A})\cong H^*(\widetilde{D})$.
\end{proof}
The shortcoming of the rational model method is that we cannot read the generators and relations explicitly from the differential graded algebra. In our opinion, this shortcoming stems from the complexity of real arrangements.
\section{Example}
We take $F_{\mathbb{Z}_2^2}(\mathbb{R}^2,2)$ as an example to verify Theorem 4.6.
Note that $F_{\mathbb{Z}_2^2}(\mathbb{R}^2,2)=\C^2\setminus \bigcup\mathcal{A}$, where $\mathcal{A}=\{H_1,H_2,H_3,H_4\}$ with
\hspace*{1cm} $H_1=\{((x_1,y_1),(x_2,y_2))\in \C \times \C | x_1=x_2, y_1=y_2\}$
\hspace*{1cm} $H_2=\{((x_1,y_1),(x_2,y_2))\in \C \times \C | x_1=x_2, y_1=-y_2\}$
\hspace*{1cm} $H_3=\{((x_1,y_1),(x_2,y_2))\in \C \times \C | x_1=-x_2, y_1=-y_2\}$
\hspace*{1cm} $H_4=\{((x_1,y_1),(x_2,y_2))\in \C \times \C | x_1=-x_2, y_1=y_2\}$
The differential graded algebra $\widetilde{D}$ is stated as following:
\hspace*{0.5cm}$deg(\emptyset)=4-0-4=0$
\hspace*{0.5cm}$\delta(\emptyset)=0$
~\\
\hspace*{0.5cm}$deg(\sigma)=d(W)-\abs{\sigma}-d(\vee(\sigma))$
\hspace*{0.5cm}$deg(1)=deg(2)=deg(3)=deg(4)=4-1-2=1$
\hspace*{0.5cm}$\delta(1)=\delta(2)=\delta(3)=\delta(4)=0$
~\\
\hspace*{0.5cm}$deg(12)=deg(14)=deg(23)=deg(34)=4-2-1=1$
\hspace*{0.5cm}$deg(13)=deg(24)=4-2-0=2$
\hspace*{0.5cm}$\delta(12)=\delta(23)=\delta(34)=\delta(14)=\delta(24)=\delta(13)=0$
~\\
\hspace*{0.5cm}$deg(123)=deg(124)=deg(234)=deg(134)=4-3-0=1$
\hspace*{0.5cm}$\delta(123)=\delta(134)=13, \hspace*{0.5cm}\delta(124)=\delta(234)=24$
~\\
\hspace*{0.5cm}$deg(1234)=4-4-0=0$
\hspace*{0.5cm}$\delta(1234)=123+124+134+234$
Through an easy calculation, we choose (1),(2),(3),(4),(12),(14),(23),(34),(123)+(134) as the generators of $H^1(F_{\mathbb{Z}_2^2}(\mathbb{R}^2,2),\Z_2)$ and $(\emptyset)$ as the generator of $H^0(F_{\mathbb{Z}_2^2}(\mathbb{R}^2,2),\Z_2)$; thus
$$H^k(F_{\mathbb{Z}_2^2}(\mathbb{R}^2,2),\Z_2)=\begin{cases}
0&k\geq 2,\\
\Z_2^9& k=1,\\
\Z_2& k=0,
\end{cases}$$
and all products vanish.
On the other hand, since $F_{\mathbb{Z}_2^2}(\mathbb{R}^2,2)=\C^2\setminus \bigcup\mathcal{A}$, it deformation retracts to a torus with eight points removed, which has the homotopy type of a wedge of nine circles. Thus the calculations of $H^k(F_{\mathbb{Z}_2^2}(\mathbb{R}^2,2),\Z_2)$ by the two approaches, geometric and combinatorial, agree.
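As a sanity check, the cohomology of $\widetilde{D}$ over $\Z_2$ in this example can be computed mechanically. The following Python sketch is an illustration only (not part of the construction above); the intersection dimensions are hard-coded from the defining equations of $H_1,\dots,H_4$, and it recovers the Betti numbers $(1,9,0)$:

```python
from itertools import combinations

# The four planes H_1..H_4 sit in R^4; their pairwise meets are lines
# for "adjacent" pairs and the origin otherwise (read off from the
# defining equations above).
PAIRS_MEETING_IN_A_LINE = ({1, 2}, {2, 3}, {3, 4}, {1, 4})

def dim_meet(s):
    """Real dimension of the intersection of the planes indexed by s."""
    if not s:
        return 4                      # empty meet = the ambient space W
    if len(s) == 1:
        return 2                      # each H_i is a real 2-plane
    if len(s) == 2:
        return 1 if set(s) in PAIRS_MEETING_IN_A_LINE else 0
    return 0                          # three or more planes meet at the origin

subsets = [frozenset(c) for k in range(5) for c in combinations(range(1, 5), k)]
deg = {s: 4 - len(s) - dim_meet(s) for s in subsets}   # deg = d(W)-|s|-d(meet)

def delta(s):
    # delta drops one atom, keeping only terms with unchanged meet
    # (equal dimension forces equality: dropping a constraint can only
    # enlarge the intersection).
    return [s - {a} for a in s if dim_meet(s - {a}) == dim_meet(s)]

def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks."""
    rank, rows = 0, list(rows)
    while rows:
        piv = rows.pop()
        if piv == 0:
            continue
        rank += 1
        low = piv & -piv
        rows = [r ^ piv if r & low else r for r in rows]
    return rank

basis = {}
for s in subsets:
    basis.setdefault(deg[s], []).append(s)

def rank_delta(k):                    # rank of delta: degree k -> degree k+1
    if k not in basis or k + 1 not in basis:
        return 0
    col = {s: i for i, s in enumerate(basis[k + 1])}
    rows = []
    for s in basis[k]:
        m = 0
        for t in delta(s):
            m ^= 1 << col[t]
        rows.append(m)
    return gf2_rank(rows)

betti = {k: len(basis[k]) - rank_delta(k) - rank_delta(k - 1)
         for k in sorted(basis)}
print(betti)                          # {0: 1, 1: 9, 2: 0}
```

Note that $\delta$ raises the degree by exactly one, since removing one atom while keeping the meet fixed changes $deg$ by $+1$; this is what makes the rank bookkeeping above valid.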
\renewcommand{\refname}{References}
\section{Introduction}
Over the last decade, {a series of cosmological $N$-body
simulations called} the Horizon Run (HRs) simulations have served as
a testbed for cosmological models through comparisons with the
observed large-scale distribution of galaxies.
The first Horizon Run (HR1) was performed in 2008 and published in
2009 \citep{kim09}.
The simulation box size was $L_{\rm box} = 6592 ~h^{-1}{\rm
Mpc}${,} and the number of evolved particles was $N_{\rm p}
= 4120^3$.
The initial power spectrum was calculated by a fitting function from
\citet{einsenstein98}, adopting a standard $\Lambda$ cold dark matter
($\Lambda$CDM) cosmology in concordance with WMAP 5-year
observations \citep{dunkley09}.
It produced eight non-overlapping all-sky lightcone datasets of halos and subhalos up to $z=0.6$.
We studied the non-linear gravitational effects on the baryonic
acoustic oscillation (BAO) peak by measuring the changes {in}
the peak position and amplitude.
In 2011, we performed even bigger simulations called Horizon Run 2 and
3 (HR2 and HR3, respectively; \citealt{kim11}).
By adopting the same cosmological model as in HR1, the initial power
spectra of HR2 and HR3 were generated from the CAMB package
\citep{lewis00}.
The simulated galaxy distributions have been extensively exploited to
measure both the expected distribution of the largest structures for
testing cosmic homogeneity \citep{park12} and the cosmic topology for
constraining the non{-}linear gravitational effect on the halo density
field \citep{choi13,kim14,speare15}.
All the previous HRs have mean particle separations larger than $1
~h^{-1}{\rm Mpc}${, which has been sufficient for many} {cosmological}
tests.
With much success in quantifying the non-linear gravitational effects
on large-scale structures, recently we extended our research focus to
galaxy formation studies.
To model galaxies in simulations, we employed the subhalo-galaxy
one-to-one correspondence model and abundance matching between subhalo
mass function and the observed galaxy luminosity or stellar mass
function \citep{kim08}.
Most {characteristics} of observed galaxy distributions (in terms of
luminosity functions and one-point density distributions) are well
reproduced by the model while observed abundances of {galaxy clusters}
are not properly recovered from subhalos.
The underpopulation of simulated galaxy clusters may come from inefficient subhalo finding in cluster regions (\citealt{muldrew11}; for various subhalo finder comparisons see \citealt{onions12}) or from the spatial decoupling between subhalos and galaxies. The latter may survive tidal disruption longer owing to their more compact sizes resulting from baryonic dissipation \citep{weinberg08}.
In the $\Lambda$CDM cosmology, dark matter halos form hierarchically
through the merger of smaller structures.
These merger events can trigger star formation and drive galaxy
formation and evolution \citep{kauffmann04, blanton07}.
{The merger history of galaxies} has extensively been studied in
semi-analytic models (SAMs; \citealt{cole94,kauffmann97, delucia04,
baugh06, lee14}) {for} the last two decades.
In SAMs, the gas heating and cooling rates are tabulated, and the resulting star formation and supernova feedback effects are implemented with parametric prescriptions.
Those parameters are fine-tuned to reproduce the correlation functions
and/or luminosity functions of observed galaxies. Even though {SAMs
have achieved} a great success in reproducing some observables,
{they require the introduction of a large number of parameters that
are not necessarily physical}.
Another well-known empirical galaxy model, the halo occupation distribution (HOD; \citealt{berlind02,zheng05, zheng09}) modeling, has been adopted to match the inner part of the galaxy correlation function, which is attributed to satellite pairs inside a virialized halo.
To distribute satellite galaxies in a halo, the HOD empirically measures the probability distribution of the number of satellites from observations.
The HOD is simpler than SAMs, and widely used for the comparison
between observed galaxies and simulated halos.
The galaxy-subhalo correspondence (or the abundance matching;
\citealt{kim08,trujillo-gomez11,rodriguez-puebla13, reddick13,
klypin15}) model is positioned between the two aforementioned models.
It is much simpler than SAMs but based on more physical processes than
the HOD.
It originally models the satellite galaxy distribution from subhalo
catalogs.
Satellite galaxies in a galaxy cluster originally formed {\it in situ} in isolation and merged into the cluster afterward.
While falling into the potential well of the cluster, they experience a drag force from dynamical friction \citep{zhao04} and inevitably follow inward-spiraling orbits.
After a certain time, they finally merge into the central galaxy.
Although it seems reasonable to assume the presence of a satellite galaxy inside a subhalo as long as there is no galaxy-halo decoupling, it has been noted that some satellite galaxies may not have a host subhalo \citep{gao04, guo11, guo14, wang14}.
This could be tested by extensive hydrodynamical simulations that investigate the segregation between satellite galaxies and their dark matter hosts \citep{weinberg08}.
However, hydrodynamical simulations are expensive to run and still require much effort to reduce ambiguities in astrophysical processes and numerical artifacts.
On the other hand, \cite{hong15} recently showed that if the most bound particles are used instead of subhalos in the modeling, it is possible to identify such satellites without a hosting subhalo.
Therefore, we performed a new simulation in our series, the Horizon Run 4 (HR4).
This simulation, with improved spatial and mass resolutions with
respect to the previous runs, retains a large number of particles.
It is well-suited to study galaxy formation by producing merger
trees.
The outline of the paper is as follows.
In Sections 2 and 3, we describe the simulation specifics and outputs of the HR4, respectively.
The mass function, shape, and spin of virialized halos are dealt with in Section 4.
The analyses of the two-point correlation functions and the mass accretion history are given in Sections 5 and 6, respectively.
A summary and discussion follow in Section 7.
\section{GOTPM Code and Simulation}
\subsection{Initial Conditions \& Parallelism}
The simulation was run with an improved version of the GOTPM code
\citep{dubinski04}.
The input power spectrum is calculated by the CAMB package, and the
initial positions and velocities of the particles are calculated by
applying the second-order Lagrangian perturbation theory (2LPT) method
proposed by \cite{jenkins10}.
The gravitational force is evaluated through splitting the Newtonian
force law {into long- and short-range forces} (for the Newtonian and
Relativistic
relations, see \citealt{rigopoulos15, hwang12}).
The long-range forces are calculated {from} the Poisson equation in
Fourier space {for} the density mesh built by the Particle-Mesh (PM)
method{.}
{The} short-range {forces are} measured with the Tree method.
We parallelized the GOTPM code implementing MPI and OpenMP with a
one-dimensional domain decomposition ($z$-directional slabs).
We adopt a dynamic domain decomposition, which sets the number of particles in each domain to be equal to within one percent.
Accordingly, the slab width changes during the simulation run.
By using a dynamic domain decomposition, one can easily identify the
neighborhoods of a domain and establish communications between them.
On the other hand, slab domains usually have greater surface-to-volume ratios than ordinary cubic domains (e.g., the orthogonal recursive bisection, \citealt{dubinski96}), and therefore require larger communication volumes between domains.
\subsection{Non-recursive Oct-Sibling Tree}
We have employed a non{-}recursive oct-sibling tree {(OST)} for the
tree-force update.
The {OST} is a structured tree of particles and nodes with mutual
connections established by sibling and daughter pointers.
Each particle has one sibling pointer, and each node has two pointers:
one for its daughter and the other for the next sibling.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{ost}
\caption{Example of the Oct-Sibling Tree structure. Boxes and circles
represent nodes and particles, respectively. The black and blue
arrows are daughter and sibling pointers, respectively.
Each node has a daughter and sibling pointers while each particle has
only a sibling pointer.
}
\label{ostfig}
\end{figure}
First, we create a top-most node encompassing all particles.
We define it as the zero tree level, and its sibling pointer is
directed to the null value (Fig. \ref{ostfig}).
From the top-most node, we recursively divide each node into eight equal-sized cubic subnodes, increasing the tree level by one.
If a subnode contains more particles than a predefined number, we divide it further, again increasing the tree level by one.
The daughter pointer of the node is directed to the first sibling
subnode{,} and the other subnodes are linked by their
sibling pointers.
If the node does not have any subnode, we make a chain of particles
linked by their sibling pointers, and we set the start and end of the
chain connected to the previous and next sibling nodes (or possibly
particles), respectively.
The last sibling at each local tree level has its sibling pointer directed to the mother's next sibling if it exists; if not, we recursively climb the local tree until we find the next sibling of the current tree line.
The advantages of the OST over the traditional oct-tree are the smaller number of pointers it employs and the fact that it requires no recursive tree walk, which incurs the additional cost of a stack to temporarily store the state at the current recursive depth.
Algorithm \ref{ost} gives pseudocode for the non-recursive tree walk with the OST.
The tree walk is taken until the running pointer, $p$, encounters the
null value{.}
During the tree walks the opening of a node is determined by the
\algo{Open} function.
The tree-force update is done either by \algo{GroupForce} or
\algo{ParticleForce} depending on the data type addressed by the
pointer ($p\rightarrow {\bf type}$).
These three functions play a pivotal role in tree walks. \algo{Open}
decides whether to go further into one deeper level (opening the node
and going down to its daughter) or jump to the next sibling under the
opening condition that $\theta > \theta_c$, with $\theta_c$ the
predefined opening threshold.
The \algo{GroupForce} function calculates the gravitational force from
the group of particles using the multipole expansion while
\algo{ParticleForce} calculates the gravitational force from the
particle, $p$.
Thanks to its cost efficiency, this kind of pseudocode is widely applied in our analysis tools, such as the percolation method (Friend-of-Friend halo finding), peak finding in spherical overdensity halo identification, and the two-point correlation measurement.
\begin{figure}
\begin{minipage}{1\columnwidth}
\begin{pseudocode}{GotpmTreeWalk}{p}
\WHILE p \neq \bf{NULL} \DO
\BEGIN
\IF (p\rightarrow {\bf type}) = {\rm NODE} \THEN
\BEGIN
\IF {\bf Open}(p) = {\rm YES} \THEN
\BEGIN
p := p\rightarrow {\bf daughter}
\END
\ELSE
\BEGIN
\boldsymbol{call}~~ {\bf GroupForce}(p)\\
p := p\rightarrow {\bf sibling}
\END
\END
\ELSE
\BEGIN
\boldsymbol{call} ~~{\bf ParticleForce}(p)\\
p := p\rightarrow {\bf sibling}
\END
\END\\
\label{ost}
\COMMENT{$p$ is a running pointer.}
\end{pseudocode}
\end{minipage}
\end{figure}
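The pseudocode above translates almost line for line into a runnable sketch. In the following Python illustration, the \texttt{Node} and \texttt{Particle} classes and the force callbacks are stand-ins for exposition, not the actual GOTPM data structures; the walk uses only sibling and daughter pointers, with no recursion and no explicit stack:

```python
# Minimal sketch of the non-recursive sibling-pointer tree walk.
class Particle:
    def __init__(self, name):
        self.name = name
        self.sibling = None            # chains particles and nodes together

class Node:
    def __init__(self):
        self.daughter = None           # first child (a Node or a Particle)
        self.sibling = None            # next sibling, or mother's next sibling

def tree_walk(root, open_node, group_force, particle_force):
    """Walk the oct-sibling tree without recursion or an explicit stack."""
    p = root
    while p is not None:
        if isinstance(p, Node):
            if open_node(p):           # cell too close: descend
                p = p.daughter
            else:                      # cell far enough: multipole force
                group_force(p)
                p = p.sibling
        else:                          # leaf particle: direct summation
            particle_force(p)
            p = p.sibling

# Build a two-level toy tree: root -> child node -> particles a, b.
root, child = Node(), Node()
a, b = Particle("a"), Particle("b")
root.daughter = child
child.daughter = a
a.sibling = b
b.sibling = child.sibling              # last sibling points past the subtree

visited = []
tree_walk(root, open_node=lambda n: True,
          group_force=lambda n: None,
          particle_force=lambda q: visited.append(q.name))
print(visited)                         # ['a', 'b']
```

The termination condition is simply that the running pointer reaches the null value at the end of the top-level sibling chain, exactly as in Algorithm \algo{GotpmTreeWalk}.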
\subsection{Position Accuracy in GOTPM}
One of the key factors to determine the resolution of Lagrangian codes
is the spatial accuracy or, more specifically, the number of
significant digits involved in the particle position.
Usually, a single-precision floating-point type has been applied to
save the position of a particle because the four-byte single precision
is sufficient for small simulations.
However, as the number of particles in simulations increases, the
position accuracy from single-precision begins to deteriorate.
Since the roundoff error of a single precision variable $\mathcal{A}$
is $\varepsilon_{\rm roundoff}(\mathcal{A}) \sim 10^{-7}
\mathcal{A}$, the maximum roundoff error of the single-precision
position with respect to the mean particle separation is
\begin{equation}
\varepsilon_{\rm roundoff}\left(\frac{r_{\rm max}}{d_{\rm mean}} \right)
\sim 10^{-7} \frac{L_{\rm box}}{L_{\rm box} / N_{\rm p}^{1/3}}
\sim 10^{-7} N_{\rm p}^{1/3}.
\end{equation}
For example, if the total number of particles is $6300^3$, as in the HR4, the maximum roundoff position error is at the sub-percent level of the mean particle separation, i.e., $\varepsilon_{\rm roundoff}(r_{\rm max} / d_{\rm mean}) \sim 10^{-3}$.
On the other hand, in the HR4 as well as in the HR2 \& 3, we
separate the position of a particle {($\boldsymbol r$)} into two
vectors as
\begin{equation}
{\boldsymbol r} = {\boldsymbol L} + {\boldsymbol d},
\label{precision}
\end{equation}
where $\boldsymbol L$ and $\boldsymbol d$ are the Lagrangian position and the displacement from the Lagrangian position of a particle, respectively. We set the particle index in row-major order in the Lagrangian configuration; therefore, no additional memory space is required to compute $\boldsymbol L$.
Since the displacement of a simulated particle over the entire HR4 run is less than ten times the mean particle separation ($d_{\rm max} \lesssim 10 d_{\rm mean}$), the maximum position error in the HR4 is
\begin{equation}
\varepsilon_{\rm roundoff}\left( \frac{r_{\rm max}}{d_{\rm mean}} \right) \sim
10^{-7} \frac{d_{\rm max}}{d_{\rm mean}} \sim 10^{-6} .
\end{equation}
In this way, we significantly enhance the accuracy of the particle position without using any additional memory space.
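The gain from Equation (\ref{precision}) can be illustrated numerically. The toy Python check below is not GOTPM code (units and the sample position are arbitrary choices); it compares a float32 absolute position near the far edge of a 6300-cell box with the split grid-plus-displacement representation:

```python
import numpy as np

# Toy illustration of the precision argument in the text.
n_side = 6300
d_mean = 1.0                            # mean particle separation (grid units)
L_box = n_side * d_mean

true_pos = L_box - 0.123456789 * d_mean # a position near the box edge

# Scheme 1: absolute float32 position.
err_abs = abs(float(np.float32(true_pos)) - true_pos)

# Scheme 2: exact Lagrangian grid point plus float32 displacement.
lagrangian = (n_side - 1) * d_mean      # exactly representable integer
displacement = true_pos - lagrangian
err_split = abs(lagrangian + float(np.float32(displacement)) - true_pos)

# The split representation is several orders of magnitude more accurate.
print(err_abs / d_mean, err_split / d_mean)
```

The first scheme carries a roundoff error proportional to $L_{\rm box}$, while the second carries one proportional only to the displacement, in line with the estimates above.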
\subsection{Simulation Specifics}
The HR4 was performed on the supercomputer of Tachyon-II installed at
KISTI (Korea Institute of Science and Technology Information).
We used 8,000 CPU cores over 50 straight days from late November in
2013 to early February in 2014.
Even with several system glitches over the allocated time period, we succeeded in completing, in about 50 days, the gravitational evolution of $6300^3$ particles in a periodic cubic box of side length $L_{\rm box} = 3150 ~h^{-1}{\rm Mpc}$.
The starting redshift is $z_i=100$, chosen so that particles do not overshoot one grid cell spacing \citep{lukic07} when setting the initial conditions.
{This high initial redshift, combined with 2LPT, ensures an accurate
power spectrum and mass function measurement at $z=0$
\citep{benjamin14}.}
The simulation took 2000 steps to reach the final epoch of $z_0=0$.
The mean particle separation is set to $d_{\rm mean} = 0.5 ~h^{-1}{\rm
Mpc}$ and the corresponding force resolution is $0.1 d_{\rm mean}$.
We adopted a standard $\Lambda$CDM cosmology in concordance with the WMAP 5-year observations.
This choice of cosmology was made for consistency with various
observations including SDSS as well as the previous HRs.
Specifically, the matter, baryonic matter, and dark energy densities
are $\Omega_{m,0} = 0.26$, $\Omega_{b,0}=0.044$, and
$\Omega_{\Lambda,0} = 0.74$, respectively.
The current Hubble expansion is $H_0=100~h$ km/s/Mpc, where $h=0.72$.
The amplitude of the initial matter perturbations is scaled for an
input bias factor, $b_8\equiv 1/\sigma_8=1.26$, where
\begin{equation}
\sigma_8^2 = {1\over 2\pi^2} \int k^2 P(k) |W(kR_8)|^2 {\rm d}k,
\end{equation}
{and $R_8 \equiv 8 h^{-1}{\rm Mpc}$.}
Here, {we used} the spherical top-hat filter $W(x) \equiv 3(x\sin
x-\cos x)/x^3$ in $k$-space.
The particle mass is $m_{\rm p} \simeq 9\times 10^9 ~h^{-1}{\rm M_\odot}$, and the minimum halo mass, corresponding to 30 member particles, is about $M_s \simeq 2.7\times 10^{11} ~h^{-1}{\rm M_\odot}$.
Figure~\ref{fig1} shows the evolution of the non{-}linear
matter power spectrum obtained during the simulation run at several
redshifts.
The dotted lines are the expected linear power spectra, while the solid
lines are the simulated matter power spectra at the same redshift.
The typical non-linear evolution effect can easily be seen on small
scales, where the amplitude of the power spectrum is greater than
the linear prediction due to the gravitational clustering.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{pk}
\caption{Matter power
{spectra from the HR4 simulation ({\it solid}) and from linear
theory ({\it dotted}).
{\it Color}: $z = 100$ ({\it black}), 3.6 ({\it blue}), 0.32 ({\it
red}), and 0 ({\it magenta}).}
\label{fig1} }
\end{figure}
Figure~\ref{HR4} shows a part of the density map of the HR4 at $z=0$,
where a cluster develops at the center through the mergers of several
neighboring overdensity clumps. One may clearly see void regions
(painted in dark blue) with a size of a few tens of $h^{-1}$Mpc.
Some overdense blobs are embedded in the connection of multiple
filamentary structures.
\begin{figure*}[tp]
\centering
\includegraphics[width=17cm]{HR4}
\caption{Simulation density slice map at $z=0$.
High-density regions are {painted with bright color}.
The width of the slice is $7 ~h^{-1}{\rm Mpc}$.
The {two} subfigures are arranged for cascaded zoom-in views of a
cluster at the center of the box in the bottom part of the figure.
We put a scaling bar on the bottom of each panel.
\label{HR4} }
\end{figure*}
\section{Outputs}
{In this section, we describe the main products of the HR4
simulation. They are available at
\url{http://sdss.kias.re.kr/astro/Horizon-Run4/}.}
\subsection{Snapshot and Past Lightcone Space Data}
We have saved snapshot data of particles at twelve redshifts:
$z=0$, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 1, and 4.
Each data set contains the particle position, velocity, and eight-byte
integer ID index.
\begin{figure*}[t!]
\centering
\includegraphics[width=18cm]{fig}
\caption{
{\it Bottom}: Distribution of galaxies with $\mathcal{M}_r < -22.35$
and $0.45<z<0.6$ in the BOSS-CMASS volume-limited sample
\citep{choi15}.
Galaxies are selected from a strip with $-2^\circ \leq $ dec. $\leq
2^\circ$ and $130^\circ \le $ R.A. $\le 230^\circ$ out of the entire
BOSS survey area.
{\it Top}: The `galaxies' in the mock BOSS-CMASS survey performed in
the HR4 simulation.
The absolute magnitude of the observed galaxies is calculated with the
reference redshift of $z=0.55$ and is used to produce the sample with
a constant number density.
The HR4 PSB subhalos are selected as galaxies in accordance with the
galaxy-subhalo correspondence model, and the galaxy number density at
a given redshift is matched with the observed one by adjusting the
low-mass cutoff.
\label{figure} }
\end{figure*}
{To generate the past lightcone space data,} we put an
artificial observer at the origin $(x,y,z) =(0,0,0)$ of the simulation
box.
At each time step, we calculate the comoving distance from the
observer using
\begin{equation}
d_{\rm c} = {c \over H_0} \int_0^z {1\over E(z^\prime)} {\rm
d}z^\prime ,
\end{equation}
where
\begin{multline}
\label{eq:Ez}
E(z) \equiv\\
\sqrt{\Omega_{m,0} (1+z)^3 +
(1-\Omega_{m,0}-\Omega_{\Lambda,0})(1+z)^2 + \Omega_{\Lambda,0}}.
\end{multline}
Then, we search for particles located in a comoving shell whose inner and outer boundaries at the $i$-th step are $d_{{\rm c},i}-\Delta d_{{\rm c},i}/2$ and $d_{{\rm c},i}+\Delta d_{{\rm c},i+1}/2$, respectively, where $\Delta d_{{\rm c},i+1} \equiv d_{{\rm c},i+1} - d_{{\rm c},i}$.
We utilize the periodic boundary conditions by copying the simulation box to extend the all-sky past lightcone data up to $r=3150 ~h^{-1}{\rm Mpc}$, which corresponds to $z \simeq 1.5$.
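For reference, the comoving-distance integral above is easily evaluated numerically; in $h^{-1}{\rm Mpc}$ units the value of $h$ conveniently drops out. The short Python sketch below is an illustration (trapezoidal integration, not the HR4 implementation), using the HR4 cosmological parameters:

```python
import numpy as np

C_KMS = 299792.458                      # speed of light [km s^-1]
Om0, OL0 = 0.26, 0.74                   # HR4 cosmology

def E(z):
    return np.sqrt(Om0 * (1 + z)**3
                   + (1 - Om0 - OL0) * (1 + z)**2 + OL0)

def comoving_distance(z, n=100001):
    """Comoving distance to redshift z in h^-1 Mpc (trapezoidal rule)."""
    zp = np.linspace(0.0, z, n)
    f = 1.0 / E(zp)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(zp)) / 2.0
    return C_KMS / 100.0 * integral     # c/H0 with H0 = 100 h km/s/Mpc

# The lightcone extends to r = 3150 h^-1 Mpc, i.e. z ~ 1.5:
print(round(comoving_distance(1.5)))
```

Evaluating at $z=1.5$ indeed returns approximately $3150 ~h^{-1}{\rm Mpc}$, consistent with the lightcone depth quoted above.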
Due to the finite step size, several undesirable events may be
encountered.
If a particle crosses the shell boundary between two neighboring time
steps, it can be missed or be counted twice in the lightcone space
data.
Therefore, we set a buffer zone laid upon both sides of the shell.
The width of the buffer zone is determined to be equal to the maximum
displacement taken by a particle in a time step.
Using these buffer zones, we can catch these crossing events.
A crossing particle can simultaneously be detected in two contacting
shells or two adjacent buffer zones, and we simply merge the
duplicated particle by averaging position and velocity in the
lightcone space data.
In both the snapshot and past lightcone data, we apply the Ordinary Parallel Friend-of-Friend (OPFOF) code, a parallel version of the FoF algorithm, to identify virialized halos.
The standard percolation length is simply adopted as $l_{\rm link}=0.2
d_{\rm mean}$.
The halo position and peculiar velocity are given as the average
position and velocity of the member particles.
Then we apply the physically self-bound (PSB) subhalo finding method \citep{kim06} to identify subhalos embedded in the FoF halos.
It employs a negative total energy criterion and spherical tidal
boundaries to discard particles from subhalo candidates.
As a representative example, Figure~\ref{figure} compares the
volume-limited galaxy sample from BOSS-CMASS with $r$-band magnitude
limit $\mathcal{M}_r < -22.35$ at $0.45 \leq z \leq 0.6$ ({\it
bottom}) with the mock galaxy sample from the HR4 built by the PSB
subhalo-galaxy correspondence model \citep[{\it top};][]{kim08}.
\subsection{Halo Merger Data}
To build the merger trees we detect halos and subhalos at 75
equally-spaced sparse time steps from $z = 12$ to 0.
The step size is set to be comparable to the rotational period
(i.e. dynamical timescale) of Milky-Way-size galaxies.
Halo merger trees are then built by tracing the gravitationally most
bound member particles (MBPs) of halos.
If a halo does not contain any former MBP, we select a new MBP among
the member particles of the halo.
If exactly one former MBP is found, we regard the halo as the direct descendant of the halo that hosted that MBP at the previous step.
If the halo hosts multiple former MBPs, we treat the halo as a merger remnant, and those ancestor MBPs (or halos) are linked to the remnant, creating a halo merger tree.
These merger trees will be extensively used to build mock galaxies and
to compare with observations \citep{hong15}.
Of course, due to the halo mass resolution of the HR4 ($M_s=2.7\times
10^{11} ~h^{-1}{\rm M_\odot}$), we are unable to resolve mergers of
sub-Milky-Way-mass (sub)halos.
\section{Properties of FoF Halos}
\subsection{Multiplicity Function}
The multiplicity function is defined as
\begin{equation}
f(\sigma,z) \equiv {M\over \rho_b(z)} {{\rm d} n(M,z) \over {\rm d}\ln
\left[ 1/\sigma(M,z) \right]} ,
\end{equation}
where $n(M,z)$ is the cumulative halo mass function at $z$,
$\rho_b${$(z)$} is the background matter density, and
{$\sigma(M,z)$} is the density fluctuation measured on the
mass scale of $M$.
For a given power spectrum $P(k)$, the density fluctuation is
estimated as
\begin{equation}
\sigma^2(M,z) = {D_1^2(z)\over 2\pi^2} \int k^2 P(k) |W(kR(M,z))|^2
{\rm d}k,
\end{equation}
where
\begin{equation}
R(M,z) \equiv \left( { 3 M \over 4\pi \rho_b(z)}\right)^{1/3},
\end{equation}
and
$D_1(z)$ is the growing mode {of} the linear growth factors computed
as
\begin{equation}
D_1(z) = {5\over2}\Omega_{m,0} E(z)\int_{z}^\infty {(1+z^\prime) {\rm
d}z^\prime\over E^3(z^\prime)}.
\end{equation}
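The growth factor defined above is straightforward to evaluate numerically. A minimal Python sketch follows (the integration grid and upper cutoff are assumptions for illustration, not the HR4 code), with a sanity check against the Einstein-de Sitter closed form $D_1(z)=1/(1+z)$:

```python
import numpy as np

Om0, OL0 = 0.26, 0.74                   # HR4 cosmology

def E(z, Om0=Om0, OL0=OL0):
    return np.sqrt(Om0 * (1 + z)**3
                   + (1 - Om0 - OL0) * (1 + z)**2 + OL0)

def D1(z, Om0=Om0, OL0=OL0, z_max=1.0e4, n=400001):
    # D1(z) = (5/2) Om0 E(z) * integral_z^inf (1+z') / E(z')^3 dz'
    zp = np.linspace(z, z_max, n)
    f = (1 + zp) / E(zp, Om0, OL0)**3
    integral = np.sum((f[1:] + f[:-1]) * np.diff(zp)) / 2.0
    return 2.5 * Om0 * E(z, Om0, OL0) * integral

# Sanity check: in an Einstein-de Sitter universe D1(z) = 1/(1+z).
print(D1(0.0, Om0=1.0, OL0=0.0), D1(1.0, Om0=1.0, OL0=0.0))
```

With this normalization, $D_1 \rightarrow 1/(1+z)$ at high redshift, and the present-day growth in the HR4 cosmology is suppressed relative to Einstein-de Sitter, as expected for $\Omega_{m,0} < 1$.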
\begin{table*}[tp]
\centering
\caption{Description of fitting models of the multiplicity
function \label{tabmultiplicity}}
\begin{tabular}{lll}
\toprule
Model & $f(\sigma, z)$ & Parameters \\
\midrule
\cite{sheth99}$^{*}$ &
$\displaystyle A \sqrt{{2 \over \pi}} \chi \left( 1+\chi^{-2p} \right)
\exp\left[-{\chi^2 \over 2}\right]$ &
$(A, p, q) = (0.3222, 0.3, 0.707)$ \\
\citet{jenkins01} &
$\displaystyle A \exp\left(-\abs{\ln \sigma^{-1}+a}^b\right)$ &
$(A,a,b) = (0.315,0.61,3.8)$\\
\cite{warren06} &
$\displaystyle A \left( {\sigma^{-a} + b }\right)
\exp{\left[-{c\over\sigma^2}\right]}$ &
$(A, a, b, c) = (0.7234, 1.625, 0.2538, 1.1982)$ \\
\cite{tinker08}$^{\dagger}$ &
$\displaystyle A \left( {\sigma^{-a} + b} \right) \exp\left[-{c\over
\sigma^2}\right]$ &
$(A, a, b, c) = (0.745, 1.47, 0.250, 1.19)$ \\
\cite{crocce10}$^{\dagger}$ &
$\displaystyle A \left( {\sigma^{-a} + b }\right)
\exp{\left[-{c\over\sigma^2}\right]}$ &
$(A, a, b, c) = (0.58, 1.37, 0.30, 1.036)$ \\
\cite{manera10}$^{* \dagger}$ &
$\displaystyle A \sqrt{{2 \over \pi}} \chi \left( 1+\chi^{-2p} \right)
\exp\left[-{\chi^2 \over 2}\right]$ &
$(A, p, q) = (0.3222, 0.248, 0.709)$ \\
\cite{bhattacharya11}$^*$ &
$\displaystyle A\sqrt{2\over \pi} \chi^r (1+\chi^{-2p})
\exp\left[-{\chi^2 \over2}\right]$ &
$(A, p, q, r) = (0.333, 0.807, 0.788, 1.795)$ \\
\cite{angulo12} &
$\displaystyle A\left({B\over\sigma} +1\right)^q \exp\left[-{C\over
\sigma^2}\right]$ &
$(A, B, C, q) = (0.201, 2.08, 1.172, 1.7)$ \\
\cite{watson13} &
$\displaystyle A \left({\sigma^{-a} + b } \right) \exp\left[-{c \over
\sigma^2}\right]$ &
$(A, a, b, c) = (0.589, 2.163, 0.479, 1.210)$ \\
\bottomrule
\end{tabular}
\tabnote{
$^{*}$ $\chi \equiv \sqrt{q} \delta_{\rm c} / \sigma$, where
$\delta_{\rm c} = 1.686$ is the density contrast at the collapse epoch
{in an Einstein-de Sitter universe}. \\
$^{\dagger}$ Only the case at $z = 0$ is given here.}
\end{table*}
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{fofresc}
\caption{%
{\it Bottom}: Multiplicity functions {of HR4 FoF halos} at $z=0$, 0.5,
1, 1.99, 4, and 5.
The solid curve is proposed by \citet{bhattacharya11}, B11.
{\it Top}: Fractional deviations of the simulated ({\it symbols}) and
modeled ({\it lines}) multiplicity functions from B11.
\label{fofresc}
}
\end{figure}
Figure~\ref{fofresc} shows the multiplicity function from the HR4 as
well as a number of previous fitting models of the multiplicity function
(see Table~\ref{tabmultiplicity} and references therein).
The top panel shows the deviations of the multiplicity functions with respect to the model of \citet{bhattacharya11}, hereafter B11.
For clarity, in the cases of \cite{crocce10} and \cite{manera10} we
only show the fitting function obtained at $z=0$.
All the previous fitting functions deviate significantly from each other at high mass scales.
This may be caused by the exponential cutoff, which produces large uncertainties when fitting the steep slope.
The simulated multiplicity function shows a clear redshift dependence, and therefore a single functional form may not be sufficient.
For large values of $\ln (1/\sigma)$, the redshift evolution of
multiplicity functions is substantial \citep{lukic07}, and the overall
amplitude seems to increase as the redshift decreases.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{exefit2}
\caption{
Redshift-dependent $\chi$- and amplitude-corrections in Equation
\ref{kimeq} showing the best fit to the HR4 FoF multiplicity
functions.
The dotted lines are the analytic fitting functions as shown in
Equations (\ref{chiz}) and (\ref{psiz}).
\label{exefit}
}
\end{figure}
We fit the simulated multiplicity function with a variant of the B11
function with an amplitude changing with redshift as
\begin{eqnarray}
f_{\rm Kim}(\chi_{\rm L},z) \equiv \varphi(z)f_{\rm B11}(\chi_{\rm
L}(M,z) -\chi_s(z)).
\label{kimeq}
\end{eqnarray}
Here {$\chi_{\rm L}(M,z) \equiv \sqrt{q} \delta_{\rm c} /
\sigma(M,z)$}, where $\delta_{\rm c} = 1.686$ is the density
contrast at the collapse epoch in an Einstein-de Sitter universe, $q$
is a fitting parameter in the B11 function (see
Table~\ref{tabmultiplicity}), and $\chi_s(z)$ and $\varphi(z)$ are
redshift-dependent $\chi$- and amplitude-corrections, respectively.
The value of $\delta_{\rm c}$ depends slightly on the cosmology;
however, for consistency with previous work, we use the constant
Einstein-de Sitter value of 1.686 \citep[e.g.,][]{bhattacharya11}.
We fit the simulated multiplicity function using least-squares
minimization and obtain the empirical fitting functions as
\begin{eqnarray}
\label{chiz}
\chi_s(z) &=& 0.09 \tanh^2( 0.9z ) + 0.01 \\
\label{psiz}
\varphi(z) &=& \exp\left(-{z\over 10}\right) + 0.025.
\end{eqnarray}
Figure \ref{exefit} shows the redshift evolution of $\chi_s(z)$ (left)
and $\varphi(z)$, respectively.
At high redshifts, the HR4 simulation underpopulates halos compared to
B11, while the HR4 has more halos than B11 in the recent epoch.
Also, as we move to higher redshift, $\chi_s (z)$ increases and reaches
about 0.098 at very high redshift.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{fofrescfinal}
\caption{
Simulated and modeled multiplicity functions with respect to {our new
fitting model ($f_{\rm Kim}(\sigma, z)$).}
Same symbol and color conventions as in Figure \ref{fofresc}.
\label{fofrescfinal}
}
\end{figure}
Figure~\ref{fofrescfinal} shows the scatter of the simulated
multiplicity function ({symbols}) over {our} fitting
model ($f_{\rm Kim}$) with various former models (lines) for
comparison.
While other fitting models match the simulated multiplicity function
only at small scales ($\ln (1/\sigma) < 0.3$), our new fitting model
agrees with the HR4 on most scales in the redshift range between $z =
5$ and 0, within a 2.5\% fluctuation level.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{qs}
\caption{Shape distributions of FoF halos in various mass samples
shown in the $q$-$s$ diagram at $z=0$.
We mark iso-density contours enclosing 25\% halos around peak
distributions.
\label{qs}
}
\end{figure}
The origin of the redshift evolution of the multiplicity function is
not clearly known, but it might be partly explained by the following
arguments.
First, even the 2LPT may be somewhat insufficient to generate accurate
initial conditions of the simulation.
Such effect would be avoided by introducing higher-order
approximations (for comparison between the Zel{'}dovich
approximation and 2LPT, see \citealt{crocce06}).
However, the target redshifts to measure the halo mass function are
sufficiently lower ($z\lesssim 5$; see the discussions made by
\citealt{tatekawa07}) compared to the initial epoch of $z_i=100$ and,
consequently, the redshift evolution may not be caused by numerical
transients.
Second, it may be due to the limitation of linear perturbation
theory or the linear growth model in calculating
{$\sigma(M,z)$} after the onset of {nonlinear} gravitational
clustering.
The multiplicity function model {assumes} that there is no redshift
dependence other than that of the matter fluctuation, $\sigma(M,z) = D(z)\sigma(M)$.
But as the non{-}linear growth becomes significant at lower redshifts,
one should consider the effect of non{-}linear gravitational evolution
of density fields.
\subsection{ Halo Shape }
\subsubsection{Structure}
The shape tensor $S_{ij}$ of an FoF halo is defined as
\begin{equation}
S_{ij} = \sum_{k}^{N_m} ({x^k_i - {\bar x}_i })( {x^k_j -{\bar x}_j}),
\end{equation}
where $i$ and $j$ are structural axes, ${\bar x}$ is the position of
the center of the halo mass and $N_m$ is the number of member particles.
The three eigenvalues of the shape tensor $r_3 \leq r_2 \leq r_1$ are
respectively the {lengths of the} minor, intermediate, and
major axes of the corresponding ellipsoid.
The prolateness and sphericity of a halo are defined as
\begin{eqnarray}
q & \equiv & { r_2 \over r_1 } \\
s & \equiv & { r_3 \over r_1 } .
\end{eqnarray}
A halo is respectively defined as prolate, oblate, or spherical if
\begin{eqnarray}
&& q \ll 1, \\
&& q \simeq 1, s \ll 1, \\
&& s \simeq 1 .
\end{eqnarray}
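In practice, the shape measurement amounts to diagonalizing $S_{ij}$. A minimal sketch follows the convention above, in which the sorted eigenvalues of the shape tensor are used directly as the axis lengths $r_3 \le r_2 \le r_1$.

```python
import numpy as np

def halo_shape(pos):
    # pos: (N_m, 3) array of member-particle positions
    d = pos - pos.mean(axis=0)          # offsets from the halo center of mass
    S = d.T @ d                         # shape tensor S_ij = sum_k d_i^k d_j^k
    r3, r2, r1 = np.linalg.eigvalsh(S)  # eigenvalues in ascending order
    return r2 / r1, r3 / r1             # prolateness q, sphericity s
```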
Figure~\ref{qs} shows the probability distributions of ($q$, $s$) with
a contour containing 25\% of halos around the peak distribution in
four different mass samples of FoF halos.
For halo samples more massive than $2 \times 10^{12}~h^{-1} {\rm
M_\odot}$, we can clearly see that halos become more prolate as the
mass increases, in agreement with theoretical predictions
\citep[e.g.][]{rossi11}.
Less massive halos with $M_s \leq M < 5 \times 10^{11}~h^{-1} {\rm
M_\odot}$ have their distribution substantially shifted to the
lower-right corner in this diagram, i.e., are more oblate than more
massive samples.
This may be fully explained by the particle discreteness effect, which
will be described {in the next section}.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{comsphericity_cbp}
\caption{Resolution dependence of the roundness parameter at $z=0$
for HR4 ({\it blue}), GR1 ({\it green}), and GR2 ({\it magenta})
containing 25\% of halos around peak probabilities.
{\it Bottom}: {Roundness parameter as a function of halo mass.}
A vertical bar marks the mass scale equivalent to
the mass of 1000 particles {($10^3 m_{\rm p}$)} combined.
{The best-fitting functions of $\mathcal{R}(M)$ from halos
with lower-mass cutoff $10^3 m_{\rm p}$ ($\mathcal{R}_{1000}^{\rm
fit}$; {\it yellow}) and $5\times 10^3 m_{\rm p}$
($\mathcal{R}_{5000}^{\rm fit}$; {\it red dash}) are shown.}
{\it Top}: Deviation of the roundness parameter from $\mathcal{R}^{\rm
fit}_{5000}$ with respect to the halo mass.
Here we do not show the distribution below the mass of {$10^3
m_{\rm p}$}.
\label{comsphericity}
}
\end{figure}
\subsubsection{Roundness}
We now investigate the FoF halo shape from a different angle.
First, we define the roundness as
\begin{equation}
\mathcal{R} \equiv \sqrt{qs} = \sqrt{{r_2 r_3 \over r_1^2}} .
\end{equation}
To measure the resolution effects on halo shape, we ran two additional
higher-resolution simulations called Galaxy Run 1 (GR1) and Galaxy Run
2 (GR2).
These simulations used $2048^3$ particles.
We employed the same cosmological model but different simulation box
sizes ($L_{\rm box}^{\rm GR1} = 512{~h^{-1}{\rm Mpc}}$ \& $L_{\rm
box}^{\rm GR2} = 256 ~h^{-1}{\rm Mpc}$).
The mean particle separations of GR1 and GR2 are $d_{\rm mean}=0.25$
and $0.125~h^{-1}{\rm Mpc}$,
respectively, while the corresponding force resolutions are
$0.025$ and $0.0125 {~h^{-1}{\rm Mpc}}$.
Therefore, the mass and force resolutions are substantially enhanced
with respect to the HR4.
Figure~\ref{comsphericity} shows the distribution of $\mathcal{R}$ of
FoF halos at $z=0$.
In each simulation, at scales of $M \gg 10^3 m_{\rm p}$,
$\mathcal{R}$ tends to be independent of the simulation resolution.
On the other hand, at small mass scales ($M \lesssim 10^3 m_{\rm p}$)
each simulation seems to underestimate $\mathcal{R}$, probably due to
the small number of particles.
\cite{hoffmann14} examined the discreteness effect on a modeled halo
for a given shape
and found that the required number of particles should not be less
than 1000 for a reliable shape determination (see Fig. A2 in their
paper).
From our three simulations we found a fitting formula of the roundness
parameter as a function of the halo mass for massive halos with $M
\geq 10^3 m_{\rm p}$:
\begin{multline}
{\mathcal{R}}^{\rm fit}_{1000} (M) = a_\mathcal{R} \log_{10} \left({ M
\over 10^7~h^{-1}{\rm M_\odot} }\right) \\
\times \exp \left[ -b_\mathcal{R} \log_{10} \left({ M \over
10^{7}~h^{-1}{\rm M_\odot} }\right) \right] ,
\label{R1000}
\end{multline}
where $(a_{\mathcal{R}}, b_{\mathcal{R}}) = (0.55, 0.28)$ (yellow line
in Figure~\ref{comsphericity}).
One should note that this fitting is only valid for $M\gtrsim
1.4\times 10^{11} ~h^{-1}{\rm M_\odot}$,
which is set by the combined mass of 1000 particles of the GR2.
We do not find any turn-around mass scale of $\mathcal{R}$ in the
available mass range in this study.
If we only consider halos with $M \geq 5 \times 10^3 m_{\rm p}$, the
distribution of $\mathcal{R}$ follows
\begin{equation}
\mathcal{R}^{\rm fit}_{5000} (M) = A_{\mathcal{R}} \log_{10} \left({ M
\over 10^{12}~h^{-1}{\rm M_\odot} } \right) + B_{\mathcal{R}},
\label{R5000}
\end{equation}
where $(A_{\mathcal{R}}, B_{\mathcal{R}}) = (-0.07, 0.68)$ (red dashed
line in Figure~\ref{comsphericity}).
Similar to the case of $\mathcal{R}^{\rm fit}_{1000}$, the above fitting
is only valid for $M \gtrsim 7 \times 10^{11}~h^{-1}{\rm M_\odot}$.
Both $\mathcal{R}^{\rm fit}_{1000}$ and $\mathcal{R}^{\rm fit}_{5000}$
describe well the change of $\mathcal{R}$, but they diverge below $M =
7 \times 10^{11}~h^{-1}{\rm M_\odot}$, the mass scale of about $5
\times 10^3 m_{\rm p}$ of GR2.
Therefore, we may need a simulation with a higher mass resolution than
GR2 to see which fitting formula describes the roundness parameter of
low-mass halos.
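Equations (\ref{R1000}) and (\ref{R5000}) are straightforward to evaluate; the sketch below also illustrates that the two fits agree near $M \sim 10^{12}~h^{-1}{\rm M_\odot}$.

```python
import numpy as np

def R_fit_1000(M):
    # Equation (R1000); valid for M >~ 1.4e11 Msun/h
    x = np.log10(M / 1e7)
    return 0.55 * x * np.exp(-0.28 * x)

def R_fit_5000(M):
    # Equation (R5000); valid for M >~ 7e11 Msun/h
    return -0.07 * np.log10(M / 1e12) + 0.68
```

At $M = 10^{12}~h^{-1}{\rm M_\odot}$ both formulas give $\mathcal{R} \approx 0.68$, while they diverge at lower masses as noted above.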
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{sphericity_red}
\caption{Change of peak position in the $\mathcal{R}$ distribution at
several redshifts in the HR4.
The lower bound of the $x${-}axis corresponds to $10^3 m_{\rm p}$.
\label{sphericity_red}
}
\end{figure}
Figure~\ref{sphericity_red} shows the redshift evolution of
$\mathcal{R}$ in the HR4.
At higher redshift, halos tend to {have a smaller value of $\mathcal{R}$}
for a given mass.
However, it is important to note that this tendency does not
necessarily imply a shape evolution of individual virialized halos,
because halos also grow in mass with time.
\subsection{Halo Orientations}
In this section{,} we study the angle between the halo
rotational and structural axes.
The directional angle between them is calculated as
\begin{equation}
\theta_i = \cos^{-1} |\hat{\boldsymbol{J}} \cdot
\hat{\boldsymbol{r}}_i | ,
\end{equation}
where $\hat{\boldsymbol J}$ is the normalized rotational axis and
$\hat{\boldsymbol r}_i$ is the unit vector in the direction of the
structural axis $i$.
We define the probability distribution function of the directional
angles
\begin{equation}
p^\prime(\theta) \equiv { {\rm d} P(\theta) \over {\rm d} \cos \theta} ,
\end{equation}
where $P(\theta)$ is the cumulative probability of a directional angle
greater than $\theta$.
Then, for a random orientation $p^\prime(\theta)$ is uniform over the
angle of $0^\circ \le \theta < 90^\circ$.
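The directional angles are simple to compute once the rotational axis and the unit structural axes are known; a minimal sketch:

```python
import numpy as np

def axis_angles(J, axes):
    # Angles (degrees) between the rotational axis J and the three
    # structural axes; rows of `axes` are unit vectors r_i
    Jhat = J / np.linalg.norm(J)
    return np.degrees(np.arccos(np.abs(axes @ Jhat)))
```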
Figure~\ref{Orien} shows the relations between the rotation and halo
axes as a function of halo mass.
The rotational axis tends to be orthogonal to the major axis (bottom
panel), which means that halos tend to swing around their major axis.
Moreover, from the upper two panels, it can be noted that the
rotational axis is more aligned with the minor axis than the
intermediate axis. This alignment becomes stronger as the halo mass
increases.
In addition, we find that this tendency still holds for low-mass halos
with ${M \lesssim} 10^3 m_{\rm p}$, implying that the halo rotation is less
affected by the mass resolution limit than the halo shape.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{Orien}
\caption{Orientations of {the} rotational axis with respect to
the major ({\it bottom}), intermediate ({\it middle}), and minor
({\it top panel}) axes at $z=0$.
Probability contours are drawn around the peak position at each mass
bin enclosing 25\%, 50\%, and 75\% of halos respectively.
Most halos are positioned at $\theta_1\simeq 90^\circ$, $\theta_2
\simeq 0^\circ$, and $\theta_3 \simeq 0^\circ$.
\label{Orien}
}
\end{figure}
\section{Two-Point Correlation Function}
In this section, we implement the effects of redshift-space distortions
on the clustering of mock galaxies and measure the change of clustering in
the radial ($\pi$) and tangential ($\sigma$) directions.
In the 3-dimensional space, the radial separation between two points
(${\boldsymbol r}_1$ and ${\boldsymbol r}_2$) is defined as
\begin{equation}
\pi \equiv \frac{\left|{\boldsymbol d}_{12} \cdot {\boldsymbol
R}_{12}\right|} { \left|{\boldsymbol R}_{12}\right|},
\end{equation}
where ${\boldsymbol R}_{12} \equiv ({\boldsymbol r}_1+{\boldsymbol
r}_2)/2$ and {${\boldsymbol d}_{12} \equiv {\boldsymbol r}_1 -
{\boldsymbol r}_2$.
The tangential distance between them is simply obtained with
\begin{equation}
\sigma =\sqrt{ {\boldsymbol d}_{12}\cdot{\boldsymbol d}_{12} -
\pi^2}.
\label{sigma}
\end{equation}
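These separations can be computed directly from a pair of comoving position vectors; a minimal sketch:

```python
import numpy as np

def pi_sigma(r1, r2):
    # Radial (pi) and tangential (sigma) separations of a pair
    d = r1 - r2                      # pair separation vector d_12
    R = 0.5 * (r1 + r2)              # pair midpoint R_12
    pi = abs(d @ R) / np.linalg.norm(R)
    sigma = np.sqrt(d @ d - pi ** 2)
    return pi, sigma
```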
The correlation function of a point set can easily be calculated by
Hamilton's method \citep{hamilton93}:
\begin{equation}
\xi(\sigma,\pi) = {\boldsymbol{DD}(\sigma,\pi)
{\boldsymbol{RR}}(\sigma,\pi) \over {\boldsymbol{DR}}(\sigma,\pi)^2}
- 1.
\end{equation}
Here,
$\boldsymbol{DD}$ is the number of pairs {of} real points,
$\boldsymbol{DR}$ is the number of cross pairs between the real and
random points, and $\boldsymbol{RR}$ is the number of pairs
{of} random points at the two-dimensional
separations of $\sigma$ and $\pi$.
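The estimator itself is a one-line operation on the binned pair counts; a sketch (a convenient property of this estimator is that overall normalizations of the pair counts cancel):

```python
def xi_hamilton(DD, DR, RR):
    # Hamilton (1993) estimator on binned pair counts
    return DD * RR / DR ** 2 - 1.0
```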
We use the PSB subhalo catalog from the HR4 snapshot at $z = 0$ as our
mock galaxy sample.
By adopting the far-field approximation and using the periodic
boundary condition of the HR4 simulation, we produce the
redshift-space distortion in the $x$-direction,
\begin{equation}
{ x}^\prime = { x} + {{ v_x}\over H_0},
\label{xdistort}
\end{equation}
where $H_0$ is the Hubble parameter at $z=0$ and ${ v_x}$ is the
peculiar velocity along the $x${-}axis.
Since we adopt the far-field approximation, $\pi$ is the position
difference in the $x${-}axis and $\sigma$ is the
separation in the $y$-$z$ plane.
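In code, the mapping of Equation (\ref{xdistort}) with the periodic boundary condition amounts to the following sketch, where the convention $H_0 = 100~h~{\rm km\,s^{-1}\,Mpc^{-1}}$ (i.e., $H_0 = 100$ in $h^{-1}{\rm Mpc}$ units) is assumed:

```python
def to_redshift_space(x, vx, H0=100.0, Lbox=3150.0):
    # Shift comoving x-positions (Mpc/h) by the peculiar-velocity term
    # (km/s), wrapping with the periodic simulation box
    return (x + vx / H0) % Lbox
```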
We then construct a mass-limited mock galaxy sample with PSB subhalos
satisfying $M \geq 2.60 \times 10^{12}~h^{-1}{\rm M_\odot}$.
The average number density of the mass-limited PSB subhalo sample is
${\bar n} = 1.48 \times 10^{-3}~h^3{\rm Mpc^{-3}}$, which is
comparable to} the number density of the volume-limited sample of the
SDSS Main galaxies with absolute magnitude limit of $\mathcal{M}_r
-5\log_{10} h <-21$ \citep{choi10}.
Figure~\ref{corr} shows the effects of redshift-space distortions on
the correlation map.
The left panel shows the correlation of our mock galaxy sample in real
space while the effects of redshift-space distortion are applied in
the right panel.
The shape of $\xi(\pi,\sigma)$ is distorted along the
line{-}of{-}sight ({LoS,} or in the
$\pi${-}direction).
At the very center, the finger-of-god effect can be seen as spikes
stretching along the $\pi${-}direction (for a better view
around the center, see Figure~\ref{corr_zoom}).
On the other hand, on larger scales, the correlation function along
the {LoS} contracts to smaller scales.
\begin{figure*}[tpb]
\centering
\subfigure[real space]{ \includegraphics[width=232pt]{hr4psb_nopv} }
\subfigure[redshift space]{ \includegraphics[width=232pt]{hr4psb_pv} }
\caption{Correlation functions of mock galaxies measured without ({\it
left}) and with ({\it right})
redshift-space distortion effects.
The radius of each circular region is $130 ~h^{-1}{\rm
Mpc}${,} and the solid circle marks
the BAO peak position ($r_{\rm peak} \simeq 107 ~h^{-1}{\rm Mpc}$).
The color bar marks the correlation in logarithmic spacing.
\label{corr}
}
\end{figure*}
\begin{figure}[pb]
\centering
\includegraphics[width=8.4cm]{hr4psb_inner_pv}
\caption{%
Same as the right panel of Figure~\ref{corr}, but zoomed in to clearly
show the finger-of-god effect.
\label{corr_zoom}
}
\end{figure}
The position of the BAO peak in real space can be estimated from the
linear correlation function
\begin{equation}
\xi_{\rm linear}(r) \equiv {1\over 2\pi^2} \int k^2 P(k) {\sin(kr)
\over kr} {\rm d}k.
\end{equation}
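The integral can be evaluated numerically for a tabulated power spectrum; a sketch using a simple trapezoidal rule (the tabulated $P(k)$ is an input assumption here):

```python
import numpy as np

def xi_linear(r, k, Pk):
    # (1 / 2 pi^2) * int k^2 P(k) sin(kr)/(kr) dk on a tabulated P(k),
    # evaluated with the trapezoidal rule
    integrand = k ** 2 * Pk * np.sin(k * r) / (k * r)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k)) \
        / (2.0 * np.pi ** 2)
```

For example, for a toy spectrum $P(k) = e^{-k}$ the integral has the closed form $\pi^{-2}(1+r^2)^{-2}$, which the quadrature reproduces.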
For the WMAP 5-year standard $\Lambda$CDM cosmology, the BAO
peak in real space is located at $r_{\rm peak} \simeq 107~h^{-1}{\rm
Mpc}$, shown as a solid circle in Figure~\ref{corr}.
Figure~\ref{bao} shows the two-point correlation functions for
different values of the directional cosine to the {LoS}
direction $\mu$ in real space ({\it top}) and redshift space ({\it
bottom panel}).
In real space, the correlation function around the BAO peak is
independent of the directional angle ($\theta$) because of the
isotropic distribution.
On the other hand, the correlation functions measured in redshift
space are increased as $\theta$ increases, because galaxy pairs are
stretched along the {LoS}.
It is worth noting that the BAO peak in the tangential direction
($\theta = 90^\circ$) cannot be detected.
Moreover, the correlation function along the {LoS} has a peak
of nearly zero height, while the correlation functions for
$\theta<30^\circ$ fall below zero on scales smaller than the peak
position.
Figure~\ref{baoavg} shows the average correlation function over the
directional cosine,
\begin{equation}
\xi(r) \equiv \int_0^1 \xi(r,\mu) {\rm d}\mu .
\end{equation}
In both real and redshift spaces, the BAO peak from the HR4 is broadened
and shifted toward smaller scales compared to a simple estimate from the
linear correlation function of biased objects
\begin{equation}
\xi_{\rm linear, bias}(r) = b^2 \xi_{\rm linear}(r),
\end{equation}
where $b = 1.14$ is the bias factor.
This is due to {the} nonlinear gravitational evolution of
galaxies.
In redshift space{,} the BAO peak is further
broadened{,} and it is hard to clearly find the position of
the BAO peak.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{bao}
\caption{Correlation functions between PSB halo pairs separated along
the directions of
$\theta=10$, 20, 30, 40, 50, 60, 70, 80, and 90 degrees in real space
({\it top})
and redshift-distorted space ({\it bottom panel}).
In the bottom panel, the top-most line is the correlation function for
$\theta=80^\circ$, and the correlation function decreases as $\theta$
decreases.
The dotted line is the linear prediction with a bias factor
$b=1.14$.
As the direction cosine ($\mu \equiv \cos\theta$) increases, the noise
of the correlation functions decreases because the number of pairs
along the given direction increases ($N_{\rm pair} \propto \mu$).
The correlation functions along $\theta=90^\circ$ {are}
not shown due to large noise.
\label{bao}
}
\end{figure}
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{baoavg}
\caption{Correlation function averaged over the directional cosine.
The thick solid line is the averaged value of the correlation
functions in real space ({\it top}) and redshift space ({\it
bottom panel}).
The dotted line is the linear prediction with bias $b=1.14$.
\label{baoavg}
}
\end{figure}
\section{Mass Accretion History}
We use merger trees to study the mass accretion history of halos in
several mass samples.
We define the mass accumulation history as
\begin{equation}
\Psi(M_0,z) \equiv {M(z) \over M_0},
\end{equation}
where $M_0$ is the final halo mass at $z=0$.
The half-mass epoch ($z_{1/2}$) is defined as the time when
$M(z_{1/2}) = M_0/2$.
We measure the evolution of the halo mass along the major descendant trees
and show the results in Figure~\ref{mergingdist}.
It can be seen that the half-mass redshift tends to decrease as the
final halo mass increases.
For example, {low-mass} halos with $10^{12} \le M_0/h^{-1}{\rm
M_\odot} < 3 \times 10^{12}$ {tend to} have {their
half-mass around} $z_{1/2} \simeq 1$ on average,
while {more massive} halos with $10^{14} \leq M_0/h^{-1}{\rm
M_\odot} < 5 \times 10^{14}$ {tend to} have {a later
half-mass epoch} $z_{1/2} \simeq 0.5$.
This result is consistent with the observations that galaxy clusters
formed relatively recently (in terms of the epoch when a cluster
obtains half of the current mass) while individual satellite galaxies
seem to form at relatively higher redshift.
\begin{figure}[tp]
\centering
\includegraphics[width=8.5cm]{mergingdist}
\caption{Evolution of halo mass with redshift for several mass samples.
In the top panel, we show the change of $\psi$ with redshift,
while $\Psi$ is shown in the bottom panel.
Lines and shaded regions mark the mean and $1\sigma$ distributions of mass history.
For each sample, we cut the data below the mass resolution of the simulation.
\label{mergingdist}
}
\end{figure}
We empirically fit $\Psi(z)$ with a log-linear function of redshift as
\begin{equation}
\Psi(M_0,z) = \exp \left[-\psi(M_0) z \right],
\label{maccrete}
\end{equation}
and found a best-fit relation as
\begin{equation}
\psi(M_0) \simeq 0.32 \log_{10} \left({ M_0 \over 10^{12}~h^{-1}{\rm M_\odot} }\right) + 0.56.
\end{equation}
This fitting function reproduces the distribution for $10^{12} \le
M/h^{-1}{\rm M_\odot} <5\times 10^{15}$ quite well at an early epoch ($z
\gtrsim 0.7$).
On the other hand, at low redshifts ($z\lesssim 0.7$) the accelerated
expansion driven by dark energy begins to overpower the
gravitational attraction and, therefore, halo mergers are suppressed.
The sharp increase of $\psi$ near the current epoch is caused by
numerical noise.
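The best-fit relation can be inverted to estimate the half-mass epoch defined earlier, since $\Psi(M_0, z_{1/2}) = 1/2$ implies $z_{1/2} = \ln 2/\psi(M_0)$; a sketch:

```python
import numpy as np

def psi(M0):
    # Best-fit mass-accretion slope; M0 in units of Msun/h
    return 0.32 * np.log10(M0 / 1e12) + 0.56

def z_half(M0):
    # Half-mass redshift from Psi = exp(-psi * z) = 1/2
    return np.log(2.0) / psi(M0)
```

This gives $z_{1/2} \approx 1.2$ for $M_0 = 10^{12}~h^{-1}{\rm M_\odot}$ and $z_{1/2} \approx 0.6$ for $10^{14}~h^{-1}{\rm M_\odot}$, roughly consistent with the values quoted above.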
Note that \citet{dekel13} also found an exponential form of mass
growth, although their mass accretion rate ($\psi(10^{12}~h^{-1}{\rm
M_\odot}) = 0.76$) is slightly higher than ours
($\psi(10^{12}~h^{-1}{\rm M_\odot}) = 0.56$).
We want to point out that the specific mass accretion rate per unit
redshift interval, defined as
\begin{equation}
\left|{{\rm d}M\over M {\rm d}z}\right| = \left|{M_0\over M} {{\rm d}
\Psi \over {\rm d}z}\right| \approx \psi(M_0),
\end{equation}
is roughly constant with redshift and depends only on the current sample mass.
Then the specific mass accretion rate per unit physical time can be calculated as
\begin{eqnarray}
\Upsilon_M (M_0,z) &\equiv&
\left|{{\rm d}M\over M {\rm d}t}\right| \\
&=& { \psi(M_0) \over H_0} {E(z)\over 1+z }.
\end{eqnarray}
We now introduce a star formation efficiency, defined as the ratio of
the specific stellar to total mass accretion rates of halos,
\begin{equation}
b_{\star} (M_0,z) \equiv {\Upsilon_\star \over \Upsilon_M},
\end{equation}
where $\Upsilon_\star \equiv {\rm d}M_\star/M_\star {\rm d}t$ is the specific stellar mass accretion rate.
As a simple case, we assume that the stellar mass evolution of a halo
is fully determined by the evolution of its total mass.
In this case, the spectral indices of stellar mass-to-total mass,
stellar mass accretion rate-to-stellar mass, and star formation
efficiency-to-total mass, respectively defined as
\begin{eqnarray}
\gamma(M_0, z) &\equiv& { {\rm d}\ln M_\star \over {\rm d}\ln M} \\
\beta(M_0, z) &\equiv& { {\rm d}\ln \Upsilon_\star \over {\rm d}\ln M_\star} \\
\epsilon(M_0, z) &\equiv& { {\rm d}\ln b_\star \over {\rm d}\ln M},
\end{eqnarray}
are fully determined by the redshift and the final halo mass.
By applying the galaxy-subhalo correspondence model to relate
halo mass and galaxy luminosity, \cite{kim08} showed that
stellar luminosity (or stellar mass if a constant $M_\star/L_\star$ is
assumed) shows a good relation to the halo mass with a power-law
index $\gamma \sim 0.5$ for the SDSS main galaxy sample when $M
\gtrsim 5\times 10^{11} h^{-1}{\rm M_\odot}$.
A similar slope was reported by \cite{kravtsov14} from the BCG samples.
On the other hand, \cite{abramson14} reported $\beta\sim -0.3$ for
SDSS DR7 galaxies with $9.5 \le \log_{10} (M_\star/h^{-1} M_\odot) \le
11.5$.
The spectral index of the stellar mass accretion rate-to-total mass
can be expressed as a combination of the above spectral indices:
\begin{eqnarray}
\eta(M_0, z) &\equiv& { {\rm d}\ln \Upsilon_\star \over {\rm d}\ln M} \\
&=& \beta(M_0, z) \gamma(M_0, z) \\
&=& \epsilon(M_0, z) + {1\over \psi (M_0)}\left[
{{\rm d}\ln E(z) \over {\rm d}z} -1 \right],
\end{eqnarray}
where $E(z)$ was defined in Equation~(\ref{eq:Ez}).
The effect of the parameters on the relative star formation efficiency is
shown in Figure~\ref{sbias}.
As can be seen from the figure, the relative star formation efficiency
is higher (i.e., $\eta$ is smaller) for more massive halos.
\begin{figure}[tp]
\centering
\includegraphics[width=8.4cm]{bias}
\caption{Relative star formation efficiency scaled with the current
efficiency, $b_{\star0}$.
Clockwise from the lower-left panel, the halo masses are
$M_0=10^{12}~h^{-1}{\rm M_\odot}$, $10^{13}~h^{-1}{\rm M_\odot}$,
$10^{14}~h^{-1}{\rm M_\odot}$, and $10^{15}~h^{-1}{\rm M_\odot}$,
respectively.
In the legend, we list the values of $\eta$ from the bottom
curve.
\label{sbias}
}
\end{figure}
\section{Summary\label{sec:con}}
{We ran a new cosmological $N$-body simulation called the Horizon Run 4 (HR4) simulation.
By adopting a standard $\Lambda$CDM cosmology in concordance with WMAP
5-year observations, the HR4 simulates a periodic cubic box of a side
length, $L_{\rm box} = 3150 h^{-1}{\rm Mpc}$ with $6300^3$ particles.
With its wide range of mass and length scales, the HR4 can}
provide the cosmology community with
a competitive data set for the study of cosmological models
and galaxy formation in the context of large-scale environments.
{The main products of the HR4 are as follows.
First, we saved the snapshot data of the particles within the whole
simulation box at 12 different redshifts from $z = 4$ to 0.
We also built past lightcone data of particles that cover the
full sky up to $z \simeq 1.5$.
They} can be used to study the evolution of the gravitational
potential and the genus topology, as well as for large-scale weak
lensing analysis.
Moreover, we constructed the merger trees of Friends-of-Friends halos
from $z = 12$ to 0 with their gravitationally most bound member
particles.
They can be used to study galaxy formation and bridge the gap between
theoretical models and observed galaxy distributions.
{We tested the HR4 in various aspects, including the
mass/shape/spin distributions of FoF halos, two-point correlation
functions of physically self-bound subhalos, and mass evolution of
FoF halos.
The results of our test are summarized as follows:}
\begin{enumerate}
\item We found that the abundance of massive FoF halos in the HR4 is
substantially different from various fitting functions given in
{the} previous literature. We also found strong evidence for a
redshift dependence of the mass
function.
We proposed a new fitting formula that reproduces the redshift
evolution of the amplitude and shape of the multiplicity function to
within about 5\% error.
\item We confirmed the finding of previous studies that FoF halos tend
to rotate around the minor axis.
\item The two-point correlation function measured in real space is
isotropic.
However, due to the non-linear evolution of galaxies, the location of
the baryonic acoustic oscillation peak is shifted toward smaller
scale than the prediction from the linear correlation function.
On the other hand, in redshift space the BAO peak can be seen only in
the two-point correlation function along the line-of-sight direction,
with a much{-}broadened width and increased height.
We emphasize{d} that it is important to use massive simulation data to
study the non-linear evolution of BAO features and the connection
between observations and cosmological models.
\item We found that more massive halos tend to have steeper mass
histories, and the mass accretion rate per unit redshift is roughly
constant during early epoch before dark energy domination.
By adopting simple power-law models for the stellar mass and star
formation efficiency, we found that massive halos tend to have a
higher star formation efficiency.
\end{enumerate}
{All aforementioned main products of the HR4 are available at
\url{http://sdss.kias.re.kr/astro/Horizon-Run4/}.}
\acknowledgments
This work was supported by the
Supercomputing Center/Korea
Institute of Science and Technology {Information with}
supercomputing resources including technical support
(KSC-2013-G2-003).
The authors thank Korea Institute for Advanced Study for providing
computing resources (KIAS Center for Advanced Computation) for this
work.
The authors also thank the referee, Graziano Rossi, for the thorough
review and constructive suggestions that led to an improvement of the
paper.
\section{Overview}
Recently, a family of non-supersymmetric dualities between Chern-Simons-matter theories in $2+1$ dimensions has been conjectured \cite{Aharony:2015mjs}. Because one side of the duality contains bosons and the other fermions, such identifications have been termed ``3d bosonization'' or ``Aharony's dualities''. Schematically, these dualities state
\begin{subequations}
\label{eq:aharony_schematic}
\begin{align}
SU(k)_N\,\text{with \ensuremath{N_f} \ensuremath{\phi}}\qquad & \leftrightarrow\qquad U(N)_{-k+\frac{N_f}{2}}\,\text{with \ensuremath{N_{f}} \ensuremath{\psi}},\label{eq:u ferm tw}\\
U(k)_{N}\,\text{with \ensuremath{N_{f}} \ensuremath{\phi}}\qquad & \leftrightarrow\qquad SU(N)_{-k+\frac{N_{f}}{2}}\,\text{with \ensuremath{N_{f}} \ensuremath{\psi}}\label{eq:u scalar tw}
\end{align}
\end{subequations}
where $\phi$ are self-interacting scalars, $\psi$ are free fermions, and ``$\leftrightarrow$'' means the theories share an IR fixed point. These dualities are subject to the flavor bound $N_f\leq k$.
The strongest evidence for such dualities is based on studies where the level ($k$) and the rank ($N$) are taken to be much greater than one (but $k/N$ is held fixed). In this limit, observables are under perturbative control \cite{Giombi:2011kc, Aharony:2011jz, Aharony:2012nh, Jain:2014nza, Inbasekar:2015tsa, Minwalla:2015sca, Gur-Ari:2016xff}, and one can confirm that many observables on both sides of the duality, such as the operator spectrum, free energy, and correlation functions, match to leading order. Additionally, one can deform away from the IR fixed point by including relevant operators in the Lagrangian. This procedure yields topological field theories (TFTs) which are level-rank dual and hence equivalent.
Surprisingly, further evidence for these dualities arises in the exact opposite regime, where $N=k=N_f=1$. In this case, \eqref{eq:aharony_schematic} reduces to
\begin{subequations}
\label{eq:abelian_aharony}
\begin{align}
\text{Wilson-Fisher scalar} \qquad & \leftrightarrow\qquad U(1)_{-1/2}+\text{fermion},\\
U(1)_{1}+\text{scalar}\qquad & \leftrightarrow\qquad \text{free fermion}.
\end{align}
\end{subequations}
These ``Abelian dualities'' have been used to derive an entire web of related dualities \cite{Karch:2016sxi, Seiberg:2016gmd}, within which is the well-known bosonic particle-vortex duality \cite{Peskin:1977kp, Dasgupta:1981zz} and its recently discovered fermionic equivalent \cite{Son:2015xqa}. The methodology used in deriving this web of Abelian dualities has been extended to Abelian and non-Abelian linear quivers \cite{Karch:2016aux, Jensen:2017dso} to generate even more novel dualities, although these are often limited in scope due to flavor bounds.
More recently, a generalization of Aharony's dualities has been discovered \cite{Benini:2017aed,Jensen:2017bjo} where each side of the duality has fermions and scalars,
\begin{equation}
SU(N)_{-k+\frac{N_{f}}{2}}\,\text{with \ensuremath{N_{s}} \ensuremath{\phi}\ and \ensuremath{N_{f}} \ensuremath{\psi}}\qquad\leftrightarrow\qquad U(k)_{N-\frac{N_{s}}{2}}\,\text{with \ensuremath{N_{f}} \ensuremath{\Phi}\ and \ensuremath{N_{s}} \ensuremath{\Psi}}.\label{eq:master tw}
\end{equation}
Note that this duality reduces to \eqref{eq:u ferm tw} and \eqref{eq:u scalar tw} when $N_f=0$ and $N_s=0$, respectively. We will refer to this duality as the ``master duality'' since \eqref{eq:aharony_schematic} can be recovered as a special case. Novel to the master duality is the fact that the scalar and fermionic matter on each side of the duality interact with one another through a quartic term, and that each type of matter is subject to its own flavor bound.
These dualities have applications to the half-filled fractional quantum Hall effect as well as to surface states of topological insulators \cite{Son:2015xqa, wang2015dual, Metlitski:2015eka, Seiberg:2016gmd}. Further support for these dualities includes deformations from supersymmetric cases \cite{Jain:2013gza, Gur-Ari:2015pca, Kachru:2016rui, Kachru:2016aon}, derivation from an array of $1+1$-dimensional wires \cite{mross2016explicit}, a matching of global symmetries and 't Hooft anomalies \cite{Benini:2017dus}, consistency checks of the dualities on manifolds with boundaries \cite{Aitken:2017nfd, Aitken:2018joi}, and support from Euclidean lattice constructions \cite{Chen:2017lkr, Chen:2018vmz, Jian:2018amu}.
In this work we use the master duality and methods similar to those developed in \cite{Jensen:2017dso} to derive novel Bose-Bose dualities between non-Abelian linear quivers. We argue that these dualities can be viewed as a natural generalization of the bosonic particle-vortex duality to non-Abelian gauge groups since the quivers share many of the qualitative features present in the particle-vortex duality.
Of particular interest is the application of these dualities to $2+1$-dimensional defects in Yang-Mills theory on $\mathbb{R}^4$, which will be the focus of the latter half of this paper. It has recently been shown that there is a mixed 't Hooft anomaly between time-reversal symmetry and center symmetry at $\theta=\pi$ \cite{Gaiotto:2017yup}.
This is rooted in the fact that $SU(N)$ YM theory is believed to have $N$ distinct vacua associated to $N$ branches of the theory. Such branches are individually $2\pi N$ periodic and correspond to $SU(N)/\mathbb{Z}_N$ gauge theories. This seems to contradict the long-held belief that $\theta$ is $2\pi$ periodic in $SU(N)$ YM theory, but the conflict is resolved since the vacua interchange roles under a $2\pi$ shift of $\theta$. More specifically, if one tracks the true ground state of the theory, one changes branches in a single $2\pi$ period. Thus, as $\theta$ is varied from, say, $\theta=0$ to $2\pi n$, the theory traverses several vacua. However, this changes when one couples the one-form center symmetry to a background (two-form) gauge field. In this case one cannot consistently choose the coefficient of the counterterm, sometimes referred to as the ``discrete theta angle'', to make the theory non-anomalous. Since this counterterm changes as one traverses branches, a spatially varying $\theta$ angle gives rise to domain walls separating regions with distinct discrete theta angles. Using anomaly inflow arguments, the effective field theory living on the interface is found to be a Chern-Simons gauge theory (see \cite{Gaiotto:2017yup, Gaiotto:2017tne} for more details).
Although anomaly considerations require a non-trivial theory to live on the interface, they alone do not fully fix the theory. Among others, $[SU(N)_{-1}]^n$ or $SU(N)_{-n}$ would be consistent choices.\footnote{Note that we are changing the direction of the $\theta$ gradient relative to \cite{Gaiotto:2017tne} and so have negative levels for our Chern-Simons theories. This is in order to conform to the conventions of \cite{Jensen:2017xbs} for the stringy embeddings.} The authors of \cite{Gaiotto:2017tne} argue that, at least for $n \ll N$, $[SU(N)_{-1}]^n$ is the appropriate description for slowly varying theta (meaning $|\nabla \theta| \ll \Lambda$, where $\Lambda$ is the strong coupling scale of the confining gauge theory), whereas $SU(N)_{-n}$ is appropriate for a sharp interface, such as a discrete jump by $2 \pi n$ at a given location. If these are indeed the correct descriptions, this suggests that there is a phase transition as one smooths out a given jump in $\theta$. If this phase transition is second order, the transition point is governed by a CFT which is most easily realized as a Chern-Simons-matter theory. In any case, this CFT can serve as a parent theory from which the topological field theories describing either the slowly varying theta or the sharp step can be realized as massive deformations.
\begin{figure}
\centering
\includegraphics[scale=0.7]{thetawall_su.png}
\caption{Parent Chern-Simons matter theory at the phase transition for the special case $n=5$. Nodes represent gauge theories with the associated Chern-Simons term and links represent matter bifundamentally charged under the gauge groups on the adjacent nodes.} \label{fig:parentcft}
\end{figure}
The conjectured CFT between the two extreme phases is schematically
\begin{align}
\label{eq:su_quiver_ym}
[SU(N)_{-1}]^n + \text{bifundamental scalars}
\end{align}
which was used in ref. \cite{Gaiotto:2017tne} to explain the transition between two different vacua of $\left(3+1\right)$-dimensional Yang-Mills. This parent CFT is based on a quiver gauge theory as displayed in Fig. \ref{fig:parentcft}. Each node depicts an $SU(N)_{-1}$ Chern-Simons gauge theory, and the links connecting them represent bifundamental scalar fields, $Y$. The theory has two obvious massive deformations: we can give all the scalars a positive or a negative mass squared. In the former case the scalars simply decouple and we are left with the $[SU(N)_{-1}]^n$ TFT appropriate for slowly varying theta; in the latter case the gauge group factors get Higgsed down to the diagonal subgroup and we find the $SU(N)_{-n}$ theory associated with the steep defect. There are also mixed phases, where some of the $Y$ have negative and some positive mass squared.
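Explicitly, giving every bifundamental a common mass squared $m_Y^2$ (a label used only here), the two uniform deformations of \eqref{eq:su_quiver_ym} yield the TFTs
\begin{align}
m_Y^2>0:\quad[SU(N)_{-1}]^n,\qquad\qquad m_Y^2<0:\quad SU(N)_{-n},
\end{align}
corresponding to the slowly varying and sharp theta interfaces, respectively.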
In this work, we propose a theory dual to \eqref{eq:su_quiver_ym} which is supported by both 3d bosonization of non-Abelian linear quivers and holographic duality. The proposed ``theta wall'' duality is
\begin{align}
\label{eq:u_quiver_ym}
[SU(N)_{-1}]^n + \text{bifundamental scalars} \qquad\leftrightarrow\qquad U(n)_{N} + \text{adjoint scalars}.
\end{align}
We will see that this is a special case of the more general quiver dualities derived in Sec. \ref{sec:quivers}, which do not include matter in the adjoint. This is a special feature of \eqref{eq:u_quiver_ym}, owing to the fact that when all ranks of the $SU$ quiver theory are equal, the $U$ quiver contains nodes which are confining. With the careful addition of interactions in the proposed theories, mass deformations on either side of the duality yield TFTs which are level-rank dual to each other.
The paper is outlined as follows. In Sec. \ref{sec:review} we review the master duality and establish the conventions we use for the rest of the paper. Sec. \ref{sec:quivers} contains our derivation of the non-Abelian linear quiver dualities, including the details of how such dualities should be viewed as generalization of the particle-vortex duality. We then specialize to quivers applicable to theta interfaces in $3+1$-dimensional $SU(N)$ Yang-Mills theory in Sec. \ref{sec:thetawalls}. Subsections \ref{sec:thetawall_3d} and \ref{sec:thetawall_holo} contain the 3d bosonization and holographic support for such dualities, respectively. In Sec. \ref{sec:conclusion} we discuss our results and conclude. The appendix contains several details of our construction of the non-Abelian quivers.
As we were finalizing this work, we were made aware of \cite{Argurio:2018uup} which studies domain walls in different phases of the Witten-Sakai-Sugimoto model. This has some overlap with Sec. \ref{sec:jensen}, particularly regarding the nature of domain walls in the pure YM sector.
\section{Review of 3d Bosonization}
\label{sec:review}
We begin by reviewing 3d bosonization and establishing the conventions we will use throughout this paper. The most general form of 3d bosonization, the so-called master bosonization duality \cite{Benini:2017aed,Jensen:2017bjo}, is a conjecture that the following two Lagrangians share the same IR fixed point\footnote{Here we follow the conventions outlined in ref. \cite{Aitken:2018joi}. We have dropped all gravitational Chern-Simons terms since they are not relevant for our purposes. Note there is a slight difference in convention in the sign of the BF term and the $\tilde{A}_2$ coupling on the $U$ side of the duality. However, since the difference always amounts to an even number of sign changes, the TFTs still match under mass deformations. Additionally, the flux attachment procedure picks up two minus signs from this effect as well, meaning the quantum numbers of the baryon and monopole operators still match.}
\begin{subequations}
\label{eq:master_dual}
\begin{align}
\mathcal{L}_{SU} & =\left|D_{b^{\prime}+B+\tilde{A}_{1}+\tilde{A}_{2}}\phi\right|^{2}+i\bar{\psi}\Dslash_{b^{\prime}+C+\tilde{A}_{1}}\psi+\mathcal{L}_{\text{int}}-i\left[\frac{N_{f}-k}{4\pi}\text{Tr}_{N}\left(b^{\prime}db^{\prime}-i\frac{2}{3}b^{\prime3}\right)\right]\nonumber \\
& -i\left[\frac{N}{4\pi}\text{Tr}_{N_{f}}\left(CdC-i\frac{2}{3}C^{3}\right)+\frac{N(N_{f}-k)}{4\pi}\tilde{A}_{1}d\tilde{A}_{1}\right],\\
\mathcal{L}_{U} & =\left|D_{c+C}\Phi\right|^{2}+i\bar{\Psi}\Dslash_{c+B+\tilde{A}_{2}}\Psi+\mathcal{L}_{\text{int}}^{\prime}-i\left[\frac{N}{4\pi}\text{Tr}_{k}\left(cdc-i\frac{2}{3}c^{3}\right)-\frac{N}{2\pi}\text{Tr}_{k}(c)d\tilde{A}_{1}\right] \label{eq:master_LU}
\end{align}
\end{subequations}
with the mass identifications $m_\psi\leftrightarrow -m_\Phi^2$ and $m_\phi^2\leftrightarrow m_\Psi$. Our definitions of the fields are shown in Table \ref{tab:master_notation}. We will use uppercase letters for background gauge fields and lowercase letters for dynamical gauge fields, and Abelian fields carry a tilde. This duality is subject to the flavor bound $(N_f,N_s)\leq(k,N)$, but excludes the case $(N_f,N_s)=(k,N)$.\footnote{There are proposals for dualities describing the phase structure of these theories slightly beyond the bounds \cite{Komargodski:2017keh}, but such cases will not be relevant for this work.} Our notation for covariant derivatives is
\begin{subequations}
\begin{align}
\left(D_{b^{\prime}+B+\tilde{A}_{1}+\tilde{A}_{2}}\right)_{\mu}\phi & =\left[\partial_{\mu}-i\left(b_{\mu}^{\prime}\mathds{1}_{N_s}+B_{\mu}\mathds{1}_{N}+\tilde{A}_{1\mu}\mathds{1}_{N N_s}+\tilde{A}_{2\mu}\mathds{1}_{N N_s}\right)\right]\phi,\\
\left(D_{b^{\prime}+C+\tilde{A}_1}\right)_{\mu}\psi & =\left[\partial_{\mu}-i\left(b_{\mu}^{\prime}\mathds{1}_{N_f}+C_{\mu}\mathds{1}_{N}+\tilde{A}_{1\mu}\mathds{1}_{N N_f}\right)\right]\psi,\\
\left(D_{c+C}\right)_{\mu}\Phi & =\left[\partial_{\mu}-i\left(c_{\mu}\mathds{1}_{N_f}+C_{\mu}\mathds{1}_{k}\right)\right]\Phi,\\
\left(D_{c+B+\tilde{A}_{2}}\right)_{\mu}\Psi & =\left[\partial_{\mu}-i\left(c_{\mu}\mathds{1}_{N_s}+B_{\mu}\mathds{1}_{k}+\tilde{A}_{2\mu}\mathds{1}_{k N_s}\right)\right]\Psi.
\end{align}
\end{subequations}
The interaction terms are
\begin{subequations}
\label{eq:master_ints}
\begin{align}
\mathcal{L}_{\text{int}} & =\alpha\left(\phi^{\dagger a_{c}a_{s}}\phi_{a_{c}a_{s}}\right)^{2}-C \left(\bar{\psi}^{a_{c}a_{f}}\phi_{a_{c}a_{s}}\right)\left(\phi^{\dagger b_{c}a_{s}}\psi_{b_{c}a_{f}}\right)\label{eq: master ints 1 tw}\\
\mathcal{L}_{\text{int}}^{\prime} & =\alpha\left(\Phi^{\dagger a_{c}a_{f}}\Phi_{a_{c}a_{f}}\right)^{2}+C' \left(\bar{\Psi}^{a_{c}a_{s}}\Phi_{a_{c}a_{f}}\right)\left(\Phi^{\dagger b_{c}a_{f}}\Psi_{b_{c}a_{s}}\right)\label{eq: master ints 2 tw}
\end{align}
\end{subequations}
where $a_c,b_c$ are indices associated with the color symmetries; $a_f,b_f$ with the $SU(N_f)$ symmetry; and $a_s,b_s$ with the $SU(N_s)$ symmetry. $C$ and $C^\prime$ are the coefficients of the associated interactions, which we will fix later. The quartic scalar terms will henceforth be implied anytime a scalar is present, but we will make note of the scalar/fermion interaction terms when they exist.
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c||c|c|c|c|}
\cline{2-7}
\multicolumn{1}{c|}{}& \multicolumn{2}{c||}{Gauge Fields} & \multicolumn{4}{c |}{Background Fields} \\
\hline
\textbf{Symmetry} &$SU(N)$& $U(k)$ &$SU(N_s)$ & $SU(N_f)$& $U(1)_{m,b}$& $U(1)_{F,S}$
\tabularnewline
\hline
\textbf{Field} & $b^\prime_\mu$ & $c_\mu$ & $B_\mu$ & $C_\mu$& $\tilde{A}_{1\mu}$& $\tilde{A}_{2\mu}$
\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Various gauge fields used in the master duality. Dynamical fields are denoted by lowercase letters while background fields by uppercase. $\tilde{A}_{1\mu}$ is associated with the monopole/baryon number $U(1)$ symmetry also present in Aharony's dualities. $\tilde{A}_{2\mu}$ is associated to the $U(1)$ symmetry which couples to the additional fermion/scalar matter in the master duality. \label{tab:master_notation}}
\end{table}
As mentioned in the introduction, Aharony's dualities \eqref{eq:aharony_schematic} can be found by taking the $N_s=0$ and $N_f=0$ limits of \eqref{eq:master_dual}. For example, Aharony's duality \eqref{eq:u ferm tw} is the $N_f=0$ limit and is an IR duality between Lagrangians
\begin{subequations}
\label{eq:aharony_dual}
\begin{align}
\mathcal{L}_{SU} & = \left| D_{b^\prime+B+\tilde{A}_1}\phi \right|^2 -i\left[-\frac{k}{4\pi}\text{Tr}_{N}\left(b^{\prime}db^{\prime}-i\frac{2}{3}b^{\prime 3}\right)-\frac{N k}{4\pi}\tilde{A}_{1}d\tilde{A}_{1}\right],\label{eq:mdb lsu-1-1}\\
\mathcal{L}_{U} & = i\bar{\Psi}\Dslash_{c+B}\Psi-i\left[\frac{N}{4\pi}\text{Tr}_{k}\left(cdc-i\frac{2}{3}c^{3}\right)-\frac{N}{2\pi}\text{Tr}_{k}(c)d\tilde{A}_1 \right]\label{eq:mdb lu-1-1}
\end{align}
\end{subequations}
which are subject to the flavor bound $N_s\leq N$.
We will use the $\eta$-invariant convention, where a positive mass deformation for the fermion does not change the level of the Chern-Simons term. When compared to the often-employed convention, where an ill-defined naive Dirac operator gets augmented with a half-integer Chern-Simons term, this means we replace \cite{Witten:2015aba,Witten:2015aoa}:
\begin{align}
i\bar{\psi}\Dslash_A \psi - i \left[- \frac{N_f}{8\pi} \text{Tr}_N \left( AdA-i\frac{2}{3}A^3 \right)\right] \qquad \to \qquad i\bar{\psi}\Dslash_A \psi.
\end{align}
We will continue to denote fermion half-levels when specifying the Chern-Simons theory.
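Concretely, integrating out a single Dirac fermion of mass $m_\psi$ coupled to a gauge field $A$ in this convention produces
\begin{align}
m_\psi>0:\quad \text{no Chern-Simons term},\qquad\qquad m_\psi<0:\quad -i\left[-\frac{1}{4\pi}\text{Tr}\left(AdA-i\frac{2}{3}A^{3}\right)\right],
\end{align}
so each fundamental fermion flavor leaves the level unchanged when given a positive mass and shifts it by $-1$ when given a negative mass.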
\section{Non-Abelian Linear Quiver Dualities}
\label{sec:quivers}
We now turn to constructing linear quivers using the master duality. As explained in the introduction, we are ultimately motivated by the theta wall construction that leads to \eqref{eq:u_quiver_ym}, but we will derive dualities for a far more general case. We will begin with recasting the 3d bosonization derivation of bosonic particle-vortex duality in a way that highlights the relation to the non-Abelian quivers.
\subsection{Bosonic Particle-Vortex Duality}
\label{sec:boson_pv}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.6]{particle_vortex2.png}
\par\end{centering}
\caption{Derivation of the bosonic particle-vortex duality as a duality between two-node linear quiver theories. On the left-hand side, we have represented each side of Aharony's Abelian dualities as a two-node quiver. The filled yellow circle represents the color gauge group while the empty circle represents promoted global symmetries (which for the case of $SU(1)$ are placeholders). The equation numbers corresponding to the two-node quivers are shown in red. Since the two fermionic theories are the same, one can perform a matching to arrive at a duality between three two-node quiver theories, the top and bottom of which are the XY and Abelian Higgs models, respectively. \label{fig:pv_dual}}
\end{figure}
To derive the bosonic particle-vortex duality we will use 3d bosonization techniques similar to those used in refs. \cite{Karch:2016sxi,Seiberg:2016gmd}. We then show how one can reinterpret the derivation in terms of a two-node quiver. This will be the simplest non-trivial case of the far more general quivers we derive in Sec. \ref{sec:building_quivers}. We will drop tildes from Abelian gauge fields in this subsection since the distinction is not necessary.
Recall the bosonic particle-vortex duality states that, at low energies, the XY model is dual to the Abelian Higgs model \cite{Peskin:1977kp, Dasgupta:1981zz},
\begin{align}
\label{eq:pv_dual}
\mathcal{L}_\text{XY}=\left|D_{A_{1}}\phi\right|^{2}\qquad\leftrightarrow\qquad\mathcal{L}_\text{AH}=\left|D_{c}\Phi\right|^{2}-i\left[-\frac{1}{2\pi}c d A_1\right].
\end{align}
The mapping of the phases is such that a positive mass deformation on one end maps to a negative deformation on the other, $m_\Phi^2\leftrightarrow -m_\phi^2$.
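As a consistency check, the gapped phases match in the familiar way:
\begin{subequations}
\begin{align}
m_{\phi}^{2}>0\ (\text{trivially gapped})\qquad&\leftrightarrow\qquad m_{\Phi}^{2}<0\ (\Phi\text{ condensed},\ c\text{ Higgsed}),\\
m_{\phi}^{2}<0\ (\phi\text{ condensed, Goldstone mode})\qquad&\leftrightarrow\qquad m_{\Phi}^{2}>0\ (\text{free Maxwell theory}).
\end{align}
\end{subequations}
In the latter phase the dual photon of $c$ plays the role of the Goldstone boson of the XY model.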
In order to derive \eqref{eq:pv_dual} we start by taking the Abelian limit of Aharony's dualities, \eqref{eq:abelian_aharony}. In particular, take the $N = k = N_f =1$ and $N_s = 0$ limit of \eqref{eq:master_dual}, which yields the ``scalar + $U(1)_1$ $\leftrightarrow$ free fermion'' duality,
\begin{subequations}
\label{eq:scalar+flux}
\begin{align}
\mathcal{L}_{SU} & =i\bar{\psi}\Dslash_{A_1}\psi \\
\mathcal{L}_{U} & =\left|D_{c}\Phi\right|^{2}-i\left[\frac{1}{4\pi}cdc-\frac{1}{2\pi}c d A_1\right],
\end{align}
\end{subequations}
with $m_\psi\leftrightarrow -m_\Phi^2$. Meanwhile, the ``fermion + $U(1)_{-1/2}$ $\leftrightarrow$ WF scalar'' duality is obtained by taking the $N=k=N_s=1$ and $N_f=0$ limit,
\begin{subequations}
\label{eq:pvd_dual2}
\begin{align}
\mathcal{L}_{SU} & =\left|D_{A_{1}}\phi\right|^{2}-i\left[-\frac{1}{4\pi}A_1 d A_1\right],\label{eq:pvd xy}\\
\mathcal{L}_{U} & =i\bar{\Psi}\Dslash_{c}\Psi-i\left[\frac{1}{4\pi} cdc-\frac{1}{2\pi} c d A_1\right],\label{eq:pvd ferm+flux}
\end{align}
\end{subequations}
with $m_\phi^2\leftrightarrow m_\Psi$.
Deriving the bosonic particle-vortex duality from the above two dualities is straightforward. Note that we already have the XY model in \eqref{eq:pvd xy} up to the additional background Chern-Simons term. Hence, we should look for another bosonic theory dual to \eqref{eq:pvd ferm+flux}. To do so, add $-i\left[\frac{1}{4\pi}A_1 d A_1-\frac{1}{2\pi} A_1 d B_1\right]$ to each side of \eqref{eq:scalar+flux} and promote the $U(1)$ background field to be dynamical, $A_1\to a_1$. This gives the dual theories
\begin{subequations}
\label{eq:pv_dual3}
\begin{align}
\mathcal{L}_{SU}^\prime & =i\bar{\psi}\Dslash_{a_1}\psi -i\left[\frac{1}{4\pi}a_1 da_1-\frac{1}{2\pi}a_1 d B_1\right] \label{eq:pv_ferm2}\\
\mathcal{L}_{U}^\prime & =\left|D_{c}\Phi\right|^{2}-i\left[\frac{1}{4\pi}cdc-\frac{1}{2\pi}cda_1+\frac{1}{4\pi}a_1 d a_1-\frac{1}{2\pi}a_1 d B_1\right].\label{eq:pv_scalar3}
\end{align}
\end{subequations}
Since the action is quadratic in the newly promoted $a_1$ field we can integrate it out, which imposes the constraint $a_1 = c+B_1$. Plugging this in, we find
\begin{align}
\mathcal{L}_{U}^\prime =\left|D_{c}\Phi\right|^{2}-i\left[-\frac{1}{2\pi}cdB_1-\frac{1}{4\pi}B_1 d B_1\right].\label{eq:pvd_ah}
\end{align}
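In more detail, the $a_1$ equation of motion following from \eqref{eq:pv_scalar3} is
\begin{align}
-\frac{1}{2\pi}dc+\frac{1}{2\pi}da_{1}-\frac{1}{2\pi}dB_{1}=0\qquad\Rightarrow\qquad a_{1}=c+B_{1},
\end{align}
and upon substitution the $cdc$ terms cancel,
\begin{align}
\frac{1}{4\pi}cdc-\frac{1}{2\pi}cd(c+B_{1})+\frac{1}{4\pi}(c+B_{1})d(c+B_{1})-\frac{1}{2\pi}(c+B_{1})dB_{1}=-\frac{1}{2\pi}cdB_{1}-\frac{1}{4\pi}B_{1}dB_{1},
\end{align}
up to total derivatives, reproducing \eqref{eq:pvd_ah}.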
After relabeling the dynamical field in \eqref{eq:pv_ferm2} as $a_1\to c$ and changing the background field $B_1 \to A_1$, we see that \eqref{eq:pv_ferm2} matches \eqref{eq:pvd ferm+flux}, and thus \eqref{eq:pvd_ah} is dual to \eqref{eq:pvd xy}. Canceling the common background Chern-Simons term, we arrive at the usual particle-vortex duality, \eqref{eq:pv_dual}. Note that we get the relative flip between the masses on the two ends of the duality since there is only a relative sign flip between the $\psi$ and $\Phi$ masses.
We would now like to recast the derivation we just performed to motivate generalization to a two-node linear quiver. Fig. \ref{fig:pv_dual} schematically shows how we would like to view the derivation. Each of our dual theories in \eqref{eq:pvd_dual2} and \eqref{eq:pv_dual3} can be viewed as a two-node linear quiver, with the matter bifundamentally charged under the two nodes which it connects.
This is motivated by the fact that in Aharony's dualities \eqref{eq:aharony_schematic}, each matter field is fundamentally charged under both a dynamical gauge field and a background global flavor symmetry. If we promote said flavor symmetry to be dynamical, the matter becomes bifundamental and thus admits a natural description as a two-node quiver. This looks rather trivial since $SU(N)$ gauge groups for $N=1$ are nonsensical, but it will generalize nicely for $N\geq 2$. For the Abelian case we will use $SU(1)$ as a placeholder for symmetries that can be gauged in the more general case.
To see this on the Abelian Higgs side, we will first shift the dynamical gauge field, $c\to c+a_1$, so that \eqref{eq:pv_scalar3} becomes
\begin{align}
\mathcal{L}_{U}^{\prime\prime} =\left|D_{c+a_1}\Phi\right|^{2}-i\left[\frac{1}{4\pi}c d c-\frac{1}{2\pi}a_1 dB_1\right].\label{eq:pvd_ah_alt}
\end{align}
In this form the scalar is bifundamentally charged under two $U(1)$ gauge groups, which represent the two nodes in the quiver theory. The dual to the Abelian Higgs model, \eqref{eq:pv_ferm2}, couples to a single dynamical $U(1)$ gauge field, $a_1$. This was previously the flavor symmetry but was promoted to a gauge symmetry in moving from \eqref{eq:scalar+flux} to \eqref{eq:pv_dual3}. As mentioned above, the gauge field belonging to the second node is absent only because we are working in the Abelian limit of Aharony's dualities. On the XY model end of the duality \eqref{eq:pvd xy}, $\phi$ couples to two $SU(1)$ fields, so it has no gauge couplings at all.\footnote{For our purposes here, we are ignoring the possibility of gauging the $U(1)$ global symmetry since its properties are well established in the particle-vortex duality as a global symmetry.}
The upshot of recasting the derivation in this form is that it readily generalizes to more complicated two-node quivers. One can use Aharony's dualities in their more general form to perform steps very similar to those of the Abelian case. We will see that the particle-vortex duality generalizes to a duality of the form
\begin{multline}\label{eq:NonAbPV}
SU(N_1)_{-k_1}\times SU(N_2)_{-k_2} + \text{bifundamental scalar} \\
\qquad\leftrightarrow\qquad U(k_1)_{N_1-N_2}\times U(k_1+k_2)_{N_2} + \text{bifundamental scalar}.
\end{multline}
The bosonic particle-vortex duality is then just the $N_1=N_2=k_1=1$ and $k_2=0$ case. In Sec. \ref{sec:checks} we present further evidence for this interpretation by matching the spectrum of particles and vortices in \eqref{eq:NonAbPV} in a manner similar to the Abelian case. Before we do this, we demonstrate how we can systematically construct the non-Abelian quivers for an arbitrary number of nodes. This requires the use of the master duality when the number of nodes is greater than two.
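As a quick check of \eqref{eq:NonAbPV}, at $N_{1}=N_{2}=k_{1}=1$ and $k_{2}=0$ the right-hand side becomes
\begin{align}
U(1)_{0}\times U(1)_{1}+\text{bifundamental scalar},
\end{align}
which is precisely the two-node form of the Abelian Higgs model, \eqref{eq:pvd_ah_alt}, with the $U(1)_{1}$ node identified with $c$ and the $U(1)_{0}$ node with $a_{1}$.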
\subsection{Building Non-Abelian Linear Quiver Dualities}
\label{sec:building_quivers}
Following the discussion in the previous subsection, our strategy in deriving dual descriptions of quiver gauge theories is to start with the master duality and gauge global symmetries on both sides of the duality in order to arrive at a duality for the resulting product gauge group. Since in a quiver gauge theory the gauge group associated with a given node sees the gauge groups associated with the neighboring nodes as global flavor symmetries, this, roughly speaking, amounts to dualizing the quiver one node at a time. While not a proof, this procedure suggests the resulting theories are dual. This basic idea had previously been pursued in ref. \cite{Jensen:2017dso} using Aharony's duality, but the flavor bounds put severe limitations on the quivers that were amenable to this analysis. In particular, the most interesting case with equal rank gauge groups on each node was out of reach. We will see that the master duality helps overcome many of these limitations.
To streamline the derivation it is helpful to follow ref. \cite{Jensen:2017dso} and rearrange BF terms to group the $SU(N)$ and $U(1)$ global symmetries together. Additionally, a key ingredient in matching this analysis to the existing particle-vortex duality will be the global $U(1)$ symmetries on either side of the duality. As such, we will be especially careful in keeping track of the global symmetries at every step.
We start by recalling how ref. \cite{Jensen:2017dso} derived their quiver transformations and generalize their method to the master duality. Starting from \eqref{eq:aharony_dual}, one can use the fact that the $U(k)$ field can be separated into its Abelian and non-Abelian parts, i.e. $c=c^{\prime} +\tilde{c}\mathds{1}_{k}$, to perform a shift on the Abelian portion, $\tilde{c}\to\tilde{c}+\tilde{A}_{1}$. This allows one to rewrite $\mathcal{L}_{U}$ as
\begin{equation}
\mathcal{L}_{U}=i\bar{\Psi}\Dslash_{c+B+\tilde{A}_1}\Psi-i\left[\frac{N}{4\pi}\text{Tr}_{k}\left(cdc-i\frac{2}{3}c^{3}\right)-\frac{N k}{4\pi}\tilde{A}_{1}d\tilde{A}_{1}\right].
\end{equation}
Canceling the overall factor of $i\frac{N k}{4\pi}\tilde{A}_{1}d\tilde{A}_{1}$ on either side of the duality and defining the new $U(N_s)$ background field
$G_{\mu}\equiv B_{\mu}+\tilde{A}_{1\mu}\mathds{1}_{N_s}$, \eqref{eq:aharony_dual} becomes
\begin{subequations}
\begin{align}
\mathcal{L}_{SU} & = \left| D_{b^\prime+G}\phi \right|^2 -i\left[-\frac{k}{4\pi}\text{Tr}_{N}\left(b^{\prime}db^{\prime}-i\frac{2}{3}b^{\prime 3}\right)\right]\\
\mathcal{L}_{U} & =i\bar{\Psi}\Dslash_{c+G}\Psi-i\left[\frac{N}{4\pi}\text{Tr}_{k}\left(cdc-i\frac{2}{3}c^{3}\right)\right].
\end{align}
\end{subequations}
The procedure used in \cite{Jensen:2017dso} to derive new dualities is to promote the non-Abelian $U(N_s)$ global symmetry to be dynamical. Since both the $\phi$ and $\Psi$ matter is charged under $G$, this turns the matter into bifundamentals. Schematically, we denote the promoted duality as
\begin{equation}
SU(N)_{-k}\times U(N_s)_0 \qquad \leftrightarrow\qquad U(k)_{N-N_s/2}\times U(N_s)_{-k/2}.\label{eq:aharony dual 2 tw}
\end{equation}
This is subject to the flavor bound $N \geq N_s$.
In promoting the $U(1)$ global symmetry to a gauge symmetry, we get another $U(1)$ global symmetry which couples to the new gauge current on either side of the duality. If we wanted to make the coupling to the new background gauge field $\tilde{B}_1$ explicit, we would add a $-i\frac{1}{2\pi}\tilde{A}_1d\tilde{B}_1$ term to each side of the duality. This is completely analogous to the procedure performed in ref. \cite{Karch:2016sxi}, where a new BF term was included with each promotion to represent the new $U(1)$-monopole symmetry on each side of the duality.
Below, we will sometimes apply this duality to strictly $SU$ gauge fields, in which case it is advantageous to only gauge the $SU$ part of the flavor symmetry, so that \eqref{eq:aharony dual 2 tw} becomes
\begin{equation}
SU(N)_{-k}\times SU(N_s)_0\qquad \leftrightarrow\qquad U(k)_{N-N_s/2}\times SU(N_s)_{-k/2}.\label{eq:aharony dual 4}
\end{equation}
Note that in this form of the duality each side retains the original global $U(1)$ symmetries and we do not obtain the additional global $U(1)$ as above.
We could apply the same procedures to the case where the $SU$ side contains the fermion and the $U$ side contains the scalar, where we would then find
\begin{align}
SU(N)_{-k+N_f/2}\times U(N_f)_{N/2}\qquad\leftrightarrow\qquad U(k)_{N}\times U(N_f)_0, \label{eq:aharony dual tw}
\end{align}
which matches the result found in ref. \cite{Jensen:2017dso} up to an overall shift in the level of the background term. This case is considered in more detail in Appendix \ref{appendix:duals}.
Now let us perform similar manipulations to the master duality in \eqref{eq:master_dual}. Since the Chern-Simons terms on the $U$ side are identical to \eqref{eq:mdb lu-1-1}, performing the same manipulations, \eqref{eq:master_LU} becomes
\begin{equation}
\mathcal{L}_{U}=\left|D_{c+\tilde{A}_{1}+C}\Phi\right|^{2}+i\bar{\Psi}\Dslash_{c+\tilde{A}_{1}+B+\tilde{A}_{2}}\Psi+\mathcal{L}_{\text{int}}^{\prime}-i\left[\frac{N}{4\pi}\text{Tr}_{k}\left(cdc-i\frac{2}{3}c^{3}\right)-\frac{N k}{4\pi}\tilde{A}_{1}d\tilde{A}_{1}\right].
\end{equation}
Again, we cancel the common $\tilde{A}_{1}$ Chern-Simons terms on either side of the duality. It will also be convenient to perform a shift to move the $\tilde{A}_2$ fields onto the $\psi$ and $\Phi$ matter, so we take $\tilde{A}_1\to\tilde{A}_1-\tilde{A}_2$ on either side of the duality. We could now combine the $U(1)$ and $SU(N_f)$ global symmetries into the definition of $E_{\mu}=C_{\mu}+\tilde{A}_{1\mu}\mathds{1}_{N_f}$ as we did in \eqref{eq:aharony dual 2 tw}. However, we will hold off on doing this since it is more convenient to keep the two global symmetries separate for our purposes. This leaves us with a duality of the form
\begin{subequations}
\begin{align}
\mathcal{L}_{SU} & =\left|D_{b^{\prime}+B+\tilde{A}_1}\phi\right|^{2}+i\bar{\psi}\Dslash_{b^{\prime}+ C + \tilde{A}_1-\tilde{A}_{2} }\psi+\mathcal{L}_{\text{int}}-i\left[\frac{N_f-k}{4\pi}\text{Tr}_{N}\left(b^{\prime}db^{\prime}-i\frac{2}{3}b^{\prime3}\right)\right]\nonumber \\
& -i\left[\frac{N}{4\pi}\text{Tr}_{N_f}\left(CdC-i\frac{2}{3}C^{3}\right)+\frac{N N_f}{4\pi}(\tilde{A}_1-\tilde{A}_2)d(\tilde{A}_1-\tilde{A}_2)\right],\\
\mathcal{L}_{U} &= \left|D_{c+C + \tilde{A}_1-\tilde{A}_{2}}\Phi\right|^{2}+i\bar{\Psi}\Dslash_{c+B+\tilde{A}_1}\Psi+\mathcal{L}_{\text{int}}^{\prime}-i\left[\frac{N}{4\pi}\text{Tr}_{k}\left(cdc-i\frac{2}{3}c^{3}\right)\right].
\end{align}
\end{subequations}
At this point, we have two choices with how to treat the global symmetry associated with $\tilde{A}_2$. The first choice is to simply leave it as a global symmetry and gauge only the $SU(N_f)$ flavor symmetry associated with $C$.
In this form, each side of the master duality retains the $U(1)$ global symmetries associated with $\tilde{A}_1$ and $\tilde{A}_2$. Alternatively, we could also gauge the global symmetry associated with $\tilde{A}_2$. The latter of these cases will be useful for our purposes in this paper, so we define a $U(N_f)$ gauge field $G_{\mu}\equiv C_{\mu}-\tilde{A}_{2\mu}\mathds{1}_{N_f}$ to which $\psi$ and $\Phi$ couple. After gauging the $U(N_f)$ and $SU(N_s)$ global symmetries, this leaves us with
\begin{equation}
SU(N)_{-k+N_f/2}\times U(N_f)_{N/2}\times SU(N_s)_0 \quad\leftrightarrow\quad U(k)_{N-N_s/2}\times U(N_f)_0 \times SU(N_s)_{-k/2}.\label{eq:master dual tw}
\end{equation}
Similar to Aharony's duality, in gauging the $U(N_f)$ symmetry which is associated with $G$, we pick up an additional monopole $U(1)$ symmetry on either side of the duality which couples to the newly gauged $\tilde{A}_2$ field. We will denote the background gauge field associated with said symmetry by $\tilde{B}_2$. For completeness, we consider the master duality with all global symmetries gauged in Appendix \ref{appendix:duals}.
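Explicitly, paralleling the coupling described below \eqref{eq:aharony dual 2 tw}, making this symmetry manifest amounts to adding
\begin{align}
-i\frac{1}{2\pi}\tilde{A}_{2}d\tilde{B}_{2}
\end{align}
to each side of the duality.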
Note that we can modify either of the above dualities by adding additional background flavor levels to either side of the duality before promotion. As a reminder, these dualities are subject to the flavor bounds $k\geq N_f $ and $N\geq N_s$, but $\left(k,N \right)\ne\left(N_f,N_s\right)$. In the $N_f=0$ and $N_s=0$ limits, \eqref{eq:master dual tw} reduces to \eqref{eq:aharony dual 4} and \eqref{eq:aharony dual tw}, respectively, with appropriate relabeling.
\subsubsection*{Four-Node Example}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.6]{linear_quiver.png}
\par\end{centering}
\caption{Dualizing a linear quiver. Red nodes are $SU$ gauge groups and black nodes are $U$ groups. Black (red) links are bifundamental bosons (fermions). Applying Aharony's duality to the leftmost link turns the scalar into a fermion. Then, applying the master duality repeatedly moves said fermion across the quiver until it reaches the final link where Aharony's duality can again be used to turn the fermion into a boson. \label{fig:linear quiver tw}}
\end{figure}
Let us now use the dualities we've defined to dualize a four-node quiver. Walking through this construction will make generalization to the $n$-node case straightforward. We begin with the $SU$ side of the theory
\begin{equation}
\text{Theory A:}\qquad SU\left(N_{1}\right)_{-k_{1}}\times SU\left(N_{2}\right)_{-k_{2}}\times SU\left(N_{3}\right)_{-k_{3}}\times SU\left(N_{4}\right)_{-k_{4}}.\label{eq:linear SU4 tw}
\end{equation}
This theory has bifundamental scalars whose charges are given in Table \ref{tab:Charges-of-the bifund linear tw}. In what follows we will assume that $N_{1}\geq N_{2}\geq N_{3}\geq N_{4}$ as well as $k_{i}\geq0$ for $i=1,2,3,4$. Although this is not the most general case, below we will find this is required to avoid violating flavor bounds in arriving at the desired $U$ theory. Each of the bifundamentals is charged under a global $U(1)$ symmetry which rotates its overall phase, giving this side of the duality a $[U(1)]^3$ global symmetry.
We will denote the bifundamental scalars living between nodes $j$ and $j+1$ by $Y_{j,j+1}$ and $X_{j,j+1}$ on the $SU$ and $U$ side of the duality, respectively. The masses of the $U$ bifundamentals are denoted by $m_{j,j+1}$ while we will use $M_{j,j+1}$ for those on the $SU$ side.
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Theory A} & $SU(N_1)_{-k_1}$ & $SU(N_2)_{-k_{2}}$ & $SU(N_3)_{-k_{3}}$ & $SU(N_4)_{-k_{4}}$\tabularnewline
\hline
$Y_{1,2}$ & $\square$ & $\square$ & $1$ & $1$\tabularnewline
\hline
$Y_{2,3}$ & $1$ & $\square$ & $\square$ & $1$\tabularnewline
\hline
$Y_{3,4}$ & $1$ & $1$ & $\square$ & $\square$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Charges of the bifundamental matter in our linear quivers. $\square$ denotes that the matter transforms in the fundamental representation of the corresponding gauge group. \label{tab:Charges-of-the bifund linear tw}}
\end{table}
Before embarking on deriving the duality, note that the uniform mass deformations of (\ref{eq:linear SU4 tw}) are given by
\begin{subequations}
\label{eq:su_deforms}
\begin{align}
(\text{A1})\qquad M_{i,i+1}^{2}>0:\qquad & SU\left(N_{1}\right)_{-k_{1}}\times SU\left(N_{2}\right)_{-k_{2}}\times SU\left(N_{3}\right)_{-k_{3}}\times SU\left(N_{4}\right)_{-k_{4}}\\
(\text{A2})\qquad M_{i,i+1}^{2}<0:\qquad & SU\left(N_{1}-N_{2}\right)_{-k_{1}}\times SU\left(N_{2}-N_{3}\right)_{-k_{1}-k_{2}}\times SU\left(N_{3}-N_{4}\right)_{-k_{1}-k_{2}-k_{3}}\nonumber \\
& \times SU\left(N_{4}\right)_{-k_{1}-k_{2}-k_{3}-k_{4}}.
\end{align}
\end{subequations}
Here we have been careful to account for which gauge group is Higgsed by each bifundamental scalar. The bifundamental scalar $Y_{i,i+1}$ has $N_{i}\times N_{i+1}$ components. Below we will always view the smaller of the two gauge groups as being associated with the ``flavor'' symmetry of the bifundamentals. As such, if we assume the Higgsing to be maximal, the Higgsing can be thought of as acting on the ``color'' gauge group, i.e. the group with the larger rank, while leaving the flavor group unchanged.\footnote{For example, for a bifundamental coupled to $SU(N_1)_{c}$ and $SU(N_2)_{f}$ with $N_{1}\geq N_{2}$, what occurs can be best understood by first splitting $ SU(N_1)_c \times SU(N_2)_f \to SU(N_1-N_2)_c\times SU(N_2)_c\times SU(N_2)_f$. Since the bifundamental is maximally Higgsed in the $SU(N_2)$ subgroup, the unbroken part of the two $SU(N_2)$ factors is their diagonal, leaving $SU(N_1-N_2)_c\times SU\left(N_{2}\right)_{\text{diag}}$. Thus, saying the flavor group is unchanged is merely a convenient relabeling. Also note that the Chern-Simons level of the new flavor group will be the sum of the original flavor and color Chern-Simons levels.} Since we have assumed $N_{1}\geq N_{2}\geq N_{3}\geq N_{4}$ to meet the flavor bounds below, the vacuum expectation value takes the form
\begin{equation}
\langle Y_{i,i+1}^{a_{i}a_{i+1}} \rangle\propto\left(\begin{array}{c}
\mathds{1}_{N_{i+1}}\\
0
\end{array}\right)^{a_{i}a_{i+1}}\label{eq:su vev tm}
\end{equation}
with $a_{i}$ and $a_{i+1}$ gauge indices of $SU(N_i)$ and $SU(N_{i+1})$, respectively. Reassuringly, no gauge group acquires a negative rank with this Higgsing pattern.
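The rank and level bookkeeping of phase (A2) is easy to check mechanically. The following short script is purely illustrative and not part of the derivation (the values of $N_i$ and $k_i$ are arbitrary samples); it verifies that, given the ordering $N_1\geq N_2\geq N_3\geq N_4$, every factor of (A2) has a non-negative rank and the levels accumulate as $-(k_1+\cdots+k_i)$.

```python
# Illustrative check (not part of the derivation): build the Higgsed SU-side
# phase (A2) for sample ranks N_i and levels k_i. Each bifundamental Y_{i,i+1}
# Higgses the larger "color" group, leaving rank N_i - N_{i+1}, while the
# Chern-Simons levels accumulate as -(k_1 + ... + k_i).
def su_higgs_phase(N, k):
    """Return [(rank, level)] for the fully Higgsed phase of an n-node SU quiver."""
    n = len(N)
    ranks = [N[i] - N[i + 1] for i in range(n - 1)] + [N[-1]]
    levels = [-sum(k[: i + 1]) for i in range(n)]
    return list(zip(ranks, levels))

# Sample values obeying N_1 >= N_2 >= N_3 >= N_4 and k_i >= 0:
phase = su_higgs_phase([5, 4, 2, 1], [3, 2, 1, 1])
assert phase == [(1, -3), (2, -5), (1, -6), (1, -7)]
assert all(rank >= 0 for rank, _ in phase)  # no negative-rank factors
```

For the sample values the output reproduces $SU(N_1-N_2)_{-k_1}\times SU(N_2-N_3)_{-k_1-k_2}\times SU(N_3-N_4)_{-k_1-k_2-k_3}\times SU(N_4)_{-k_1-k_2-k_3-k_4}$.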
Returning to our derivation of the non-Abelian linear quiver dualities, we will now show five theories are dual to one another,
\begin{equation}
\text{Theory A}\quad\leftrightarrow\quad\text{Theory B}\quad\leftrightarrow\quad\cdots\quad\leftrightarrow\quad\text{Theory E},
\end{equation}
by sequentially dualizing each node from left to right, see Fig. \ref{fig:linear quiver tw}. To begin, we apply Aharony's duality \eqref{eq:aharony dual 4} to the first node to obtain
\begin{equation}
\text{Theory B:}\qquad U\left(k_{1}\right)_{N_{1}-N_{2}+\frac{N_{2}}{2}}\times SU\left(N_{2}\right)_{-k_{2}-k_{1}+\frac{k_{1}}{2}}\times SU\left(N_{3}\right)_{-k_{3}}\times SU\left(N_{4}\right)_{-k_{4}}.
\end{equation}
Flavor bounds are satisfied so long as $N_{1}\geq N_{2}$. This turns the bifundamental scalar on the link between nodes one and two into a bifundamental fermion. The $U(1)$ global symmetry of the first bifundamental becomes a monopole symmetry for the Abelian part of the gauge field which lives on the first node. This will be a common theme as we sequentially step through nodes and the details are shown in Appendix \ref{appendix:sym}.
Dualizing the $SU(N_2)$ node is where we will need something new. We could try applying Aharony's dualities \eqref{eq:aharony dual 2 tw} or \eqref{eq:aharony dual tw} to the second node. However, one inevitably runs into flavor bound issues, since nodes with links on two sides require an $SU(N_{i-1}+N_{i+1})$ flavor symmetry, which exceeds the $SU(N_i)$ color symmetry for the cases we are interested in here.
Notice that since the master duality has two types of matter it has two \emph{separate }flavor symmetries, each subject to its own flavor bound. This is useful for dualizing the nodes with two links and, furthermore, has the correct matter content since node two in Theory B has both a bifundamental scalar and fermion attached to it. However, the master duality is quite a bit different from Aharony's original dualities in that it requires additional interaction terms between the scalars and fermions on a given side of the duality, as given in \eqref{eq:master_ints}. Let us consider how we could introduce such interaction terms and how they affect the theories we are considering.
\subsubsection*{Including Bifundamental Interactions}
The interaction we need in order to apply the master duality in theory B is
\begin{align}
\text{Theory B:}\qquad & C_{2}^{(B)}\left(\bar{\psi}_{1,2}^{a_{2}a_{1}}Y_{a_{2}a_{3}}^{2,3}\right)\left(Y_{2,3}^{\dagger b_{2}a_{3}}\psi_{b_{2}a_{1}}^{1,2}\right).\label{eq:master matter int B tw}
\end{align}
Here $C_{2}^{(B)}$ is the coefficient of the interaction on the second node in theory B and we have not yet committed to its sign or magnitude, but we will do so later by matching TFTs. In what follows, it will be useful to associate each interaction term with one of the interior nodes of the quiver.
In order to give rise to \eqref{eq:master matter int B tw}, we must backtrack slightly, since a corresponding interaction should then also be present in theory A in its dualized form. The exact matching of the interactions between theories A and B is quite subtle and requires auxiliary field techniques that were originally introduced in the large $N$ and $k$ literature \cite{Minwalla:2015sca}. Here we will only give a schematic overview. The full details of this matching are given in Appendix \ref{appendix:int}.
Recall that the purpose of the interaction term in the master duality is to ensure that when the scalars acquire a vacuum expectation value we also gain an additional mass term for the fermions \cite{Benini:2017aed,Jensen:2017bjo}. This was vital for matching the phases and TFTs on either side of the duality. Importantly, regardless of the sign of the mass deformation, the fermions never condense and thus there is no opportunity for a fermion condensate to influence the mass of the scalars through the same interaction term. If the fermions did condense, this would yield a very different looking phase diagram than that found in refs. \cite{Benini:2017aed,Jensen:2017bjo}.
Here the role of the fermions is played by scalars, which \emph{can} condense when their quadratic term goes negative. In order to match the TFT phase diagrams of theories A and B under mass deformations, we need to make sure the interaction term does not allow the $Y_{1,2}$ condensate to influence the $Y_{2,3}$ mass. In Appendix \ref{appendix:int} we derive an interaction term which has the desired properties: this term will cause a nonzero vacuum expectation value for the $Y_{2,3}$ bifundamental to give a positive or negative mass to $Y_{1,2}$, depending on the sign of the coefficient $C_{2}^{(A)}$. However, the opposite effect cannot occur: $Y_{1,2}$ acquiring a vacuum expectation value \emph{cannot} influence the mass of $Y_{2,3}$. More generally, for the $SU$ side of the duality, the vacuum expectation value of a link can only affect nodes/links to its \emph{left}. The interaction term is unidirectional, as it is for the original master duality.
We will \emph{schematically} denote the interactions we add to Theory A as
\begin{align}
\text{Theory A:}\qquad & C_{2}^{\left(A\right)}\left(Y_{1,2}^{\dagger a_{2}a_{1}}Y_{a_{2}a_{3}}^{2,3}\right)\left(Y_{2,3}^{\dagger b_{2}a_{3}}Y_{b_{2}a_{1}}^{1,2}\right)\label{eq:master matter int A tw}
\end{align}
with the understanding that the true interaction is as given in \eqref{eq:su_ints}. Eq. \eqref{eq:master matter int A tw} is equivalent to \eqref{eq:su_ints} if we simply ignore the fact that when the $Y_{1,2}$ acquires a vacuum expectation value the interaction term gives a mass to $Y_{2,3}$, so we will do so henceforth for brevity. An analogous interaction term is added to node three as well since it will be needed when stepping from theory C to D.
Having introduced the necessary interaction term, we subsequently apply the master duality \eqref{eq:master dual tw} to the second and third nodes, which gives
\begin{align}
\text{Theory C:}\qquad & U(k_1)_{N_{1}-N_{2}}\times U(k_{1}+k_{2})_{N_{2}-\frac{N_{3}}{2}}\times SU(N_{3})_{-k_{3}-k_{1}-k_{2}+\frac{k_{1}+k_{2}}{2}}\times SU(N_4)_{-k_{4}}\\
\text{Theory D:}\qquad & U(k_1)_{N_{1}-N_{2}}\times U(k_{1}+k_{2})_{N_{2}-N_{3}}\times U(k_{1}+k_{2}+k_{3})_{N_{3}-\frac{N_{4}}{2}}\nonumber \\
& \times SU(N_4)_{-k_{4}-k_{1}-k_{2}-k_{3}+\frac{k_{1}+k_{2}+k_{3}}{2}}.
\end{align}
These in turn require the flavor bounds $N_{2}\geq N_{3}, k_{2}\geq 0$ and $N_{3}\geq N_{4}, k_{3}\geq 0$, respectively.\footnote{More precisely, this should exclude the double saturation cases where $N_2=N_3$ and $k_2=0$ or $N_3=N_4$ and $k_3 =0$. We will not make note of such special cases henceforth since they will not be relevant for our purposes.} Each application of the master duality changes a boson link to a fermion link and vice versa, effectively driving the single fermion link down the quiver, see Fig. \ref{fig:linear quiver tw}. As with the duality relating theories A and B, the application of the master duality above changes the global $U(1)$ symmetry across the duality. Specifically, it changes the $U(1)$ global symmetry under which $Y_{2,3}$ was charged to a monopole-like symmetry which couples to the Abelian part of the gauge field on the second node. A completely analogous transformation occurs for the baryon number symmetry of $Y_{3,4}$. The details of how this occurs are shown in Appendix \ref{appendix:sym}.
Finally, to arrive at the desired dual theory we again apply Aharony's duality \eqref{eq:aharony dual tw} to the last node.
Flavor bounds require $k_{4}\geq0$. This ultimately gives
\begin{equation}
\text{Theory E:}\qquad U\left(k_{1}\right)_{N_{1}-N_{2}}\times U\left(k_{1}+k_{2}\right)_{N_{2}-N_{3}}\times U\left(k_{1}+k_{2}+k_{3}\right)_{N_{3}-N_{4}}\times U\left(k_{1}+k_{2}+k_{3}+k_{4}\right)_{N_{4}}.
\end{equation}
Note that the fourth node does not pick up a monopole-like global symmetry for its Abelian gauge field. This is related to the fact we have one more node than bifundamentals and is also a feature of the dualities in ref. \cite{Jensen:2017dso} and ABJM theory \cite{Aharony:2008ug}.
Following the mass identifications through the dualities, we see $M_{i,i+1}^2\leftrightarrow -m_{i,i+1}^2$. The uniform mass deformations of Theory E are
\begin{subequations}
\begin{align}
(\text{E1})\qquad m_{i,i+1}^{2}<0:\qquad & U\left(k_{1}\right)_{N_{1}}\times U\left(k_{2}\right)_{N_{2}}\times U\left(k_{3}\right)_{N_{3}}\times U\left(k_{4}\right)_{N_{4}}\\
(\text{E2})\qquad m_{i,i+1}^{2}>0:\qquad & U\left(k_{1}\right)_{N_{1}-N_{2}}\times U\left(k_{1}+k_{2}\right)_{N_{2}-N_{3}}\times U\left(k_{1}+k_{2}+k_{3}\right)_{N_{3}-N_{4}} \nonumber \\
& \times U(k_{1}+k_{2}+k_{3}+k_{4})_{N_{4}}
\end{align}
\end{subequations}
which, reassuringly, are level-rank dual to the phases considered above in \eqref{eq:su_deforms}. Here once again some care is required for the Higgs phase. Since we are assuming all $k_{i}\geq0$ in order to meet the flavor bounds above, the maximal Higgsing vacuum expectation value is, using block matrix notation,
\begin{equation}
\langle X_{i,i+1}^{a_{i}a_{i+1}}\rangle\propto\left(\begin{array}{cc}
\mathds{1}_{K_{i}} & 0\end{array}\right)^{a_{i}a_{i+1}},\label{eq:u vev tw}
\end{equation}
where $a_{i},a_{i+1}$ are gauge indices of $U(K_{i})$ and $U(K_{i+1})$, respectively, and we have defined the shorthand
\begin{equation}
K_{j}\equiv\sum_{i=1}^{j}k_{i}.\label{eq:big K def tw}
\end{equation}
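With this shorthand, the level-rank matching of the deformed phases is simple bookkeeping, which can be sanity-checked mechanically. The script below is purely illustrative and not part of the derivation (the values of $N_i$ and $k_i$ are arbitrary samples); it confirms factor by factor that the deformed phases of Theory E are the level-rank duals of those of Theory A under $SU(N)_{-k}\leftrightarrow U(k)_N$.

```python
# Illustrative check: the mass-deformed phases of Theory E are level-rank
# dual to those of Theory A, factor by factor, via SU(N)_{-k} <-> U(k)_N.
def K(k, j):
    """Partial sums K_j = k_1 + ... + k_j."""
    return sum(k[:j])

def level_rank_dual(su_factor):
    """Map an SU factor (rank N, level -k) to its U-side dual (rank k, level N)."""
    N, minus_k = su_factor
    return (-minus_k, N)

# Arbitrary sample values with N_1 >= N_2 >= N_3 >= N_4 and k_i >= 0:
N = [5, 4, 2, 1]
k = [3, 2, 1, 1]
n = len(N)

# Gapped SU phase (A1) versus Higgsed U phase (E1):
A1 = [(N[i], -k[i]) for i in range(n)]
E1 = [(k[i], N[i]) for i in range(n)]
assert [level_rank_dual(f) for f in A1] == E1

# Higgsed SU phase (A2) versus gapped U phase (E2):
A2 = [(N[i] - N[i + 1], -K(k, i + 1)) for i in range(n - 1)] + [(N[-1], -K(k, n))]
E2 = [(K(k, i + 1), N[i] - N[i + 1]) for i in range(n - 1)] + [(K(k, n), N[-1])]
assert [level_rank_dual(f) for f in A2] == E2
```

Each factor is stored as a (rank, level) pair, so the duality map simply swaps the entries up to a sign.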
Of course, as we apply all the aforementioned dualities the matter interaction terms change as well. We also end up with the interaction term \eqref{eq:u_ints} between adjacent bifundamental scalars for theory E. The interaction is such that a bifundamental scalar vacuum expectation value can now only affect nodes/links to its \emph{right}.\footnote{This is easiest to see when moving from Theory C to Theory D. In theory C, the $X_{1,2}$ bifundamental can influence the mass of the fermion on the $2,3$ link, but not vice versa. Hence, in Theory D the interaction between $X_{1,2}$ and $X_{2,3}$ should obey the same rule to get a matching of TFTs.} The analog of \eqref{eq:master matter int A tw}, which is a schematic stand-in for \eqref{eq:u_ints}, is
\begin{equation}
\text{Theory E:}\qquad C_{2}^{\left(E\right)}\left(X_{1,2}^{\dagger a_{2}a_{1}}X_{a_{2}a_{3}}^{2,3}\right)\left(X_{2,3}^{\dagger b_{2}a_{3}}X_{b_{2}a_{1}}^{1,2}\right),\label{eq:U theory ints tw}
\end{equation}
where now we ignore the fact that when $X_{2,3}$ acquires a vacuum expectation value it gives a mass to $X_{1,2}$.
\subsubsection*{Effect of Interactions}
We would now like to show that these interaction terms are vital for matching the mass-deformed TFTs. Although we found a matching for the completely gapped/Higgsed phases above, these were very special cases. In order to obtain the partial gapping/Higgsing behavior needed for the application to theta walls, we must treat the interactions carefully.
First it will be helpful to specialize to a particular sign and magnitude of the interaction term coefficients, $C_I^{(A)}$ and $C_I^{(E)}$ for $I=2,3$ (i.e. all internal nodes). Specifically, for the purposes of matching onto the phases of \eqref{eq:su_quiver_ym}, we take $C_{I}^{(A)}<0$ and $C_{I}^{(E)}<0$.
\footnote{We must choose the coefficients of the interactions to be the same sign for a matching of TFTs. To see this, first note that we use the master duality once on each internal node, and under the master duality the interaction term flips sign \cite{Benini:2017aed,Jensen:2017bjo}. An additional sign flip comes from the application of the master duality to the node to the left of an internal node, which flips the sign again when the fermions in the interaction are traded for bosons.} We also assume $|C_{I}^{(E)}|\to \infty$ such that $|C_{I}^{(E)}|\gg|m_{i,i+1}^{2}|$. Although not considered in \cite{Benini:2017aed,Jensen:2017bjo}, it is straightforward to check that a very large interaction term on one side of the master duality implies a very small interaction term on the other side of the duality.\footnote{To see this, let us specialize to the notation used in \cite{Jensen:2017bjo}. There it was shown that by changing the sign of $c_{4}$ and $c_{4}^{\prime}$, one could change the location of the ``singlet critical line''. Smoothly changing $c_{4}\to-c_{4}$ causes the line to move from phase IV to phase III. Meanwhile, changing $c_{4}^{\prime}\to-c_{4}^{\prime}$ causes it to move from phase IV' to phase I'. Thus, for example, we can shrink the size of regions IVb and IVb' by decreasing the magnitude of $c_{4}^{\prime}$ and increasing the magnitude of $c_{4}$. In the limit $c_{4}^{\prime}\to0$, we must take $\left|c_{4}\right|\to\infty$.} Hence, $C_{I}^{(A)}\to 0$ and the hierarchy $|M_{i,i+1}^{2}|\gg |C_{I}^{(A)}|$ holds for all mass deformations. In such a limit we can effectively ignore the interaction terms on the $SU$ side of the duality.
The choice of magnitudes above has the added effect of simplifying the analysis of the interaction terms and the TFT structure. It is possible to derive quiver theories for more general interaction coefficients and still find matching TFTs, but we leave such analysis for future work.
Let us now consider the effect of the interaction terms on the $U$ side of the duality. The two interaction terms of theory E are given by
\begin{equation}
C_{2}^{\left(E\right)}\left(X_{1,2}^{\dagger a_{2}a_{1}}X_{a_{2}a_{3}}^{2,3}\right)\left(X_{2,3}^{\dagger b_{2}a_{3}}X_{b_{2}a_{1}}^{1,2}\right)+C_{3}^{\left(E\right)}\left(X_{2,3}^{\dagger a_{3}a_{2}}X_{a_{3}a_{4}}^{3,4}\right)\left(X_{3,4}^{\dagger b_{3}a_{4}}X_{b_{3}a_{2}}^{2,3}\right).\label{eq:four node ints tw}
\end{equation}
Consider the case when $X_{1,2}$ acquires a nonzero vacuum expectation value as in \eqref{eq:u vev tw}. This vacuum expectation value breaks $U(K_2)$ down to $U(K_{2}-K_{1})$, which is the usual effect of Higgsing. Additionally, the first interaction term of \eqref{eq:four node ints tw} becomes
\begin{equation}
-\left(X_{1,2}^{\dagger a_{2}a_{1}}X_{a_{2}a_{3}}^{2,3}\right)\left(X_{2,3}^{\dagger b_{2}a_{3}}X_{b_{2}a_{1}}^{1,2}\right)\propto-\left(\begin{array}{cc}
\mathds{1}_{K_{1}} & 0\\
0 & 0
\end{array}\right)_{b_{2}}^{a_{2}}X_{2,3}^{\dagger b_{2}a_{3}}X_{a_{2}a_{3}}^{2,3}.\label{eq:x12 breaking tw}
\end{equation}
Hence the vacuum expectation value of $X_{1,2}$ shows up as a negative mass deformation for the first $K_{1}$ components of $X_{2,3}$ and thus \emph{also} breaks the $U(K_{3})$ to $U(K_{3}-K_{1})$. As such, except for the $X_{1,2}$ link, each bifundamental can acquire a mass deformation from two different sources: its explicit mass term as well as the interaction terms to its left. As an example, let us assume $m^2_{2,3}=m^2_{3,4}=0$ but $m^2_{1,2}< 0$. Then $X_{2,3}$ acquires a vacuum expectation value from \eqref{eq:x12 breaking tw}. The interaction term between $X_{2,3}$ and $X_{3,4}$ also means $X_{3,4}$ gets a negative mass shift,
\begin{equation}
-\left(X_{2,3}^{\dagger a_{3}a_{2}}X_{a_{3}a_{4}}^{3,4}\right)\left(X_{3,4}^{\dagger b_{3}a_{4}}X_{b_{3}a_{2}}^{2,3}\right)\propto-\left(\begin{array}{cc}
\mathds{1}_{K_{1}} & 0\\
0 & 0
\end{array}\right)_{b_{3}}^{a_{3}}X_{3,4}^{\dagger b_{3}a_{4}}X_{a_{3}a_{4}}^{3,4},\label{eq:x23 x34 int tw}
\end{equation}
breaking $U(K_{4})$ to $U(K_{4}-K_{1})$ and giving the first $K_{1}$ components an additional negative mass deformation. If there are no other mass term deformations, this effect cascades to the \emph{right} across the entire quiver.
Now let us consider how this changes if $X_{2,3}$ also has a mass deformation. A negative mass deformation would further break the $U(K_{3})$ subgroup down to $U(K_{3}-K_{2})$, and this could also propagate down the quiver, as we just discussed. Positive mass deformations are slightly more tricky, since we need to consider them in two regimes. First consider the case where the negative mass contribution from \eqref{eq:x12 breaking tw} is larger than that of the mass term for $X_{2,3}$. Then the results considered above are unchanged: the first $K_{1}$ components of $X_{2,3}$ still condense. When the mass term makes a larger positive contribution than that of \eqref{eq:x12 breaking tw}, $X_{2,3}$ is completely gapped. This means none of its components have a nonzero vacuum expectation value, and thus the interaction \eqref{eq:x23 x34 int tw} contributes no mass to $X_{3,4}$. In other words, if the mass term for $X_{2,3}$ is large enough, it can stop the propagation of $X_{1,2}$'s breaking down the quiver. We have avoided this case by assuming $\left|C_{I}^{\left(E\right)}\right|\gg\left|m_{i,i+1}^{2}\right|$, thereby forbidding large positive mass deformations from blocking the propagation of the breaking down the quiver.
What about when $X_{2,3}$ acquires a negative mass deformation but the $X_{1,2}$ and $X_{3,4}$ mass terms are untouched? By the same reasoning above, $X_{3,4}$ will also acquire a negative mass shift to its first $K_{2}$ components via the interaction terms, causing the breaking of $U(K_3)$ and $U(K_4)$ to $U(K_3-K_2)$ and $U(K_4-K_2)$, respectively. However, as we have been careful to argue in Appendix \ref{appendix:int}, $X_{2,3}$'s vacuum expectation value should not be able to influence nodes/links to its left. Hence $X_{1,2}$ is unaffected.
We are now in a position to consider mass deformations which lead to partial Higgsing. That is, not all bifundamental masses are taken to have the same sign. Specifically, consider the case where the bifundamentals $Y_{1,2}$ and $Y_{3,4}$ are Higgsed and $Y_{2,3}$ is gapped (corresponding via the mass identifications to $X_{1,2}$ and $X_{3,4}$ being gapped and $X_{2,3}$ being Higgsed). This yields the phases
\begin{subequations}
\begin{align}
(\text{A3})\quad:\quad & SU(N_{1}-N_{2})_{-k_{1}}\times SU(N_{2})_{-k_{1}-k_{2}}\times SU(N_{3}-N_{4})_{-k_{3}}\times SU(N_{4})_{-k_{3}-k_{4}}\label{eq: partial Higgs A tw}\\
(\text{E3})\quad:\quad & U(k_1)_{N_{1}-N_{2}}\times U(k_{1}+k_{2})_{N_{2}}\times U(k_3)_{N_{3}-N_{4}}\times U(k_{3}+k_{4})_{N_{4}}\label{eq:partial Higgs B tw}
\end{align}
\end{subequations}
which are level-rank dual to one another! Note that the interactions are vital for us to reach this conclusion. We have used the fact that $X_{2,3}$ acquiring a vacuum expectation value breaks the $U(K_2)$ subgroup of $U(K_3)\to U(k_3)_{N_{3}-N_{4}}\times U(K_2)_{N_{3}-N_{4}}$ and also, due to the interaction term, $U(K_4)_{N_{4}}\to U(k_3+k_4)_{N_{4}}\times U(K_2)_{N_{4}}$. Without such terms we would have found the $U(K_{4})_{N_{4}}$ group unbroken, yielding the TFT
\begin{equation}
(\text{E}\bar{\text{3}})\qquad:\qquad U(k_1)_{N_{1}-N_{2}}\times U(k_{1}+k_2)_{N_{2}-N_{4}}\times U(k_3)_{N_{3}-N_{4}}\times U(K_4)_{N_{4}}
\end{equation}
which is clearly not level-rank dual to \eqref{eq: partial Higgs A tw}.
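This partial-Higgsing check is again easy to mechanize. The snippet below is purely illustrative (the sample values of $N_i$ and $k_i$ are arbitrary, as before) and confirms that (A3) and (E3) match factor by factor under $SU(N)_{-k}\leftrightarrow U(k)_N$, while the naive phase (E$\bar{3}$), obtained without the interaction terms, does not.

```python
# Illustrative check of the partially Higgsed phases with sample values:
N1, N2, N3, N4 = 5, 4, 2, 1
k1, k2, k3, k4 = 3, 2, 1, 1

# Factors written as (rank, level):
A3 = [(N1 - N2, -k1), (N2, -k1 - k2), (N3 - N4, -k3), (N4, -k3 - k4)]
E3 = [(k1, N1 - N2), (k1 + k2, N2), (k3, N3 - N4), (k3 + k4, N4)]
# Naive U-side phase obtained by dropping the interaction terms:
E3_bar = [(k1, N1 - N2), (k1 + k2, N2 - N4), (k3, N3 - N4),
          (k1 + k2 + k3 + k4, N4)]

dual = lambda factor: (-factor[1], factor[0])  # SU(N)_{-k} -> U(k)_N
assert [dual(f) for f in A3] == E3       # level-rank duals, as claimed
assert [dual(f) for f in A3] != E3_bar   # the interaction terms are essential
```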
\subsubsection*{Generalization to $n$ Nodes}
Now let us generalize this prescription to an arbitrary number of nodes. For $n$ nodes, there is a duality between the following two theories:
\begin{subequations}
\label{eq:general_n}
\begin{align}
\text{Theory A:}\qquad & SU(N_{1})_{-k_{1}}\times\prod_{i=2}^{n}\left[SU(N_{i})_{-k_{i}}\times\text{bifundamental }Y_{i-1,i}\right]\\
\text{Theory B:}\qquad & \prod_{i=1}^{n-1}\left[U(K_{i})_{N_{i}-N_{i+1}}\times\text{bifundamental }X_{i,i+1}\right]\times U(K_{n})_{N_{n}}
\end{align}
\end{subequations}
where flavor bounds require $k_i\geq 0$ and $N_1 \geq N_2 \geq \ldots \geq N_n$. As with the above case, these theories can be shown to be dual by systematically applying Aharony's duality \eqref{eq:u ferm tw} to the first node, the master duality to every two-link node, and then Aharony's other duality \eqref{eq:u scalar tw} to the last node. The master duality is not needed for the two-node case.
Implied above are interaction terms on the $U$ side of the duality of the form of \eqref{eq:u_ints}. Equivalently, we can use the schematic interaction
\begin{align}
\mathcal{L}^{(B)}\supset & \sum_{I=2}^{n-1}C_{I}^{(B)}\left(X_{I-1,I}^{\dagger a_{I}a_{I-1}}X_{a_{I}a_{I+1}}^{I,I+1}\right)\left(X_{I,I+1}^{\dagger b_{I}a_{I+1}}X_{b_{I}a_{I-1}}^{I-1,I}\right)
\end{align}
with the understanding that such interactions only allow a condensate to give mass terms to the link on its right. Here $a_{i}$, $b_{i}$ are the gauge indices of the $i$th node, and we take $C_{I}^{(B)} \to -\infty$ so that $|C_{I}^{(B)}|\gg |m^2_{i,i+1}|$. In this limit, the interaction terms on the $SU$ side of the duality are very small and have no effect on the mass-deformed phases, so we ignore them.
To summarize the interaction behavior on the $U$ side: a bifundamental scalar $X_{j,j+1}$ acquiring a nonzero vacuum expectation value affects nodes/links to the right but \emph{not }to the left. Namely, it causes all bifundamental scalars (i.e. $X_{i,i+1}$ with $i>j$) to acquire a similar vacuum expectation value for their first $K_{j}$ components. This in turn causes a breaking of all gauge groups on nodes $i>j$ to $U(K_{i}-K_{j})$. Note this effect can compound, so if $X_{j,j+1}$ and $X_{\ell,\ell+1}$ acquire vacuum expectation values from their respective mass deformations, the gauge group on node $i>j>\ell$ undergoes the breaking $U(K_i)\to U(K_i- K_j) \times U(K_j-K_\ell) \times U(K_\ell)$.
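The cascade just described admits a compact algorithmic summary: each condensed link $j$ introduces a cut at rank $K_j$ in every node to its right. The sketch below is our illustration, not part of the derivation (node and link labels are 1-indexed, and the sample $K_j$ values are arbitrary); it reproduces, e.g., the splittings used in phase (E3), where the single condensed link $X_{2,3}$ splits node 4 as $U(K_4)\to U(K_4-K_2)\times U(K_2)$.

```python
# Illustrative sketch of the rightward Higgsing cascade on the U side in the
# large-|C| limit. condensed_links holds the labels j of condensed X_{j,j+1};
# a condensed link j splits U(K_i) at K_j for every node i > j, and multiple
# condensed links compound: U(K_i) -> U(K_i - K_j) x U(K_j - K_l) x U(K_l).
def broken_ranks(K_list, condensed_links):
    """Return {node i: ranks of its unbroken factors, leading factor first}."""
    out = {}
    for i, Ki in enumerate(K_list, start=1):
        cuts = sorted({K_list[j - 1] for j in condensed_links if j < i})
        pieces, prev = [], 0
        for cut in cuts:
            pieces.append(cut - prev)
            prev = cut
        pieces.append(Ki - prev)
        out[i] = pieces[::-1]
    return out

# Sample K_j = (3, 5, 6, 7), i.e. k = (3, 2, 1, 1), with X_{2,3} condensed:
assert broken_ranks([3, 5, 6, 7], {2}) == {1: [3], 2: [5], 3: [1, 5], 4: [2, 5]}
```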
As mentioned earlier, other dualities which flow to other TFTs can be constructed by changing the sign/magnitude of the interaction terms, but such considerations are left for future work.
\subsection{Self-Consistency Checks}\label{sec:checks}
Returning to the bosonic particle-vortex duality, it should now be clear the derivation we outlined in Sec. \ref{sec:boson_pv} is the two-node case of the more general non-Abelian linear quivers with values
\begin{align}
N_1=N_2=k_1 & = 1,\qquad k_2 = 0,
\end{align}
which is shown in Fig. \ref{fig:pv_dual}. Note this saturates all flavor bounds and carries the minimal parameter values without being completely trivial, so the particle-vortex duality can be thought of as the simplest case of an infinite class of $2+1$-dimensional Bose-Bose dualities. Additionally, it is clear that the derivation of the two-node quiver requires no master duality, since there are no nodes connected to two links.
Another helpful tool in analyzing the more general non-Abelian quiver dualities as well as comparing them to the holographic dualities in Sec. \ref{sec:jensen} will be comparing the spectrum of the two theories. To this end, let us briefly review how the spectra of the particle-vortex duality match on either side of the duality.
First consider the case when $\Phi$ acquires a vacuum expectation value in \eqref{eq:pvd_ah} through a negative mass deformation. It is well known that the breaking of the $U(1)$ gauge symmetry gives rise to vortex solutions of finite mass charged under $B_1$. Since there is no dynamical Chern-Simons term on this end, there is no funny business with flux attachment or alternative vortex solutions. These vortices carry flux under the broken $U(1)$ gauge group, which can be seen by looking at the asymptotic behavior of the gauge field.
Now consider the Abelian-Higgs model but instead in the form which is more amenable to matching onto the non-Abelian quivers (i.e. \eqref{eq:pvd_ah_alt}). In this form we have two $U(1)$ gauge fields, one of which is redundant and can be integrated out. When $\langle\Phi\rangle \sim v$, it forces the breaking of $U(1) \times U(1)\to U(1)_{A}$, where $U(1)_{A}$ is the subgroup where the two $U(1)$ transformations act oppositely on $\Phi$, leaving it invariant. Again, the breaking of a $U(1)$ symmetry ensures that there are vortex solutions which are charged under the flux of the broken symmetry. In this case, it corresponds to a nonzero winding of both $a_1$ and $c$ at spatial infinity, since the broken $U(1)$ group is where they are set equal to one another (i.e. $U(1)_\text{diag}$). For the vortex solution where $a_1=c$ energy contributions from the Chern-Simons terms drop out, as they should since they weren't present in \eqref{eq:pvd_ah}. Since there is nonzero $a_1$ flux the vortex is charged under the background $B_1$ field. Also note that the vortex has finite mass proportional to the vacuum expectation value of the scalar. As expected, we reach the same conclusions when working from \eqref{eq:pvd_ah}, albeit in a slightly more complicated manner.
Due to the mass identification, the phase where $\Phi$ has a vacuum expectation value should be identified with the phase where $\phi$ is simply gapped. The $U(1)$ global symmetry is unbroken and $\phi$ excitations of \eqref{eq:pvd xy} are charged under the $B_1$ field, which are identified with the vortices on the opposite side of the duality.
Meanwhile, for mass deformations where $\phi$ obtains a vacuum expectation value and $\Phi$ is gapped, the $U(1)$ global symmetry on both sides of the duality is broken. This is straightforward to see on the $\phi$ side of the duality and is made clear on the $\Phi$ side by rewriting the photon using the Abelian duality, $F_{\mu\nu}\sim\epsilon_{\mu\nu\rho}\partial^\rho \sigma$. Since the $U(1)$ global symmetry is broken, we expect Goldstone bosons on either side of the duality. For the $\phi$ field, we have massless angular excitations. In this phase the photon remains gapless and it is identified with the Goldstone boson of the $\Phi$ side.
We claim the above results completely generalize to the $n$-node quiver case. Across the duality we have established the fact that the global $U(1)$ symmetry of the $X_{i,i+1}$ bifundamental is identified with the monopole number symmetry of the $i$th node. We begin with the side of the duality where a $U(1)$ global symmetry is unbroken, which corresponds to a positive mass deformation on the $SU$ side and a negative mass deformation on the $U$ side. We will focus on the behavior of a single bifundamental since generalization is straightforward.
When we gap the $Y_{i,i+1}$ bifundamental on the $SU$ side, on the $U$ side this should cause the $X_{i,i+1}$ bifundamental to acquire a vacuum expectation value as a result of the $M_{i,i+1}^2\leftrightarrow-m_{i,i+1}^2$ mass mapping.
Let us take a closer look at the symmetry-breaking potential to account for degrees of freedom. Schematically, it can be written in the form
\begin{align}
V(X_{i,i+1})\sim{\rm Tr\,}\left[\left(X_{i,i+1}\right)_{a_{i}a_{i+1}}\left(X_{i,i+1}^{\dagger}\right)^{b_{i}a_{i+1}}-v^{2}\delta_{a_{i}}^{b_{i}}\right]^2.
\end{align}
Using the gauge freedom we can take $\langle X_{i,i+1}\rangle$ to be of the form of \eqref{eq:u vev tw}. This causes the breaking
\begin{align}
U(K_i)\times U(K_{i+1})\to U(K_i)_{\text{diag}}\times U(k_{i+1}),
\end{align}
corresponding to an overall broken $U(K_i)$ gauge symmetry. Each $X_{i,i+1}$ field has $2K_i K_{i+1}$ total degrees of freedom. Within the $U(K_{i})$ subspace, there are $K_i^2$ flat directions corresponding to ``angular'' excitations, which are consumed by the broken $U(K_i)$ gauge fields to become (two-component) massive ``W-bosons''. The remaining $K_i^2$ scalar degrees of freedom represent ``modulus'' excitations in directions of the potential which are not flat and are thus analogous to Higgs bosons. Additionally, these modes are now adjoint particles since they are charged under $U(K_i)_{\text{diag}}$. This leaves $2K_i k_{i+1}$ degrees of freedom, which acquire a mass through the double trace-like interaction term that is present for Wilson-Fisher scalars. Hence all bifundamental scalar particles are gapped, as they should be.
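The degree-of-freedom counting above is summarized by the identity $2K_iK_{i+1}=K_i^2+K_i^2+2K_ik_{i+1}$. The following sketch is a purely illustrative check of this bookkeeping (the sample values are arbitrary):

```python
# Illustrative bookkeeping: the 2 K_i K_{i+1} real components of X_{i,i+1}
# split into K_i^2 eaten Goldstone modes, K_i^2 Higgs-like moduli, and
# 2 K_i k_{i+1} components gapped by the double-trace interaction.
def dof_split(K_i, K_ip1):
    eaten = K_i ** 2                   # absorbed by the broken U(K_i) gauge fields
    higgs = K_i ** 2                   # massive "modulus" excitations
    gapped = 2 * K_i * (K_ip1 - K_i)   # 2 K_i k_{i+1}, using k_{i+1} = K_{i+1} - K_i
    assert eaten + higgs + gapped == 2 * K_i * K_ip1
    return eaten, higgs, gapped

assert dof_split(3, 5) == (9, 9, 12)
```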
As with the particle-vortex duality, we would like to show that vortices on the $U$ side should be identified with the gapped particles on the $SU$ side of the duality. The gapped $Y_{i,i+1}$ particles are charged under the unbroken $U(1)$ symmetry and carry baryon number. Meanwhile, when the $X_{i,i+1}$ particles acquire a vacuum expectation value, the breaking of the corresponding $U(1)$ subgroup means vortices associated to that link now have finite mass and are topologically stable since
\footnote{One might worry that we may be able to form other vortex solutions by winding the other broken subsets, say $U(1)_A\subset U(K_{i+1})\times U(K_{i+2})/U(K_i)_{\text{diag}}$. Note however that the interactions force the vacuum expectation value of the associated $U(K_i)$ subgroup of the $X_{i+1,i+2}$ bifundamental to be effectively infinite. This is distinct from $\langle X_{i,i+1}\rangle$ which is presumed to be proportional to the mass deformation and finite. Thus such vortices are significantly heavier than the vortex formed from a winding of the $X_{i,i+1}$ bifundamental and its corresponding gauge groups. Note for the $X_{i,i+1}$ vortex, no matter the mass deformations of bifundamentals to its left, its $U(k_i)$ subgroup will always have a finite vacuum expectation value, and thus the topologically stable vortex solutions can always have finite energy via a winding of this corresponding subgroup.}
\begin{align}
\pi_1\left(U(K_i)\times U(K_{i+1})/U(K_i)_{\text{diag}}\right)\simeq\mathbb{Z}.
\end{align}
Specifically, the vortex configurations correspond to a winding of the broken $U(1)_A\subset U(K_i)\times U(K_{i+1})/U(K_i)_{\text{diag}}$ gauge group as well as the phase of $\langle X_{i,i+1}\rangle$ at spatial infinity. Since the broken subgroup $U(1)_A$ contains the $i$th node's $U(1)$ factor, through the BF term of that node it couples to the $\tilde{B}_2$ field.\footnote{Since the $X_{i,i+1}$ vortices contain winding under $U(1)_A$, which is a subgroup of the $U(1)\times U(1)$ gauge symmetries of the $i$th and $(i+1)$th nodes, one might worry that such a vortex also carries flux under the $(i+1)$th gauge group and is thus charged under the $U(1)$ symmetry of the $(i+1)$th node. However, as explained in Appendix \ref{appendix:sym}, the BF coupling is such that nodes to the right of a bifundamental are only coupled via the unbroken gauge group. Hence, although the vortices carry $U(1)$ flux of the $(i+1)$th node, they are only charged under the global symmetry of the $i$th node.} This is the same symmetry the gapped $Y_{i,i+1}$ couple to, and thus the two modes should be identified in a manner analogous to what we saw for the particle-vortex duality.
Unlike the particle-vortex duality, the presence of nonzero Chern-Simons terms in the mass deformed phases means that variation with respect to the dynamical gauge fields imposes a flux attachment condition on the excitations. That is, particles charged under the respective symmetry must be attached to the vortex excitations. This might be modified slightly due to the breaking of the $U$ gauge group, since the broken gauge degrees of freedom will become massive, giving an extra term when varying with respect to the corresponding massive gauge degrees of freedom. We leave such analysis for future work.
\section{Theta Wall Dualities}
\label{sec:thetawalls}
In this section we consider duals to the Chern-Simons theories found on defects in $3+1$-dimensional $SU(N)$ Yang-Mills theory when the $\theta$ angle varies as a function of location. Specifically, we look for a dual to \eqref{eq:su_quiver_ym}.
We begin by reviewing a few essential facts about the expected theta dependence in pure $SU(N)$ gauge theories. Such gauge theories are believed to have multiple vacua related to the physics of the theta angle. In any given vacuum, physical quantities are not periodic in theta with period $2 \pi$ but instead with period $2 \pi N$. The physical properties of the system as a whole are nevertheless $2\pi$ periodic: as we change theta by a single $2 \pi$ period, the true vacuum of the system changes, and the physics in the new vacuum at $\theta= \theta_0 + 2 \pi$ is the same as the physics in the original vacuum at $\theta = \theta_0$.
This picture can be most rigorously established at large $N$. In this limit the vacuum energy as a function of $\theta$ is expected to scale as \cite{Witten:1980sp,Witten:1998uka}
\beq \label{single} E(\theta) = N^2 h(\theta/N) \eeq
for some function $h$ that remains to be determined. This appears to be inconsistent with the periodicity requirement
\beq E(\theta) = E(\theta + 2 \pi). \eeq
As claimed above, a single vacuum with energy of the form \eqref{single} is expected to be $2 \pi N$ periodic, not $2 \pi$ periodic. This conundrum can easily be solved by postulating that the theory has a family of $N$ vacua labeled by an integer $K$. In this case the vacuum energy in the $K$th vacuum is given by
\beq
\label{kvac}
E_{K}(\theta) = N^2 h((\theta + 2 \pi K)/N).
\eeq
Most of these vacua are meta-stable; the truly stable vacuum for any given $\theta$ is found by minimizing over $K$:
\beq
E(\theta) = N^2 \min_{K} h((\theta + 2 \pi K)/N).
\eeq
The resulting function $E(\theta)$ has the expected $2 \pi$ periodicity. While the energy of (say) the 0-th vacuum keeps increasing as we increase theta from 0 towards $2 \pi$, the energy of the $K=-1$ vacuum at $\theta=2 \pi$ is exactly the same as the energy of the $0$-th vacuum was at $\theta=0$. One expects that a transition from the $0$-th to the $(-1)$-th vacuum is triggered at $\theta=\pi$. While physics in any given vacuum is $2 \pi N$ periodic, the system as a whole, in its true vacuum, is $2 \pi$ periodic.
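This branch structure is easy to verify numerically. Below is a minimal sketch in which we simply model the unknown function by a quadratic, $h(x)=x^2/2$; this choice, and the parameter values, are our assumptions, purely for illustration:

```python
import math

def branch_energy(theta, K, N):
    """Energy of the K-th branch, E_K(theta) = N^2 h((theta + 2 pi K)/N),
    with the illustrative model h(x) = x**2 / 2."""
    x = (theta + 2 * math.pi * K) / N
    return N**2 * 0.5 * x**2

def vacuum_energy(theta, N, Ks=range(-5, 6)):
    """True vacuum energy: minimize the branch energy over the label K."""
    return min(branch_energy(theta, K, N) for K in Ks)

def vacuum_label(theta, N, Ks=range(-5, 6)):
    """Label K of the true vacuum at a given theta."""
    return min(Ks, key=lambda K: branch_energy(theta, K, N))
```

One finds $E(\theta+2\pi)=E(\theta)$ to machine precision, with the minimizing label dropping from $K=0$ to $K=-1$ as $\theta$ passes through $\pi$.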
We are now in a position to discuss the physics of theta interfaces and domain walls. Let us first turn to the case of interfaces. Starting with a confining gauge theory (pure Yang-Mills in this case), one can introduce interfaces across which the theta angle changes by an integer multiple of $2\pi$,
\beq
\Delta \theta = 2 \pi n.
\eeq
The theory is assumed to be everywhere in the true ground state. This means, in particular, that the index labeling the local vacuum state changes by $-n$ units as the theta angle changes by $2 \pi n$. Since the theta angle is a parameter in the Lagrangian, translation invariance is explicitly broken in this theory and we do not expect any Goldstone bosons corresponding to fluctuations of the position of the interface.
As we explained in the introduction, a spatially varying theta gives rise to domain walls on which Chern-Simons theories live. However, anomaly inflow does not constrain the exact Chern-Simons theory. Ref. \cite{Seiberg:2016gmd} has argued that for $|\nabla\theta|\ll \Lambda$ and $|\nabla\theta|\gg \Lambda$, one should expect the TFTs $[SU(N)_{-1}]^n$ and $SU(N)_{-n}$, respectively. Assuming a smooth transition, \eqref{eq:su_quiver_ym} was proposed as a possible CFT to describe the transition between these two extreme cases.
The generic phase of \eqref{eq:su_quiver_ym} is characterized by a partition $\{ n_i \} $ of $n$, that is integers $n_i$ with the property that $\sum_i n_i =n$. Each $n_i$ denotes the number of gauge group factors along the quiver that have been Higgsed down to their diagonal subgroup before we encounter a positive mass squared scalar. For example, $n_1=n$ corresponds to the completely Higgsed $SU(N)_{-n}$ phase associated with the steep wall, while $n_i=1$ for $i=1,\ldots,n$ corresponds to the shallow wall with $[SU(N)_{-1}]^n$. The generic phase is given by a TFT based on
\beq
\label{topphase}
\mbox{Phase}\;{ \{ n_i \} }: \quad \quad \prod_{i} SU(N)_{- n_i}.
\eeq
One extra subtlety that arises concerns global symmetries. The scalar fields are bifundamentals under neighboring $SU(N)_{-1}$ gauge group factors. This leaves an overall phase rotation of every single scalar as a global symmetry, for a combined $U(1)^{n-1}$ extra global symmetry from the $n-1$ scalar fields. If these were indeed global symmetries of the parent theory, this would lead to unexpected consequences. Most notably, in the fully broken phase the low energy theory on the interface would not just be the topological $SU(N)_{-n}$ Chern-Simons theory we expect, but would in addition contain $n-1$ massless Goldstone bosons as these extra global symmetries are spontaneously broken in the condensed phase. The proposal of \cite{Gaiotto:2017tne} is to add extra terms to the action that break these extra global symmetries so that there are no Goldstone bosons. The simplest option to do so is a $\det(Y)$ term for each link\footnote{Of course any power of $\det(Y)$ would do the job in that it is gauge invariant but charged under the global symmetry. For small values of $N$ we need to make use of this freedom. For example, for $N=1$, $\det(Y)=Y$, and we would simply add linear potentials, whereas for $N=2$ we would be adding mass terms. Instead we should add $\det(Y)^4$ and $\det(Y)^2$ respectively in those two cases.}, which is indeed gauge invariant under all $SU(N)_{-1}$ gauge group factors but is charged under overall phase rotations of $Y$. The quiver gauge theories we discussed in the last section do not have these determinant terms added to the potential. The dualities we derive will most naturally apply to the theory without the determinant terms. To connect to the theory of the theta interfaces we will have to add the extra determinant term as a deformation.
In addition to interfaces a second type of co-dimension one defect we can discuss are domain walls. These are already present in a theory with constant theta. They govern the decay of one of the meta-stable vacua of the theory to the true vacuum. In the idealized case, we can consider the theory in a state where we interpolate between two meta-stable vacua as we move along a single direction, which we once more choose to be the $x_3$ direction. For simplicity we are only interested in configurations which preserve $2+1$-dimensional Lorentz invariance, that is we focus on flat domain walls. If the theory starts in the $0$-th vacuum as $x_3 \rightarrow \infty$, we can interpolate to the $n$-th vacuum at $x_3 \rightarrow -\infty$. While the state of the system at large negative $x_3$ is not in the true local ground state, this configuration is meta-stable. The false vacuum has to decay via bubble formation, which is governed by the domain wall tension. It has been argued \cite{Witten:1998uka} that the tension of the wall is of order $N$, a fact that is obvious in the holographic realization of these walls which we will turn to in Sec. \ref{sec:thetawall_holo}. At large $N$ this means that decay of the meta-stable vacuum is $e^{-N}$ suppressed. In addition the difference in vacuum energies will exert a pressure on the domain wall, generically causing the wall to move. But since the pressure difference is order $N^0$, whereas the domain wall tension is order $N$, the domain wall can be treated as static in the large $N$ limit.
As far as the anomalies are concerned, the analysis of \cite{Gaiotto:2017tne} generalizes to the case of walls: the gauge theory on the defect should be the same whether we are forced to jump $n$ vacua because of a $2 \pi n$ jump in $\theta$, or whether we study a dynamical wall that interpolates between two $n$-separated vacua in a theory with fixed $\theta$. The main difference appears to be that this time the wall is dynamical with a finite tension. Most notably, this implies that we should have (at large $N$) a massless scalar living on the wall whose expectation value gives the location of the wall. Being the Goldstone boson of broken translations, the scalar has an exact shift symmetry that protects it from becoming massive. While interfaces were characterized by a free function $\theta(x_3)$ and were only loosely classified into shallow and steep, for walls it is much easier to characterize the moduli space of allowed configurations. We have a total of $n$ discrete jumps from one vacuum to the next. When the walls are widely separated, we should have $n$ separate walls connecting two neighboring vacua each. In this limit, we should have a total of $n$ translational modes as the different basic walls can presumably move independently. The gauge theory living on these widely separated walls should be $[SU(N)_{-1}]^n$ as above together with these decoupled light translational modes. This is indeed what follows from the analysis of Acharya and Vafa in the closely related case of ${\cal N}=1$ supersymmetric gauge theories \cite{Acharya:2001dz} (see also \cite{Dierigl:2014xta}). The other extreme is when all $n$ walls coincide and we have a single wall across which we jump by $n$ vacua, presumably governed by a single $SU(N)_{-n}$ gauge theory and a single translational mode.
To summarize, note that we are still characterizing the phases by partitions $\{ n_i \}$ of $n$ and the gauge theory on the wall is once again governed by the topological field theory \eqref{topphase}. In addition we have the decoupled translational modes. At finite $N$ the walls no longer correspond to static configurations as they will be pushed around by the pressure differences, making them generically much harder to study than the case of interfaces. The reason we discuss them at all is that, at large $N$, they have a very simple holographic realization which we will employ in what follows to check our dualities.
\subsection{Theta Wall Dualities via 3d Bosonization}
\label{sec:thetawall_3d}
We now consider possible duals to \eqref{eq:su_quiver_ym} via 3d bosonization. Fortunately, such a theory can easily be constructed from the non-Abelian linear quiver dualities.\footnote{As touched upon earlier, if one tried to derive such a quiver using only Aharony's dualities, one would inevitably run into violations of the flavor bound. Thus it appears that the interactions between links, which come from the master duality, are a necessity.} Consider the $n$ node linear quiver, \eqref{eq:general_n}, and take
\begin{subequations}
\label{eq:thetawall_values}
\begin{align}
1=k_{1} & =k_{2}=\cdots=k_{n}\\
N=N_{1} & =N_{2}=\cdots=N_{n}.
\end{align}
\end{subequations}
This satisfies all flavor bounds of the derivation given above since $k_{i}\geq0$ and $N=N_{j}\geq N_{j+1}=N$. In this case the dual quiver theories become
\begin{subequations}
\begin{align}
\text{Theory A:}\qquad & \left[SU(N)_{-1}\right]^{n}\times\prod_{p=1}^{n-1}\text{bifundamental }Y_{p,p+1}\\
\text{Theory B:}\qquad & \prod_{p=1}^{n-1}\left[U(p)_0\times\text{bifundamental }X_{p,p+1}\right]\times U\left(n\right)_{N}
\end{align}
\end{subequations}
and the relevant mass deformations for all bifundamentals taken positive/negative are given by
\begin{subequations}
\begin{align}
(\text{A1})\qquad M_{i,i+1}^{2}>0:\qquad & \left[SU(N)_{-1}\right]^{n}\\
(\text{A2})\qquad M_{i,i+1}^{2}<0:\qquad & SU(N)_{-n}\times\prod_{p=1}^{n-1}\left[SU(0)_{-p}\right]\\
(\text{B1})\qquad m_{i,i+1}^{2}<0:\qquad & \left[U(1)_{N}\right]^{n}\\
(\text{B2})\qquad m_{i,i+1}^{2}>0:\qquad & U(n)_N\times\prod_{p=1}^{n-1}\left[U(p)_0\right].
\end{align}
\end{subequations}
The topological sector of Theory A matches the TFTs we set out to find at the outset, $SU(N)_{-n}$ and $[SU(N)_{-1}]^n$. In addition both sides have decoupled massless modes that also match. We assume that the non-Abelian part of $U(p)_0$ confines at low energies and is therefore gapped. This implies that the dynamics of the confining gauge group should have no effect on the physics at scales well below the gap. The $U(1)$ part, however, gives rise to a light photon for every level $0$ unitary gauge group. On the $SU$ side these light photons map to Goldstone bosons. In a theory with $N_s < N$ scalars charged under an $SU(N)$ gauge symmetry, a full global $U(N_s)$ flavor symmetry is unbroken, as the gauge group is broken to $SU(N-N_s)$ by a scalar vacuum expectation value. The broken gauge generators can be used to compensate any flavor rotation. In the special case of $N_s=N$, which is of interest to us here, the $U(1)$ part of the flavor symmetry is, however, broken, and so we get a corresponding Goldstone boson. In order to keep track of these light scalars we denote the Goldstone bosons as $SU(0)_{-p}$ theories, which continue to be ``level-rank'' dual to the $U(p)_0$ factors of Theory $(\text{B2})$; either theory denotes a decoupled light scalar mode. Including these factors we see that there is a perfect matching of both the topological sectors and the decoupled light modes.
One should note that these extra massless Goldstone bosons are exactly the ones that in the theory of theta interfaces have been eliminated by the $\det(Y)$ potentials. As it stands, our quiver duality applies to \eqref{eq:su_quiver_ym} without these extra determinant terms. Since the global $U(1)$ baryon number symmetries under which $\det(Y)$ is charged map to monopole symmetries on the $U$ side, the corresponding dual operator is a monopole operator. Adding this monopole operator to the theory should lead to confinement of the $U(1)_0$ factors together with their non-Abelian counterparts and hence remove the massless photons associated to these factors from the spectrum, just as we removed their dual Goldstone bosons on the $SU$ side.
Given the fact that most of the gauge group factors on the $U$ side confine, we can further simplify the low energy description of this side of the duality. The confining groups cause the bifundamental matter and antimatter to form ``mesons''. If the matter/antimatter is still charged under some gauge group with nonzero Chern-Simons level, the meson transforms as an adjoint under said gauge group as conjectured in \eqref{eq:u_quiver_ym}.
For phase $(\text{B2})$, there are the adjoints formed from $X_{n-1,n}^{\dagger}X_{n-1,n}$ since the $\left(n-1\right)$th node confines. It is difficult to say whether bound states such as $X_{n-1,n}^{\dagger}X_{n-2,n}^{\dagger}X_{n-2,n}X_{n-1,n}$, which are also adjoints under the $U(n)_N$ gauge group, would be stable or would split into the separate particles $X_{n-2,n}^{\dagger}X_{n-2,n}$ and $X_{n-1,n}^{\dagger}X_{n-1,n}$. If we assume the latter, there is only a single light adjoint scalar charged under the $U(n)$ considered above.\footnote{Note that when the bifundamental scalars on the $SU$ side are given a negative mass squared and acquire a vacuum expectation value, by assumption this breaks their gauge symmetry down to the common diagonal symmetry group and causes the Higgsed bifundamentals to become adjoint particles. Thus we get gapped adjoint particles on both sides of the duality for phase 2 considered above.} We would also want to conjecture that there are no additional neutral mesons that become light together with the adjoint; such extra light matter is not accounted for on the $SU$ side of the duality. With these dynamical assumptions our quiver duality boils down to the one we advertised in the introduction
\begin{align}
\label{eq:u_quiver_ym_reloaded}
[SU(N)_{-1}]^n + \text{bifundamental scalars} \qquad\leftrightarrow\qquad U(n)_{N} + \text{adjoint scalars}
\end{align}
with a $\det(Y)$ potential for all the bifundamental scalars on the $SU$ side implied.
Unlike the quiver dualities, which we derived from gauging global symmetries, the duality \eqref{eq:u_quiver_ym} only follows upon making extra dynamical assumptions regarding the confining mechanism. We can give extra evidence for this duality by, once again, looking at the phase structure. On the $U$ side the various massive phases are realized by adding mass squared terms that give expectation values to the adjoint scalar (or remove it completely) together with ${\rm Tr\,} X^k$ terms in the potential. We can always choose a gauge in which the scalar expectation value is diagonal, so the generic expectation value is characterized by the $n$ eigenvalues of the scalar expectation value. Due to the presence of the interaction terms, it is possible to have none of the eigenvalues coincide, in which case the gauge group is $[U(1)_N]^n$. But whenever two or more eigenvalues coincide, we do get an enhanced unbroken subgroup. Once again the most general phase is encoded in partitions $\{ n_i \}$ of $n$, where each integer $n_i$ denotes the multiplicity of a given eigenvalue. The generic phase is given by $n_i=1$ for all $i$, whereas the case of $n$ coincident eigenvalues with a single $U(n)_N$ gauge group factor corresponds to $n_1=n$. The generic partition corresponds to
\beq
\label{bulktopphase}
\mbox{Phase}\;{ \{ n_i \} }: \quad \quad \prod_{i} U(n_i)_{N}.
\eeq
Reassuringly, this is exactly the level-rank dual gauge group of what we found for the quiver theory, \eqref{topphase}. In the next section we will give further support for the validity of this duality, at least in the large $N$ limit, using holography.
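The phase-by-phase matching can be spelled out as simple bookkeeping: level-rank duality $SU(N)_{-k}\leftrightarrow U(k)_N$ applied factor by factor maps \eqref{topphase} onto \eqref{bulktopphase}. A sketch (the tuple encoding of gauge group factors is purely illustrative):

```python
def su_phase(partition, N):
    """SU-side TFT for the phase {n_i}: the factors SU(N)_{-n_i}."""
    return sorted(("SU", N, -ni) for ni in partition)

def level_rank_dual(factor):
    """Level-rank map applied to a single factor, SU(N)_{-k} -> U(k)_N."""
    group, rank, level = factor
    assert group == "SU" and level < 0
    return ("U", -level, rank)

def u_phase(partition, N):
    """U-side TFT for the same phase: the factors U(n_i)_N."""
    return sorted(("U", ni, N) for ni in partition)
```

For every partition the factor-wise duals of the $SU$-side phase reproduce the $U$-side phase.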
\subsection{Theta Wall Dualities via Holography}
\label{sec:thetawall_holo}
\label{sec:jensen}
Now we turn to the holographic proof of the duality. Our work will follow closely the stringy embedding of bosonization presented in \cite{Jensen:2017xbs} based on the earlier string theory realization of level-rank duality in \cite{Fujita:2009kw}. In this construction the holographic duality between field theory and supergravity becomes, at low energies, the purely field theoretic bosonization duality.
One starts with a well known holographic pair. The work of \cite{Fujita:2009kw,Jensen:2017xbs} employs the original holographic duality \cite{Maldacena:1997re} between ${\cal N}=4$ super-Yang Mills (SYM) and type IIB string theory on AdS$_5$ $\times$ $S^5$. We then deform the theory in such a way that all, or at least most, degrees of freedom gap out and one is left with a non-trivial topological field theory (in the case of level-rank) or conformal field theory (in the case of bosonization) in the infrared. Following the same deformations in the dual gravity solution one finds that the spectrum of most supergravity excitations also gets gapped out. The only remaining low energy excitations are localized on a probe brane. These probe degrees of freedom in the bulk are found to be related to the boundary degrees of freedom by the desired field theory duality.
\subsubsection*{Review of Holography Applied to 3d Bosonization}
Let us first briefly review the case of level-rank. Starting with ${\cal N}=4$ SYM one can go to $2+1$ dimensions by compactifying the theory on a circle of radius $R$. With anti-periodic boundary conditions for the fermions in the theory, all fermionic Kaluza-Klein modes pick up masses of order $1/R$ and the scalars then pick up masses of the same order via loop corrections. At energies below $1/R$ we are left with pure Yang-Mills in $2+1$ dimensions, which is believed to confine. The theory is gapped with a gap of order $1/R$. This is not quite yet the theory we want: the IR is trivial rather than a non-trivial Chern-Simons TFT.
To produce the desired Chern-Simons terms we need to introduce the theta angle. Like all coupling constants in the Lagrangian, the theta angle in $3+1$ dimensional gauge theories is usually introduced as a position independent constant, but it can be promoted to a non-trivial background field. What we need here is a theta angle that linearly changes by $2 \pi n$ as we walk around the circle once. Since theta is only well defined modulo $ 2 \pi$ this is consistent as long as $n$ is an integer. The $\theta F \wedge F$ term in the Lagrangian with constant theta gradient can be integrated by parts to turn into a $2+1$ dimensional Chern-Simons term with level $-n$. So in short, ${\cal N}=4$ SYM with anti-periodic boundary conditions for fermions and a constant theta gradient gives rise to a gapped $2+1$ dimensional theory which, at low energies, is well described by an $SU(N)_{-n}$ Chern-Simons theory.
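To make the integration by parts explicit, write ${\rm Tr\,} F\wedge F = d\omega_3$ with $\omega_3$ the Chern-Simons three-form; then, schematically (our normalization conventions here are the standard ones and should be taken as illustrative),
\beq
\frac{1}{8\pi^2} \int_{M_3\times S^1} \theta \, {\rm Tr\,} F \wedge F = - \frac{1}{8\pi^2} \int_{M_3\times S^1} d\theta \wedge \omega_3 = -\frac{n}{4\pi} \int_{M_3} {\rm Tr\,} \left( A \, dA + \frac{2}{3} A^3 \right),
\eeq
where we used $d\theta = (n/R)\, dy$ and $\oint dy = 2\pi R$, which is precisely a level $-n$ Chern-Simons term.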
These deformations are easily repeated in the holographic dual. The compactification with anti-periodic boundary conditions for fermions is dual to the cigar geometry of \cite{Witten:1998zw}, that is a doubly-Wick rotated planar Schwarzschild black-hole where the compact time direction of the Euclidean black hole plays the role of the compact spatial direction, whereas one of the directions along the planar ``horizon'' becomes the new time direction. Most importantly, the radial coordinate in this cigar geometry truncates at a finite value $r=r_*$ where the compact circle contracts. Consequently this geometry acts as a finite box and so indeed all supergravity fluctuations exhibit a gapped spectrum \cite{Witten:1998zw} with a mass gap of order $1/R$. In order to retain a non-trivial topological sector we still need to implement the spatially varying theta angle. The theta angle is set by the near boundary behavior of the bulk axion field, so we are looking for a supergravity solution where the axion asymptotes to $a \sim n y/R$. Here $y$ denotes the coordinate along the circle direction and $a=n y/R$ is an exact solution to the axion equation of motion in the cigar background. As long as we are only interested in the $n \ll N$ limit we can ignore the backreaction of the axion on the background geometry and $a=ny/R$ appears to be the full solution to the problem. The only remaining issue is that the axion field strength $f_y = \partial_y a = n/R$ in the bulk has to be supported by a source. This source can be introduced by locating $n$ D7 branes, wrapping the entire internal $S^5$, at the tip of the cigar at $r=r_*$. This stack of D7 branes introduces new degrees of freedom in the bulk. The scalar fields corresponding to fluctuations of the D7 away from the tip are massive due to the geometry of the cigar. Like all other geometric fluctuations they have mass of order $1/R$.
The only other degree of freedom introduced by the $n$ D7 branes is the worldvolume gauge field. The latter acquires a Chern-Simons term of level $N$ from the Wess-Zumino coupling to the $N$ units of background 5-form flux through the $S^5$. Lo and behold, the low energy description of the holographic bulk dual is simply a $U(n)_N$ Chern-Simons gauge theory living on the D7 branes. Comparing low energy descriptions on both sides, AdS/CFT boiled down to level-rank duality for the emerging TFTs.
The last step in order to derive 3d bosonization rather than level-rank from this construction is to add extra light matter into the theory. This can be easily accomplished using flavor probe branes \cite{Karch:2002sh}. In the construction put forward in \cite{Jensen:2017xbs} an extra probe D5 adds fermionic matter localized on $2+1$ dimensional defects in the $3+1$ dimensional theory. These defects live at points in the circle direction, so at low energies they simply become light fermions coupled to the $SU(N)_{-n}$ Chern-Simons gauge fields. The same probe branes can be argued, from the bulk point of view, to add scalar matter to the dual $U(n)_N$ Chern-Simons gauge theory. Instead of simply giving us level-rank, in this case holography, at low energies, reduces to the basic non-Abelian 3d bosonization duality.
\subsubsection*{Holographic Realization of Theta Walls}
To holographically realize the field theory theta domain walls we just reviewed, we need to start with a holographic duality for a confining $3+1$-dimensional theory and then simply once again follow the field theory deformation corresponding to turning on theta in the bulk. The simplest realization of a confining $3+1$-dimensional gauge theory with a gravity dual is Witten's black hole \cite{Witten:1998zw}. This is almost the same construction we employed previously, but lifted one dimension up. We start with a 5d gauge theory, maximally supersymmetric YM with gauge group $SU(N)$, and compactify it on a circle with anti-periodic boundary conditions. The dual geometry has once again the basic shape of a cigar, and the explicit supergravity solution is given by
\beq \nonumber
ds^2 = \left ( \frac{u}{L} \right )^{3/2} \left( \eta_{\mu \nu} dx^{\mu} dx^{\nu} + f(u) d y^2 \right)
+ \left ( \frac{L}{u} \right )^{3/2} \left ( \frac{du^2}{f(u)} + u^2 d \Omega_4^2 \right ),
\eeq
\beq
\label{cigar}
e^{\phi} = g_s \left ( \frac{u}{L} \right )^{3/4}, \quad F_4 = dC_3 = \frac{2 \pi N}{V_4} \epsilon_4,
\quad f(u) = 1 - \frac{u_*^3}{u^3}.
\eeq
Here $x^{\mu}$ are the 4 coordinates of $3+1$ dimensional Minkowski space, and $y$ is the circle direction we compactified to go from $4+1$ to $3+1$ dimensions. $\phi$ is the dilaton field, $F_4$ the RR 4-form field strength. $\Omega_4$ is the internal 4-sphere, with $d\Omega_4^2$, $\epsilon_4$ and $V_4 = 8 \pi^2/3$ its line element, volume form and volume respectively. The string coupling $g_s$ and the string length $l_s$ are the parameters of the underlying type IIA superstring theory. $L$ sets the curvature radius of the solution; it is determined by Einstein's equations to be $L^3 = \pi g_s N l_s^3$. Last but not least, $u_*$ is the location of the tip of the cigar; it is related to the periodicity $2 \pi R$ of the compactification circle by $R = \frac{2}{3} L^{3/2} u_*^{-1/2}$.
The holographic realization of turning on a constant theta angle has been worked out in \cite{Witten:1998uka}. The theta angle is dual to the Wilson line of the bulk RR 1-form $C_{\mu}$ along the compact $y$ direction:
\beq
\label{thetawilson}
\int_{S^1} C = \theta +\ldots
\eeq
where the ellipses denote terms with negative powers of $u$, that is terms that vanish near the boundary.
The Wilson line is gauge invariant modulo $2 \pi \mathbb{Z}$, so theta is indeed an angle.
Using Stokes' theorem, we can rewrite the condition \eqref{thetawilson} as
\beq
\label{thetaf}
\int_D F = \theta + 2 \pi K.
\eeq
Here $F=dC$ is the field strength associated with the RR one-form and $D$ is the cigar geometry, which has the topology of a disc. Since $\int_D F$ is a well-defined real number whereas $\theta$ is an angle, we have a $2 \pi K$ ambiguity in $F$ where $K$ is an integer. For a given theta there is more than one bulk solution for $F$, characterized by $K$. This is responsible for the multi-branched structure of the allowed ground states which we expect to find. Physics in any given one of the branches is only periodic in $2 \pi N$; the actual periodicity in $\theta$ is $2 \pi$, as it should be: we simply jump to a different branch.
For generic theta it is non-trivial to solve the supergravity equations subject to the constraint \eqref{thetaf}. But a very simple solution can once more be found \cite{Witten:1998uka,Bartolini:2016dbk} in the probe limit $(\theta + 2 \pi K) \ll N$, or in other words $K/N \ll 1$. In this limit one can neglect the backreaction of the axion on the background geometry. Newton's constant is of order $1/N^2$ in units where the curvature scale $L=1$, whereas the axion action and hence its stress tensor is of order 1 in the large $N$ counting. The only non-trivial equation left to solve is Maxwell's equation for $C_1$ in the background geometry \eqref{cigar} subject to the boundary condition \eqref{thetaf}. The solution is
\beq
\label{csol}
C_1 = \frac{f(u)}{2 \pi R} (\theta + 2 \pi K) dy.
\eeq
The integer $K$ is the bulk manifestation of the $K$-th vacuum. In fact, plugging the solution \eqref{csol} back into the action we find that the vacuum energy density of the $K$-th vacuum has exactly the expected form from \eqref{kvac} with \cite{Bartolini:2016dbk}
\beq
h(\theta/N) = - \frac{2 N^2 \lambda}{3^7 \pi^2 R^4}
\left [ 1 - 3 \left ( \frac{\lambda}{4 \pi^2}
\right )^2 \left ( \frac{\theta + 2 \pi K}{N} \right )^2 \right ]
\eeq
where $\lambda = g_{YM}^2 N = 2 \pi g_s l_s N/R$ is the 't Hooft coupling.
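With this explicit $h$ the branch structure can again be checked numerically; the parameter values below are arbitrary illustrative choices:

```python
import math

def E_K(theta, K, N=20, lam=1.0, R=1.0):
    """Vacuum energy density of the K-th branch from the holographic result,
    E_K = -(2 N^2 lam / (3^7 pi^2 R^4)) (1 - 3 (lam/(4 pi^2))^2 ((theta + 2 pi K)/N)^2)."""
    x = (theta + 2 * math.pi * K) / N
    prefactor = 2 * N**2 * lam / (3**7 * math.pi**2 * R**4)
    return -prefactor * (1 - 3 * (lam / (4 * math.pi**2))**2 * x**2)

def stable_K(theta, Ks=range(-4, 5), **kwargs):
    """The stable branch minimizes E_K, i.e. minimizes |theta + 2 pi K|."""
    return min(Ks, key=lambda K: E_K(theta, K, **kwargs))
```

Since $E_K$ grows quadratically in $(\theta+2\pi K)/N$, the stable branch is the one minimizing $|\theta+2\pi K|$, switching from $K=0$ to $K=-1$ at $\theta=\pi$ as anticipated above.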
While it is not obvious to us how to realize interfaces in this setup, the holographic dual for a domain wall has already been proposed in \cite{Witten:1998uka}. A jump in vacuum, according to \eqref{thetaf}, requires a jump in $\int_D F$, which in turn requires a source magnetically charged under the RR 1-form. The naturally stringy object carrying the appropriate RR charge is a D6 brane. The D6 brane needs to wrap the entire internal $S^4$ as well as the 3d Minkowski space spanned by $t$, $x_1$ and $x_2$. It is localized in the $x_3$ direction as well as on the cigar geometry $D$. From the induced metric of a D6 brane sitting at a fixed position $u$ and wrapping $M^{2,1} \times S^4$ we can infer that the D6 Lagrangian density $e^{- \phi} \sqrt{-g_I}$ reads
\beq
{\cal L} \propto u^{5/2}
\eeq
meaning that the D6 brane experiences a potential pulling it to smaller values of $u$: the D6 brane will sink to the tip of the cigar, see Fig. \ref{fig:d6_Branes}.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.4]{d6_branes.png}
\par\end{centering}
\caption{Configuration of D6 branes. \label{fig:d6_Branes}}
\end{figure}
Let us first discuss the case of a single D6 brane. Without loss of generality, we can place the D6 at $x_3=0$. If we denote by $D_-$ the cigar/disc spanned by $(y,u)$ at a fixed negative $x_3$ and $D_+$ the cigar/disc at a fixed positive $x_3$, then the analog of the magnetic Gauss' law for the D6 brane reads
\beq
\int_{D_+} F - \int_{D_-} F = 2 \pi.
\eeq
Comparing with \eqref{thetaf} we see that this means that $(\theta + 2 \pi K)$ jumps by $2 \pi$ as we cross, in the field theory, the bulk $x_3$ location of the D6-brane. This also implies that the vacuum energy of the theory jumps across the D6. Furthermore, the D6 brane is clearly dynamical. The $x_3$ position of the D6 brane is a dynamical field. Since the metric is independent of $x_3$ the corresponding worldvolume scalar is massless. These facts together clearly identify the D6 brane as the domain wall between the $j$-th and $(j+1)$-th vacuum \cite{Witten:1998uka} at a fixed theta angle. The general wall in which we jump from the $j$-th vacuum to the $(j+K)$-th simply corresponds to $K$ coincident D6 branes. We can pull apart the stack of $K$ D6 branes to obtain a configuration of walls where the vacuum jumps one unit at a time at well-separated locations in the $x_3$ direction.
It is fairly straightforward to determine the low energy physics in the bulk. The background geometry once more truncates at a finite radial position, $u=u_*$. Correspondingly all supergravity modes are gapped. The only degrees of freedom surviving are the ones localized on the D6 branes. For a stack of $n$ coincident D6 branes, these worldvolume degrees of freedom are a $U(n)$ gauge field as well as 3 adjoint scalars corresponding to motion of the stack into the $u$, $y$ and $x_3$ direction. The $u$ and $y$ fluctuations are massive due to the cigar geometry just as we reviewed above in section \ref{sec:jensen}. The $x_3$ scalar, however, is massless. The worldvolume gauge field picks up a Chern-Simons term of level $N$ from the Wess-Zumino coupling of the worldvolume gauge field to the $N$ units of 4-form flux. So the low energy dynamics in the bulk is governed by a $U(n)_N$ gauge theory with a single massless adjoint representation scalar. Holographic duality implies that this is an equivalent representation of the quiver gauge theory with the additional $n-1$ translational modes associated with the domain walls at least in the large $N$ limit.
Note that this way we almost landed on the duality \eqref{eq:u_quiver_ym}. There is however a small difference. On the quiver side, we have the extra light modes corresponding to the translational motion of the domain walls. As we argued before, we expect these to be present at large $N$. At any finite $N$ the domain walls would no longer be static. The phases of the quiver theory are still given by \eqref{topphase} as long as one accounts for the extra decoupled translational modes. The analogous statement on the $U$ side of the duality is that the adjoint scalar governing the position of the stack of probe branes this time corresponds to a flat direction. The various phases are still parametrized by the $n$ eigenvalues of the scalar matrix $\langle x_3 \rangle$. But this time instead of having to add deformations to the potential we have a moduli space of vacua where we can freely dial the expectation values of $x_3$. The eigenvalues of $\langle x_3 \rangle$ simply correspond to D6 positions and the enhanced gauge symmetries we encounter for coincident eigenvalues simply arise from coincident D6 branes. The gauge groups of the various phases once again are given by \eqref{bulktopphase}, but since the scalar potential was exactly flat this time each gauge group factor comes with an extra massless adjoint. In the generic case where the gauge group is $[U(1)_N ]^n$ these extra massless adjoints map exactly to the $n$ translational modes we identified on the quiver side. Surely the exactly flat potential for the probe scalar is also an artifact of the large $N$ limit and any bulk quantum corrections would lift this flat direction. Furthermore, the phase where the adjoint gets a positive mass squared is not easily realized in the brane picture. Modulo these extra light scalars, matching on both sides, the holographic construction exactly reproduces our conjectured duality
\eqref{eq:u_quiver_ym}.
\section{Discussion and Conclusion}
\label{sec:conclusion}
In this work, we have developed the methodology for dualizing linear quiver gauge theories with bifundamental scalars and argued that they can be viewed as the non-Abelian generalization of particle/vortex duality. Crucial to this are the interaction terms, which couple scalars living on adjacent links and propagate the symmetry breaking pattern down the quiver in a unidirectional manner. This is required to ensure that the mass-deformed phases are level-rank dual to each other.
We then specialize this general framework to the study of domain walls that arise in $3+1$ dimensional Yang-Mills theory with a spatially varying theta angle. In addition, we embed this special case in string theory and study the duality holographically. We find a novel duality between a theory with bifundamental matter and one with adjoint matter, schematically given by \eqref{eq:u_quiver_ym}. Let us comment on the similarities and differences between these two approaches.
From the setup in ref. \cite{Gaiotto:2017tne}, we expect the bifundamental scalars on the $SU$ side of the duality to interpolate between a smoothly varying and a sharp domain wall/interface. However, the pure field theoretic quiver approach of Sec. \ref{sec:building_quivers} makes opaque the geometric interpretation of a physical wall located in space. The complementary geometric approach of Sec. \ref{sec:jensen} makes this manifest: Higgsing a bifundamental is literally removing a D6 brane (i.e. domain wall) from a stack and moving it to a different physical location in space. Widely separated D6s correspond to the smoothly varying phase, reinforcing our intuition of the theory at small $| \nabla \theta |$.
When the walls lie on top of one another in the holographic duality, we have new light matter on both sides of the duality, but it manifests itself in a very different manner. On the $U$ side the extra matter enhances the gauge symmetry. Meanwhile, on the $SU$ side the bifundamentals just become additional massless scalars. This may seem peculiar given the fact that the bifundamental degrees of freedom match quite nicely when the walls are separated (albeit by construction). But this is precisely the behavior we would expect in our 3d bosonization duality. As we learned from the particle-vortex duality generalization, it is not actually the bifundamental degrees of freedom which should match on either side of the duality but rather particles and vortices. The very same mismatch of particle degrees of freedom is present in the bosonic particle-vortex duality as well.
In fact, the matching of the particle and vortex degrees of freedom is very nicely realized in the holographic duality. The duals of the baryons in the bulk are based on the standard holographic construction of the baryon vertex \cite{Witten:1998xy}, very similar to what was found in \cite{Jensen:2017xbs}. Namely, they are D4 branes wrapping the $S^4$ and also extended along the time direction. In order to be neutral, fundamental strings run from the D4 branes to the D6 branes on which they live. Furthermore, the D6 branes dissolve the D4 branes, turning them into magnetic flux (it is more energetically favorable and has the same quantum numbers). The attachment of $N$ fundamental strings is analogous to particle/flux attachment. It can be argued that the $N$ fundamental strings cannot end on the same D6 brane. Hence, when the D6 branes get separated, the monopoles must also pick up a mass since the fundamental strings must stretch from one D6 brane to another -- providing more evidence that lines of flux can end on domain walls. This is also in nice agreement with the behavior on the $SU$ side, where the bifundamentals are interpreted as strings which stretch from one brane to another and thus acquire a mass proportional to the separation between branes.
One may wonder whether our quiver dualities can be useful in the context of deconstruction, following the recent work of \cite{Aitken:2018joz}. There it was shown that Abelian quiver dualities can be lifted to dualities in 3+1 dimensions. It would be very interesting to do this in the non-Abelian case. One important ingredient in this construction is the use of ``all scale'' versions of the duality, following the construction of \cite{Kapustin:1999ha} in the supersymmetric case. We would like to point out that, at least for two-node quivers, our method of gauging global flavor symmetries does allow us to give all scale versions of the non-Abelian duality. Say we want a dual for $SU(N)_{k}$ with $N_f$ fermionic flavors and a finite gauge coupling. Since the gauge coupling is dimensionful, we are describing a theory with a non-trivial RG running. It interpolates between a free theory in the UV and a strongly coupled CFT in the IR. We can obtain this theory by starting with $N N_f$ free fermions and gauging an $SU(N)$ subgroup of the global $SU(NN_f)$ flavor symmetry. At this stage we can add both the Chern-Simons as well as the Maxwell kinetic terms. The original theory of $NN_f$ free fermions has dual descriptions in terms of a $U(K)_1$ gauge theory coupled to $N N_f$ scalars. Modulo flavor bounds, $K$ is a free parameter. The global flavor symmetry simply rotates the scalar flavors in this dual. Promoting an $SU(N)$ subgroup to be dynamical, we end up with a $U(K)_1 \times SU(N)_{k}$ gauge theory with $N_f$ bifundamentals. While the $U(K)$ factor has infinite coupling, the $SU(N)_k$ factor has a finite Maxwell term which maps directly to the Maxwell term of the same $SU(N)_k$ factor on the dual side. In this way, we have constructed a non-Abelian all scale dual to $SU(N)_k$ with fermions. Unfortunately, it is not yet clear how to generalize this construction to more interesting quivers.
\section*{Acknowledgments}
We would like to thank Aleksey Cherman, Kristan Jensen, Zohar Komargodski, Brandon Robinson, and David Tong for useful discussions. This work was supported, in part, by the U.S.~Department of Energy under Grant No.~DE-SC0011637.
\section{Introduction}
In recent years, many new services and use cases focused on the Internet of Things (IoT), such as smart cities, factory automation and so on, have emerged for wireless communications. Most of these use cases are realized by deploying computationally complex algorithms on user devices with limited computational resources and battery capacity. For example, a surveillance drone executes complex image processing algorithms for object detection and tracking. Executing them may readily discharge the battery of these devices due to the high energy consumption.\\
An alternative solution is to offload these algorithms to a centralized server, which can be located in an edge cloud. This may reduce the device energy consumption, while simultaneously increasing the flexibility of deploying even more complex algorithms. Moreover, centralized processing is crucial in some cases such as factory automation, where robots need to collaborate, communicate, coordinate and synchronize for a given task.
The main challenge is to make the correct offloading decision, i.e., to assess the right criterion and threshold to offload an algorithm to the edge cloud. Even though the computational load on the device can be reduced by offloading, an additional communication load is introduced for transmitting the data to the edge cloud. Therefore, there exists a trade-off between communication load and computational load that user devices experience. To increase the energy efficiency of the user devices, it is necessary to take the offloading decision by analyzing this trade-off. The relevant parameters for this trade-off include communication and computational resources, algorithm's complexity, load condition on the cloud, device energy consumption, and delay constraints.
\subsection{Related Work}
Computation offloading has been studied extensively in recent years \cite{Kumar, Kumar2013, mao2017mobile}. \cite{Kumar} provides a general overview addressing the circumstances under which offloading can save energy. The author draws some interesting conclusions by analyzing the computational load and the available communication resources for the single-user case. In practice, however, multiple users share the available resources, and hence an analysis for the multi-user scenario is necessary. Many energy-minimizing techniques have also been proposed in the literature for efficient computation offloading \cite{Zhang2013, XudongXiang2014, Cui2013,Zhang2016,You2017}. In \cite{Zhang2013}, the energy consumption is reduced by optimally scheduling data transmission over a wireless channel, and dynamically configuring the clock frequency of the local processor. Similarly, \cite{XudongXiang2014} presents an algorithm, based on stochastic dynamic programming, with the objective of energy-efficient transmission scheduling and link selection. A computation offloading problem is designed in \cite{Xchen}, based on a game-theoretic approach, for multiple users in a multi-channel interference environment. \cite{Zhang2016} provides an optimal computation offloading mechanism in a 5G heterogeneous environment; the approach is to effectively classify and prioritize the users, and then optimally allocate the radio resources. \cite{You2017} also minimizes the energy consumption by optimal resource allocation for TDMA and OFDMA systems. The contributions in \cite{Sardellitti2015} and \cite{Sardellitti2014} deal with the joint optimization of communication and computational resources for multiple users, so that the delay constraints are met. In contrast to optimally allocating resources, as in \cite{Zhang2016,You2017,Sardellitti2015,Sardellitti2014}, we evaluate the optimal offloading strategy for given allocated communication and computational resources.
Apart from optimal resource allocation, approaches like task partitioning and scheduling have been proposed for computation offloading \cite{Kao,Wu2016}. In \cite{Kao}, the author presents an algorithm to partition a single task and optimally offload the partitioned subtasks by analyzing their dependencies. A low-complexity algorithm, which minimizes the device energy consumption by dynamically offloading a partitioned task, is designed with Lyapunov optimization in \cite{DongHuang2012}. The algorithms in \cite{Kao} and \cite{DongHuang2012} consider the computational complexity of each partitioned task, but do not consider the effects of the channel and the availability of communication resources. Moreover, \cite{Kao}, \cite{DongHuang2012} and \cite{Kumar} lack the crucial analysis of the energy consumption in the multi-user scenario, where the communication and edge cloud resources are shared by multiple users.
\subsection{Contribution and outline of the paper}
This paper analyzes the trade-off between the energy consumption due to local processing and due to offloading, in order to derive an optimal offloading decision. The optimal offloading decision takes into account the effects of the communication channel, the load introduced at the edge cloud server by multiple users, the computational complexity of the data processing algorithm, and the availability of communication resources. We introduce a simple algorithm that not only provides the optimal offloading decision for multiple users, but also the optimal share of computation data that should be offloaded.
In Section~\ref{sec: system:model}, we describe the system model, including an energy consumption model for the user devices considering the algorithmic computational complexity and the communication complexity for offloading. The energy optimization problem and the closed form solution is presented in Section~\ref{sec: sum.energy}. Finally, the results and conclusion are discussed in Section~\ref{sec:results} and \ref{sec: conclusion} respectively.
\section{System Model} \label{sec: system:model}
Consider $N$ user devices uniformly distributed in a circular area of radius $R$. In the center of the area, the base station is placed and co-located with an edge-cloud server. The base station
has knowledge of the channel condition of each user $i\in[1; N]$.
The edge cloud has a processor with a maximum computational capacity of $C_s$, and each user device has a maximum computational capacity of $C_{u}$, where $C_{s} \gg C_{u}$.
\subsection{Data model}
In each time period $T$, every user device needs to process $D_i$ data bits, which may either be processed by the device itself or offloaded to the edge cloud. The share of data per user device that is offloaded in time period $T$ is given by $0\leq \alpha\leq 1$. As shown in Fig.~\ref{fig:system_model}, the data $D_i$ is composed of $L$ data blocks, each composed of $M$ data elements with $S$ bits, i.\,e., $D_i = L\cdot M\cdot S$. This corresponds, for instance, to an industrial automation scenario where a field-bus gateway receives $M$ data elements from $L$ connected sensors during each time period in order to perform an update of the automation schedule.
The data processing algorithm has the complexity class, given by the function $f_i(M)$, that defines the amount of computational complexity introduced on the user device with respect to the increase in the number of data elements.
\subsection{Device computational complexity and energy consumption}
The computational complexity generated at a user device, if all the data is processed locally is given by
\begin{align}
C_\text{u,i} &= L \cdot \eta_i f_{i}(M),
\label{eq:system.model:device:10}
\end{align}
with the proportionality constant $\eta_i$ that depends on the processor specifications, and represents the amount of computation cycles required to execute the algorithm, when the number of data elements $M$ is $1$. Consequently, the energy consumed by the user device depends on the number of computation cycles required to process $M$ data elements. The number of computational cycles further depends on the number and the type (read/write, memory access) of the operations involved in the algorithm. As the detailed analysis of operation-specific energy consumption for a particular algorithm is out of the scope of this paper, we represent the total energy consumption in terms of the computation complexity, as given in \cite{HopfnerTowardsStrategies}. If the average amount of energy consumed by the user device for a single computation cycle is $\epsilon_i$, then the total energy consumed $E_\text{u,i}$ on the user device during time period $T$ is given by
\begin{align}
E_\text{u,i} & = \epsilon_{i} \cdot C_\text{u,i} =
\epsilon_{i} \cdot L \cdot \eta_i f_{i}(M). \label{eq: optimization:1}
\end{align}
\begin{figure*}
\centering
\begingroup
\unitlength=1mm
\begin{picture}(155, 39)(0, 0)
\psset{xunit=1mm, yunit=1mm, linewidth=0.1mm}
\psset{arrowsize=2pt 3, arrowlength=1.6, arrowinset=.4}
\rput(0, -3){%
\rput[l](0, 35){\rnode{S1}{\psframebox{Sensor $1$}}
\rput[l](0, 25){\rnode{S2}{\psframebox{Sensor $2$}}
\rput[l](8, 17){\rput{90}{$\cdots$}}%
\rput[l](0, 10){\rnode{SL}{\psframebox{Sensor $L$}}
\rput[l](40, 5){\pnode(0, 17){MUXout}%
\pnode(-7, 30){MuxIn1}%
\pnode(-7, 20){MuxIn2}%
\pnode(-7, 5){MuxInL}%
\rnode{MUX}{\rput{90}{\psframe(35, 7)\rput[c](17, 3.5){User Device $i$}}}}%
\rput[c](70, 22){\rnode{Offload}{\psframebox{\parbox[c]{1.5cm}{Offloading Decision}}}}%
\rput[l](100, 10){\rnode{LocalProcess}{\psframebox{\parbox[c]{1.7cm}{Local\\processing ($C_u$)}}}}%
\rput[l](100, 30){\rnode{ServerProcess}{\psframebox{\parbox[c]{1.7cm}{Server\\processing ($C_s$)}}}}%
\rput[l](140, 22){\rnode{Actuation}{\psframebox{Actuation}}}%
}
\ncline{->}{MUXout}{Offload}\naput{$\begin{array}{l}\lambda = 1/T;\\D_i = LMS\end{array}$}
\ncline{->}{Offload}{ServerProcess}\naput{$\alpha_iD_i$}
\ncline{->}{Offload}{LocalProcess}\nbput{$(1-\alpha_i)D_i$}
\ncline{->}{ServerProcess}{Actuation}
\ncline{->}{LocalProcess}{Actuation}
\ncline{->}{S1}{MuxIn1}\naput{$M\cdot S$ bits}
\ncline{->}{S2}{MuxIn2}\naput{$M\cdot S$ bits}
\ncline{->}{SL}{MuxInL}\naput{$M\cdot S$ bits}
\end{picture}
\endgroup
\caption{Data processing model}
\label{fig:system_model}
\end{figure*}
\subsection{Channel model}
The user devices transmit the data to the edge cloud using frequency division multiple access (FDMA), i.\,e., the carrier bandwidth $B$ is distributed equally among all user devices, such that each user device uses bandwidth $B_i$ distributed across $N_\text{RB, i}$ resource blocks (RBs).
The effects of opportunistic scheduler are not considered in this paper for the sake of brevity. Each RB corresponds to a bandwidth of 180 kHz and time-slot duration $T_\text{slot}$ = 0.5 ms \cite{3GPPTS36.2112010}, i.\,e., $B_i = N_\text{RB,i} \cdot \unit[180]{kHz}$.
The received signal-to-noise ratio (SNR) for user device $i$ is given by
\begin{equation}
\gamma_\text{ul,i} = \frac{P_\text{r,i}}{N_0 B_i},
\end{equation}
with the received signal power $P_\text{r, i}$ and the noise power spectral density $N_0$.
The $i^\text{th}$ user device is located at a distance $d_i$ from the cell center, hence, the received power is given by
\begin{equation}
P_\text{r, i} = P_\text{tr, i} \cdot G \left[ \frac{d_0}{d_i}\right]^\beta,
\label{eq:system:model:2}
\end{equation}
with the pathloss exponent $\beta$, transmit power $P_\text{tr, i}$, reference distance $d_0$, and $G = \left(\frac{\lambda}{4\pi d_0}\right)^2$ being an attenuation constant for free-space path-loss. We assume that $G$ is known at the base station.\\
Given the received SNR, the spectral efficiency is given by $r_i = \log_2(1 + \gamma_\text{ul, i}) \leq 6$ bps/Hz, which is the maximum spectral efficiency achievable in 3GPP LTE \cite{3GPP2009a}.\\
\subsection{Transmission energy model}
The user device has to offload $D_i$ bits to the edge cloud in the time interval $T$, i.\,e., the spectral efficiency in the time interval has to satisfy the equation
\begin{equation}
D_i = T \cdot B_i \cdot \log_2\left(1 + \frac{P_\text{r,i}}{N_0 B_i}\right).
\end{equation}
Hence, the required receive signal power in order to transfer all $D_i$ bits to the edge cloud in the given time period $T$ is
\begin{equation}
P_\text{r, i}\stackrel{!}{=} \left(2^{D_i/(B_i T)} - 1\right) N_0 B_i \label{eq:system:model:1}
\end{equation}
Using \eqref{eq:system:model:2}, the required transmit power is given by
\begin{equation} \label{eq:system:model:3}
P_\text{tr, i} \stackrel{!}{=} \frac{\left(2^{D_i/(B_i \cdot T)} - 1\right)}{G} \cdot \left [\frac{d_i}{d_0}\right]^{\beta} \cdot N_0 B_i,
\end{equation}
which is upper limited by $P_\text{tr, i}\leq P_\text{tr, max}$ \cite{3GPP2009a}.
Hence, the energy consumed by the $i^\text{th}$ user device to transmit its $D_i$ data bits is given by
\begin{align}
E_\text{tr,i} & = P_\text{tr,i} \cdot T \\
& = \frac{\left(2^{D_i/(B_i \cdot T)} - 1\right)}{G} \cdot \left [\frac{d_i}{d_0}\right]^{\beta} \cdot N_0 B_i \cdot T .\label{eq:system:model:4}
\end{align}
The energy consumed for transmitting the data to the edge cloud is largely impacted by the pathloss, the allocated bandwidth, and the amount of data that is required to be offloaded.
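As a quick plausibility check of \eqref{eq:system:model:3} and \eqref{eq:system:model:4}, the sketch below evaluates the required transmit power and energy for one cell-edge user. The noise density $N_0$, the carrier frequency and the $\unit[0.2]{W}$ power cap are our own illustrative assumptions; only $d_0$ and the data sizes follow the simulation parameters used later.

```python
import math

# Illustrative check of the transmit power/energy model; N0, fc and the
# 0.2 W cap are our assumptions, not values given in the paper.
D, B, T = 4800.0, 200e3, 20e-3        # bits per period, Hz per user, s
d, d0, beta = 800.0, 200.0, 3.0       # user distance, reference distance, pathloss
N0 = 4e-21                            # W/Hz, thermal noise floor (~ -174 dBm/Hz)
G = (3e8 / 2e9 / (4 * math.pi * d0)) ** 2   # free-space attenuation at d0, fc = 2 GHz

P_tr = (2 ** (D / (B * T)) - 1) / G * (d / d0) ** beta * N0 * B  # required power, W
E_tr = P_tr * T                       # transmit energy per period, J
```

With these numbers the required power stays far below a typical LTE uplink cap of $P_\text{tr,max}\approx\unit[0.2]{W}$ (23 dBm), but since it grows exponentially in $D_i/(B_iT)$, shrinking the per-user bandwidth quickly makes offloading expensive.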
\subsection{Energy consumption at the user device}
The previous model is now extended by taking into account the possibility of offloading only a share $\alpha_i D_i$, $0\leq\alpha_i\leq 1$, of the overall data. Accordingly, the models in
~\eqref{eq: optimization:1} and ~\eqref{eq:system:model:4} are modified to be
\begin{equation}\label{eq:system:model:energy:2}
E_\text{u,i}(\alpha_i) = (1- \alpha_i) \cdot L \cdot \epsilon_{i} \cdot \eta_i f_{i}(M)
\end{equation}
and
\begin{equation} \label{eq:system:model:energy:1}
E_\text{tr,i}(\alpha_i) = \frac{\left(2^{\alpha_i D_i/(B_i T)} - 1\right)}{G} \cdot \left [\frac{d_i}{d_0}\right]^{\beta} \cdot N_0 B_i \cdot T
\end{equation}
respectively. The total energy consumption of the user device $i$ can be given as
\begin{align}
E_\text{sum,i}(\alpha_i) = E_\text{tr,i}(\alpha_i) + E_\text{u,i}(\alpha_i) . \label{eq:system:model:energy:10}
\end{align}
The static energy consumption of the user device during idle time is fixed, and hence can be neglected in the model for making an offloading decision.
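To make the trade-off in \eqref{eq:system:model:energy:10} concrete, the following sketch evaluates $E_\text{sum,i}(\alpha_i)$ over a grid of offloading shares. The channel constants ($N_0$, carrier frequency, user distance) are our own assumptions, while the remaining values mirror the simulation parameters of Section~\ref{sec:results} with $f_i(M)=M$.

```python
import math

# Illustrative parameters; N0, fc and d are our assumptions, the rest
# follows the simulation table (eps_i = 5e-6 mJ = 5e-9 J, f_i(M) = M).
L, M, S = 10, 60, 8
D = L * M * S                          # bits per period
eps, eta = 5e-9, 100                   # J per cycle, cycles per data element
B, T = 200e3, 20e-3                    # Hz per user, s
d, d0, beta = 600.0, 200.0, 3.0
N0 = 4e-21
G = (3e8 / 2e9 / (4 * math.pi * d0)) ** 2

def e_local(a):                        # local-processing energy, eq. for E_u,i
    return (1 - a) * L * eps * eta * M

def e_tx(a):                           # offloading (transmit) energy, eq. for E_tr,i
    return (2 ** (a * D / (B * T)) - 1) / G * (d / d0) ** beta * N0 * B * T

def e_sum(a):                          # total device energy E_sum,i
    return e_local(a) + e_tx(a)

best = min((a / 1000 for a in range(1001)), key=e_sum)
```

For this parameter set the transmit energy is orders of magnitude below the local-processing energy, so the grid search lands at $\alpha_i = 1$; a larger pathloss exponent or a smaller per-user bandwidth shifts the optimum back towards local processing.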
\subsection{Edge cloud processing}
Similar to the computational complexity introduced on the user device by the algorithm, the computational complexity $C_\text{serv, i}$ is also introduced on the edge cloud, if the computation is offloaded. However, the proportionality constant $\eta_{s}$ for the edge cloud is different, and depends upon its processor characteristics.
The computational complexity on the edge-cloud is
\begin{equation}
C_\text{serv, i} = \eta_{s} \cdot f_{i}(M).
\end{equation}
Given the edge cloud processor's capacity $C_s$, the maximum number of computation cycles that the server can schedule in time period $T_\text{pr}$ is defined by $C_\text{s,max} = C_s \cdot T_\text{pr}$. We assume that $T_\text{pr}\ll T$ because one edge cloud server would need to process the data of the user devices from more than one cell.
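A back-of-the-envelope check of this server budget, using the simulation parameters listed later ($C_s = \unit[200]{MHz}$, $T_\text{pr} = \unit[1]{ms}$, $\eta_s = 1$, $f_i(M) = M$):

```python
C_s, T_pr = 200e6, 1e-3          # server cycles per second, scheduling period (s)
eta_s, M, L, N = 1, 60, 10, 50   # values from the simulation parameter table

C_s_max = C_s * T_pr             # cycles available per period: 2.0e5
per_user = L * eta_s * M         # cycles to fully process one user's offload: 600
total = N * per_user             # 3.0e4: fits at 100% capacity ...
overloaded_at_10pct = total > 0.1 * C_s_max   # ... but not at 10% of C_s
```

This is exactly the regime explored in Section~\ref{sec:results}: at full capacity every user can offload, while at $\unit[10]{\%}$ of $C_s$ the capacity constraint in \eqref{eq:opt.sum.energy.1} binds and the threshold $\nu$ becomes positive.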
\section{Sum Energy Optimization} \label{sec: sum.energy}
\subsection{Problem formulation:}
As discussed in Section~\ref{sec: system:model}, we consider the energy consumed for in-device data processing, as well as for offloading the data to the edge cloud. The optimization problem is device-centric, and designed to minimize the total energy consumption $E_\text{sum,i}$, for all $N$ user devices by offloading an optimal share of data, as given by the set of decision variables $\mathcal{A} = \{\alpha_1, \dots \alpha_N\}$. If $\alpha_i$ is $0$, no data is offloaded to the cloud, whereas if $\alpha_i$ is $1$, all the data is offloaded to the edge cloud.
The optimization problem is given as:
\begin{align} \label{eq:opt.sum.energy.1}
\mathcal{A}' = \text{arg}\min_{\forall \mathcal{A}\in\mathbb{R}^N} &\: \sum_{i=1}^N \: E_\text{sum,i}(\alpha_i) \nonumber \\
\text{s.t} \quad & \sum\limits_i^N L \cdot \alpha_i \cdot C_\text{serv, i} \leq C_\text{s,max} \nonumber \\
& 0 \leq \alpha_i \leq 1 .
\end{align}
The limiting constraint for offloading is that the total amount of required computational cycles to process the offloaded computation, should not exceed the maximum computational cycles $C_\text{s,max}$, that the server can provide in the given time period $T_\text{pr}$.
We further distinguish \emph{state-full} (SF) and \emph{state-less} (SL) offloading. In the case of SF offloading, every user device either offloads all the computation to the edge cloud or does not offload at all for a given period $T$. The value of offloading parameter is $\alpha_i = \left\lbrace 0,1 \right\rbrace$. This corresponds to the case where the processing algorithm cannot be divided due to mutual data dependencies.
In the case of SL offloading, the user device is allowed to offload any partition of the data processing, i.\,e., $\alpha_i$ is therefore relaxed in the optimization problem and it lies between $[0;1]$. This corresponds to the case mentioned earlier, where $L$ sensors provide data to a gateway device, which processes these data independently.
\subsection{Solution to optimization problem}
This optimization problem is solved using Lagrange's Duality Theorem and by applying Karush-Kuhn-Tucker (KKT) conditions. The objective function is given as
\begin{eqnarray}
\mathcal{L}(\alpha_i, \nu,\psi) & = & \sum\limits_i^N\left(E_\text{u,i}(\alpha_i) + E_\text{tr,i}(\alpha_i)\right) \nonumber \\
& & {+}\: \nu \left(\sum\limits_i^N L \cdot \alpha_i \cdot C_\text{serv,i} - C_\text{s,max}\right) \nonumber \\
& & {-}\: \text{tr}\left[\Matrix{\Psi}\text{diag}(\alpha_i)\right]
\label{eq:opt.sum.energy.10}
\end{eqnarray}
where $\nu$ and $\psi$ are the Lagrange multipliers. The solution to this optimization problem is very similar to the water-filling algorithm and leads us to the two theorems stated below.
\begin{theorem}\label{theorem:optimization.problem:10}
The optimum offloading parameter $\alpha_i$ for the $i^\text{th}$ user device is given by
\begin{equation}
\alpha_i = \left(\frac{1}{r_i}\log_2\left(\frac{1}{K_i}\left[E_\text{u,i} - \nu \: C_\text{serv,i} \right]\right)\right)^+
\end{equation}
with $r_i$= $D_i/(B_iT)$, a constant $K_i=\left( \text{ln}(2) \left [\frac{d_i}{d_0}\right]^{\beta} N_0 D_i / G \right)$, and the Lagrangian parameter '$\nu$' defines the offloading threshold for the user device.
\end{theorem}
\begin{proof}
See Appendix \ref{subsec:proof1}.
\end{proof}
The Lagrangian parameter $\nu$ is derived through an iterative method. For an overloaded system, where the cloud server capacity is not able to serve the computational load coming from all the users, i.e. $\sum\limits_i^N L \cdot C_\text{serv,i} \geq C_\text{s,max}$, the threshold is increased stepwise until the condition in \eqref{eq:opt.energy.sum.111} is satisfied. In this way, the user devices that save the least energy by offloading are no longer allowed to offload. The corresponding constraints on $\nu$ are defined in the following theorem.
\begin{theorem}
Given the solution to the optimization problem in Theorem \ref{theorem:optimization.problem:10}, the threshold '$\nu$' is bounded by
\begin{eqnarray}
\max\limits_{i: \alpha_i>0} \left( \left[\frac{E_\text{u,i} - K_i2^{r_i}}{C_\text{serv,i}}\right]\right)^+
\leq \nu \leq
\min\limits_{i: \alpha_i>0}\left[\frac{E_\text{u,i} - K_i}{C_\text{serv,i}}\right].
\end{eqnarray}
\end{theorem}
Note that the upper and lower bounds on $\nu$ hold only for the user devices with $\alpha_i \neq 0$.
\begin{proof}
See Appendix \ref{subsec:proof2}.
\end{proof}
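The two theorems suggest a simple numerical procedure: evaluate the water-filling-like expression of Theorem~\ref{theorem:optimization.problem:10} for a trial threshold $\nu$ and bisect $\nu$ until the server-capacity constraint is met. The sketch below is our own illustration of this idea, with $\alpha_i$ additionally clipped at $1$; user parameters are passed as tuples $(E_\text{u,i}, C_\text{serv,i}, K_i, r_i)$.

```python
import math

def alpha_of_nu(nu, Eu, Cserv, K, r):
    """Theorem 1: optimal offloading share for threshold nu, clipped to [0, 1]."""
    x = Eu - nu * Cserv
    if x <= K:                       # the (.)^+ operator: log2 argument <= 1
        return 0.0
    return min(1.0, math.log2(x / K) / r)

def solve_nu(users, L, Cs_max, iters=60):
    """Bisect nu until sum_i L*alpha_i*Cserv_i <= Cs_max (the load is
    non-increasing in nu, so bisection converges)."""
    load = lambda nu: sum(L * alpha_of_nu(nu, *u) * u[1] for u in users)
    if load(0.0) <= Cs_max:          # server not overloaded: nu = 0 is optimal
        return 0.0
    lo, hi = 0.0, max(Eu / Cserv for Eu, Cserv, _, _ in users)  # alpha = 0 at hi
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if load(mid) > Cs_max else (lo, mid)
    return hi
```

A natural state-full variant would instead restrict $\alpha_i\in\{0,1\}$, admitting devices in order of their energy savings until the capacity constraint holds.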
\section{Results and Discussion} \label{sec:results}
\subsection{Performance metrics}
\paragraph{Sum Energy}
The performance of SF and SL offloading is evaluated by comparing the total optimized energy, i.\,e., $E_\text{sum}(\mathcal{A}') = \sum_{\alpha_i\in\mathcal{A}'} \: E_\text{sum,i}(\alpha_i)$, with the total energy consumed when no user device offloads the data processing, and when all the user devices completely offload the processing. The energy per user device $E_\text{sum,i}(\alpha_i)$ is given in (\ref{eq:system:model:energy:10}), where $\alpha_i$ is determined according to Theorem \ref{theorem:optimization.problem:10}.
The total energy consumption in the case that no user device offloads ($\forall i \in [1; \dots N]: \alpha_i = 0$) is given by
\begin{equation}
E_\text{sum}(0) = \sum\limits_i^N \: E_\text{u,i}.
\end{equation}
Whereas, when every user device offloads all the data, the total energy consumption is given by
\begin{equation}
E_\text{sum}(1) = \sum\limits_i^N \: E_\text{tr,i}(1).
\end{equation}
\paragraph{Offloading Percentage}
The offloading percentage is the ratio of total offloaded data processing for all user devices to the total data processing of the system, and is given by
\begin{align}
\Lambda = \frac{ \sum\limits_i^N \alpha_i \cdot D_i}{ \sum\limits_i^N D_i}.
\end{align}
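These metrics reduce to a few lines of code; the sketch below (our own illustration, with hypothetical helper names) computes $\Lambda$ together with the two reference energies $E_\text{sum}(0)$ and $E_\text{sum}(1)$ from per-user values.

```python
def offload_percentage(alphas, Ds):
    """Lambda: fraction of the total data processing that is offloaded."""
    return sum(a * D for a, D in zip(alphas, Ds)) / sum(Ds)

def reference_energies(E_local, E_tx_full):
    """E_sum(0): everyone processes locally; E_sum(1): everyone offloads fully."""
    return sum(E_local), sum(E_tx_full)
```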
\subsection{Performance depending on path-loss}
\begin{table}[t]
\caption{Simulation Parameters}
\label{tb:simulation:parameters}
\centering
\begin{tabular}{|c||c||c||c|}
\hline
Variable & Value & Variable & Value \\ \hline
\hline
$C_{s}$ & 200 MHz & $N$ & 50 Users \\
\hline
$L$ & 10 & $M$ & 60 data elements \\
\hline
$BW$ & 10 MHz & $S$ & 8 bits\\
\hline
$T_\text{pr}$ & 1 ms & $T$ & 20 ms\\
\hline
$d_0$ & 200 m & $R$ & 800 m \\
\hline
$\epsilon_i$ & $5e{-6}$ mJ & $\eta_i$ & 100 cycles \\
\hline
$f_i(M)$ & $M$ & $\eta_s$ & 1 cycle \\
\hline
\end{tabular}
\end{table}
Fig.~\ref{fig:results:offloading:1} shows the optimal offloading percentage $\Lambda$ for $N=50$ user devices under different pathloss conditions. Two scenarios are considered, with $\unit[100]{\%}$ and $\unit[10]{\%}$ of the cloud server capacity $C_s$ available, where $C_s = \unit[200]{MHz}$ provides sufficient processing capacity for higher values of $M$ (discussed in the next subsection).
\begin{figure}[tb]
\centering
\begingroup
\unitlength=1mm
\psset{xunit=32.50000mm, yunit=0.42017mm, linewidth=0.1mm}
\psset{arrowsize=2pt 3, arrowlength=1.4, arrowinset=.4}\psset{axesstyle=frame}
\begin{pspicture}(1.53846, -38.08000)(4.00000, 119.00000)
\rput(-0.06154, -11.90000){%
\psaxes[subticks=0, labels=all, xsubticks=1, ysubticks=1, Ox=2, Oy=0, Dx=0.2, Dy=20]{-}(2.00000, 0.00000)(2.00000, 0.00000)(4.00000, 119.00000)%
\multips(2.20000, 0.00000)(0.20000, 0.0){9}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(0, 119.00000)}
\multips(2.00000, 20.00000)(0, 20.00000){5}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(2.00000, 0)}
\rput[b](3.00000, -26.18000){$\text{Pathloss Exponent} \: \beta $}
\rput[t]{90}(1.60000, 59.50000){$\Lambda$}
\psclip{\psframe(2.00000, 0.00000)(4.00000, 119.00000)}
\psline[linecolor=blue, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=blue, dotstyle=o, dotscale=1.2 1.2, linewidth=0.4mm](2.00000, 100.00000)(2.20000, 98.58797)(2.40000, 91.33355)(2.60000, 80.00576)(2.80000, 67.08496)(3.00000, 57.18920)(3.20000, 49.71945)(3.40000, 43.88725)(3.60000, 39.30643)(3.80000, 35.60574)(4.00000, 32.52836)
\psline[linecolor=red, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=red, dotstyle=diamond, dotscale=1.2 1.2, linewidth=0.4mm](2.10000, 100.00000)(2.30000, 100.00000)(2.50000, 87.80200)(2.70000, 72.16800)(2.90000, 61.11800)(3.10000, 52.99600)(3.30000, 46.46600)(3.50000, 41.57600)(3.70000, 37.67200)(3.90000, 34.33400)(4.10000, 31.71000)
\psline[linecolor=darkred, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=darkred, dotstyle=square*, dotscale=1.2 1.2, linewidth=0.4mm](2.00000, 65.61953)(2.20000, 65.62332)(2.40000, 65.79917)(2.60000, 66.00300)(2.80000, 64.07084)(3.00000, 56.30901)(3.20000, 48.97089)(3.40000, 43.23642)(3.60000, 38.66279)(3.80000, 34.99841)(4.00000, 32.03015)
\psline[linecolor=cyan, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=cyan, dotstyle=triangle*, dotscale=1.2 1.2, linewidth=0.4mm](2.10000, 66.00000)(2.30000, 66.00000)(2.50000, 65.82400)(2.70000, 64.79800)(2.90000, 59.34000)(3.10000, 51.90800)(3.30000, 45.44400)(3.50000, 40.73200)(3.70000, 36.54000)(3.90000, 33.32400)(4.10000, 30.60600)
\endpsclip
\psframe[linecolor=black, fillstyle=solid, fillcolor=white, shadowcolor=lightgray, shadowsize=1mm, shadow=true](3.07692, 79.73000)(3.94615, 119.00000)
\rput[l](3.35385, 111.86000){\footnotesize{$\text{SL} \: 100 \% \: C_\text{s}$}}
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](3.13846, 111.86000)(3.26154, 111.86000)
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](3.13846, 111.86000)(3.26154, 111.86000)
\psdots[linecolor=blue, linestyle=solid, linewidth=0.3mm, dotstyle=o, dotscale=1.2 1.2, linecolor=blue](3.20000, 111.86000)
\rput[l](3.35385, 103.53000){\footnotesize{$\text{SF} \: 100 \% \: C_\text{s}$}}
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](3.13846, 103.53000)(3.26154, 103.53000)
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](3.13846, 103.53000)(3.26154, 103.53000)
\psdots[linecolor=red, linestyle=solid, linewidth=0.3mm, dotstyle=diamond, dotscale=1.2 1.2, linecolor=red](3.20000, 103.53000)
\rput[l](3.35385, 95.20000){\footnotesize{$\text{SL}\: 10 \% \: C_\text{s}$}}
\psline[linecolor=darkred, linestyle=solid, linewidth=0.3mm](3.13846, 95.20000)(3.26154, 95.20000)
\psline[linecolor=darkred, linestyle=solid, linewidth=0.3mm](3.13846, 95.20000)(3.26154, 95.20000)
\psdots[linecolor=darkred, linestyle=solid, linewidth=0.3mm, dotstyle=square*, dotscale=1.2 1.2, linecolor=darkred](3.20000, 95.20000)
\rput[l](3.35385, 86.87000){\footnotesize{$\text{SF}\: 10 \% \: C_\text{s}$}}
\psline[linecolor=cyan, linestyle=solid, linewidth=0.3mm](3.13846, 86.87000)(3.26154, 86.87000)
\psline[linecolor=cyan, linestyle=solid, linewidth=0.3mm](3.13846, 86.87000)(3.26154, 86.87000)
\psdots[linecolor=cyan, linestyle=solid, linewidth=0.3mm, dotstyle=triangle*, dotscale=1.2 1.2, linecolor=cyan](3.20000, 86.87000)
}\end{pspicture}
\endgroup
\caption{Offloading with pathloss variation}
\label{fig:results:offloading:1}
\end{figure}
In Fig.~\ref{fig:results:offloading:1}, for low path-loss, $\beta\leq2.2$, $\unit[100]{\%}$ of the data processing is offloaded to the edge cloud.
This illustrates that if all user devices have good channel conditions, they minimize their energy consumption by offloading to the edge cloud. As $\beta$ increases, the offloading percentage drops, since some user devices experience high channel attenuation. For these devices, the transmission energy required for offloading exceeds the energy consumed for computation in the device itself.
In the second scenario with $\unit[10]{\%}C_s$, the edge cloud cannot simultaneously support offloading from all users. Therefore, even though some users would prefer offloading, only part of the data processing is carried out by the edge cloud; the maximum data processing supported by the edge cloud does not exceed $\unit[65]{\%}$ of the total computation. For $\beta>2.6$, the path-loss effects become prominent also in this scenario, and the two curves converge: a large number of user devices experience high channel attenuation and hence refrain from offloading, so the edge cloud server capacity is no longer the limiting factor.
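The per-device trade-off described above can be sketched numerically. The following toy model is our own illustration, not the closed-form solution derived in the paper: the noise power, the equal per-user bandwidth share, and the interpretation of $\eta_i$ as cycles per data element are assumptions; the remaining parameters are taken from the simulation table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from the simulation table
N, M, S = 50, 60, 8            # users, data elements, bits per element
T = 20e-3                      # time period [s]
d0, R = 200.0, 800.0           # reference distance and cell radius [m]
eps_i = 5e-6                   # energy per CPU cycle [mJ]
eta_i = 100                    # cycles per data element (assumption)
BW = 10e6 / N                  # equal bandwidth share per user [Hz] (assumption)
noise = 2.0                    # receiver noise power over BW [mW] (toy value)

d = rng.uniform(d0, R, N)      # user distances, drawn uniformly

def offload_fraction(beta):
    """Percentage of users for which transmitting the data costs less
    energy than computing it locally (all-or-nothing, SF-style decision)."""
    rate = M * S / T                          # required rate [bit/s]
    g = (d / d0) ** (-beta)                   # distance-based path gain
    P = noise * (2.0 ** (rate / BW) - 1) / g  # Shannon-inverse transmit power [mW]
    E_tr = P * T                              # transmit energy per user [mJ]
    E_loc = eps_i * eta_i * M                 # local computation energy [mJ]
    return 100.0 * np.mean(E_tr < E_loc)
```

Sweeping $\beta$ in this sketch reproduces the qualitative trend of Fig.~\ref{fig:results:offloading:1}: the offloading percentage decreases as the pathloss exponent grows.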
\begin{figure}[tb]
\centering
\begingroup
\unitlength=1mm
\psset{xunit=32.50000mm, yunit=1.00000mm, linewidth=0.1mm}
\psset{arrowsize=2pt 3, arrowlength=1.4, arrowinset=.4}\psset{axesstyle=frame}
\begin{pspicture}(1.53846, -16.00000)(4.00000, 50.00000)
\rput(-0.06154, -5.00000){%
\psaxes[subticks=0, labels=all, xsubticks=1, ysubticks=1, Ox=2, Oy=0, Dx=0.2, Dy=10]{-}(2.00000, 0.00000)(2.00000, 0.00000)(4.00000, 50.00000)%
\multips(2.20000, 0.00000)(0.20000, 0.0){9}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(0, 50.00000)}
\multips(2.00000, 10.00000)(0, 10.00000){4}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(2.00000, 0)}
\rput[b](3.00000, -11.00000){$\text{Pathloss Exponent} \: \beta $}
\rput[t]{90}(1.60000, 25.00000){$E_\text{sum} \: \left[ \text{mJ} \right]$}
\psclip{\psframe(2.00000, 0.00000)(4.00000, 50.00000)}
\psline[linecolor=darkred, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=darkred, dotstyle=square*, dotscale=1.2 1.2, linewidth=0.4mm](2.00000, 7.07256)(2.20000, 7.47052)(2.40000, 7.93380)(2.60000, 8.47231)(2.80000, 9.16365)(3.00000, 9.85116)(3.20000, 10.40255)(3.40000, 10.83736)(3.60000, 11.18787)(3.80000, 11.47529)(4.00000, 11.71394)
\psline[linecolor=cyan, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=cyan, dotstyle=triangle*, dotscale=1.2 1.2, linewidth=0.4mm](2.10000, 7.29736)(2.30000, 7.75566)(2.50000, 8.32108)(2.70000, 9.00335)(2.90000, 9.69977)(3.10000, 10.28788)(3.30000, 10.75305)(3.50000, 11.12612)(3.70000, 11.43335)(3.90000, 11.68500)(4.10000, 11.89923)
\psline[linecolor=blue, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=blue, dotstyle=o, dotscale=1.2 1.2, linewidth=0.4mm](2.00000, 4.55735)(2.20000, 5.71412)(2.40000, 6.98991)(2.60000, 8.18783)(2.80000, 9.15357)(3.00000, 9.87874)(3.20000, 10.43760)(3.40000, 10.87763)(3.60000, 11.23125)(3.80000, 11.52090)(4.00000, 11.76108)
\psline[linecolor=red, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=red, dotstyle=diamond, dotscale=1.2 1.2, linewidth=0.4mm](2.10000, 5.07830)(2.30000, 6.38792)(2.50000, 7.83054)(2.70000, 8.87976)(2.90000, 9.65380)(3.10000, 10.24800)(3.30000, 10.71289)(3.50000, 11.08217)(3.70000, 11.38623)(3.90000, 11.63911)(4.10000, 11.85083)
\psline[linecolor=black, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=black, dotstyle=square, dotscale=1.2 1.2, linewidth=0.4mm](2.00000, 15.00000)(2.20000, 15.00000)(2.40000, 15.00000)(2.60000, 15.00000)(2.80000, 15.00000)(3.00000, 15.00000)(3.20000, 15.00000)(3.40000, 15.00000)(3.60000, 15.00000)(3.80000, 15.00000)(4.00000, 15.00000)
\psline[linecolor=darkgreen, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=darkgreen, dotstyle=triangle, dotscale=1.2 1.2, linewidth=0.4mm](2.00000, 4.54698)(2.20000, 5.71345)(2.40000, 7.19553)(2.60000, 9.08090)(2.80000, 11.48205)(3.00000, 14.54343)(3.20000, 18.45066)(3.40000, 23.44238)(3.60000, 29.82561)(3.80000, 37.99555)(4.00000, 48.46119)
\endpsclip
\psframe[linecolor=black, fillstyle=solid, fillcolor=white, shadowcolor=lightgray, shadowsize=1mm, shadow=true](2.15385, 26.50000)(3.35231, 50.00000)
\rput[l](2.43077, 47.00000){\footnotesize{$ E_\text{sum}(\mathcal{A}')\: \text{SL}\: 10 \% \: C_\text{s}$}}
\psline[linecolor=darkred, linestyle=solid, linewidth=0.3mm](2.21538, 47.00000)(2.33846, 47.00000)
\psline[linecolor=darkred, linestyle=solid, linewidth=0.3mm](2.21538, 47.00000)(2.33846, 47.00000)
\psdots[linecolor=darkred, linestyle=solid, linewidth=0.3mm, dotstyle=square*, dotscale=1.2 1.2, linecolor=darkred](2.27692, 47.00000)
\rput[l](2.43077, 43.50000){\footnotesize{$ E_\text{sum}(\mathcal{A}')\:\text{SF} \: 10 \% \: C_\text{s}$}}
\psline[linecolor=cyan, linestyle=solid, linewidth=0.3mm](2.21538, 43.50000)(2.33846, 43.50000)
\psline[linecolor=cyan, linestyle=solid, linewidth=0.3mm](2.21538, 43.50000)(2.33846, 43.50000)
\psdots[linecolor=cyan, linestyle=solid, linewidth=0.3mm, dotstyle=triangle*, dotscale=1.2 1.2, linecolor=cyan](2.27692, 43.50000)
\rput[l](2.43077, 40.00000){\footnotesize{$ E_\text{sum}(\mathcal{A}')\:\text{SL} \: 100 \% \: C_\text{s}$}}
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](2.21538, 40.00000)(2.33846, 40.00000)
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](2.21538, 40.00000)(2.33846, 40.00000)
\psdots[linecolor=blue, linestyle=solid, linewidth=0.3mm, dotstyle=o, dotscale=1.2 1.2, linecolor=blue](2.27692, 40.00000)
\rput[l](2.43077, 36.50000){\footnotesize{$ E_\text{sum}(\mathcal{A}')\:\text{SF}\: 100 \% \: C_\text{s}$}}
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](2.21538, 36.50000)(2.33846, 36.50000)
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](2.21538, 36.50000)(2.33846, 36.50000)
\psdots[linecolor=red, linestyle=solid, linewidth=0.3mm, dotstyle=diamond, dotscale=1.2 1.2, linecolor=red](2.27692, 36.50000)
\rput[l](2.43077, 33.00000){\footnotesize{$ E_\text{sum}(0)$}}
\psline[linecolor=black, linestyle=solid, linewidth=0.3mm](2.21538, 33.00000)(2.33846, 33.00000)
\psline[linecolor=black, linestyle=solid, linewidth=0.3mm](2.21538, 33.00000)(2.33846, 33.00000)
\psdots[linecolor=black, linestyle=solid, linewidth=0.3mm, dotstyle=square, dotscale=1.2 1.2, linecolor=black](2.27692, 33.00000)
\rput[l](2.43077, 29.50000){\footnotesize{$E_\text{sum}(1)$}}
\psline[linecolor=darkgreen, linestyle=solid, linewidth=0.3mm](2.21538, 29.50000)(2.33846, 29.50000)
\psline[linecolor=darkgreen, linestyle=solid, linewidth=0.3mm](2.21538, 29.50000)(2.33846, 29.50000)
\psdots[linecolor=darkgreen, linestyle=solid, linewidth=0.3mm, dotstyle=triangle, dotscale=1.2 1.2, linecolor=darkgreen](2.27692, 29.50000)
}\end{pspicture}
\endgroup
\caption{Sum energy consumption with pathloss variation}
\label{fig:results:offloading:2}
\end{figure}
In Fig.~\ref{fig:results:offloading:2}, the total energy consumption is shown, again for both cases of $\unit[100]{\%}C_s$ and $\unit[10]{\%}C_s$, as well as for both stateful (SF) and stateless (SL) offloading. The total amount of processing data $D_i$ is the same for all user devices.
The energy consumption due to in-device data processing is independent of the channel condition, hence $E_\text{sum}(0)$ is constant over $\beta$.
In the case of full offloading, the energy consumption $E_\text{sum}(1)$ increases exponentially with increasing $\beta$.
Consider the scenario of $\unit[100]{\%}C_s$. For low $\beta\leq2.6$, the energy consumptions $E_\text{sum}(\mathcal{A}')$ and $E_\text{sum}(1)$ are identical, because offloading all data processing is optimal for all user devices.
As $\beta$ increases, $E_\text{sum}(1)$ grows exponentially, while $E_\text{sum}(\mathcal{A}')$ does not, as only a fraction of the user devices offloads. Moreover, for all $\beta$, $E_\text{sum}(\mathcal{A}')<E_\text{sum}(0)$, which implies that strategically offloading data processing from the user devices saves energy. For very large $\beta$, $E_\text{sum}(\mathcal{A}')\to E_\text{sum}(0)$, because offloading data processing becomes too expensive in terms of energy consumption.
In the second scenario, $\unit[10]{\%}C_s$, a similar behavior is observed, except that $E_\text{sum}(\mathcal{A}')>E_\text{sum}(1)$ for low values of $\beta$. The reason for this behavior was already visible in Fig.~\ref{fig:results:offloading:1}: only a fraction of the user devices can offload their data processing due to the limited server processing capability.
As the path-loss further increases, $C_s$ is no longer the limiting constraint, and hence $E_\text{sum}(\mathcal{A}')$ converges for both scenarios.
Finally, as shown in Fig.~\ref{fig:results:offloading:2}, no visible differences between SF and SL offloading are observed. This implies that, at lower computational complexity, it is beneficial to either offload all data or nothing. This trend slightly changes as the amount of data elements is increased, which is discussed in the next part.
\subsection{Performance depending on data volume}
\begin{figure}[tb]
\centering
\begingroup
\unitlength=1mm
\psset{xunit=0.23214mm, yunit=0.42017mm, linewidth=0.1mm}
\psset{arrowsize=2pt 3, arrowlength=1.4, arrowinset=.4}\psset{axesstyle=frame}
\begin{pspicture}(-44.61538, -38.08000)(300.00000, 119.00000)
\rput(-8.61538, -11.90000){%
\psaxes[subticks=0, labels=all, xsubticks=1, ysubticks=1, Ox=20, Oy=0, Dx=40, Dy=20]{-}(20.00000, 0.00000)(20.00000, 0.00000)(300.00000, 119.00000)%
\multips(60.00000, 0.00000)(40.00000, 0.0){6}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(0, 119.00000)}
\multips(20.00000, 20.00000)(0, 20.00000){5}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(280.00000, 0)}
\rput[b](160.00000, -26.18000){$ M \left[ \text{data elements}\right]$}
\rput[t]{90}(-36.00000, 59.50000){$\Lambda $}
\psclip{\psframe(20.00000, 0.00000)(300.00000, 119.00000)}
\psline[linecolor=blue, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=blue, dotstyle=o, dotscale=1.2 1.2, linewidth=0.4mm](20.00000, 100.00000)(40.00000, 100.00000)(60.00000, 100.00000)(80.00000, 98.82474)(100.00000, 93.85939)(120.00000, 87.55877)(140.00000, 81.09433)(160.00000, 74.93644)(180.00000, 69.27707)(200.00000, 64.16280)(220.00000, 59.57856)(240.00000, 55.48111)(260.00000, 51.81533)(280.00000, 49.32230)(300.00000, 49.32230)
\psline[linecolor=red, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=red, dotstyle=diamond, dotscale=1.2 1.2, linewidth=0.4mm](20.00000, 100.00000)(40.00000, 100.00000)(60.00000, 100.00000)(80.00000, 100.00000)(100.00000, 100.00000)(120.00000, 100.00000)(140.00000, 97.41000)(160.00000, 84.70800)(180.00000, 73.81600)(200.00000, 64.12600)(220.00000, 55.89600)(240.00000, 48.56800)(260.00000, 42.20800)(280.00000, 37.85200)(300.00000, 37.85200)
\endpsclip
\psframe[linecolor=black, fillstyle=solid, fillcolor=white, shadowcolor=lightgray, shadowsize=1mm, shadow=true](170.76923, 96.39000)(256.92308, 119.00000)
\rput[l](209.53846, 111.86000){\footnotesize{$\text{SL}$}}
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](179.38462, 111.86000)(196.61538, 111.86000)
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](179.38462, 111.86000)(196.61538, 111.86000)
\psdots[linecolor=blue, linestyle=solid, linewidth=0.3mm, dotstyle=o, dotscale=1.2 1.2, linecolor=blue](188.00000, 111.86000)
\rput[l](209.53846, 103.53000){\footnotesize{$\text{SF}$}}
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](179.38462, 103.53000)(196.61538, 103.53000)
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](179.38462, 103.53000)(196.61538, 103.53000)
\psdots[linecolor=red, linestyle=solid, linewidth=0.3mm, dotstyle=diamond, dotscale=1.2 1.2, linecolor=red](188.00000, 103.53000)
}\end{pspicture}
\endgroup
\caption{Offloading with increasing computation}
\label{fig:results:offloading:3}
\end{figure}
Fig.~\ref{fig:results:offloading:3} shows the offloading percentage as a function of the number of data elements $M$, for both SF and SL offloading.
The offloading percentage decreases with increasing $M$: since the time period $T$ stays constant, the required spectral efficiency, and therefore the required transmit power, increases. Hence, some user devices do not offload, as the transmit energy consumption $E_\text{tr,i}$ exceeds the in-device energy consumption $E_\text{u,i}$.
Furthermore, a clear difference between SL and SF offloading is observed only at higher values of $M$. At lower $M$, $D_i$ and $C_\text{serv,i}$ are small, i.\,e., less communication and computational load is introduced. Therefore, user devices either completely offload the data in case of good channel conditions, or do not offload at all if the channel attenuation is high. This causes SL offloading to perform similarly to SF offloading.
As $M$ increases, SL offloading allows each user to offload only part of its data, whereas with SF offloading each user either offloads all of its data or nothing. Therefore, the SF curve exhibits a steeper slope at higher $M$.
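The SL/SF distinction can be illustrated with a small allocation sketch. This is our own illustration, not the paper's optimization: a simple greedy rule stands in for the closed-form solution. Under linear complexity the energy saving per offloaded data element is constant per user, so SL-style partial offloading can fill the server's element budget greedily, while SF admits only all-or-nothing decisions.

```python
def sl_offload(saving_per_element, elements_per_user, server_budget):
    """Greedy SL-style allocation: give the server's element budget to the
    users with the largest positive energy saving per data element."""
    order = sorted(range(len(saving_per_element)),
                   key=lambda i: saving_per_element[i], reverse=True)
    alloc = [0] * len(saving_per_element)
    for i in order:
        if saving_per_element[i] <= 0 or server_budget == 0:
            continue  # no energy saving, or server capacity exhausted
        alloc[i] = min(elements_per_user[i], server_budget)  # partial allowed
        server_budget -= alloc[i]
    return alloc

def sf_offload(saving_per_element, elements_per_user, server_budget):
    """SF-style allocation: a user offloads all of its elements or none."""
    alloc = [0] * len(saving_per_element)
    for i in sorted(range(len(saving_per_element)),
                    key=lambda i: saving_per_element[i], reverse=True):
        if saving_per_element[i] > 0 and elements_per_user[i] <= server_budget:
            alloc[i] = elements_per_user[i]  # all-or-nothing decision
            server_budget -= alloc[i]
    return alloc
```

With a tight budget the SL rule can still use leftover capacity for a partial offload, which is consistent with the gentler decay of the SL curve in Fig.~\ref{fig:results:offloading:3}.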
\begin{figure}[tb]
\centering
\begingroup
\unitlength=1mm
\psset{xunit=0.23214mm, yunit=0.31250mm, linewidth=0.1mm}
\psset{arrowsize=2pt 3, arrowlength=1.4, arrowinset=.4}\psset{axesstyle=frame}
\begin{pspicture}(-44.61538, -51.20000)(300.00000, 160.00000)
\rput(-8.61538, -16.00000){%
\psaxes[subticks=0, labels=all, xsubticks=1, ysubticks=1, Ox=20, Oy=0, Dx=40, Dy=20]{-}(20.00000, 0.00000)(20.00000, 0.00000)(300.00000, 160.00000)%
\multips(60.00000, 0.00000)(40.00000, 0.0){6}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(0, 160.00000)}
\multips(20.00000, 20.00000)(0, 20.00000){7}{\psline[linecolor=black, linestyle=dotted, linewidth=0.2mm](0, 0)(280.00000, 0)}
\rput[b](160.00000, -35.20000){$M \left[ \text{data elements}\right]$}
\rput[t]{90}(-36.00000, 80.00000){$ E_\text{sum} \left[\text{mJ}\right]$}
\psclip{\psframe(20.00000, 0.00000)(300.00000, 160.00000)}
\psline[linecolor=blue, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=blue, dotstyle=o, dotscale=1.2 1.2, linewidth=0.4mm](20.00000, 1.12473)(40.00000, 2.60882)(60.00000, 4.56708)(80.00000, 7.13729)(100.00000, 10.29657)(120.00000, 13.90650)(140.00000, 17.85839)(160.00000, 22.06744)(180.00000, 26.46965)(200.00000, 31.01730)(220.00000, 35.67496)(240.00000, 40.41619)(260.00000, 45.22043)(280.00000, 49.48871)(300.00000, 52.02260)
\psline[linecolor=red, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=red, dotstyle=diamond, dotscale=1.2 1.2, linewidth=0.4mm](20.00000, 1.12473)(40.00000, 2.60882)(60.00000, 4.56708)(80.00000, 7.15103)(100.00000, 10.56057)(120.00000, 15.05948)(140.00000, 20.83186)(160.00000, 26.82347)(180.00000, 33.17149)(200.00000, 39.73277)(220.00000, 46.50419)(240.00000, 53.34676)(260.00000, 60.23785)(280.00000, 66.11309)(300.00000, 69.22049)
\psline[linecolor=black, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=black, dotstyle=square, dotscale=1.2 1.2, linewidth=0.4mm](20.00000, 5.00000)(40.00000, 10.00000)(60.00000, 15.00000)(80.00000, 20.00000)(100.00000, 25.00000)(120.00000, 30.00000)(140.00000, 35.00000)(160.00000, 40.00000)(180.00000, 45.00000)(200.00000, 50.00000)(220.00000, 55.00000)(240.00000, 60.00000)(260.00000, 65.00000)(280.00000, 70.00000)(300.00000, 75.00000)
\psline[linecolor=darkgreen, plotstyle=curve, linewidth=0.4mm, showpoints=true, linestyle=solid, linecolor=darkgreen, dotstyle=triangle, dotscale=1.2 1.2, linewidth=0.4mm](20.00000, 1.12473)(40.00000, 2.60882)(60.00000, 4.56708)(80.00000, 7.15103)(100.00000, 10.56057)(120.00000, 15.05948)(140.00000, 20.99584)(160.00000, 28.82890)(180.00000, 39.16469)(200.00000, 52.80285)(220.00000, 70.79850)(240.00000, 94.54392)(260.00000, 125.87617)(280.00000, 155.78542)(300.00000, 155.78542)
\endpsclip
\psframe[linecolor=black, fillstyle=solid, fillcolor=white, shadowcolor=lightgray, shadowsize=1mm, shadow=true](41.53846, 107.20000)(180.92308, 160.00000)
\rput[l](80.30769, 150.40000){\footnotesize{$E_\text{sum}(\mathcal{A}') \: \text{SL}$}}
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](50.15385, 150.40000)(67.38462, 150.40000)
\psline[linecolor=blue, linestyle=solid, linewidth=0.3mm](50.15385, 150.40000)(67.38462, 150.40000)
\psdots[linecolor=blue, linestyle=solid, linewidth=0.3mm, dotstyle=o, dotscale=1.2 1.2, linecolor=blue](58.76923, 150.40000)
\rput[l](80.30769, 139.20000){\footnotesize{$E_\text{sum}(\mathcal{A}') \: \text{SF}$}}
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](50.15385, 139.20000)(67.38462, 139.20000)
\psline[linecolor=red, linestyle=solid, linewidth=0.3mm](50.15385, 139.20000)(67.38462, 139.20000)
\psdots[linecolor=red, linestyle=solid, linewidth=0.3mm, dotstyle=diamond, dotscale=1.2 1.2, linecolor=red](58.76923, 139.20000)
\rput[l](80.30769, 128.00000){\footnotesize{$ E_\text{sum}(0)$}}
\psline[linecolor=black, linestyle=solid, linewidth=0.3mm](50.15385, 128.00000)(67.38462, 128.00000)
\psline[linecolor=black, linestyle=solid, linewidth=0.3mm](50.15385, 128.00000)(67.38462, 128.00000)
\psdots[linecolor=black, linestyle=solid, linewidth=0.3mm, dotstyle=square, dotscale=1.2 1.2, linecolor=black](58.76923, 128.00000)
\rput[l](80.30769, 116.80000){\footnotesize{$ E_\text{sum}(1)$}}
\psline[linecolor=darkgreen, linestyle=solid, linewidth=0.3mm](50.15385, 116.80000)(67.38462, 116.80000)
\psline[linecolor=darkgreen, linestyle=solid, linewidth=0.3mm](50.15385, 116.80000)(67.38462, 116.80000)
\psdots[linecolor=darkgreen, linestyle=solid, linewidth=0.3mm, dotstyle=triangle, dotscale=1.2 1.2, linecolor=darkgreen](58.76923, 116.80000)
}\end{pspicture}
\endgroup
\caption{Sum energy with increasing computation}
\label{fig:results:offloading:4}
\end{figure}
This is also reflected in the energy consumption, shown in Fig.~\ref{fig:results:offloading:4}. The slope with which the energy consumption increases for SF offloading is slightly higher than for SL offloading. However, both SF and SL offloading achieve a lower energy consumption than fully centralized processing, $E_\text{sum}(1)$, and fully localized processing, $E_\text{sum}(0)$.
$E_\text{sum}(0)$ increases linearly because the computational complexity in our scenario scales linearly with the number of data elements $M$. In contrast, the energy consumption for fully centralized processing increases exponentially.
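The different scalings can be made plausible with a back-of-the-envelope sketch (our own illustration, using a standard Shannon-type rate-power relation that need not coincide term-by-term with the model derived earlier): for fixed $T$ and $BW$, the required spectral efficiency grows linearly in $M$, so
\begin{flalign}
E_\text{u,i} \;\propto\; \epsilon_i\,\eta_i\, f_i(M) = \epsilon_i\,\eta_i\, M~,\qquad
E_\text{tr,i} \;\propto\; T\,\Bigl(2^{\frac{M S}{T\, BW}}-1\Bigr)~,
\end{flalign}
i.\,e., linear growth for local processing and exponential growth for full offloading.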
\section{Conclusion} \label{sec: conclusion}
In this paper, we developed an energy consumption model for in-device computation and computation offloading. A closed-form solution is obtained for optimally offloading the computation, given the cloud computational resources and channel conditions. The results show that the energy consumption of the user devices can be reduced by making an informed decision based on the trade-off between the communication and computational load of the system. Furthermore, the results illustrate that the bandwidth and the cloud server capacity are the limiting factors for optimally offloading the computation. If the processing capacity of the cloud server is limited, then even with very good channel conditions the users cannot fully offload to the cloud, and the energy savings remain sub-optimal. Similarly, if the system has to process a large amount of data in a short time span, the available bandwidth is the limiting factor.
This paper only deals with data processing algorithms of linear complexity. The multi-user analytical framework can be further used to study algorithms with different complexities.
\vfill
The study of noncommutative geometry is an active topic in both
theoretical physics and mathematics. From the mathematical perspective
it is a generalization of classical (commutative) geometry. From the
physics perspective it is suggested by the \textit{Gedankenexperiment}
of localizing events in spacetime with a Planck scale
resolution~\cite{Doplicher:1994zv}. In this
\textit{Gedankenexperiment}, a sharp localization induces an
uncertainty in the spacetime coordinates, which can naturally be
described by a noncommutative spacetime. Furthermore, noncommutative
geometry and quantum gravity appear to be connected strongly and one
can probably model ``low energy'' effects of quantum gravity theories
using noncommutative geometry.
There have been many attempts to formulate scalar, gauge and gravity
theories on noncommutative spacetime, in particular using the simplest
example of a Moyal-Weyl spacetime having constant noncommutativity
between space and time coordinates, see
\cite{Szabo:2001kg,MullerHoissen:2007xy} for reviews. Furthermore,
this framework had been applied to phenomenological particle physics
with~\cite{NCSM,NCSM-Pheno} and without Seiberg-Witten maps (see the
review~\cite{NC-Pheno} and references therein),
cosmology~\cite{NC-Cosmo} and black hole physics (see the
review~\cite{Nicolini:2008aj} and references therein).
Our work is based on the approach outlined
in~\cite{Aschieri:2005yw,Aschieri:2005zs,Aschieri:2006kc}, where a
noncommutative gravity theory based on an arbitrary twist deformation
is established. This approach has the advantages of being formulated
using the symmetry principle of deformed diffeomorphisms, being
coordinate independent and applicable to nontrivial topologies.
However, there is also the disadvantage that it does not match
the Seiberg-Witten limit of string theory~\cite{AlvarezGaume:2006bn}.
Nevertheless, string theory is not the only candidate for a
fundamental theory of quantum gravity. Therefore, the investigation of
deformed gravity remains interesting on its own terms and it could very well
emerge from a fundamental theory of quantum gravity different from
string theory.
The outline of this paper is as follows. In section \ref{sec:basics}
we review the basics of the formalism of twisted noncommutative
differential geometry. For more details and the proofs we refer to the
original paper~\cite{Aschieri:2005zs} and the
review~\cite{Aschieri:2006kc}. We will work with a general twist and
do not restrict ourselves to the Moyal-Weyl deformation.
In section \ref{sec:symred} we will study symmetry reduction in
theories based on twisted symmetries, such as the twisted
diffeomorphisms in our theory of interest. The reason is that we aim
to investigate which deformations of cosmological and black hole
symmetries are possible. We will derive the conditions that the twist
has to satisfy in order to be compatible with the reduced symmetry.
In section \ref{sec:jambor} we restrict the twists to the class of
Reshetikhin-Jambor-Sykora twists~\cite{Reshetikhin:1990ep,Jambor:2004kc},
that are twists generated by commuting vector fields and are convenient
for practical applications. Within this restricted class of twists we
can classify more explicitly the possible deformations of Lie algebra
symmetries acting on a manifold~$\mathcal{M}$.
In section \ref{sec:cosmo} and \ref{sec:blackhole} we apply the
formalism to cosmological symmetries as well as the black hole. We
classify the possible Reshetikhin-Jambor-Sykora deformations of these models and
obtain physically interesting ones. In section \ref{sec:conc} we
conclude and give an outlook to possible further investigations. In
particular possible applications to phenomenological cosmology and
black hole physics will be discussed.
\section{\label{sec:basics}Basics of Twisted Differential Geometry and Gravity}
In order to establish notation, we will give a short summary of the framework of twisted differential geometry and gravity.
More details can be found in~\cite{Aschieri:2005yw,Aschieri:2005zs,Aschieri:2006kc}.
There is a quite general procedure for constructing noncommutative spaces and their corresponding symmetries
by using a twist. For this we require the following ingredients~\cite{Aschieri:2006kc}:
\begin{enumerate}
\item a Lie algebra $\mathfrak{g}$
\item an action of the Lie algebra on the space we want to deform
\item a twist element $\mathcal{F}$, constructed from the generators of the Lie algebra $\mathfrak{g}$
\end{enumerate}
By a twist element we denote an invertible element of $U\mathfrak{g}\otimes U\mathfrak{g}$, where
$U\mathfrak{g}$ is the universal enveloping algebra of $\mathfrak{g}$. $\mathcal{F}$ has to fulfill some
conditions, which will be specified later. The basic idea in the following is to combine any bilinear map with the inverse twist,
thereby deforming these maps. This leads to a mathematically consistent deformed theory, covariant under
the deformed transformations. We will now show this for the deformation of diffeomorphisms.
For our purpose we are interested in the Lie algebra of vector fields $\Xi$ on a manifold $\mathcal{M}$.
The transformations induced by $\Xi$ can be seen as infinitesimal diffeomorphisms. A natural action of these
transformations on the algebra of tensor fields $\mathcal{T}:=\bigoplus\limits_{n,m} \bigotimes^{n} \Omega \otimes \bigotimes^m \Xi $
is given by the Lie derivative $\mathcal{L}$. $\Omega$ denotes the space of one-forms.
In order to deform this Lie algebra, as well as its action on tensor fields and the tensor fields themselves, we first have
to construct the enveloping algebra $U\Xi$. This is the associative
tensor algebra generated by the elements of $\Xi$ and the unit $1$, modulo the left and right ideals generated by
the elements $\com{v}{w}-v w + w v$. This algebra can be seen as a Hopf algebra by using the following
coproduct $\Delta$, antipode $S$ and counit $\epsilon$ defined on the generators $u\in \Xi$ and $1$ by:
\begin{flalign}
\begin{array}{ll}
\Delta(u) = u\otimes 1 + 1\otimes u, & \Delta(1) = 1\otimes 1~,\\
\epsilon(u) = 0, & \epsilon(1) = 1~,\\
S(u) = -u, & S(1) = 1~.
\end{array}
\end{flalign}
These definitions can be consistently carried over to the whole enveloping algebra demanding $\Delta$ and $\epsilon$
to be algebra homomorphisms and $S$ to be an anti-homomorphism, i.e.~for any two elements $\eta,\xi\in U\Xi$, we require
\begin{subequations}
\begin{flalign}
&\Delta(\eta\xi) = \Delta(\eta)\Delta(\xi)~,\\
&\epsilon(\eta\xi) = \epsilon(\eta) \epsilon(\xi)~,\\
&S(\eta\xi)= S(\xi) S(\eta)~.
\end{flalign}
\end{subequations}
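As a quick consistency check (added here for illustration), the homomorphism property determines the coproduct on products of generators: for $u,v\in\Xi$,
\begin{flalign}
\Delta(u v) = \Delta(u)\,\Delta(v) = (u\otimes 1 + 1\otimes u)(v\otimes 1 + 1\otimes v)
= u v\otimes 1 + u\otimes v + v\otimes u + 1\otimes u v~,
\end{flalign}
so that $\Delta(uv-vu)=(uv-vu)\otimes 1 + 1\otimes (uv-vu)$, consistent with identifying $uv-vu$ with the primitive element $\com{u}{v}$.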
The action of the enveloping algebra on the tensor fields can be defined by extending the Lie derivative
\begin{flalign}
\mathcal{L}_{\eta\xi} (\tau) := \mathcal{L}_{\eta}(\mathcal{L}_{\xi} (\tau))~,~\forall \eta,\xi\in U\Xi~,~\tau\in \mathcal{T}~.
\end{flalign}
This action is consistent with the Lie algebra properties, since $\mathcal{L}_{\com{u}{v}}(\tau) = \mathcal{L}_{uv}(\tau)-\mathcal{L}_{vu}(\tau)$
for all $u,v\in\Xi$ by the properties of the Lie derivative.
The extension of the Lie algebra $\Xi$ to the Hopf algebra $(U\Xi,\cdot,\Delta,S,\epsilon)$, where $\cdot$ is the multiplication in $U\Xi$,
can now be used in order to construct deformations of it. For the deformations we restrict ourselves
to twist deformations, which is a wide class of possible deformations. The reason is that for twist deformations the
construction of deformed differential geometry and gravity can be performed explicitly by only using properties of the twist,
see~\cite{Aschieri:2005zs}. Other deformations require further investigations.
In order to perform the deformation we require a twist element $\mathcal{F}= f^{\alpha}\otimes f_{\alpha}\in U\Xi\otimes U\Xi$
(the sum over $\alpha$ is understood) fulfilling the following conditions
\begin{subequations}
\begin{flalign}
&\mathcal{F}_{12}(\Delta\otimes \mathrm{id})\mathcal{F}=\mathcal{F}_{23}(\mathrm{id}\otimes\Delta)\mathcal{F}~,\\
&(\epsilon\otimes \mathrm{id}) \mathcal{F} = 1 = (\mathrm{id}\otimes\epsilon)\mathcal{F}~,\\
&\mathcal{F}=1\otimes1 +\mathcal{O}(\lambda)~,
\end{flalign}
\end{subequations}
where $\mathcal{F}_{12}:= \mathcal{F}\otimes 1 $, $\mathcal{F}_{23}:=1\otimes\mathcal{F}$ and $\lambda$ is the deformation parameter.
The first condition ensures the associativity of the deformed products, the second ensures that deformed
multiplications with unit elements are trivial, and the third guarantees the existence of the undeformed
classical limit $\lambda\to 0$.
Furthermore, we can assume without loss of generality that the $f_\alpha$ (and also the $f^\alpha$) are linearly independent for all $\alpha$,
which can be ensured by combining linearly dependent terms.
Note that $\mathcal{F}$ is regarded as a formal power series in $\lambda$, as is the deformation itself.
Strict (convergent) deformations will not be considered here.
The simplest example is the twist on $\mathbb{R}^n$ given by
$\mathcal{F}_\theta:= \exp{\bigl(-\frac{i\lambda}{2}\theta^{\mu\nu} \partial_\mu\otimes\partial_\nu \bigr)}$ with $\theta^{\mu\nu}=\mathrm{const.}$
and antisymmetric, leading to the Moyal-Weyl deformation, but there are also more complicated ones.
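For orientation, the inverse of this twist acts on products of functions via iterated Lie derivatives, generating to first order in $\lambda$ the familiar Moyal-Weyl star product of functions,
\begin{flalign}
f\star g = f\,g + \frac{i\lambda}{2}\,\theta^{\mu\nu}\,\partial_\mu f\,\partial_\nu g + \mathcal{O}(\lambda^2)~,
\end{flalign}
and in particular the coordinate commutation relations $x^\mu\star x^\nu - x^\nu\star x^\mu = i\lambda\,\theta^{\mu\nu}$.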
From a twist, one can construct the twisted triangular Hopf algebra
$(U\Xi_\mathcal{F},\cdot,\Delta_\mathcal{F},S_\mathcal{F},\epsilon_\mathcal{F})$
with $R$-matrix $R:=\mathcal{F}_{21}\mathcal{F}^{-1}=:R^\alpha \otimes R_\alpha$, inverse $R^{-1} =: \bar R^\alpha \otimes \bar R_\alpha = R_{21}$
and
\begin{flalign}
&\Delta_\mathcal{F}(\xi):= \mathcal{F}\Delta(\xi)\mathcal{F}^{-1}~,\quad\epsilon_\mathcal{F}(\xi):= \epsilon(\xi)~,\quad S_\mathcal{F}(\xi):= \chi S(\xi)\chi^{-1}~,
\end{flalign}
where $\chi := f^\alpha S(f_\alpha)$, $\chi^{-1}:= S(\bar f^\alpha) \bar f_\alpha$ and $\bar f^\alpha \otimes \bar f_\alpha := \mathcal{F}^{-1}$.
Furthermore, $\mathcal{F}_{21}:= f_\alpha\otimes f^\alpha$ and $R_{21}:=R_\alpha \otimes R^\alpha$. Again, we can assume without loss of generality that all summands of $\mathcal{F}^{-1}$, $R$ and $R^{-1}$ are linearly independent.
However, as explained in~\cite{Aschieri:2005zs}, it is simpler to use the triangular $\star$-Hopf algebra $\mathcal{H}_\Xi^\star=(U\Xi_\star,\star,\Delta_\star,S_\star,\epsilon_\star)$,
isomorphic to $(U\Xi_\mathcal{F},\cdot,\Delta_\mathcal{F},S_\mathcal{F},\epsilon_\mathcal{F})$. The operations
in this algebra on its generators $u,v\in\Xi$ (note that this algebra has the same generators as the classical Hopf algebra) are defined by
\begin{subequations}
\label{eqn:defstarhopfactions}
\begin{flalign}
&u\star v := \bar f^\alpha(u)\bar f_\alpha(v)~,\\
&\Delta_\star(u) := u\otimes 1 + X_{\bar R^\alpha} \otimes \bar R_\alpha(u)~,\\
&\epsilon_\star(u):=\epsilon(u)=0~,\\
&S_\star^{-1}(u) := - \bar R^\alpha(u) \star X_{\bar R_\alpha}~,
\end{flalign}
\end{subequations}
where for all $\xi\in U\Xi$ we define $X_\xi := \bar f^\alpha \xi \chi S^{-1}(\bar f_\alpha)$. The action of the twist on the elements of $U\Xi$
is defined by extending the Lie derivative to the adjoint action~\cite{Aschieri:2005zs}. Note that $U\Xi=U\Xi_\star$ as vector spaces. The $R$-matrix is
given by $R_\star := X_{R^\alpha} \otimes X_{R_\alpha}$ and is triangular. The coproduct and antipode (\ref{eqn:defstarhopfactions})
are defined consistently on all of $U\Xi_\star$ by setting, for all $\xi,\eta\in U\Xi_\star$,
\begin{flalign}
\Delta_\star(\xi\star\eta):= \Delta_\star(\xi)\star\Delta_\star(\eta)~,\quad~S_\star(\xi\star\eta):= S_\star(\eta)\star S_\star(\xi)~.
\end{flalign}
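For the Moyal-Weyl twist $\mathcal{F}_\theta$, for instance, all twist legs are built from the mutually commuting $\partial_\mu$, so that $\chi = f^\alpha S(f_\alpha) = \exp\bigl(\frac{i\lambda}{2}\theta^{\mu\nu}\partial_\mu\partial_\nu\bigr)=1$ by antisymmetry of $\theta$, and $X_{\partial_\mu}=\partial_\mu$. Moreover, $\bar R_\alpha(\partial_\mu)$ is given by iterated commutators of partial derivatives, which vanish except at zeroth order, so the coproduct of translations remains undeformed,
\begin{flalign}
\Delta_\star(\partial_\mu) = \partial_\mu\otimes 1 + 1\otimes\partial_\mu~.
\end{flalign}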
The next step is to define the $\star$-Lie algebra of deformed infinitesimal diffeomorphisms. It has been shown~\cite{Aschieri:2005zs}
that in the case of twist deformations a natural choice for a $\star$-Lie algebra is $(\Xi_\star,\com{~}{~}_\star)$, where $\Xi_\star=\Xi$ as vector spaces and
\begin{flalign}
\com{u}{v}_\star:= \com{\bar f^\alpha(u)}{\bar f_\alpha(v)}~.
\end{flalign}
It fulfills all conditions necessary for a sensible $\star$-Lie algebra:
\begin{enumerate}
\item $\Xi_\star\subset U\Xi_\star$ is a linear space which generates $U\Xi_\star$,
\item $\Delta_\star(\Xi_\star) \subseteq \Xi_\star\otimes 1 + U\Xi_\star\otimes\Xi_\star$,
\item $\com{\Xi_\star}{\Xi_\star}_\star\subseteq\Xi_\star$.
\end{enumerate}
The advantage of using the $\star$-Hopf algebra $(U\Xi_\star,\star,\Delta_\star,S_\star,\epsilon_\star)$
instead of the $\mathcal{F}$-Hopf algebra $(U\Xi_\mathcal{F},\cdot,\Delta_\mathcal{F},S_\mathcal{F},\epsilon_\mathcal{F})$
is that the $\star$-Lie algebra of vector fields is isomorphic to $\Xi$ as a vector space.
For the $\mathcal{F}$-Hopf algebra this is not the case and the $\mathcal{F}$-Lie algebra consists in general of multidifferential
operators.
The algebra of tensor fields $\mathcal{T}$ is deformed by using the $\star$-tensor product~\cite{Aschieri:2005zs}
\begin{flalign}
\tau \otimes_\star \tau^\prime := \bar f^\alpha(\tau) \otimes\bar f_\alpha(\tau^\prime)~,
\end{flalign}
where as basic ingredients the deformed algebra of functions $A_\star:=(C^\infty(M),\star)$ as well as the $A_\star$-bimodules of
vector fields $\Xi_\star$ and one-forms $\Omega_\star$ enter. We call $\mathcal{T}_\star$ the deformed algebra of tensor fields. Note that
$\mathcal{T}_\star = \mathcal{T}$ as vector spaces.
The action of the deformed infinitesimal diffeomorphisms on $\mathcal{T}_\star$ is defined by the $\star$-Lie derivative
\begin{flalign}
\label{eqn:starliederivative}
\mathcal{L}^\star_{u}(\tau):= \mathcal{L}_{\bar f^\alpha(u)}(\bar f_\alpha(\tau))~,~\forall \tau\in\mathcal{T}_\star~,~u\in\Xi_\star~,
\end{flalign}
which can be extended to all of $U\Xi_\star$ by $\mathcal{L}^\star_{\xi\star\eta}(\tau):= \mathcal{L}^\star_\xi(\mathcal{L}^\star_\eta (\tau)) $.
Furthermore, we define the $\star$-pairing $\langle \cdot,\cdot\rangle_\star:\Xi_\star\otimes_\mathbb{C} \Omega_\star\to A_\star$ between
vector fields and one-forms as
\begin{flalign}
\label{eqn:starpairing}
\langle v,\omega\rangle_\star := \langle \bar f^\alpha(v),\bar f_\alpha(\omega)\rangle~,~\forall v\in\Xi_\star,~\omega\in\Omega_\star~,
\end{flalign}
where $\langle \cdot,\cdot\rangle$ is the undeformed pairing.
Based on the deformed symmetry principle one can define covariant derivatives, torsion and curvature. This leads to deformed Einstein
equations, see~\cite{Aschieri:2005zs}, which we do not have to review here, since we do not use them in the following.
\section{\label{sec:symred}Symmetry Reduction in Twisted Differential Geometry}
Assume that we have constructed a deformed gravity theory based on a twist $\mathcal{F}\in U\Xi\otimes U\Xi$. As in Einstein
gravity, the physical applications of this theory depend strongly on symmetry reduction. In this section we first define
what we mean by symmetry reduction of a theory covariant under a Lie algebraic symmetry (e.g.~infinitesimal diffeomorphisms)
and then extend the principles to deformed symmetries and $\star$-Lie algebras.
In undeformed general relativity we often face the fact that the systems we want to describe have certain (approximate) symmetries.
Here we restrict ourselves to Lie group symmetries.
For example, in cosmology one usually restricts attention to fields invariant under a certain symmetry group $G$, e.g.~the Euclidean
group $E_3$ for flat universes or the group $SO(4)$ for universes with topology $\mathbb{R}\times S_3$, where the spatial hypersurfaces
are 3-spheres.
For a nonrotating black hole one usually demands the metric to be stationary and spherically symmetric. Practically, one
uses the corresponding Lie algebra $\mathfrak{g}$ of the symmetry group $G$, represents it faithfully on the Lie algebra
of vector fields $\Xi$ on the manifold $\mathcal{M}$ and demands the fields $\tau\in\mathcal{T}$, which occur in the theory,
to be invariant under these transformations, i.e. we demand
\begin{flalign}
\mathcal{L}_v(\tau)=0~,~\forall v\in \mathfrak{g}~.
\end{flalign}
Since the Lie algebra $\mathfrak{g}$ is a linear space we can choose a basis $\lbrace t_i :i=1,\cdots,\mathrm{dim}(\mathfrak{g})\rbrace$
and can equivalently demand
\begin{flalign}
\mathcal{L}_{t_i}(\tau)=0~,~\forall i=1,2,\cdots ,\mathrm{dim}(\mathfrak{g})~.
\end{flalign}
The Lie bracket of the generators has to fulfill
\begin{flalign}
\com{t_i}{t_j}=f_{ij}^{~~k}t_k~,
\end{flalign}
where $f_{ij}^{~~k}$ are the structure constants.
One can easily show that if we combine two invariant tensors with the tensor product, the resulting tensor is invariant too
because of the trivial coproduct
\begin{flalign}
\mathcal{L}_{t_i}(\tau\otimes\tau^\prime) = \mathcal{L}_{t_i}(\tau) \otimes \tau^\prime + \tau\otimes\mathcal{L}_{t_i}(\tau^\prime)~.
\end{flalign}
The same holds true for pairings $\langle v,\omega\rangle$ of invariant objects $v\in\Xi$ and $\omega\in\Omega$.
Furthermore, if a tensor is invariant under infinitesimal transformations, it is also invariant under (at least small) finite transformations,
since they are given by exponentiating the generators. The exponentiated generators are part of the enveloping algebra,
i.e.~$\exp(\alpha^i t_i)\in U\mathfrak{g}$, where $\alpha^i$ are parameters. For large finite transformations the topology of the
Lie group can play a role, such that the group elements may not simply be given by exponentiating the generators. In the following
we will focus only on small finite transformations in order to avoid topological effects.
We now generalize this to the case of $\star$-Hopf algebras and their corresponding $\star$-Lie algebras. Our plan is
as follows: we start with a suitable definition of a $\star$-Lie subalgebra constructed from the Lie algebra $(\mathfrak{g},\com{~}{~})$.
This definition is guided by conditions, which allow for deformed symmetry reduction using infinitesimal transformations. Then
we complete this $\star$-Lie subalgebra in several steps to a $\star$-enveloping subalgebra, a $\star$-Hopf subalgebra
and a triangular $\star$-Hopf subalgebra. We will always be careful that the dimension of the $\star$-Lie subalgebra remains the same
as the dimension of the corresponding classical Lie algebra. At each step we obtain several restrictions between the twist
and $(\mathfrak{g},\com{~}{~})$.
We start by taking the generators $\lbrace t_i\rbrace$ of $\mathfrak{g}\subseteq\Xi$
and representing their deformations in the $\star$-Lie algebra $(\Xi_\star,\com{~}{~}_\star)$ as
\begin{flalign}
\label{eqn:stargenerators}
t_i^\star = t_i +\sum\limits_{n=1}^{\infty} \lambda^n t_i^{(n)}~,
\end{flalign}
where $\lambda$ is the deformation parameter and $t_i^{(n)}\in\Xi_\star$.
The span of these deformed generators, together with the $\star$-Lie bracket, should form a $\star$-Lie subalgebra $(\mathfrak{g}_\star,\com{~}{~}_\star) := (\mathrm{span}(t_i^\star),\com{~}{~}_\star )$. Therefore $(\mathfrak{g}_\star,\com{~}{~}_\star)$ has
to obey certain conditions. Natural conditions are
\begin{subequations}
\label{eqn:infinitesimalconditions}
\begin{align}
\label{eqn:infconda}\com{\mathfrak{g}_\star}{\mathfrak{g}_\star}_\star \subseteq \mathfrak{g}_\star~,
\quad &\text{i.e.~$\com{t_i^\star}{t_j^\star}_\star = f_{ij}^{\star~k}t^\star_k$ with $f_{ij}^{\star~k} = f_{ij}^{~~k} + \mathcal{O}(\lambda)$}~,\\
\label{eqn:infcondb}\Delta_\star(\mathfrak{g}_\star) \subseteq \mathfrak{g}_\star \otimes 1 + U\Xi_\star \otimes \mathfrak{g}_\star~,
\quad &\text{which is equivalent to $\bar R_\alpha(\mathfrak{g}_\star)\subseteq \mathfrak{g}_\star~,~\forall_\alpha$}~.
\end{align}
\end{subequations}
The first condition is a basic feature of a $\star$-Lie algebra.
The second condition implies that if we have two $\mathfrak{g}_\star$ invariant tensors $\tau,\tau^\prime\in\mathcal{T}_\star$, the
$\star$-tensor product of them is invariant as well
\begin{flalign}
\mathcal{L}^\star_{t_i^\star}(\tau\otimes_\star\tau^\prime) = \mathcal{L}^\star_{t_i^\star}(\tau)\otimes_\star\tau^\prime + \bar R^\alpha(\tau)\otimes_\star\mathcal{L}^\star_{\bar R_\alpha (t_i^\star)}(\tau^\prime) =0~,
\end{flalign}
since $\bar R_\alpha(t_i^\star) \in \mathfrak{g}_\star$.
The $\star$-pairings $\langle v,\omega\rangle_\star$ of two invariant objects $v\in\Xi_\star$ and $\omega\in\Omega_\star$ are also invariant
under the $\star$-action of $\mathfrak{g}_\star$.
These are important features if one wants to combine invariant objects to e.g.~an invariant action.
Furthermore, the conditions are sufficient such that the following consistency relation is fulfilled for
any invariant tensor $\tau\in\mathcal{T}_\star$
\begin{flalign}
0=f_{ij}^{\star~k}\, \mathcal{L}^\star_{t^\star_k}(\tau) = \mathcal{L}^\star_{\com{t^\star_i}{t^\star_j}_\star}(\tau)= \mathcal{L}^\star_{t^\star_{i}}(\mathcal{L}^\star_{t^\star_j}(\tau)) - \mathcal{L}^\star_{\bar R^\alpha(t^\star_{j})}(\mathcal{L}^\star_{\bar R_\alpha(t^\star_i)}(\tau))~,
\end{flalign}
since $\bar R_\alpha(t_i^\star) \in \mathfrak{g}_\star$.
Hence by demanding the two conditions (\ref{eqn:infinitesimalconditions}) for the $\star$-Lie subalgebra
$(\mathrm{span}(t_i^\star),\com{~}{~}_\star )$ we can consistently perform symmetry reduction by using deformed
{\it infinitesimal} transformations. In the classical limit $\lambda\to 0$ we obtain the classical Lie algebra
$(\mathfrak{g}_\star,\com{~}{~}_\star ) \stackrel{\lambda\to 0}{\longrightarrow} (\mathfrak{g},\com{~}{~})$.
Next, we consider the extension of the $\star$-Lie subalgebra $(\mathfrak{g}_\star,\com{~}{~}_\star)\subseteq(\Xi_\star,\com{~}{~}_\star)$
to the triangular $\star$-Hopf subalgebra $\mathcal{H}_\mathfrak{g}^\star=(U\mathfrak{g}_\star,\star,\Delta_\star,S_\star,\epsilon_\star)\subseteq\mathcal{H}_\Xi^\star$.
This can be seen as extending the infinitesimal transformations to a quantum group. We will divide this path into several steps,
where in every step we have to demand additional restrictions on the twist.
Firstly, we construct the $\star$-tensor algebra generated by the elements of $\mathfrak{g}_\star$ and $1$. We take this tensor algebra
modulo the left and right ideals generated by the elements $\com{u}{v}_\star - u\star v + \bar R^\alpha(v)\star \bar R_\alpha(u)$.
It is necessary that these elements are part of $U\mathfrak{g}_\star$, i.e.~we require
\begin{flalign}
\label{eqn:envelopcond}
\bar R^\alpha(\mathfrak{g}_\star)\star \bar R_\alpha(\mathfrak{g}_\star)\subseteq U\mathfrak{g}_\star~.
\end{flalign}
This leads to the algebra $(U\mathfrak{g}_\star,\star)$, which is a subalgebra of $(U\Xi_\star,\star)$.
Secondly, we extend this subalgebra to a $\star$-Hopf subalgebra. To this end, we additionally have to require that
\begin{subequations}
\label{eqn:hopfconditions}
\begin{align}
\label{eqn:hopfconditions1}\Delta_\star(U\mathfrak{g}_\star) &\subseteq U\mathfrak{g}_\star \otimes U\mathfrak{g}_\star~,\\
\label{eqn:hopfconditions2}S_\star(U\mathfrak{g}_\star)&\subseteq U\mathfrak{g}_\star~.
\end{align}
\end{subequations}
Note that we do not demand that $S^{-1}_\star$ (defined on $U\Xi_\star$)
closes in $U\mathfrak{g}_\star$, since this is in general not the case for a nonquasitriangular Hopf algebra and we do not want to
demand quasitriangularity at this stage.
Then the $\star$-Hopf algebra $\mathcal{H}_\mathfrak{g}^\star$ is a Hopf subalgebra of $\mathcal{H}_\Xi^\star$.
Thirdly, we additionally demand that there exists an $R$-matrix $R_\star\in U\mathfrak{g}_\star\otimes U\mathfrak{g}_\star$. It is natural
to take the $R$-matrix of the triangular $\star$-Hopf algebra $\mathcal{H}_\Xi^\star$ defined by $R_\star := X_{R^\alpha}\otimes X_{R_\alpha}$.
This leads to the restrictions
\begin{flalign}
\label{eqn:triangularcond}
X_{R^\alpha},X_{R_\alpha}\in U\mathfrak{g}_\star~,~\forall_\alpha.
\end{flalign}
Since $R_\star$ is triangular, i.e.~$R_\star^{-1 }=\bar R_\star^\alpha \otimes \bar R_{\star\alpha} = R_{\star21} = R_{\star\alpha} \otimes R_\star^\alpha$, we also have
$X_{\bar R^\alpha},X_{\bar R_\alpha}\in U\mathfrak{g}_\star~,~\forall_\alpha$.
If these conditions are fulfilled, $\mathcal{H}_\mathfrak{g}^\star$ is a triangular $\star$-Hopf subalgebra of $\mathcal{H}_\Xi^\star$
with the same $R$-matrix.
As we have seen, extending the $\star$-Lie subalgebra to a (triangular) $\star$-Hopf subalgebra imposes severe restrictions on
the possible deformations, stronger than those required for working with the deformed infinitesimal transformations given by a $\star$-Lie subalgebra
or the finite transformations given by the $\star$-enveloping subalgebra $(U\mathfrak{g}_\star,\star)$.
Now the question arises whether we actually require the deformed finite transformations to form a (triangular) $\star$-Hopf algebra in order to
use them for a sensible symmetry reduction. Since $(U\mathfrak{g}_\star,\star)$ describes deformed finite transformations and we
have the relation
\begin{flalign}
\label{eqn:inf-fin}
\mathcal{L}^\star_{U\mathfrak{g}_\star \backslash \lbrace 1\rbrace}(\tau)=\lbrace0\rbrace\Leftrightarrow\mathcal{L}^\star_{\mathfrak{g}_\star}(\tau)=\lbrace0\rbrace~,
\end{flalign}
we can consistently demand tensors to be invariant under $(U\mathfrak{g}_\star,\star)$, since we
require tensors to be invariant under $(\mathfrak{g}_\star,\com{~}{~}_\star)$.
Therefore, a well defined $(U\mathfrak{g}_\star,\star)$ leads to a structure sufficient for symmetry reduction.
The equivalence (\ref{eqn:inf-fin}) can be shown by using linearity
of the $\star$-Lie derivative and the property $\mathcal{L}^\star_{\xi\star\eta}(\tau)=\mathcal{L}^\star_{\xi}(\mathcal{L}^\star_\eta(\tau))$.
In order to better understand the different restrictions necessary for constructing the $\star$-Lie subalgebra $(\mathfrak{g}_\star,\com{~}{~}_\star)$,
the $\star$-enveloping subalgebra and the (triangular) $\star$-Hopf subalgebra $(U\mathfrak{g}_\star,\star,\Delta_\star,S_\star,\epsilon_\star)$,
we restrict ourselves in the following sections to the class of Reshetikhin-Jambor-Sykora twists~\cite{Reshetikhin:1990ep,Jambor:2004kc}. This is a
suitable nontrivial generalization of the Moyal-Weyl product, also containing e.g.~$\kappa$ and $q$ deformations when applied to Poincar\'{e} symmetry.
\section{\label{sec:jambor}The Case of Reshetikhin-Jambor-Sykora Twists}
Let $\lbrace V_a\in\Xi\rbrace$ be an arbitrary set of mutually commuting vector fields,
i.e. $\com{V_a}{V_b}=0~,~\forall_{a,b}$, on an $n$ dimensional manifold $\mathcal{M}$. Then the object
\begin{flalign}
\label{eqn:jstwist}
\mathcal{F}_{V} := \exp\bigl(-\frac{i\lambda}{2} \theta^{ab} V_a\otimes V_b \bigr)\in U\Xi\otimes U\Xi
\end{flalign}
is a twist element, if $\theta$ is constant and antisymmetric~\cite{Aschieri:2005zs,Reshetikhin:1990ep,Jambor:2004kc}.
We call (\ref{eqn:jstwist}) a Reshetikhin-Jambor-Sykora twist. Note that this twist is not restricted to the topology $\mathbb{R}^n$ for the manifold
$\mathcal{M}$.
Furthermore, we can restrict ourselves to $\theta$ with maximal rank and an even number of vector fields $V_a$,
since we can lower the rank of the Poisson structure afterwards by choosing some of the $V_a$ to be zero.
We can therefore without loss of generality use the standard form
\begin{flalign}
\label{eqn:theta}
\theta=\begin{pmatrix}
0 & 1 & 0 & 0 & \cdots\\
-1 & 0 & 0 & 0 & \cdots\\
0 & 0 & 0 & 1 & \cdots\\
0 & 0 & -1 & 0 & \cdots\\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{pmatrix}
\end{flalign}
by applying a suitable $GL(n)$ transformation on the $V_a$.
This twist element is easy to apply and in particular we have for the inverse and the $R$-matrix
\begin{flalign}
\label{eqn:jsrmatrix}
\mathcal{F}_V^{-1} = \exp\bigl(\frac{i\lambda}{2} \theta^{ab} V_a\otimes V_b \bigr)~,\quad~R=\mathcal{F}_{V,21}\mathcal{F}_V^{-1}=\mathcal{F}_V^{-2}=\exp\bigl(i \lambda\theta^{ab} V_a\otimes V_b \bigr)~.
\end{flalign}
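As a consistency check (not part of the original construction; the sketch below is ours and assumes the Moyal-Weyl special case $V_a=\partial_a$ on $\mathbb{R}^2$ with $\theta^{12}=-\theta^{21}=1$), the deformed product generated by $\mathcal{F}_V^{-1}$ can be expanded symbolically and tested against the coordinate commutation relation and the associativity guaranteed by the cocycle condition:

```python
# Sketch of the Moyal-Weyl special case V_a = partial_a on R^2 with
# theta^{12} = -theta^{21} = 1 (our assumption for illustration).
from itertools import product
from math import prod

import sympy as sp

x = sp.symbols("x1 x2")
lam = sp.Symbol("lam")
theta = ((0, 1), (-1, 0))  # standard antisymmetric form

def star(f, g, order=4):
    """Star product from F_V^{-1}, expanded up to (excluding) lam**order.

    For polynomial f, g the derivative series terminates, so a
    sufficiently large `order` yields the exact product.
    """
    total = sp.Integer(0)
    for n in range(order):
        term = sp.Integer(0)
        # sum over n index pairs contracted with theta
        for idx in product(range(2), repeat=2 * n):
            a, b = idx[:n], idx[n:]
            coeff = prod(theta[a[k]][b[k]] for k in range(n))
            if coeff == 0:
                continue
            df = sp.diff(f, *[x[i] for i in a]) if n else f
            dg = sp.diff(g, *[x[i] for i in b]) if n else g
            term += coeff * df * dg
        total += (sp.I * lam / 2) ** n / sp.factorial(n) * term
    return sp.expand(total)

# coordinate commutation relation: x1 * x2 - x2 * x1 = i lam theta^{12}
comm = star(x[0], x[1]) - star(x[1], x[0])

# the cocycle condition guarantees associativity; exact here since the
# series terminates on polynomials
f, g, h = x[0] ** 2, x[1] ** 2, x[0] * x[1]
assoc = sp.simplify(star(star(f, g), h) - star(f, star(g, h)))
```

Running this, `comm` reduces to $i\lambda$ and `assoc` to zero, as expected from the cocycle condition in this simplest case.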
Now let $(\mathfrak{g},\com{~}{~})\subseteq(\Xi,\com{~}{~})$ be the Lie algebra of the symmetry we want to deform. We choose a
basis of this Lie algebra $\lbrace t_i : i=1,\cdots ,\mathrm{dim}(\mathfrak{g})\rbrace$ with $\com{t_i}{t_j}=f_{ij}^{~~k}t_k$.
Next, we discuss the symmetry reduction based on the $\star$-Lie subalgebra, as explained in section \ref{sec:symred}.
To this end, we make the ansatz (\ref{eqn:stargenerators}) for the generators $t_i^\star$
and evaluate the two conditions (\ref{eqn:infinitesimalconditions}) that the $t_i^\star$ have to satisfy.
We start with the coproduct condition (\ref{eqn:infcondb}), which
is equivalent to $\bar R_\alpha(t_i^\star)\in \mathrm{span}(t_i^\star),~ \forall_\alpha$, where $\alpha$ is a multi index. Using the
explicit form of the inverse $R$-matrix (\ref{eqn:jsrmatrix}) we arrive at the conditions
\begin{flalign}
\label{eqn:condition1}
\com{V_{a_1}}{\cdots\com{V_{a_n}}{t_i^\star}\cdots} = \mathcal{N}_{a_1\cdots a_n i}^{\star j} t_j^\star~,
\end{flalign}
where $\mathcal{N}_{a_1\cdots a_n i}^{\star j} :=\mathcal{N}_{a_1\cdots a_n i}^j + \sum\limits_{k=1}^{\infty} \lambda^k \mathcal{N}_{a_1\cdots a_n i}^{(k)~j}$ are constants.
The only independent condition in (\ref{eqn:condition1}) is given by
\begin{flalign}
\label{eqn:condition2}
\com{V_a}{t_i^\star} = \mathcal{N}^{\star j}_{a i} t^\star_j ~,
\end{flalign}
since all higher conditions follow from it by iteration.
In particular, the zeroth order in $\lambda$ of (\ref{eqn:condition2}) yields
\begin{flalign}
\label{eqn:condition3}
\com{V_a}{t_i} = \mathcal{N}_{ai}^{j} t_j~.
\end{flalign}
This leads to the following
\begin{propo}
\label{propo:ideal}
Let $(\mathfrak{g},\com{~}{~})\subseteq(\Xi,\com{~}{~})$ be a classical Lie algebra and $(\Xi_\star,\com{~}{~}_\star)$ the
$\star$-Lie algebra of vector fields deformed by a Reshetikhin-Jambor-Sykora twist, constructed with vector fields $V_a$.
Then for a symmetry reduction respecting the minimal axioms (\ref{eqn:infinitesimalconditions}), it is necessary that
the following Lie bracket relations hold true
\begin{flalign}
\com{V_a}{\mathfrak{g}}\subseteq\mathfrak{g}~,\forall_a~.
\end{flalign}
In other words, $(\mathrm{span}(t_i,V_a),\com{~}{~})\subseteq(\Xi,\com{~}{~})$ forms a Lie algebra with ideal $\mathfrak{g}$.
Here $t_i$ are the generators of $\mathfrak{g}$.
\end{propo}
\noindent Note that this gives conditions relating the {\it classical} Lie algebra $(\mathfrak{g},\com{~}{~})$ with the twist.
Next, we evaluate the $\star$-Lie bracket condition (\ref{eqn:infconda}).
Using the explicit form of the inverse twist (\ref{eqn:jsrmatrix}) and (\ref{eqn:condition2}) we obtain
\begin{subequations}
\begin{flalign}
&\bar f_{\alpha_{(n)}}(t_i^\star) = \com{V_{a_1}}{\cdots\com{V_{a_n}}{t_i^\star}\cdots} = \bigl(\mathcal{N}^\star_{a_n}\cdots\mathcal{N}^\star_{a_1} \bigr)_i^j t_j^\star=:\mathcal{N}^{\star j}_{\alpha_{(n)}i} t_j^\star~,\\
&\bar f^{\alpha_{(n)}}(t_i^\star) = \Theta^{\beta_{(n)}\alpha_{(n)}} \bar f_{\beta_{(n)}}(t_i^\star)~,\\
&\Theta^{\beta_{(n)}\alpha_{(n)}} := \frac{1}{n!}\left(\frac{i\lambda}{2}\right)^n \theta^{b_1 a_1} \cdots \theta^{b_n a_n}~,
\end{flalign}
\end{subequations}
where $\alpha_{(n)},\beta_{(n)}$ are multi indices. This leads to
\begin{flalign}
\label{eqn:prestarcommutator}
\com{t_i^\star}{t_j^\star}_\star = \Theta^{\beta_{(n)}\alpha_{(n)}} \mathcal{N}^{\star k}_{\beta_{(n)} i} \mathcal{N}^{\star l}_{\alpha_{(n)}j}\com{t_k^\star}{t_l^\star}~.
\end{flalign}
Note that in particular for the choice $t_i^\star = t_i, ~\forall_i,$ the $\star$-Lie subalgebra closes with structure constants
\begin{flalign}
\label{eqn:starliealgebra}
\com{t_i}{t_j}_\star = \Theta^{\beta_{(n)}\alpha_{(n)}} \mathcal{N}^{ k}_{\beta_{(n)} i} \mathcal{N}^{ l}_{\alpha_{(n)}j}\com{t_k}{t_l} = \Theta^{\beta_{(n)}\alpha_{(n)}} \mathcal{N}^{ k}_{\beta_{(n)} i} \mathcal{N}^{ l}_{\alpha_{(n)}j} f_{kl}^{~~m} t_m =: f_{ij}^{\star~m} t_m~,
\end{flalign}
where we have used the $\mathcal{N}$ defined in (\ref{eqn:condition3}).
This leads to the following
\begin{propo}
\label{propo:starliealgebra}
Let $\com{V_a}{\mathfrak{g}_\star}\subseteq\mathfrak{g}_\star~,\forall_a$. Then we can always construct a $\star$-Lie subalgebra
$(\mathfrak{g}_\star,\com{~}{~}_\star)\subseteq(\Xi_\star,\com{~}{~}_\star)$
by choosing the generators as $t_i^\star = t_i$ for all $i$. With this we have $\mathfrak{g}_\star = \mathfrak{g}$ as vector spaces
and the structure constants are deformed as
\begin{flalign}
f_{ij}^{\star ~m}=\Theta^{\beta_{(n)}\alpha_{(n)}} \mathcal{N}^{ k}_{\beta_{(n)} i} \mathcal{N}^{ l}_{\alpha_{(n)}j} f_{kl}^{~~m}~.
\end{flalign}
\end{propo}
\noindent
Since condition (\ref{eqn:infcondb}) together with the choice $t_i^\star=t_i$, for all $i$, automatically fulfills
(\ref{eqn:infconda}), we take $t_i^\star=t_i$, for all $i$, as the
canonical embedding. Other possible embeddings in general
require further constructions in order to fulfill condition
(\ref{eqn:infconda}) and are therefore less natural.
We will discuss possible differences between this and other embeddings later on, when we construct the $\star$-Hopf subalgebra and the
$\star$-Lie derivative action on $\star$-tensor fields.
In addition, we obtain that the necessary condition (\ref{eqn:envelopcond}) for extending $\mathfrak{g}_\star$ to the $\star$-enveloping
subalgebra $(U\mathfrak{g}_\star,\star)\subseteq(U\Xi_\star,\star)$
is automatically fulfilled, since we have $\bar R_{\alpha_{(n)}}(\mathfrak{g}_\star)\subseteq \mathfrak{g}_\star$
for all $\alpha_{(n)}$ and additionally
\begin{flalign}
\bar R^{\alpha_{(n)}}(\mathfrak{g}_\star)=(-2)^n \Theta^{\beta_{(n)}\alpha_{(n)}}\bar R_{\beta_{(n)}}(\mathfrak{g}_\star)\subseteq\mathfrak{g}_\star~,~\forall{\alpha}_{(n)}.
\end{flalign}
Next, we evaluate the conditions (\ref{eqn:hopfconditions}), which have to be fulfilled in order to construct the $\star$-Hopf subalgebra
$\mathcal{H}_\mathfrak{g}^\star\subseteq\mathcal{H}_{\Xi}^\star$. For the particular choice of the twist (\ref{eqn:jstwist})
we obtain the following
\begin{propo}
\label{propo:starhopf}
Let $(U\mathfrak{g}_\star,\star)\subseteq(U\Xi_\star,\star)$ be a $\star$-enveloping subalgebra and let the deformation parameter
$\lambda\neq 0$.
Then in order to extend $(U\mathfrak{g}_\star,\star)$ to the $\star$-Hopf subalgebra $\mathcal{H}_\mathfrak{g}^\star=(U\mathfrak{g}_\star,\star,\Delta_\star,S_\star,\epsilon_\star)\subseteq\mathcal{H}_\Xi^\star$ the
condition
\begin{flalign}
V_{a_1} \in \mathfrak{g}_\star~,\quad \text{if }~ \com{V_{a_2}}{\mathfrak{g}_\star}\neq\lbrace0\rbrace
\end{flalign}
has to hold true for all pairs of indices $(a_1,a_2)$ connected by the antisymmetric matrix $\theta$ (\ref{eqn:theta}), i.e.~$(a_1,a_2)\in \big\lbrace(1,2),(2,1),(3,4),(4,3),\dots\big\rbrace$.
\end{propo}
\noindent Note that these conditions depend on the embedding $t_i^\star=t_i^\star(t_j)$.
The proof of this proposition is shown in the appendix \ref{app:proof}.
Finally, if we demand $\mathcal{H}_\mathfrak{g}^\star$ to be a triangular $\star$-Hopf algebra (\ref{eqn:triangularcond}) we obtain the stringent condition
\begin{flalign}
V_a \in \mathfrak{g}_\star~,~\forall_a~.
\end{flalign}
This can be shown by using $X_{R_{\alpha}} = R_\alpha$ and $V_a \star V_b = V_a V_b$, which holds true for the class of Reshetikhin-Jambor-Sykora twists.
As we have seen above, there are much stronger restrictions on the Lie algebra $(\mathfrak{g},\com{~}{~})$
and the twist, if we want to extend the deformed infinitesimal transformations $(\mathfrak{g}_\star,\com{~}{~}_\star)$
to the (triangular) $\star$-Hopf subalgebra $\mathcal{H}_\mathfrak{g}^\star$. In particular this extension restricts the $V_a$ themselves,
while for infinitesimal transformations and the finite transformations $(U\mathfrak{g}_\star,\star)$ only the images of $V_a$
acting on $\mathfrak{g}_\star$ are important.
Next, we study the $\star$-action of the $\star$-Lie and Hopf algebra on the deformed tensor fields. The $\star$-action of the generators
$t_i^\star$ on $\tau\in\mathcal{T}_\star$ is defined by (\ref{eqn:starliederivative}) and simplifies to
\begin{flalign}
\mathcal{L}^\star_{t_i^\star}(\tau) = \Theta^{\alpha_{(n)}\beta_{(n)}} \mathcal{N}^{\star j}_{\alpha_{(n)}i} ~\mathcal{L}_{t_j^\star} \bigl(\bar f_{\beta_{(n)}}(\tau)\bigr)~.
\end{flalign}
For invariant tensors, the $\star$-Lie derivative has to vanish to all orders in $\lambda$, since we work with formal power series.
If we now for explicitness take the natural choice $t_i^\star=t_i$ we obtain the following
\begin{propo}
\label{propo:starinvariance}
Let $\com{V_a}{\mathfrak{g}_\star}\subseteq\mathfrak{g}_\star~,\forall_a$ and $t_i^\star=t_i,~\forall_i$. Then a tensor $\tau\in\mathcal{T}_\star$ is $\star$-invariant
under $(\mathfrak{g}_\star,\com{~}{~}_\star)$, if and only if it is invariant under the undeformed action of $(\mathfrak{g},\com{~}{~})$, i.e.
\begin{flalign}
\mathcal{L}^{\star}_{\mathfrak{g}_\star}(\tau)=\lbrace 0\rbrace~\Leftrightarrow~\mathcal{L}_{\mathfrak{g}}(\tau)=\lbrace 0\rbrace~.
\end{flalign}
\end{propo}
\begin{proof}
For the proof we make the ansatz $\tau=\sum\limits_{n=0}^\infty \lambda^n \tau_{n}$ and investigate $\mathcal{L}^\star_{t_i}(\tau)$
order by order in $\lambda$, since we work with formal power series. By using (\ref{eqn:condition3}) to reorder the
Lie derivatives such that $t_i$ is moved to the right, it can be shown recursively in powers of $\lambda$ that the proposition
holds true.
\end{proof}
\noindent Note that for $t_i^\star\neq t_i$ this does not necessarily hold true. We cannot make statements about this case, since
it would require a general solution of (\ref{eqn:prestarcommutator}), which we do not yet have. But we mention again that
we consider choosing $t_i^\star$ different from $t_i$ quite unnatural.
This proposition translates to the case of finite symmetry transformations with $t_i^\star=t_i$ because of the properties of
the $\star$-Lie derivative.
The framework developed in this section will now be applied to
cosmology and black holes in order to give some specific examples and
discuss possible physical implications.
\section{\label{sec:cosmo}Application to Cosmology}
In this section we will investigate models with symmetry group $E_3$ in four spacetime dimensions with topology $\mathbb{R}^4$.
These are flat Friedmann-Robertson-Walker (FRW) universes.
The undeformed Lie algebra of this group is generated by the ``momenta'' $p_i$ and ``angular momenta'' $L_i$, $i\in\lbrace1,2,3\rbrace$,
which we can represent in the Lie algebra of vector fields as
\begin{flalign}
p_i = \partial_i\quad,\quad L_i = \epsilon_{ijk}x^j \partial_k~,
\end{flalign}
where $\epsilon_{ijk}$ is the Levi-Civita symbol.
The undeformed Lie bracket relations are
\begin{flalign}
\com{p_i}{p_j}=0~,\quad\com{p_i}{L_j}=-\epsilon_{ijk}p_k~,\quad\com{L_i}{L_j}=-\epsilon_{ijk}L_k~.
\end{flalign}
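These relations can be checked mechanically by representing each vector field through its component functions and computing $\com{X}{Y}^\nu = X^\mu\partial_\mu Y^\nu - Y^\mu\partial_\mu X^\nu$; the following sketch does this (the helper names are ours, not from the text):

```python
# Verify the e_3 bracket relations using the component representations
# p_i = partial_i and L_i = epsilon_{ijk} x^j partial_k (0-based indices).
import sympy as sp

x = sp.symbols("x1 x2 x3")

def eps(i, j, k):
    """Levi-Civita symbol epsilon_{ijk} for indices in {0, 1, 2}."""
    return ((i - j) * (j - k) * (k - i)) // 2

# a vector field X = X^nu partial_nu is stored as the tuple (X^1, X^2, X^3)
p = [tuple(sp.Integer(nu == i) for nu in range(3)) for i in range(3)]
L = [tuple(sp.expand(sum(eps(i, j, nu) * x[j] for j in range(3)))
           for nu in range(3)) for i in range(3)]

def commutator(X, Y):
    """Lie bracket [X, Y]^nu = X^mu d_mu Y^nu - Y^mu d_mu X^nu."""
    return tuple(sp.expand(sum(X[m] * sp.diff(Y[nu], x[m])
                               - Y[m] * sp.diff(X[nu], x[m])
                               for m in range(3))) for nu in range(3))

def combo(a, b):
    """Linear combination a^i p_i + b^i L_i as a component tuple."""
    return tuple(sp.expand(sum(a[i] * p[i][nu] + b[i] * L[i][nu]
                               for i in range(3))) for nu in range(3))
```

For all $i,j$ one then finds $\com{p_i}{p_j}=0$, $\com{p_i}{L_j}=-\epsilon_{ijk}p_k$ and $\com{L_i}{L_j}=-\epsilon_{ijk}L_k$, in agreement with the relations above.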
We will work with the natural embedding $t_i^\star=t_i$, and therefore the $\star$-Lie subalgebra is given by
$\mathfrak{g}_\star=\mathfrak{e}_{3\star}=\mathfrak{e}_3=\mathrm{span}(p_i,L_i)$.
We can now explicitly evaluate the condition each twist vector field $V_a$ has to satisfy given by
$\com{V_a}{\mathfrak{e}_{3}}\subseteq\mathfrak{e}_{3}$ (cf.~proposition \ref{propo:ideal}).
Since the generators are at most linear in the spatial coordinates, $V_a$ can be at most quadratic in order to
fulfill this condition. If we make a quadratic ansatz with time dependent coefficients we obtain that each $V_a$
has to be of the form
\begin{flalign}
\label{eqn:FRWV}
V_a = V_a^0(t) \partial_t + c_a^i\partial_i + d_a^i L_i + f_a x^i\partial_i~,
\end{flalign}
where $c_a^i,~d_a^i,~f_a\in\mathbb{R}$ and $V_a^0(t)\in C^\infty(\mathbb{R})$ in order to obtain hermitian deformations.
If all $V_a$ have the form (\ref{eqn:FRWV}), the $\star$-Lie algebra closes (cf.~proposition \ref{propo:starliealgebra}).
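Indeed, using the bracket relations above, a short computation for $V_a$ of the form (\ref{eqn:FRWV}) gives
\begin{flalign}
\com{V_a}{p_i} = d_a^j\epsilon_{ijk}\,p_k - f_a\, p_i~,\quad \com{V_a}{L_i} = c_a^j\epsilon_{ijk}\,p_k + d_a^j\epsilon_{ijk}\,L_k~,
\end{flalign}
since the time component $V_a^0(t)\partial_t$ and the dilation $x^i\partial_i$ commute with all rotation generators, and $\com{x^j\partial_j}{\partial_i}=-\partial_i$. Both brackets manifestly lie in $\mathfrak{e}_3$.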
Next, we have to find conditions such that the $V_a$ are mutually commuting. A brief calculation shows that the
following conditions have to be fulfilled:
\begin{subequations}
\label{eqn:frwcond}
\begin{align}
\label{eqn:frwcond1}&d_a^i d_b^j \epsilon_{ijk} = 0 ~,\forall_k~,\\
\label{eqn:frwcond2}&c_a^i d_b^j \epsilon_{ijk} - c_b^i d_a^j \epsilon_{ijk} + f_a c_b^k - f_b c_a^k =0~,\forall_k ~,\\
\label{eqn:frwcond3}&\com{V_a^0(t)\partial_t}{V_b^0(t)\partial_t}=0~.
\end{align}
\end{subequations}
As a first step, we now work out all possible deformations of $\mathfrak{e}_{3}$ twisted with two commuting vector fields
and classify the possible solutions. To this end, we divide the solutions into classes depending on the values of $d_a^i$ and $f_a$.
We denote our cosmologies by $\mathfrak{C}_{AB}$, where $A\in\lbrace1,2,3\rbrace$ and $B\in\lbrace1,2\rbrace$;
this notation will become clear later on, when we sum up the results in table \ref{tab:frw}.
Type $\mathfrak{C}_{11}$ is defined to be vector fields with $d_1^i=d_2^i=0$ and $f_1=f_2=0$, i.e.~
\begin{flalign}
V_{1(\mathfrak{C}_{11})} = V_{1}^0(t)\partial_t +c_{1}^i\partial_i~,\quad V_{2(\mathfrak{C}_{11})} = V_{2}^0(t)\partial_t +c_{2}^i\partial_i~.
\end{flalign}
These vector fields fulfill the first two conditions (\ref{eqn:frwcond1}) and (\ref{eqn:frwcond2}). The solutions of the third condition
(\ref{eqn:frwcond3}) will be discussed later, since this classification we perform now does not depend on it.
Type $\mathfrak{C}_{21}$ is defined by vector fields with $d_1^i=d_2^i=0$, $f_1\neq0$ and $f_2=0$.
The first condition (\ref{eqn:frwcond1}) is trivially fulfilled and the second (\ref{eqn:frwcond2}) is fulfilled
if and only if $c_{2}^i=0,~\forall_i$, i.e.~type $\mathfrak{C}_{21}$ is given by the vector fields
\begin{flalign}
\tilde V_{1(\mathfrak{C}_{21})} = V_{1}^0(t)\partial_t + c_{1}^i\partial_i + f_{1} x^i\partial_i~,\quad \tilde V_{2(\mathfrak{C}_{21})} = V_{2}^0(t)\partial_t~.
\end{flalign}
These vector fields can be simplified to
\begin{flalign}
V_{1(\mathfrak{C}_{21})} = c_{1}^i\partial_i + f_{1} x^i\partial_i~,\quad V_{2(\mathfrak{C}_{21})} = V_{2}^0(t)\partial_t~,
\end{flalign}
since both lead to the same twist (\ref{eqn:jstwist}).
Solutions with $d_1^i=d_2^i=0$, $f_1\neq0$ and $f_2\neq0$ also lie in type $\mathfrak{C}_{21}$, since we can perform the twist conserving map
$V_2\to V_2 - \frac{f_2}{f_1} V_1$, which transforms $f_2$ to zero. Furthermore, type $\mathfrak{C}_{31}$, defined by
$d_1^i=d_2^i=0$, $f_1=0$ and $f_2\neq0$, is equivalent to $\mathfrak{C}_{21}$ upon interchanging the labels of the vector fields.
Next, we turn to solutions with, without loss of generality, $\mathbf{d}_1\neq0$ and $\mathbf{d}_2=0$ (where $\mathbf{d}_a$ denotes the vector with components $d_a^i$).
Note that this class also contains the case $\mathbf{d}_1\neq0$ and $\mathbf{d}_2\neq0$.
To see this, we use the first condition (\ref{eqn:frwcond1}) and obtain that $\mathbf{d}_1$ and $\mathbf{d}_2$ have to be parallel,
i.e.~$d^i_2=\kappa d^i_1$.
Then we can transform $\mathbf{d}_2$ to zero by using the twist conserving map $V_2\to V_2 - \kappa V_1$.
Type $\mathfrak{C}_{12}$ is defined by vector fields with $\mathbf{d}_1\neq0$, $\mathbf{d}_2=0$ and $f_1=f_2=0$.
The first condition (\ref{eqn:frwcond1}) is trivially fulfilled, while the second condition (\ref{eqn:frwcond2})
requires that $\mathbf{c}_2$ is parallel to $\mathbf{d}_1$, i.e.~we obtain
\begin{flalign}
V_{1(\mathfrak{C}_{12})} = V_{1}^0(t)\partial_t + c_{1}^i\partial_i + d_{1}^i L_i~,\quad V_{2(\mathfrak{C}_{12})} = V_{2}^0(t)\partial_t + \kappa~d_{1}^i\partial_i~,
\end{flalign}
where $\kappa\in\mathbb{R}$ is a constant.
Type $\mathfrak{C}_{22}$ is defined by vector fields with $\mathbf{d}_1\neq0$, $\mathbf{d}_2=0$, $f_1\neq0$ and $f_2=0$.
Solving the second condition (\ref{eqn:frwcond2}) (here we have to use that the vectors are real) we obtain
\begin{flalign}
V_{1(\mathfrak{C}_{22})} = c_{1}^i\partial_i + d_{1}^i L_i + f_{1} x^i\partial_i~,\quad V_{2(\mathfrak{C}_{22})} = V_{2}^0(t)\partial_t~,
\end{flalign}
where, as in type $\mathfrak{C}_{21}$, we could set $V_{1}^0(t)$ to zero without loss of generality. Note that $\mathfrak{C}_{21}$ is
recovered from $\mathfrak{C}_{22}$ by relaxing the condition $\mathbf{d}_1\neq0$.
Finally, we come to the last class, type $\mathfrak{C}_{32}$, defined by $\mathbf{d}_1\neq0$, $\mathbf{d}_2=0$, $f_1=0$ and $f_2\neq0$.
This class also contains the case $\mathbf{d}_1\neq0$, $\mathbf{d}_2=0$, $f_1\neq0$ and $f_2\neq0$, by using the twist
conserving map $V_1\to V_1 - \frac{f_1}{f_2} V_2$. The vector fields are given by
\begin{flalign}
V_{1(\mathfrak{C}_{32})} = V_{1}^0(t)\partial_t + \frac{d_{1}^j c_{2}^k\epsilon_{jki}}{f_2} \partial_i + d_{1}^i L_i ~,\quad V_{2(\mathfrak{C}_{32})} = V_{2}^0(t)\partial_t + c_{2}^i\partial_i + f_{2} x^i\partial_i~.
\end{flalign}
Note that types $\mathfrak{C}_{11}$ and $\mathfrak{C}_{12}$ can be extended to triangular $\star$-Hopf
subalgebras by choosing $V_1^0(t)=V_2^0(t)=0$ in each case.
For a better overview we additionally present the results in table \ref{tab:frw}, containing all possible two vector field
deformations $\mathfrak{C}_{AB}$ of the Lie algebra of the Euclidean group.
From this table the notation $\mathfrak{C}_{AB}$ becomes clear.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
{\large $~~\mathfrak{C}_{AB}$} & $\mathbf{d}_1=\mathbf{d}_2=0$ & $\mathbf{d}_1\neq0$~,~$\mathbf{d}_2=0$ \\ \hline
$f_1=0$, &$V_1= V_{1}^0(t)\partial_t +c_{1}^i\partial_i$ &$V_1= V_{1}^0(t)\partial_t + c_{1}^i\partial_i + d_{1}^i L_i$ \\
$f_2=0$ &$V_2= V_{2}^0(t)\partial_t +c_{2}^i\partial_i$ &$V_2= V_{2}^0(t)\partial_t + \kappa~d_{1}^i\partial_i$ \\ \hline
$f_1\neq0$, &$V_1= c_{1}^i\partial_i + f_{1} x^i\partial_i$ &$V_1= c_{1}^i\partial_i + d_{1}^i L_i + f_{1} x^i\partial_i$ \\
$f_2=0$ &$V_2= V_{2}^0(t)\partial_t$ &$V_2= V_{2}^0(t)\partial_t$ \\ \hline
$f_1=0$, &$V_1= V_{1}^0(t)\partial_t$ &$V_1= V_{1}^0(t)\partial_t + \frac{1}{f_2}d_{1}^j c_{2}^k\epsilon_{jki} \partial_i + d_{1}^i L_i$ \\
$f_2\neq0$ &$V_2= c_{2}^i\partial_i + f_{2} x^i\partial_i$ &$V_2= V_{2}^0(t)\partial_t + c_{2}^i\partial_i + f_{2} x^i\partial_i$ \\ \hline
\end{tabular}
\end{center}
\caption{\label{tab:frw}Two vector field deformations of the cosmological symmetry group $E_3$.}
\end{table}
Next, we discuss solutions to the third condition (\ref{eqn:frwcond3}) $\com{V_1^0(t)\partial_t}{V_2^0(t)\partial_t}=0$.
It is obvious that choosing either
$V_1^0(t)=0$ or $V_2^0(t)=0$, with the other one arbitrary, is a solution. Additionally, we consider solutions with $V_1^0(t)\neq0$ and $V_2^0(t)\neq0$.
In this case there has to be some point $t_0\in\mathbb{R}$ such that, without loss of generality, $V_1^0(t)$ is nonzero in some open region
$U\subseteq\mathbb{R}$ around $t_0$.
In this region we can perform the diffeomorphism $t\to \tilde t(t):= \int\limits_{t_0}^t dt^\prime \frac{1}{V_1^0(t^\prime)}$
leading to $\tilde V_1^0(\tilde t)=1$. With this the third condition (\ref{eqn:frwcond3}) becomes
\begin{flalign}
0=\com{V_1^0(t)\partial_t}{V_2^0(t)\partial_t}=\com{\tilde V_1^0(\tilde t)\partial_{\tilde t}}{\tilde V_2^0(\tilde t)\partial_{\tilde t}}
= \Bigl(\partial_{\tilde t}\tilde V_2^0(\tilde t)\Bigr) \partial_{\tilde t}~.
\end{flalign}
This condition is solved if and only if $\tilde V_2^0(\tilde t) =\mathrm{const.}$ for $t\in U\subseteq\mathbb{R}$.
For the subset of analytic functions $C^\omega(\mathbb{R})\subset C^\infty(\mathbb{R})$ we can extend this condition
to all of $\mathbb{R}$ and obtain the global relation $V_2^0(t)= \kappa V_1^0(t)$, with some constant $\kappa\in\mathbb{R}$.
For non-analytic, but smooth, functions we cannot extend this relation to all of $\mathbb{R}$; we therefore only obtain local conditions, restricting the functions to be linearly dependent on the overlap of their supports.
In particular, non-analytic functions with disjoint supports fulfill condition (\ref{eqn:frwcond3}) trivially.
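The disjoint-support case can be made concrete with smooth bump functions; the following numerical illustration is our own sketch, not part of the paper:

```python
import numpy as np

# Smooth bump supported on (a, b); identically zero (and C^infinity) outside
def bump(t, a, b):
    out = np.zeros_like(t)
    m = (t > a) & (t < b)
    out[m] = np.exp(-1.0/((t[m] - a)*(b - t[m])))
    return out

t = np.linspace(-3.0, 3.0, 2001)
h = t[1] - t[0]
V1 = bump(t, -2.0, -0.5)   # support in (-2, -0.5)
V2 = bump(t,  0.5,  2.0)   # support in (0.5, 2), disjoint from that of V1
# coefficient of [V1(t) d_t, V2(t) d_t] = (V1 V2' - V2 V1') d_t
comm = V1*np.gradient(V2, h) - V2*np.gradient(V1, h)
print(abs(comm).max())     # 0.0 -- the commutator vanishes identically
```

Each product term vanishes pointwise because one factor is always zero, so such functions satisfy the commutation condition without being proportional anywhere.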
After characterizing the possible two vector field deformations of $\mathfrak{e}_3$, we briefly describe a method to obtain twists
generated by a larger number of vector fields. For this purpose we use the canonical form of $\theta$ (\ref{eqn:theta}).
Assume that we want to obtain deformations with e.g.~four vector fields. Then of course all vector fields have to be of the form (\ref{eqn:FRWV}).
According to the form of $\theta$ we have two blocks of vector fields $(a,b)=(1,2)$ and $(a,b)=(3,4)$, in which the classification described
above for two vector fields can be performed. This means that all four vector field twists can be obtained by using two types
of two vector field twists.
We label the twist by using a tuple of types, e.g.~$(\mathfrak{C}_{11},\mathfrak{C}_{22})$ means that $V_1,V_2$ are of type $\mathfrak{C}_{11}$ and $V_3,V_4$ of type $\mathfrak{C}_{22}$.
But this only ensures that $\com{V_a}{V_b}=0$ for $(a,b)\in\lbrace(1,2),(3,4)\rbrace$; we have to demand further restrictions in
order to fulfill $\com{V_a}{V_b}=0$ for all $(a,b)$ and to ensure that all vector fields give independent contributions
to the twist. In particular, twists constructed with linearly dependent vector fields can be reduced to a twist
constructed from a lower number of vector fields.
This method naturally extends to a larger number of vector fields, until no further independent and
mutually commuting vector fields can be found. We now give two examples for the $\mathfrak{e}_3$ case in order to clarify the construction.
As a first example we construct the four vector field twist $(\mathfrak{C}_{11},\mathfrak{C}_{11})$.
In this case all four vector fields commute without imposing further restrictions. We assume that three of the four vectors
$\mathbf{c}_a$ are linearly independent, such that the fourth one, say $\mathbf{c}_4$, can be decomposed into the other ones.
Choosing four linearly independent functions $V_a^0(t)$ (which therefore must be non-analytic) then leads to a proper four vector field
twist.
As a second simple example we construct the four vector field twist $(\mathfrak{C}_{21},\mathfrak{C}_{21})$.
In order to have commuting vector fields we obtain the condition $c_3^i=\frac{f_3}{f_1}c_1^i$.
We therefore have $V_3=\frac{f_3}{f_1}V_1$ and the four vector field twist can be reduced to the two vector field twist
of type $\mathfrak{C}_{21}$ with $\tilde V_1 =V_1$ and $\tilde V_2 = V_2 +\frac{f_3}{f_1}V_4$. This is an example
of an improper four vector field twist.
This method can be applied to investigate general combinations of two vector field twists, if required.
Because the construction is straightforward and we do not need these twists for our discussions, we do not present them here.
Finally, we calculate the $\star$-commutator of the linear coordinate functions $x^\mu\in A_\star$ for the various types of models
in first order in the deformation parameter $\lambda$. It is given by
\begin{flalign}
c^{\mu\nu}:=\starcom{x^\mu}{x^\nu} := x^\mu\star x^\nu -x^\nu\star x^\mu = i\lambda \theta^{ab} V_a(x^\mu) V_b(x^\nu) +\mathcal{O}(\lambda^2)~.
\end{flalign}
The results are given in appendix \ref{app:starcom} and show that these commutators can be at most quadratic in
the spatial coordinates $x^i$.
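The first-order formula can be evaluated directly; as an illustration of our own (not reproducing the appendix), here is a SymPy evaluation for a type $\mathfrak{C}_{21}$ twist, with the convention $\theta^{12}=-\theta^{21}=1$ and symbol names chosen by us:

```python
import sympy as sp

lam = sp.Symbol('lambda', real=True)
t = sp.Symbol('t')
xs = list(sp.symbols('x1 x2 x3'))
cs = list(sp.symbols('c1 c2 c3'))
f = sp.Symbol('f')
V0 = sp.Function('V0')

def V1(g):
    # type C_21: V1 = c^i d_i + f x^i d_i (purely spatial)
    return sum((cs[i] + f*xs[i])*sp.diff(g, xs[i]) for i in range(3))

def V2(g):
    # type C_21: V2 = V0(t) d_t
    return V0(t)*sp.diff(g, t)

def starcom(xm, xn):
    # c^{mu nu} = i lambda theta^{ab} V_a(x^mu) V_b(x^nu) + O(lambda^2)
    return sp.I*lam*(V1(xm)*V2(xn) - V2(xm)*V1(xn))

print(starcom(t, xs[0]))      # -i lambda V0(t) (c1 + f x1): at most linear in x
print(starcom(xs[0], xs[1]))  # 0: spatial coordinates commute in this model
```

This confirms, for this type, the general statement that the commutators are at most quadratic in the spatial coordinates.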
Possible applications of these models will be discussed in the outlook, see section \ref{sec:conc}.
\section{\label{sec:blackhole}Application to Black Holes}
In this section we investigate possible deformations of non-rotating black holes.
We proceed in analogy to the cosmological models and therefore do not explain every single step.
The undeformed Lie algebra of the symmetry group $\mathbb{R}\times SO(3)$ of a non-rotating black hole is generated by
the vector fields
\begin{flalign}
p^0=\partial_t\quad,\quad L_i = \epsilon_{ijk} x^j\partial_k~,
\end{flalign}
given in Cartesian coordinates. We choose $t_i^\star =t_i$ for all $i$
and define $\mathfrak{g}_\star=\mathfrak{g} = \mathrm{span}(p^0,L_i)$.
It can be shown that each twist vector field $V_a$ has to be of the form
\begin{flalign}
V_a = (c^0_a(r)+N_a^0 t)\partial_t + d_a^i L_i + f_a(r) x^i \partial_i
\end{flalign}
in order to fulfill $\com{V_a}{\mathfrak{g}}\subseteq\mathfrak{g}$. Here $r=\Vert\mathbf{x}\Vert$ is the Euclidean
norm of the spatial position vector.
The next task is to construct the two vector field deformations. For this we additionally have to
demand $\com{V_a}{V_b} =0,~\forall_{a,b}$, leading to the conditions
\begin{subequations}
\begin{flalign}
\label{eqn:blackcond1}&d_a^i d_b^j\epsilon_{ijk}=0~,~\forall_k~, \\
\label{eqn:blackcond2}&(f_a(r) x^j \partial_j - N_a^0) c^0_b(r) - (f_b(r) x^j\partial_j -N_b^0) c^0_a(r) =0~,\\
\label{eqn:blackcond3}& f_a(r)f_b^\prime(r)-f_a^\prime(r) f_b(r) = 0~,
\end{flalign}
\end{subequations}
where $f_a^\prime(r)$ denotes the derivative of $f_a(r)$.
Note that (\ref{eqn:blackcond3}) is a condition similar to (\ref{eqn:frwcond3}) and therefore
has the same type of solutions. Because of this, the functions $f_1(r)$ and $f_2(r)$ have to be proportional in the overlap
of their supports. Hence we can always locally eliminate one $f_a(r)$
by a twist conserving map and simplify the investigation of condition (\ref{eqn:blackcond2}).
At the end, the local solutions have to be glued together.
We choose without loss of generality $f_1(r)=0$ for our classification of local solutions.
The solution to (\ref{eqn:blackcond1}) is that the $\mathbf{d}_a$ have to be parallel. We use
\begin{flalign}
\mathbf{d}_a=\kappa_a \mathbf{d}
\end{flalign}
with constants $\kappa_a\in\mathbb{R}$ and some arbitrary vector $\mathbf{d}\neq0$.
We now classify the solutions to (\ref{eqn:blackcond2}) according to $N_a^0$ and $f_2(r)$ and label them by $\mathfrak{B}_{AB}$.
We distinguish between $f_2(r)$ being the zero function or not. The result is shown in table \ref{tab:blackhole}.
Other choices of parameters can be mapped by a twist conserving map into these classes.
Note that, in particular, for analytic functions $f_a(r)$ the twist conserving map transforming $f_1(r)$ to zero
can be performed globally, and with it also the classification of twists given in table \ref{tab:blackhole}.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
{\large$~~\mathfrak{B}_{AB}$} & $f_2(r)=0$ & $f_2(r)\neq0$ \\ \hline
$N_1^0=0$, &$V_1=c_1^0(r)\partial_t +\kappa_1 d^iL_i$ &$V_1=c_1^0\partial_t +\kappa_1 d^i L_i $ \\
$N_2^0=0$ &$V_2=c_2^0(r)\partial_t+\kappa_2 d^i L_i$ &$V_2=c_2^0(r)\partial_t +\kappa_2 d^i L_i + f_2(r) x^i\partial_i$ \\ \hline
$N_1^0\neq0$, &$V_1=(c_1^0(r)+N_1^0 t)\partial_t $ &$V_1=(c_1^0(r)+N_1^0 t)\partial_t+\kappa_1 d^i L_i$ \\
$N_2^0=0$ &$V_2=\kappa_2 d^i L_i$ &$V_2= -\frac{1}{N_1^0}f_2(r)r c_1^{0\prime}(r)\partial_t +\kappa_2 d^i L_i+f_2(r)x^i\partial_i$\\ \hline
$N_1^0=0$, &$V_1=\kappa_1 d^i L_i$ &$V_1=c_1^0(r) \partial_t + \kappa_1 d^i L_i$,\quad\text{with (\ref{eqn:blackode})} \\
$N_2^0\neq0$ &$V_2=(c_2^0(r)+N_2^0 t)\partial_t $ &$V_2= (c_2^0(r)+N_2^0 t)\partial_t +\kappa_2 d^i L_i +f_2(r) x^i\partial_i $\\ \hline
\end{tabular}
\end{center}
\caption{\label{tab:blackhole}Two vector field deformations of the black hole symmetry group $\mathbb{R}\times SO(3)$.
Note that $c_1^0(r)=c_1^0$ has to be constant in type $\mathfrak{B}_{12}$.}
\end{table}
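The entries of table \ref{tab:blackhole} can be verified symbolically against condition (\ref{eqn:blackcond2}). As a sketch of our own, here is a check of the type $\mathfrak{B}_{22}$ entry, using that $x^j\partial_j$ acts on radial functions as $r\,d/dr$:

```python
import sympy as sp

r, N1 = sp.symbols('r N1', positive=True)
c1 = sp.Function('c1')(r)   # c_1^0(r)
f2 = sp.Function('f2')(r)

# table entry for type B_22:  c_2^0(r) = -(1/N1) f2(r) r c_1^0'(r)
c2 = -f2*r*sp.diff(c1, r)/N1

# condition (blackcond2) with a=1, b=2 and f1 = 0, N2 = 0:
# (f1 r d/dr - N1) c_2^0 - (f2 r d/dr - N2) c_1^0
cond = (0 - N1)*c2 - (f2*r*sp.diff(c1, r) - 0)
print(sp.simplify(cond))    # 0: the table entry solves the condition
```

The remaining entries can be checked in the same way.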
In type $\mathfrak{B}_{32}$ we still have to solve a differential equation for $c_1^0(r)$ given by
\begin{flalign}
\label{eqn:blackode}
c_1^0(r) = \frac{f_2(r)}{N_2^0} r c_1^{0\prime}(r)~,
\end{flalign}
for an arbitrary given $f_2(r)$.
We will not work out the solutions of this differential equation, since type $\mathfrak{B}_{32}$ is a rather unphysical model,
in which the noncommutativity increases linearly in time due to $N_2^0\neq 0$.
Note that $\mathfrak{B}_{11}$ can be extended to a triangular $\star$-Hopf algebra by choosing $c_a^0(r)=c_a^0$, for $a\in\lbrace1,2\rbrace$.
In addition, $\mathfrak{B}_{12}$ is a $\star$-Hopf algebra for $\kappa_1=\kappa_2=0$.
The $\star$-commutators $c^{\mu\nu}=\starcom{x^\mu}{x^\nu}$ of the coordinate functions $x^\mu\in A_\star$ in order $\lambda^1$ for these
models are given in the appendix \ref{app:starcom}. They can be used in order to construct sensible
physical models of a noncommutative black hole.
By using the method explained in the previous section, the two vector field twists can be extended to multiple vector field
twists. Since we do not require these twists in our work and their construction is straightforward, we do not
present them here.
\section{\label{sec:conc}Conclusion and Outlook}
We have discussed symmetry reduction in noncommutative gravity using
the formalism of twisted noncommutative differential geometry. Our
motivation for these investigations derives from the fact that, for
most physical applications of gravity theories, including cosmology,
symmetry reduction is required due to the complexity of such models,
already in the undeformed case.
In section~\ref{sec:symred} we have presented a general method for
symmetry reduction in twisted gravity theories. As a result we have
obtained restrictions on the twist, depending on the structure of the
twisted symmetry group. In particular, we find that deforming the
infinitesimal symmetry transformations results in weaker restrictions
than deforming the finite transformations and demanding a quantum
group structure. In section~\ref{sec:jambor} we have applied this
general method to gravity theories twisted by Reshetikhin-Jambor-Sykora twists.
These are twists constructed from commuting vector fields. In this
case we could give explicit conditions, which have to be fulfilled in
order to allow symmetry reduction of a given Lie group.
In sections~\ref{sec:cosmo} and~\ref{sec:blackhole} we have
investigated admissible deformations of FRW and black hole symmetries
by a Reshetikhin-Jambor-Sykora twist. In this class we have classified all
possible deformations. This lays the foundation for phenomenological
studies of noncommutative cosmology and black hole physics based on
twisted gravity.
In a forthcoming work~\cite{Ohl/Schenkel:Cosmo:2009} we will
investigate cosmological implications of twisted FRW models by
studying fluctuations of quantum fields living on twisted FRW
backgrounds. Quantum fields were already introduced in a twisted
framework in~\cite{Aschieri:2007sq}. As we see from
proposition~\ref{propo:starinvariance}, the noncommutative backgrounds
are also invariant under the undeformed action of the classical
symmetry. This means that they have the same coordinate
representations with respect to the undeformed basis vectors as the
commutative fields in Einstein gravity. With this we have a construction
principle for noncommutative backgrounds, in their natural basis,
by representing the classical fields in the deformed basis.
A class of models of particular interest is
type~$\mathfrak{C}_{22}$ in section~\ref{sec:cosmo}
(cf.~table~\ref{tab:frw}). These twists break classical translation
invariance, but classical rotation invariance can be retained by
tuning~$\mathbf{d}_1$ and~$\mathbf{c}_1$ to small values. Furthermore,
the global factor~$V_2^0(t)$ in the exponent of the twist can be used
in order to tune noncommutativity effects depending on time.
Obviously, enforcing a suitable~$V_2^0(t)$ by hand leads to
phenomenologically valid models.
Since there is no natural choice of~$V_2^0(t)$, it is interesting to
investigate the dynamics of~$V_2^0(t)$ in a given field configuration
and study if it leads to a model consistent with cosmological
observations. In this case, the model would be physically
attractive. This will also be subject of future
work~\cite{Ohl/Schenkel:Cosmo:2009}. Dynamical noncommutativity has
already been studied in the case of scalar field theories on Minkowski
spacetime~\cite{Aschieri:2008zv}.
In the case of black hole physics, models of particular interest would
be~$\mathfrak{B}_{11}$ with functions~$c^0_a(r)$ decreasing
sufficiently quickly with~$r$ and~$\mathfrak{B}_{12}$ with~$f_2(r)$
and~$c_2^0(r)$ decreasing sufficiently quickly with~$r$
(cf.~table~\ref{tab:blackhole}). It will again be interesting to
investigate the dynamics of these functions on a given field
configuration. Note that the type~$\mathfrak{B}_{12}$
with~$\kappa_1=\kappa_2=0$ is invariant under the classical black hole
symmetries, and therefore particularly interesting for physical
applications. On the other hand, models with nonvanishing~$N_a^0$ are
of little physical interest, because the noncommutativity is growing
linearly in time, which would be unphysical.
Other avenues for future work are the classification of models on
nontrivial topologies (like, e.g.,~$\mathbb{R}\times S_3$ in
cosmology), investigating nontrivial
embeddings~$t_i^\star=t_i^\star(t_j)$ and using a wider class of twist
elements.
\section*{Acknowledgements}
AS thanks Christoph Uhlemann and Julian Adamek for discussions and
comments on this work. This research is supported by Deutsche
Forschungsgemeinschaft through the Research Training Group 1147
\textit{Theoretical Astrophysics and Particle Physics}.
\section{Introduction}
\begin{figure*}
\centerline{\psfig{file=multispec.eps,width=14.8cm}}
\caption{Results of the \hi\ absorption experiment. The uniformly
weighted, 21~cm continuum image is displayed as contours over
greyscale. The white circle indicates the location of the AGN
according to the alignment of Capetti et al. (1995). The
naturally weighted spectra towards each bright component of the
radio jet are displayed as overlays. \hi\
absorption is clearly detected towards component 6, but a search
over the data cube reveals no other significant absorption. The
continuum contour levels, in mJy beam$^{-1}$, are: $\pm 0.53$
($3\sigma$), 1.2, 2.9, 6.8, 15.9, and 37.3 (logarithmic
scaling). The restoring beam dimensions are: natural weight,
$0\farcs32 \times 0\farcs27$, P.A. $-21\hbox{$^\circ$}$; and uniform weight,
$0\farcs16 \times 0\hbox{$.\!\!^{\prime\prime}$} 14$, P.A. $88\hbox{$^\circ$}$.}
\label{f_results}
\end{figure*}
Emission from hot, ionised gas distinguishes active galactic nuclei
(AGNs) from quiescent galaxies. However, conventional models for AGNs
depend on the distribution and kinematics of colder, neutral
media. Firstly, the host galaxy is a massive reservoir of neutral gas
which might ultimately feed an energetic accretion disc, although the
means by which gas funnels down to sub-parsec scales is not well
understood (Rees 1984). Secondly, the unifying schemes for AGNs
propose that the apparent differences between broad-line AGNs
(i.e. Seyfert 1s) and narrow-line AGNs (Seyfert 2s) result from
selective obscuration through neutral, dusty gas located along the
sight-line to the broad-line region (Antonucci \& Miller 1985).
Exploring the neutral gas in AGNs is challenging because the surface
brightness of emission is generally too faint to detect on scales much
smaller than $\sim 1\hbox{$^{\prime\prime}$}$. We are instead continuing a programme to
explore neutral hydrogen (\hi) in {\em absorption} towards AGNs with
the goal of establishing the distribution and kinematics on scales as
small as $0\farcs1$, or roughly 10~parsecs in the nearest Seyfert
galaxies (Pedlar et al. 1995; Mundell et al. 1995; Gallimore et
al. 1994). In this work, we present MERLIN observations of 21~cm
absorption towards the Seyfert 1.5 nucleus of Mkn~6. The localisation
of the \hi\ absorption suggests a particular alignment between the
host galaxy disc and the radio jet. After first describing the
observations and results, we discuss the implications of this
alignment in further detail. For comparison with earlier papers, we
adopt a distance of 77~Mpc to Mkn~6, appropriate for $H_0 =
75$~km s$^{-1}$\ Mpc$^{-1}$, and giving a scale of 1\hbox{$^{\prime\prime}$} = 374~pc (Meaburn
et al. 1989).
\section{Observations}
\begin{figure}
\centerline{\psfig{file=rotcurv.eps,width=8.8cm}}
\caption{The location of the \hi\ absorption line (open
circle) on the position-velocity diagram for Mkn~6. The centroid
velocities of the extended narrow line region (ENLR; filled squares)
and NLR (open triangle) are taken from Meaburn et al.
(1989). The ENLR traces kinematically quiescent gas that is exposed
to the AGN, and so defines the inner rotation curve. Within the
errorbars, the \protect\hi\ absorption is located roughly where
expected on the rotation curve, and so probably arises from gas in
normal rotation about the galaxy center. }
\label{rotcurv}
\end{figure}
We observed Mkn~6 with the 8-element MERLIN array (Wilkinson 1992),
including the Lovell telescope; the results are summarised in
Fig.~\ref{f_results}. The observations were tuned to the 1420~MHz
hyperfine transition of \hi\ centered near the Doppler velocity $cz =
5800$~km s$^{-1}$\ (heliocentric, optical convention). The systemic velocity
of the host galaxy is actually $5640\pm 10$~km s$^{-1}$\ (Meaburn et al.
1989), well within the observed bandwidth. The velocity
resolution of the observations is 26.4~km s$^{-1}$, and, after removing end
channels with poor frequency response, the effective bandwidth is
$\sim$6.6~MHz (1400~km s$^{-1}$).
Data reduction followed standard techniques employed for MERLIN data,
including initial calibration and processing with software local to
Jodrell Bank. Further data processing, including self-calibration
against line-free continuum channels, was performed within the AIPS
data reduction package. Channel maps and line-free continuum images
were produced following standard numerical Fourier transform
techniques and deconvolution using the CLEAN algorithm (H\"ogbom 1974).
A more detailed description of the MERLIN data reduction techniques
employed can be found in Mundell et al. (1995).
We constructed both naturally and uniformly weighted spectral line
cubes. Continuum images were generated by averaging over channels with no
significant line detections. For the naturally weighted images, the
restoring beam dimensions (FWHM) are $0\farcs32 \times 0\farcs27$,
P.A. $-21\hbox{$^\circ$}$, and the respective continuum and spectral line
sensitivities are $0.13$~mJy\ beam$^{-1}$\ and $0.68$~mJy\
beam$^{-1}$\ ($1\sigma$). The resolution of the uniformly weighted images is
$0\farcs16 \times 0\hbox{$.\!\!^{\prime\prime}$} 14$, P.A. $88\hbox{$^\circ$}$, and the continuum and
spectral line sensitivities are $0.19$~mJy\ beam$^{-1}$\ and $1.2$~mJy\
beam$^{-1}$.
\section{Results}
In contrast to the radio continuum emission from Mkn~6, which is
extended and
highly structured (e.g., Kukula et al. 1996; Fig.~\ref{f_results}),
\hi\ absorption is detected only towards component~6, a compact source
located at the northern end of the arcsecond-scale radio jet
(Fig~\ref{f_results}; component numbering following Kukula et al.
1996). As discussed further below, the linewidth is very narrow in
comparison with \hi\ absorbed radio jets in other Seyfert galaxies;
formally, the linewidth (FWHM) is $33\pm 6$~km s$^{-1}$\ (corrected for the
instrumental resolution) and the maximum opacity is $\tau_{max} =
0.45\pm0.01$. The integrated absorption profile corresponds to a
foreground column of $$N_{HI} = (2.6\pm0.3) \times 10^{21}\ (T_S/100{\rm\
K}){\rm\ cm^{-2}}\ ,$$ where $T_S$ is the spin (excitation) temperature of
the ground state. This column is not unusual for a sight-line through
an inclined disk galaxy. However, we note that a similar column,
detected in NGC~4151, was interpreted as absorption in a nuclear
torus (Mundell et al. 1995).
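The quoted column follows from the standard optically thin 21~cm relation; as a rough cross-check (our own arithmetic, assuming a Gaussian line profile and the $T_S=100$~K scaling used above):

```python
# standard optically-thin relation: N_HI = 1.823e18 * T_s * integral(tau dv)  [cm^-2]
# Gaussian profile: integral(tau dv) ~= 1.0645 * tau_max * FWHM
tau_max = 0.45   # measured peak opacity
fwhm    = 33.0   # km/s, corrected linewidth
T_s     = 100.0  # K, assumed spin temperature
N_HI = 1.823e18 * T_s * 1.0645 * tau_max * fwhm
print(f"N_HI = {N_HI:.2e} cm^-2")
# ~2.9e21, close to the quoted (2.6 +/- 0.3)e21; the small difference
# reflects the detailed (non-Gaussian) line shape
```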
\begin{figure*}
\centerline{\psfig{file=sketch.eps,width=14.8cm}}
\caption{Illustration of the \hi\ absorbing medium of Mkn~6. The {\em
left panel} is an overlay of the 21~cm radio continuum and an
archival HST image taken in the F606W (wide V-band) filter. We have
subtracted an elliptical isophote model of the smooth, bulge light
from the HST image in order to enhance the contrast of the
underlying structure. The halftone rendering of the HST image is
displayed in the positive sense: the dark band across the nucleus is
an apparent
band of high extinction, presumably arising in a dust lane.
We chose the Capetti et al. (1995) alignment between the
MERLIN and HST images. Only component 6 among the brighter jet
features lies, in projection, within the dust lane. The cartoon in
the {\em right panel} depicts a plausible ring geometry for
the neutral, absorbing gas. The proposed location of the AGN, near
radio component 3, is indicated by the dot. This cartoon is purely
illustrative and is not intended to be a detailed model for
the MERLIN \hi\ absorption and HST data.
}
\label{HST}
\end{figure*}
The limits placed by non-detections better define the localisation of
the \hi\ absorption around component~6. Towards the brighter regions
of the southern jet, components~2--4, the ($3\sigma$) limit is
$\tau_{\nu} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.07$, corresponding to a foreground column density
$$N_{HI} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 4\times 10^{20}\ {\rm cm^{-2}}\ (T_s/100) (\Delta v/30\
{\rm km\ s^{-1}})\ .$$ The absorbing gas would easily have been detected
had the gas completely covered the jet. On the other hand, component
5, which is the nearest neighbor to the absorbed component, is much
fainter, and so the limits are less stringent: $\tau_{\nu} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.9$,
or $$N_{HI} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 5\times 10^{21}\ {\rm cm^{-2}}\ (T_s/100) (\Delta v/30\
{\rm km\ s^{-1}})\ .$$ We can conclude that the \hi\
absorbing gas covers a region including component~6 and extending no
further south than component 5, or roughly 0\farcs75 (280~pc in
projection). However, we can place no limits on the extent of the
absorbing gas in other directions.
The centroid velocity of the absorption line is $5584\pm 3$~km s$^{-1}$,
blue-shifted relative to systemic by $56\pm10$~km s$^{-1}$. For comparison,
the position-velocity curve is plotted in Fig.~\ref{rotcurv}. The
details of the rotation curve within the inner few arcseconds are
unknown, but the velocity of the 21~cm absorption line does not appear
significantly displaced from any plausible rotation curve. We
conclude that the absorption line arises in otherwise normally
rotating gas, and there is no evidence for streaming motions greater
than $\sim 50$~km s$^{-1}$. Furthermore, we do not detect any velocity
gradients across component~6. Assuming that the absorbing gas
completely covers the background source (Sect.~\ref{discuss}), the
upper limit for the velocity gradient is approximately the width of
the absorption line divided by the component size ($\sim 0\farcs08$;
Kukula et al. 1996), or $< 1.0$~km s$^{-1}$\ pc$^{-1}$. For comparison, the
projected velocity gradient of the \hi\ absorption seen towards
NGC~4151 is $\sim 3$~km s$^{-1}$\ pc$^{-1}$\ (Mundell et al. 1995).
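The quoted gradient limit is simple arithmetic; a sketch of our own, using the $1\hbox{$^{\prime\prime}$} = 374$~pc scale adopted in the introduction:

```python
fwhm        = 33.0    # km/s, corrected absorption linewidth
size_arcsec = 0.08    # component 6 size from Kukula et al. (1996)
scale       = 374.0   # pc per arcsec at the adopted distance of 77 Mpc
grad = fwhm / (size_arcsec * scale)
print(f"velocity gradient limit ~ {grad:.1f} km/s per pc")
# ~1.1 km/s per pc, i.e. of order the quoted ~1 km/s per pc limit
```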
\section{Discussion}\label{discuss}
The trivial explanation for the localised \hi\ absorption is an
isolated cloud which fortuitously aligns with component~6. We consider
it more likely, however, that the absorbing gas lies in the galaxy
disc surrounding the nucleus. For example, this result compares
favorably with the localised \hi\ absorption observed towards the
radio jet of NGC~4151 (Mundell et al. 1995). The interesting question
is whether, as was proposed for NGC~4151, the absorbing gas might be
located in a small-scale ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 100$~pc) disc surrounding the AGN. In the
case of Mkn~6, however, we find that absorption from gas distributed
on kpc-scales is more consistent with the observations. The first evidence is
that the linewidth is very narrow, $\sim 30$~km s$^{-1}$, which is less than
half the \hi\ absorption linewidth of NGC~4151. In contrast, \hi\
absorption linewidths towards Seyfert and starburst galaxies often
exceed 100~km s$^{-1}$, particularly in those cases where the \hi\ absorption
is known to trace gas deep in the nucleus (Pedlar et al. 1996; Mundell
et al. 1995; Gallimore et al. 1994; Dickey 1986). This evidence is
not sufficient, however, since we cannot rule out the possibility that
the absorption arises from a compact, circularly rotating disc viewed
nearly face-on. Nevertheless, the narrowness of the line is consistent
with that expected from a larger scale ring or disc.
We next examine the displacement of the absorption from the AGN.
Unfortunately, the correspondence between components in the optical
and radio images is not accurately known. Moreover, the continuum
spectra and sizes of the radio features are indistinct, and so there
is currently no clear radio candidate for the AGN proper (Kukula et
al. 1996). Clements (1983) places the optical nucleus somewhere
between component 5 and (the \hi\ absorbed) component 6, but the
uncertainties are roughly one quarter the length of the radio jet.
Nevertheless, the Clements position is significantly displaced
southward from component 6 (Kukula et al. 1996). Capetti et al. (1995)
propose an alignment between the radio and optical images based on
{\em Hubble Space Telescope} images. They found a linear extension of
\boiiib\ emission that agrees well both in orientation and detailed
shape with the southern part of the radio jet (i.e., components~1--5).
Aligning the radio and optical jet structures places the AGN $\sim
1\hbox{$^{\prime\prime}$}$ ($\sim 380$~pc in projection) south of component~6,
somewhere nearer component~3 (from Kukula et al.: $\alpha_{\rm (J2000)}
= 6^h\ 52^m\ 12\fs336$, $\delta_{(J2000)} = 74\hbox{$^\circ$}\ 25\hbox{$^\prime$}\
37\farcs08$; $S_{\nu}(20\ {\rm cm}) = 16$~mJy). Adopting this
alignment, and further considering the narrowness of the absorption
line, we are drawn to the conclusion that the \hi\ absorption in Mkn~6
arises from neutral gas displaced from the nucleus by $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 400$~pc.
For reference, the strongest absorption lines observed towards the
Seyfert nucleus of NGC~1068 similarly trace a $\sim 500$~pc radius,
central disc (Gallimore et al. 1994).
From a more detailed study of the optical and radio continuum
structures of the nucleus (Holloway et al. in preparation), we have
discovered a conspicuous candidate for the \hi\ absorber. Illustrated
in Fig.~\ref{HST}, there is an obvious band of increased extinction
which crosses $\sim 1\hbox{$^{\prime\prime}$}$ north of the optical nucleus. For
convenience, we refer to this dark region simply as a dust lane.
According to the alignment of Capetti et al. (1995), the dust lane
encompasses the position of the \hi\ absorbed radio feature. The high
aspect ratio of the dust lane suggests a disc or spiral arms viewed
edge-on.
The simplest picture is that the dust lane traces a kpc-scale disc or
ring surrounding the nucleus, or perhaps a spiral arm segment
lying in front of the nucleus. The radio jet must be oriented with
component 6 lying behind the disc to the north and components 1--5 in
front of the disc to the south. There are two important implications
of this result. Firstly, the location of the \hi\ absorbed radio
feature within the newly discovered dust lane lends self-consistent
support for the Capetti et al. alignment, which, as a corollary,
strengthens their argument for an interaction between the radio jet
and the NLR gas. The second implication is that the northern jet and
NLR structures fall behind the galaxian disc, contrary to our earlier
model for the northern ionisation cone (Kukula et al. 1996). More
specifically, there is a strong correspondence between \boiiib\
emission and radio emission only at the southern end of the jet. The
lack of \boiiib\ emission towards the northern end of the jet (i.e.,
component~6) is naturally explained by extinction in our model for the
\hi\ absorption. We will explore a revised model for the ionisation
cone structure in a follow-up paper (Holloway et al. 1997).
\section{Conclusions}
Our primary results and conclusions are as follows.
\begin{enumerate}
\item There is no \hi\ absorption detected toward the probable
location of the AGN of
Mkn~6. This result is consistent with the more general picture that
sight-lines
towards Seyfert 1 nuclei are relatively unobscured.
\item The detected \hi\ absorption probably arises from a kpc-scale
distribution of gas, possibly a disc, spiral arms, or a ring,
surrounding the nucleus and associated with a conspicuous dust lane
passing north of the AGN.
\item The kinematics of the \hi\ absorption line gas places it near
the systemic velocity as interpolated from measurements of the
ENLR. Unlike other \hi\ absorbed Seyfert nuclei
(Dickey 1986), there is no evidence for rapid streaming motions in
the absorbing gas.
\item The radio jet is probably oriented behind the galactic disc to the north
and in front of the galactic disc to the south. If, as appears to be
the case for most Seyfert nuclei, the NLR and ENLR gas share a
similar axis with the radio jet, this result places the northern
ENLR on the far side of the disc, contrary to earlier models.
\end{enumerate}
\begin{acknowledgements}
J.F.G. received collaborative travel support from the University of
Manchester Dept. of Astronomy and computer support at NRAL, Jodrell
Bank during the completion of this work. C.G.M. acknowledges receipt
of a PPARC Research Fellowship.
\end{acknowledgements}
\section{Introduction} \label{sc.intro}
The recent results from the Relativistic Heavy Ion Collider (RHIC)
\cite{rhic-results} indicate the formation of a thermalized medium
endowed with large collective flow and very low viscosity \cite{teaney}.
These findings suggest that Quark Gluon Plasma (QGP) is a strongly
interacting system for temperatures close to its transition temperature
($T_c$). Apart from the experimental indications, the most convincing
evidence in favour of the existence of a strongly interacting QGP comes
from the lattice QCD simulations \cite{lattice-results, swagato}. These
non-perturbative studies show that the thermodynamic quantities, like
pressure and energy density, deviate from their respective ideal gas (of
free quarks and gluons) values by about $20\%$ even at temperature
$T=3T_c$. On the other hand, other lattice studies indicate the
smallness of the viscous forces in QGP \cite{nakamura}. All these
results point to the fact that close to $T_c$ the nature of the QGP is
far from that of a gas of free quarks and gluons.
In order to uncover the nature of QGP in the vicinity of $T_c$ and also
to understand the underlying physics of these lattice results many
different suggestions have been made over the last decade. Descriptions
in terms of various quasi-particles \cite{quasi-results,bluhm}, resummed
perturbation theories \cite{pert-results}, effective models
\cite{effect-results} {\sl etc.\/}\ are a few among many such attempts. Apart
from all these, the newly proposed model of Shuryak and Zahed
\cite{shuryak} has generated considerable amount of interest in the
recent years. Motivated by the lattice results for the existence of
charmonium in QGP \cite{charmonium}, this model proposed a strongly
interacting chromodynamic system of quasi-particles (with large thermal
masses) of quarks, anti-quarks and gluons along with their numerous
bound states. As different conserved charges, {\sl e.g.\/}, baryon number ($B$),
electric charge ($Q$), third component of isospin ($I$) {\sl etc.\/}, are
carried by different flavours ($u,d,s$) of quarks, in the conventional
quasi-particle models, conserved charges come in strict proportion to
number of $u$, $d$, $s$ quarks. Thus conserved charges are strongly
correlated with the flavours and the flavours have no correlations among
themselves. On the other hand, in the model of \cite{shuryak}, the presence
of bound states demands correlations among different flavours. Hence the
correlations between conserved charges and flavours depend on the
mass-spectrum of the bound states and the strong charge--flavour
correlations are lost.
Based on the above arguments, in \cite{koch}, it has been suggested that
the quantity
\begin{equation}
C_{BS} = -3 \frac{ \langle BS \rangle - \langle B \rangle \langle S
\rangle } { \langle S^2 \rangle - \langle S \rangle^2},
\end{equation}
can be used to probe the degrees of freedom of QGP. Here $B=(U+D-S)/3$
is the net baryon number, $U$ and $D$ are the net numbers (quarks minus
anti-quarks) of up and down quarks, and $S$ is the strangeness (each
strange quark carries strangeness $-1$). The notation $\langle \cdot \rangle$
denotes an average taken over a suitable ensemble. It has been argued in \cite{koch} that for
QGP where quarks are the effective degrees of freedom,
{\sl i.e.\/}, where correlations among $U$, $D$ and $S$ are absent, $C_{BS}$ will
have a value of $1$ for all temperatures $T>T_c$. On the other hand, for
the model of \cite{shuryak} $C_{BS}=0.62$ at $T=1.5T_c$, while for a gas
of hadron resonances $C_{BS}=0.66$. Thus the knowledge of $C_{BS}$ helps
to identify the degrees of freedom in QGP.
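The statement that uncorrelated flavours give $C_{BS}=1$ can be checked with a quick Monte-Carlo sketch. Everything below is illustrative and not taken from any simulation: independent Skellam-distributed net quark numbers stand in for an uncorrelated quark gas, with strangeness $S=-1$ per net strange quark.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 200_000, 5.0
# independent net quark numbers (quarks minus anti-quarks), Skellam-distributed
U = rng.poisson(lam, n) - rng.poisson(lam, n)
D = rng.poisson(lam, n) - rng.poisson(lam, n)
ns = rng.poisson(lam, n) - rng.poisson(lam, n)   # net strange quarks
S = -ns                        # strangeness: each s quark carries S = -1
B = (U + D + ns) / 3.0         # baryon number: 1/3 per net quark

cov_BS = (B * S).mean() - B.mean() * S.mean()
var_S = (S * S).mean() - S.mean() ** 2
C_BS = -3.0 * cov_BS / var_S
print(C_BS)   # close to 1 for uncorrelated flavours
```

Introducing correlations between $S$ and the light-quark numbers (as bound states would) shifts this estimator away from $1$.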
By extending the idea of \cite{koch}, recently in \cite{gavai}, many
ratios like
\begin{equation}
C_{(KL)/L}=\frac{ \langle KL \rangle - \langle K \rangle \langle L
\rangle } { \langle L^2 \rangle - \langle L \rangle^2} \equiv
\frac{ \chi_{KL} } { \chi_L },
\label{eq.ratio}
\end{equation}
have been calculated using lattice QCD simulations with two flavours of
dynamical light quarks and three flavours (two light and one heavy) of
valence quarks. Here $\chi_L$ and $\chi_{KL}$ denote the susceptibilities
corresponding to conserved charge $L$ and correlation among conserved
charges $K$ and $L$ respectively. The ratios $C_{(KL)/L}$ can be
interpreted as follows --- create an excitation
with quantum number $L$ and then observe the value of a different
quantum number $K$ associated with this excitation. Thus these ratios
identify the quantum numbers corresponding to different excitations and
hence provide information about the degrees of freedom. The calculations
of \cite{gavai} found no evidence for the existence of bound states
\cite{shuryak} even at temperatures very close to $T_c$. These findings
are consistent with the results of \cite{ejiri}, where the hypothesis of
\cite{shuryak} has been tested by investigating the ratios of higher
order baryon number susceptibilities obtained from lattice simulations.
As these lattice studies \cite{gavai} involved simulations with
dynamical quarks, they were done using small lattices having temporal
lattice size $N_{\tau}=4$. By comparing with the results from quenched
simulations it has been shown \cite{gavai} that $C_{(KL)/L}$ do not
depend on $N_{\tau}$ at $T=2T_c$. It is clearly important
to verify whether the same conclusion holds even close to $T_c$.
Furthermore, it is known that in the case of quenched QCD with standard
staggered quarks the diagonal quark number susceptibilities (QNS) have
strong dependence on the lattice spacing even for the free theory
\cite{gavai1, gavai2}. On the other hand, the off-diagonal QNS are
identically zero for an ideal gas and acquire non-zero values only in
the presence of interactions. So the lattice spacing dependence of the
off-diagonal QNS is likely to be more complicated, as opposed to that
for the diagonal QNS where these corrections are dominated by the
lattice artifacts of the naive staggered action. Thus if these two QNS
become comparable the ratios mentioned in eq.\ (\ref{eq.ratio}) can
have non-trivial dependence on the lattice spacing $a$ and hence the
continuum limit of these ratios can be different from that obtained
using small lattices. Since the perturbative expressions for diagonal
and off-diagonal QNS (for vanishingly small quark mass and chemical
potential) are respectively \cite{blaizot}---
\begin{equation}
\frac{\chi_{ff}}{T^2} \simeq 1 + {\mathcal O}(g^2)
\qquad {\rm and} \qquad
\frac{\chi_{ff'}}{T^2} \simeq -\frac{5}{144\pi^6}~ g^6 \ln g^{-1},
\label{eq.pert}
\end{equation}
it is reasonable to expect that the off-diagonal QNS may not be
negligible in the vicinity of $T_c$ where the coupling $g$ is large. As
the contributions of the bound states in the QNS become more and more
important as one approaches $T_c$ \cite{liao}, on the lattice it is
necessary to investigate the continuum limit of these ratios in
order to verify the existence of bound states in a strongly coupled QGP.
At present a continuum extrapolation of this kind can only be performed
using quenched approximation due to the limitations of present day
computational resources. A quenched result for these ratios will also
provide an idea about the dependence of these ratios on the sea quark
mass.
The aim of this work is to carefully investigate the continuum limit of
the ratios of the kind $C_{(KL)/L}$ for temperatures $T_c<T\le2T_c$
using quenched lattice QCD simulations. The plan of this paper is as
follows --- In Section \ref{sc.results} we will give the details of our
simulations and present our results. In the Section \ref{sc.discussion}
we will summarise and discuss our results.
\section{Simulations and results} \label{sc.results}
The partition function of QCD for $N_f$ flavours, each with chemical
potential $\mu_f$ and mass $m_f$, at temperature $T$ has the form
\begin{equation}
{\cal Z}\left(T,\{\mu_f\},\{m_f\}\right) = \int {\cal DU}~
e^{-S_G({\cal U})} \prod_f \det M_f (T,\mu_f,m_f),
\end{equation}
where $S_G$ is the gauge part of the action and $M$ is the Dirac
operator. We have used standard Wilson action for $S_G$ and staggered
fermions to define $M$. The temperature $T$ and the spatial volume $V$
are expressed in terms of lattice spacing $a$ by the relations
$T=1/(aN_\tau)$ and $V=(aN_s)^3$, $N_s$ and $N_\tau$ being the number of
lattice sites in the spatial and the Euclidean time directions
respectively. The flavour diagonal and the flavour off-diagonal quark
number susceptibilities (QNS) are given by---
\begin{eqnarray}
\chi_{ff} &=& \left(\frac{T}{V}\right) \frac{\partial^2\ln{\cal Z}}
{\partial \mu_f^2} = \left(\frac{T}{V}\right) \left[ \left\langle
{\rm \bf{Tr}} \left( M_f^{-1}M_f'' - M_f^{-1}M_f'M_f^{-1}M_f'\right)
\right\rangle+\left\langle\left\{ {\rm\bf{Tr}}
\left(M_f^{-1}M_f'\right) \right\}^2\right\rangle\right], \qquad{\rm
and} \qquad \\ \chi_{ff'} &=& \left(\frac{T}{V}\right)
\frac{\partial^2\ln{\cal Z}} {\partial \mu_f \partial \mu_{f'}} =
\left(\frac{T}{V}\right)\left\langle{\rm\bf{Tr}}
\left(M_f^{-1}M_f'\right){\rm\bf{Tr}}\left(M_{f'}^{-1}M_{f'}'\right)
\right\rangle ,
\end{eqnarray}
respectively. Here the single and double primes denote first and second
derivatives with respect to the corresponding $\mu_f$ and the angular
bracket denote averages over the gauge configurations.
In this paper we report results of these susceptibilities on lattices
with $N_\tau=4,~8,~10,~{\rm and}~12$, for the temperatures $1.1T_c\le
T\le 2T_c$, chemical potential $\mu_f=0$, and in the quenched
approximation. The details of our scale setting procedure are given in
\cite{swagato}. We have generated quenched gauge configurations by using
the Cabibbo-Marinari pseudo-heatbath algorithm with Kennedy-Pendleton
updating of three $SU(2)$ subgroups on each sweep. Since for
$m_q/T_c\le0.1$ QNS are almost independent of the bare valence quark
mass ($m_q$) \cite{gavai1}, we have used $m_q/T_c=0.1$ for the light $u$
and $d$-flavours. Motivated by the fact that for the full theory
$m_s/T_c\sim1$ we have used $m_q/T_c=1$ for the heavier $s$-flavour. The
fermion matrix inversions were done by using conjugate gradient method
with the stopping criterion $|r_n|^2< \epsilon|r_0|^2$, $r_n$ being the
residual after the $n$-th step and $\epsilon=10^{-4}$ \cite{gupta}. The
traces have been estimated by the stochastic estimator--- ${\rm\bf
Tr}A=\sum_{i=1}^{N_v}R_i^{\dagger}AR_i/2N_v$, where $R_i$ is a complex
vector whose components have been drawn independently from a Gaussian
ensemble with unit variance. The square of a trace has been calculated
by dividing $N_v$ vectors into $L$ non-overlapping sets and then using
the relation--- $({\rm\bf Tr}A)^2=2\sum_{i>j=1}^{L}({\rm\bf
Tr}A)_i({\rm\bf Tr}A)_j/L(L-1)$. We have observed that as one approaches
$T_c$ from above these products, and hence $\chi_{ff'}$, become more and
more noisy for larger volumes and smaller quark masses. So in order to
reduce the errors on $\chi_{ff'}$, the number of vectors $N_v$ has been
increased (for the larger lattices and the smaller quark masses) with
decreasing temperature. Details of all our simulations are provided in
Table \ref{tb.simulation}.
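The stochastic estimators described above can be sketched numerically. The matrix below is a generic symmetric stand-in (its size, the number of noise vectors and the number of sets are illustrative choices, not the simulation parameters); the fermion-matrix combinations of the actual calculation play the role of $A$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, Nv, L = 64, 400, 8
A = rng.standard_normal((N, N))
A = (A + A.T) / 2.0              # generic symmetric stand-in for M^{-1} M'

# complex Gaussian noise vectors, unit variance per real component
R = rng.standard_normal((Nv, N)) + 1j * rng.standard_normal((Nv, N))
est = np.einsum('vi,ij,vj->v', R.conj(), A, R).real / 2.0
tr_A = est.mean()                # Tr A = sum_i R_i^+ A R_i / (2 Nv)

# unbiased (Tr A)^2: split the Nv estimates into L non-overlapping sets
sets = est.reshape(L, -1).mean(axis=1)
tr_A_sq = 2.0 * sum(sets[i] * sets[j]
                    for i in range(L) for j in range(i)) / (L * (L - 1))
print(tr_A, np.trace(A))
```

Pairing only estimates from different sets removes the bias that the square of a single noisy trace estimate would otherwise carry.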
\begin{table}[!ht]
\squeezetable
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
$T/T_c$&$\beta$&Lattice size&$N_{stat}$&\multicolumn{2}{c|}{$N_v$} \\
\cline{5-6}
&&&&$m_q/T_c=0.1$&$m_q/T_c=1$ \\ \hline
&5.7000&$4\times10^3$&44&250&100 \\
&&$~\times16^3$&50&250&100\\
&&$~\times20^3$&30&250&100\\
1.1&6.1250&$8\times18^3$&48&250&100 \\
&6.2750&$10\times22^3$&38&250&100 \\
&6.4200&$12\times26^3$&41&250&100 \\ \hline
&5.7880&$4\times10^3$&52&100&100 \\
1.25&6.2100&$8\times18^3$&49&200&100 \\
&6.3600&$10\times22^3$&46&200&100 \\
&6.5050&$12\times26^3$&45&200&100 \\ \hline
&5.8941&$4\times10^3$&51&100&100 \\
1.5&6.3384&$8\times18^3$&49&150&100 \\
&6.5250&$10\times22^3$&49&150&100 \\
&6.6500&$12\times26^3$&48&150&100 \\ \hline
&6.0625&$4\times10^3$&51&100&100 \\
2.0&6.5500&$8\times18^3$&50&100&100 \\
&6.7500&$10\times22^3$&46&100&100 \\
&6.9000&$12\times26^3$&49&100&100 \\ \hline
\end{tabular}
\end{center}
\caption{The couplings ($\beta$), lattice sizes ($N_\tau\times N_s^3$),
number of independent gauge configurations ($N_{stat}$) and number of
vectors ($N_v$) that have been used for our simulations are given for
each temperature. The gauge configurations were separated by $100$
sweeps.}
\label{tb.simulation}
\end{table}
In the following subsections we present our results. The notations we use
are the same as in \cite{gavai}. Since we use equal masses for the two light
$u$ and $d$ flavours, the flavour diagonal susceptibilities in this
context are $\chi_{uu}=\chi_{dd}\equiv\chi_u$ and the flavour off-diagonal
susceptibilities are $\chi_{du}=\chi_{ud}$. For the heavy flavour $s$ the
flavour diagonal susceptibility is denoted as $\chi_{ss}\equiv\chi_s$ and
the flavour off-diagonal susceptibilities are $\chi_{ds}=\chi_{us}$.
Expressions for all the susceptibilities used here have been derived in
the appendix of \cite{gavai}.
\subsection{Susceptibilities}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.7]{u-s-1.1Tc.eps}
\includegraphics[scale=0.7]{u-s-1.25Tc.eps}
\end{center}
\caption{We show the $N_\tau$ ($\propto 1/a$) dependence of $\chi_u/T^2$
(squares) and $\chi_s/T^2$ (circles) for $1.1T_c$ (top panel) and for
$1.25T_c$ (bottom panel). The continuum extrapolations (linear fits in
$1/N_\tau^2$) are shown by the lines.}
\label{fig.u-s}
\end{figure}
In order to understand the cut-off dependence of $C_{(KL)/L}$ let us
start by examining the same for the diagonal and off-diagonal QNS. We
have found that for all the temperatures the diagonal QNS ($\chi_u$
and $\chi_s$) depend linearly on $a^2\propto 1/N_\tau^2$, {\sl i.e.\/}, the finite
lattice spacing corrections to the diagonal QNS have the form
$\chi_{ff}(a,m_f,T)=\chi_{ff}(0,m_f,T)+b(m_f,T)a^2+\cdots$. As an
illustration of this we have shown our data for $1.1T_c$ and $1.25T_c$
in Fig.\ \ref{fig.u-s}. Similar variations were found for the other
temperatures also. We have made continuum extrapolations of the
diagonal QNS by making linear fits in $1/N_\tau^2$. Our continuum
extrapolated results match, within errors, with the available data of
\cite{gavai1} at $1.5T_c$ and $2T_c$.
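The linear continuum extrapolation used here amounts to a straight-line fit in $1/N_\tau^2$. A minimal sketch, with made-up $\chi/T^2$ values rather than the measured data:

```python
import numpy as np

ntau = np.array([8, 10, 12])
x = 1.0 / ntau**2                       # x = 1/N_tau^2 (proportional to a^2)
chi = np.array([1.02, 0.95, 0.91])      # hypothetical chi/T^2 values

b, a = np.polyfit(x, chi, 1)            # chi = a + b*x ; a is the continuum value
print(f"continuum value a = {a:.3f}, slope b = {b:.1f}")
```

The intercept $a$ is the continuum estimate and the slope $b$ measures the ${\mathcal O}(a^2)$ cutoff effect.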
In Fig.\ \ref{fig.ud-us} we present some of our typical results for the
off-diagonal QNS. Note that here the scales are magnified by $\sim100$ as
compared to Fig.\ \ref{fig.u-s}. The sign of our off-diagonal QNS is
consistent with the perturbative predictions of \cite{blaizot}, as well
as with the lattice results of \cite{gupta,ejiri1}. The order of
magnitude of our off-diagonal QNS matches with the results of
\cite{gupta} which uses the same unimproved staggered fermion action as
in the present case. As can be seen from Fig.\ \ref{fig.ud-us}, within
our errors, we have not found any perceptible dependence of $\chi_{ff'}$ on
the lattice spacing $a$. Hence to a good approximation
$\chi_{ff'}(a,m_f,m_{f'},T)\approx\chi_{ff'}(0,m_f,m_{f'},T)$. Also for the
other temperatures, which are not shown in Fig.\ \ref{fig.ud-us}, similar
variations were found. Results of our continuum extrapolations of the
diagonal and off-diagonal QNS are listed in Table\ \ref{tb.cont-qns}.
\begin{table}[h!]
\squeezetable
\begin{center}
\begin{tabular}{|c|c c c|c c c|c c|c c|} \hline
$T/T_c$&\multicolumn{3}{c|}{$\chi_u/T^2$}&\multicolumn{3}{c|}{$\chi_s/T^2$}&
\multicolumn{2}{c|}{$\chi_{ud}/T^2$}&\multicolumn{2}{c|}{$\chi_{us}/T^2$}
\\ \cline{2-11}
&$a$&$b$&$\chi^2_{d.o.f}$&$a$&$b$&$\chi^2_{d.o.f}$&
$c\times10^3$&$\chi^2_{d.o.f}$&$c\times10^3$&$\chi^2_{d.o.f}$ \\ \hline
1.1&0.79(1)&11.3(5)&0.3&0.33(1)&5.4(1)&0.1&-4(4)&0.5&-6(4)&0.1 \\ [2pt]
1.25&0.84(1)&15(1)&0.5&0.45(1)&10(1)&0.8&-0.2(1.0)&0.1&-0.7(1.0)&0.6 \\ [2pt]
1.5&0.83(1)&17.3(3)&0.5&0.55(1)&12.5(5)&0.9&-7(5)&0.1&2(2)&0.6 \\ [2pt]
2.0&0.86(2)&19.7(2)&0.7&0.70(2)&17(2)&0.8&2(3)&0.5&-0.1(1.0)&0.8 \\ \hline
\end{tabular}
\end{center}
\caption{Parameters for the continuum extrapolations of the diagonal
($\chi_u$, $\chi_s$) and off-diagonal ($\chi_{ud}$, $\chi_{us}$) QNS. For the
diagonal QNS continuum extrapolations are made by fitting $a+b/N_\tau^2$
to our data for the three largest lattice sizes. For the off-diagonal
QNS continuum extrapolations are made by fitting our data to a constant
$c$. Numbers in brackets denote the errors on the fitting parameters
and $\chi^2_{d.o.f}$ refers to the value of the chi-square per degrees of
freedom for that particular fit.}
\label{tb.cont-qns}
\end{table}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.7]{ud-1.5Tc.eps}
\includegraphics[scale=0.7]{us-1.1Tc.eps}
\end{center}
\caption{$N_\tau$ dependence of the off-diagonal QNS $\chi_{ud}/T^2$ at
$1.5T_c$ (top panel) and $\chi_{us}/T^2$ at $1.1T_c$ (bottom panel) have
been shown.}
\label{fig.ud-us}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.7]{b-q.eps}
\end{center}
\caption{The continuum results for $\chi_B/T^2$ (squares) and $\chi_Q/T^2$
(circles) have been shown.}
\label{fig.b-q}
\end{figure}
For the sake of completeness we also present our continuum extrapolated
results for the two very important quantities, the baryon number
susceptibility ($\chi_B$) and the electric charge susceptibility ($\chi_Q$).
These quantities are related to the event-by-event fluctuations of
baryon number and electric charge \cite{e-b-e} which have already been
measured at RHIC \cite{e-b-e-expt}. The definitions that we use for
$\chi_B$ and $\chi_Q$ are \cite{gavai}
\begin{equation}
\chi_B = \frac{1}{9}\left( 2\chi_u + \chi_s + 2\chi_{ud} + 4\chi_{us} \right) ,
\qquad{\rm and}\qquad
\chi_Q = \frac{1}{9}\left( 5\chi_u + \chi_s - 4\chi_{ud} - 2\chi_{us} \right) .
\end{equation}
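As an arithmetic illustration, inserting the continuum QNS of Table~\ref{tb.cont-qns} at $T=2T_c$ into these definitions gives:

```python
# continuum QNS at T = 2 T_c from Table II (chi/T^2; off-diagonal from the
# c x 10^3 column)
chi_u, chi_s = 0.86, 0.70
chi_ud, chi_us = 2e-3, -1e-4

chi_B = (2*chi_u + chi_s + 2*chi_ud + 4*chi_us) / 9.0
chi_Q = (5*chi_u + chi_s - 4*chi_ud - 2*chi_us) / 9.0
print(chi_B, chi_Q)   # roughly 0.27 and 0.55, dominated by the diagonal QNS
```

The off-diagonal terms shift both quantities at well below the per-cent level, which is why the comparison with \cite{gavai1} below is insensitive to the definition chosen.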
In Fig.\ \ref{fig.b-q} we show the continuum results for $\chi_B/T^2$ and
$\chi_Q/T^2$. Continuum extrapolations have been performed by making
linear fits in $a^2\propto1/N_\tau^2$. Continuum limit of these
quantities were also obtained in \cite{gavai1} for $T\ge1.5T_c$, though
using different definitions for these quantities. Nevertheless, given
the compatibility of our diagonal QNS with that of \cite{gavai1} and the
smallness of the off-diagonal QNS for $T\ge1.5T_c$ our continuum results
for $\chi_B$ and $\chi_Q$ are compatible with that of Ref.\ \cite{gavai1}, for any
chosen definitions for these quantities.
\subsection{Ratios}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.7]{lam_s-1.1Tc.eps}
\includegraphics[scale=0.7]{lam_s.eps}
\end{center}
\caption{In the top panel robustness of the Wroblewski parameter
($\lambda_s$) with changing lattice spacings has been shown for
$1.1T_c$. The lines indicate the $5\%$ error band of a constant fit to
this data. In the bottom panel we show our continuum results for
$\lambda_s$ (see text for details).}
\label{fig.lam_s}
\end{figure}
The Wroblewski parameter ($\lambda_s$) \cite{wroblewski} is a quantity of
extreme interest due to its relation to the enhancement of strangeness
production in QGP \cite{rafelski}. The rate of production of quark pairs
in an equilibrated plasma is related to the imaginary part of the
complex QNS by the fluctuation-dissipation theorem. If one assumes that the
plasma is in chemical (and thermal) equilibrium and the typical energy
scales for the production of $u$, $d$ and $s$ quarks are well separated
from the inverse of the characteristic time scale of the QCD plasma,
then using the Kramers-Kronig relation one can relate $\lambda_s$ to the
ratio of QNS \cite{gavai3}---
\begin{equation}
\lambda_s = \frac{2\langle s\bar{s} \rangle}
{\langle u\bar{u} + d\bar{d} \rangle} = \frac{\chi_s}{\chi_u}.
\end{equation}
(In the above equation $\langle f\bar{f}\rangle$ should be interpreted
as quark number density and not as quark anti-quark condensates.) We
have found that $\lambda_s$, which is a ratio of two diagonal QNS,
remains constant (within $\sim5\%$) with varying lattice spacings for
all temperatures in $1<T/T_c\le2$. We have illustrated this in the top
panel of Fig.\ \ref{fig.lam_s} by plotting $\lambda_s$ with $1/N_\tau^2$
for the temperature $1.1T_c$. These results are somewhat surprising
since the order $a^2$ corrections are not negligible for the individual
diagonal QNS. But for the ratio of the diagonal QNS for two different
bare valence quark masses these order $a^2$ corrections happen to
cancel and thus seem to be quark mass independent. This indicates
that the finite lattice spacing corrections to the diagonal QNS is
constrained to have the form
$\chi_{ff}(a,m_f,T)=\chi_{ff}(0,m_f,T)[1+b(T)a^2+\cdots]$, as opposed to the
more general form
$\chi_{ff}(a,m_f,T)=\chi_{ff}(0,m_f,T)+b(m_f,T)a^2+\cdots$.
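The constrained form can be illustrated directly: with a common multiplicative correction $[1+b(T)a^2]$ the ratio $\chi_s/\chi_u$ comes out the same at every lattice spacing. The numbers below are illustrative (continuum values loosely modelled on Table~\ref{tb.cont-qns} at $1.5T_c$, with a hypothetical common relative slope):

```python
chi_u0, chi_s0, b = 0.83, 0.55, 21.0   # illustrative continuum values and slope
ratios = []
for ntau in (4, 8, 10, 12):
    a2 = 1.0 / ntau**2                  # a^2 in units of the temporal extent
    chi_u = chi_u0 * (1.0 + b * a2)     # multiplicative cutoff correction,
    chi_s = chi_s0 * (1.0 + b * a2)     # same b(T) for every quark mass
    ratios.append(chi_s / chi_u)
print(ratios)   # identical ratio at every N_tau: the a^2 factor cancels
```

A mass-dependent slope $b(m_f,T)$ would instead leave a residual $a^2$ drift in the ratio, which is not seen in the data.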
Our continuum results for the Wroblewski parameter have been shown in
the bottom panel of Fig.\ \ref{fig.lam_s}. In view of the constancy of
$\lambda_s$ we have made the continuum extrapolations by making a
constant fit in $a^2\propto1/N_\tau^2$. Our continuum limits for
$\lambda_s$ are consistent with the previously reported \cite{gavai1}
continuum values for $T\ge1.5T_c$. Our continuum results for
$\lambda_s$ are very close to the results of \cite{gavai} for the whole
temperature range of $T_c<T\le2T_c$. Closeness of our quenched results
with the results from the dynamical simulations of \cite{gavai} suggest
that the Wroblewski parameter has practically no dependence on the mass
of the sea quarks. These observations along with the fact that
$\lambda_s$ has very mild dependence on the valence quark mass
\cite{gupta1} show that the present day lattice QCD results for the
Wroblewski parameter are very reliable. The robustness of the Wroblewski
parameter is very encouraging, especially since in the vicinity of $T_c$
the lattice results for this quantity almost coincide with the value
($\lambda_s \approx 0.43$) extracted by fitting the experimental data of
RHIC with a hadron gas fireball model \cite{cleymans}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.7]{ud_u-1.5Tc.eps}
\includegraphics[scale=0.7]{us_s-1.1Tc.eps}
\end{center}
\caption{The top panel shows the $N_\tau$ dependence of $\chi_{ud}/\chi_u$
at $1.5T_c$. The bottom panel shows the same for $\chi_{us}/\chi_s$ at
$1.1T_c$.}
\label{fig.ud_u-us_s}
\end{figure}
After examining the ratio of the diagonal QNS let us focus our attention
on the ratios of off-diagonal to diagonal QNS. Given our results for
the diagonal and off-diagonal QNS it is clear that these will have the
form--- $\chi_{ff'}(a,m_f,m_{f'},T)/\chi_{ff}(a,m_f,T)\approx
[\chi_{ff'}(0,m_f,m_{f'},T)/\chi_{ff}(0,m_f,T)][1-b(T)a^2]$. Since $b(T)$ is
positive, {\sl i.e.\/}, $\chi_{ff}$ decreases with decreasing lattice spacing, this
ratio is expected to decrease (as $\chi_{ff'}$ is negative) and move
away from zero. However, due to the smallness of these ratios, within
our numerical accuracies, we have been unable to identify any such
effect. This has been exemplified in Fig.\ \ref{fig.ud_u-us_s} where
$\chi_{ud}/\chi_u$ at $1.5T_c$ (top panel) and $\chi_{us}/\chi_s$ at $1.1T_c$
(bottom panel) have been shown.
\begin{figure}[t!]
\begin{center}
\subfigure[]{ \includegraphics[scale=0.68]{cxu-2Tc.eps} }
\subfigure[]{ \includegraphics[scale=0.68]{cxs-1.5Tc.eps} } \\
\subfigure[]{ \includegraphics[scale=0.68]{cxu-1.25Tc.eps} }
\subfigure[]{ \includegraphics[scale=0.68]{cxs-1.1Tc.eps} }
\end{center}
\caption{Lattice spacing dependence of $C_{XU}$ and $C_{XS}$ are shown
for temperatures $2T_c$ [panel (a)], $1.5T_c$ [panel (b)], $1.25T_c$
[panel (c)] and $1.1T_c$ [panel (d)] by plotting these quantities as a
function of $1/N_\tau^2$ ($\propto a^2$), for $N_\tau=4,8,10,12$. The
lines indicate the ideal gas values for these ratios.}
\label{fig.cxy}
\end{figure}
Following the main theme of this paper we now present the lattice
spacing dependence of ratios like $C_{(KL)/L}$. Two such ratios that
can directly probe the degrees of freedom in a QGP are \cite{koch, gavai}
\begin{subequations}
\begin{eqnarray}
C_{BS} &\equiv& -3C_{(BS)/S} = -3\frac{\chi_{BS}}{\chi_S} =
\frac{\chi_s +2\chi_{us}}{\chi_s} = 1 + \frac{2\chi_{us}}{\chi_s} ,
\qquad{\rm and}\qquad \\
C_{QS} &\equiv& 3C_{(QS)/S} = 3\frac{\chi_{QS}}{\chi_S} =
\frac{\chi_s - \chi_{us}}{\chi_s} = 1 - \frac{\chi_{us}}{\chi_s} .
\end{eqnarray}
\label{eq.cxs}
\end{subequations}
These quantities probe the linkages of the strangeness carrying
excitations to baryon number ($C_{BS}$) and electric charge ($C_{QS}$)
and hence give an idea about the average baryon number and the average
electric charge of all the excitations carrying the $s$ flavours. These
ratios are normalized such that for a pure quark gas, {\sl i.e.\/}, where unit
strangeness is carried by excitations having $B=-1/3$ and $Q=1/3$,
$C_{BS}=C_{QS}=1$. A value of $C_{BS}$ and $C_{QS}$ significantly
different from $1$ will indicate that the QGP phase may contain
some other degrees of freedom apart from the quasi-quarks.
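A toy counting exercise, not from the paper, makes the probe explicit. For a dilute gas of independent carriers $\chi_{KL}\propto\sum_{\rm species}\langle N\rangle\,q_K q_L$, so shifting part of the strangeness from quasi-quarks into $B=0$ meson-like bound states pulls $C_{BS}$ below $1$. The abundances and the 50/50 split are purely illustrative:

```python
# species: (mean abundance, baryon number, strangeness); values illustrative
quark_gas = [(1.0, 1/3, -1)]                    # all strangeness on s quasi-quarks
bound_gas = [(0.5, 1/3, -1), (0.5, 0, -1)]      # half carried by B=0 "mesons"

def c_bs(species):
    chi_BS = sum(n * qB * qS for n, qB, qS in species)
    chi_S = sum(n * qS * qS for n, qB, qS in species)
    return -3.0 * chi_BS / chi_S

print(c_bs(quark_gas), c_bs(bound_gas))   # 1.0 versus 0.5
```

This is the mechanism behind the suppressed values $C_{BS}\approx0.62$ and $0.66$ quoted above for the bound-state model and the hadron resonance gas.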
Similar ratios can also be formed for the light quark sector
\cite{gavai}, {\sl e.g.\/}, for the $u$ flavour the ratios
\begin{subequations}
\begin{eqnarray}
C_{BU} &\equiv& 3 C_{(BU)/U} = 3\frac{\chi_{BU}}{\chi_U} =
\frac{ \chi_u + \chi_{ud} + \chi_{us} }{\chi_u} =
1 + \frac{\chi_{ud}}{\chi_u} + \frac{\chi_{us}}{\chi_u},
\qquad{\rm and}\qquad \\
C_{QU} &\equiv& 3 C_{(QU)/U} = \frac{3\chi_{QU}}{\chi_U} =
\frac{ 2\chi_u - \chi_{ud} - \chi_{us} } {\chi_u} =
2 - \frac{\chi_{ud}}{\chi_u} - \frac{\chi_{us}}{\chi_u}
\end{eqnarray}
\label{eq.cxu}
\end{subequations}
quantify the average baryon number ($C_{BU}$) and the average
electric charge ($C_{QU}$) of all the excitations carrying $u$ quarks.
For a medium of pure quarks, {\sl i.e.\/}, where the $u$ flavours are carried by
excitations with baryon number $1/3$ and electric charge $2/3$,
$C_{BU}=1$ and $C_{QU}=2$. Similar ratios can also be formed for the
$d$ quarks \cite{gavai}.
As can be seen from eqs.\ (\ref{eq.cxs}, \ref{eq.cxu}) the lattice
spacing dependence of $C_{BS}$ {\sl etc.\/}\ are governed by the cut-off
dependence of the ratios $\chi_{ff'}/\chi_{ff}$. Since we have already
emphasised that, within our numerical accuracies, the ratios
$\chi_{ff'}/\chi_{ff}$ are almost independent of lattice spacings it is
expected that the same will also happen for the ratios $C_{(KL)/L}$. In
accordance with this expectation we have found that for temperatures
$1.1T_c\le T\le 2T_c$ these ratios are independent of lattice spacings
within $\sim 5\%$ errors, see Fig.\ \ref{fig.cxy}. Note that these
ratios are not only independent of the lattice spacings but also acquire
values which are very close to their respective ideal gas limits.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.7]{cxu.eps}
\includegraphics[scale=0.7]{cxs.eps}
\end{center}
\caption{Continuum results for $C_{XU}$ (top panel) and $C_{XS}$ (bottom
panel). The lines indicate the ideal gas values for these quantities.
See text for details.}
\label{fig.cxy-cont}
\end{figure}
In Fig.\ \ref{fig.cxy-cont} we present our continuum results for
$C_{XS}$ (bottom panel) and $C_{XU}$ (top panel), where $X=B,Q$. Since
these ratios remain almost constant with changing $1/N_\tau^2$ (see
Fig.\ \ref{fig.cxy}) we have made continuum extrapolations by fitting
our data to a constant in $1/N_\tau^2$. For the whole temperature
range of interests ($T_c<T\le 2T_c$) these ratios have values which are
compatible with that for a gas of pure quarks. This is exactly what has
been found in \cite{gavai} using partially quenched simulations with
smaller lattices. For the $d$ quarks also we have found similar results.
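As a concrete check of how close these continuum values sit to the free-quark limit, one can insert the continuum QNS of Table~\ref{tb.cont-qns} at $1.1T_c$ into eq.~(\ref{eq.cxs}):

```python
chi_s, chi_us = 0.33, -6e-3    # continuum values at 1.1 T_c from Table II
C_BS = 1.0 + 2.0 * chi_us / chi_s
C_QS = 1.0 - chi_us / chi_s
print(C_BS, C_QS)   # both within a few per cent of the free-quark value 1
```

Even at this lowest temperature the deviations from unity are at the few per cent level, far from the bound-state expectations quoted in the Introduction.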
\section{Summary and discussion} \label{sc.discussion}
In this paper we have made a careful investigation of the continuum
limit of different ratios of off-diagonal to diagonal susceptibilities
in quenched QCD using lattices with large temporal extents
($N_\tau=12,10,8~ {\rm and}~ 4$), for a very interesting range of
temperature ($T_c<T\le2T_c$) and for vanishing chemical potential. We have
found that for this whole range of temperature the lattice results for
the ratios like $C_{BS}$, $C_{QS}$ {\sl etc.\/}\ are robust, {\sl i.e.\/}, they are
almost independent (within $\sim5\%$) of the lattice spacing. We have
also arrived at the same conclusion for the Wroblewski parameter, which
is of interest to the experiments at RHIC and the Large Hadron Collider
(LHC).
At this point, it is good to have some idea about how unquenching may
change our results. It has been found \cite{gavai4} that in the
temperature range $T\ge1.25T_c$ there is only $5-10\%$ change in the QNS
in going from quenched to $N_f=2$ dynamical QCD. On the other hand,
since the order of the phase transition depends strongly on the number
of dynamical flavours the change in QNS is likely to be much larger in
the vicinity of the transition temperature for the quenched theory which
has a first order phase transition. Though this may be true for the
individual QNS, their ratios may have very mild dependence on the sea
quark content of the theory. Given the good compatibility of our
results of $C_{BS}$, $C_{QS}$ {\sl etc.\/}\ with the results of \cite{gavai} it
is clear that indeed these ratios have very mild dependence on the sea
quark content of the theory. It is also known \cite{gavai1} that for
bare valence quark masses of $m_q/T_c\le0.1$ the dependence of the QNS on
the valence quark mass is very small. Hence our results show that the
ratios like $C_{(KL)/L}$ are robust not only in the sense that they do
not depend on the lattice spacings but also in that they have very mild
dependence on the quark masses.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.7]{vol-dep.eps}
\end{center}
\caption{Dependence of the ratio $\chi_{us}/\chi_s$ on the aspect ratio,
shown for $N_\tau=4$ at temperature $1.1T_c$.}
\label{fig.vol-dep}
\end{figure}
All the results presented in this paper are for spatial lattice sizes
$N_s=2N_\tau+2$, {\sl i.e.\/}, for aspect ratios $N_s/N_\tau=2.5-2.17$. In view
of the fact that quenched QCD has a first order phase transition it is
important to have some idea about the volume dependence of our
results, especially in the vicinity of the transition temperature $T_c$.
To check this dependence we have performed simulations using lattices
having aspect ratios $N_s/N_\tau=2.5-5$, for our smallest temporal
lattice $N_\tau=4$ and at temperature $1.1T_c$. In these simulations we
have not found any significant volume dependence of any of the quantities
presented in this paper. As an illustration, in Fig.\
\ref{fig.vol-dep}, we have shown the dependence of $\chi_{us}/\chi_s$ on the
aspect ratio, for $N_\tau=4$ at $1.1T_c$. The volume dependence is
expected to be even smaller as one goes further away from first order
phase transition point. Also the agreement of our results with those of
\cite{gavai}, where an aspect ratio of $4$ has been used, shows that
these ratios have almost no volume dependence for $N_s\ge N_\tau+2$.
While the closeness of $C_{XU}$ and $C_{XS}$ ($X=B,Q$) to their
respective ideal gas values does support the notion of quasi-particle like
excitations in QGP, a significant deviation of these ratios from their
ideal gas values neither rules out the quasi-particle picture nor
confirms the existence of the bound states proposed in \cite{shuryak}.
Large contributions from the chemical potential dependence of the
quasi-particle masses may lead to significant deviation of these ratios,
especially in the vicinity of $T_c$. It has already been pointed out
\cite{bluhm,liao} that, near $T_c$, the chemical potential dependence of
the quasi-particle masses becomes crucial for the baryonic
susceptibilities.
Nevertheless, it may be interesting to compare our results with the
predictions of the bound state model of \cite{shuryak}. Based on the
model of \cite{shuryak} (and assuming that the mass formulae given in
\cite{shuryak} hold right down to $T_c$) the predicted values of
$C_{BS}$ are approximately $0.62$ at $1.5T_c$ \cite{koch}, $0.11$ at
$1.25T_c$ and almost zero at $1.1T_c$ \cite{ack-majumder}. Clearly, as
can be seen from Fig.\ \ref{fig.cxy-cont} (bottom panel), these values
are very much different from our continuum results. However, it has been
argued in \cite{liao} that apart from all the bound states mentioned in
\cite{shuryak}, baryon like bound states may also exist in QGP. These
baryons make large contributions to the baryonic susceptibilities,
especially close to $T_c$ \cite{liao}. Taking into account the contributions
from the strange baryons may increase the value of $C_{BS}$. In
\cite{liao} it has also been argued that, for two light flavours, if one
considers the contributions of the baryons only, then close to $T_c$ the
ratio of the second-order isospin susceptibility ($d_2^I$) to the second-order
baryonic susceptibility ($d_2$) is
$d^I_2/d_2=(\chi_u-\chi_{ud})/(\chi_u+\chi_{ud})=0.467$. Clearly this is
inconsistent with our results, since a value of $d^I_2/d_2=0.467$ gives a
positive $\chi_{ud}/\chi_u$ ($=0.363$), whereas the lattice results for
$\chi_{ud}/\chi_u$ are negative and much smaller in magnitude. This suggests
that the contributions of the mesons (and possibly also of the quarks,
diquarks and $qg$-states) are definitely important in the isospin
susceptibility $d^I_2$. If one takes into account the contributions
of the mesons (pions and rhos) and assumes that the Boltzmann weights of
the mesons are equal to those of the baryons, one gets a lower bound for
$d^I_2/d_2$, namely $d^I_2/d_2 \ge 0.644$ \cite{ack-liao}. But this
lower bound gives $\chi_{ud}/\chi_u=0.217$, which is still very far from our
results. Moreover, very recently it has been argued \cite{majumder} that
one can carefully tune the densities of the baryon and meson like bound
states in the model of Refs.\ \cite{shuryak,liao} to reproduce the
lattice results for off-diagonal QNS. But even those carefully tuned
values fail to reproduce \cite{majumder} the lattice results for higher
order susceptibilities. In view of all this, the lattice results of
\cite{gavai} favour a quasi-particle like picture of QGP, as opposed to
the bound state model of \cite{shuryak,liao}. The results of this paper
show that these lattice results are really robust in the sense that they
have very mild dependence on the lattice spacing and sea quark content
of the theory.
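The conversions between $d^I_2/d_2$ and $\chi_{ud}/\chi_u$ quoted in the discussion follow from inverting $R=(1-r)/(1+r)$ with $r=\chi_{ud}/\chi_u$, and can be verified numerically:

```python
def chi_ratio_from_isospin(R):
    """Invert R = d^I_2/d_2 = (chi_u - chi_ud)/(chi_u + chi_ud)
    to obtain r = chi_ud/chi_u = (1 - R)/(1 + R)."""
    return (1.0 - R) / (1.0 + R)

# Baryons-only prediction near T_c and the meson-inclusive lower bound
r_baryons_only = chi_ratio_from_isospin(0.467)  # about 0.363
r_lower_bound = chi_ratio_from_isospin(0.644)   # about 0.217
```

Both values confirm the numbers quoted above, and both are positive, in contrast to the negative lattice results for $\chi_{ud}/\chi_u$.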
\begin{acknowledgments}
The author is grateful to Rajiv Gavai for his constant encouragement,
many illuminating discussions and careful reading of the manuscript. The
author would also like to thank Sourendu Gupta for many useful comments
and discussions. Part of this work was done during a visit to ECT*,
Trento. The financial support from the Doctoral Training
Programme of ECT*, Trento is gratefully acknowledged.
\end{acknowledgments}
\section{Introduction}
There has been a growing interest in laser-driven pulsed neutron sources, for their wide applications in fields like fundamental physics~\cite{kappeler_nuclear_2006}, energy~\cite{perkins_investigation_2000}, security~\cite{sowerby_recent_2007}, and medical science~\cite{wittig_boron_2008}.
One mechanism to generate ultra-short bursts of fast neutrons is through interactions of laser-produced light ion beams with solid targets~\cite{roth-bright-2013, davis_neutron_2010, yang_neutron_2004, jung_characterization_2013, lancaster_characterization_2004,higginson_laser_2010, storm_fast_2013, mirfayzi_calibration_2015,kar_beamed_2016}.
Neutrons can also be generated through nuclear fusions in laser-produced plasmas~\cite{ditmire_nuclear_1999, bang_experimental_2013, zhao_novel_2016} or $(\gamma,\mathrm n)$ reactions~\cite{Photonuclear-PRL2000, pomerantz-ultrashort-2014}.
Diagnosing pulsed neutrons is not only a prerequisite for optimizing these neutron sources for applications, but also an essential tool for understanding fundamental physics in plasmas~\cite{alvarez_laser_2014, bang_calibration_2012}.
For example, in inertial confinement fusion (ICF) studies, plasma temperature and density distribution could be retrieved through the measurement of neutron yield and energy spectrum~\cite{cable_neutron_1987, kodama_fast_2001}.
Several techniques have been adopted to diagnose fast pulsed neutrons, including neutron active analysis~\cite{landoas_absolute_2011},
neutron track detector~\cite{oda_application_1991},
thermal neutron gas counter~\cite{moreno_system_2008}, along with neutron TOF~\cite{glebov_national_2010}, etc.
In neutron active analysis, foils made from silver, indium, or copper are commonly employed for absolute neutron dosimetry~\cite{slaughter_highly_1979}.
However, due to its low detection efficiency, this method requires a minimum neutron yield of $10^5$ per burst~\cite{tarifeno-saldivia_calibration_2014}.
The same problem is encountered by nuclear track detectors like CR-39~\cite{frenje_absolute_2002, kar_beamed_2016, bang_calibration_2012}.
Gas neutron counters (filled with $^3\mathrm{He}$ and $\mathrm{BF}_3$ etc.)
usually employ external moderators to convert fast neutrons to slow neutrons.
The neutron-related signals are spread over a time span of hundreds of microseconds as the fast neutrons slow down.
Due to the gas detector's large pulse width (microseconds) and long dead time (tens of microseconds), it can have serious pile-up problems at high counting rates.
The TOF technique plays a major role in neutron spectroscopy~\cite{vlad-time-1984}.
A TOF system usually consists of scintillators and fast photomultiplier tubes (PMTs), and can achieve high efficiency as well as a good temporal response.
Two serious issues emerge when the TOF technique is employed in ultra-intense laser-plasma interactions, especially for solid targets.
Firstly, laser-induced electromagnetic pulses (EMP) and gamma-rays
may saturate the PMT and blind the detector for over hundreds of nanoseconds,
making it difficult to recover neutron TOF signals within this temporal window.
To reduce the original gamma shower, lead shielding is required to cover the scintillator detectors~\cite{roth-bright-2013}, which inevitably decreases the sensitivity.
Most neutrons generated from laser-induced nuclear reactions are fast neutrons, which have an energy of hundreds of keV or higher.
These fast neutrons arrive at the detectors in hundreds of nanoseconds, when the PMT may not yet have recovered its sensitivity from the initial gamma-flash.
Secondly, the pile-up effect caused by multiple neutrons at nearly the same time might drain the PMT voltage supply even though the detectors are operated in current mode~\cite{meigo_development_2000}.
In this paper, we report a long-temporal-window TOF measurement.
Two temporal structures are identified and correlated with the neutron production: fast neutrons and delayed neutron-capture gamma-rays.
We found a linear correlation between the fast neutron yield and the gamma-ray number, which provides a new way to diagnose laser-produced fast neutrons.
The basic idea is similar to that of gas counters embedded in neutron moderators, but the pulse width of a scintillator is much smaller, giving a higher temporal resolution and consequently a higher counting-rate limit.
\section{Experimental Setup}
The experiment was performed utilizing the PW laser system at the Shenguang II (SG-II) facility, in the National Laboratory for High Power Lasers and Physics, Shanghai, China.
The laser wavelength was 1053 nm and pulse duration was 0.7 ps at full width at half maximum (FWHM). The beam was focused onto the target with a spot size of 70 $\mu$m at an angle of $23.8^\circ$. The energy on target was 150 J in this experiment,
giving rise to a peak intensity of 2.6$\times 10^{18}$ W/cm$^2$.
A pitcher-catcher scenario of target was adopted for neutron generation.
The pitcher is a stainless steel (SS) foil with a thickness of 30 $\mu$m,
and the catcher is a 0.9 mm-thick LiF with transverse dimensions of $3\times3~\mathrm{mm}^2$, located 3 mm behind the pitcher.
The schematic is shown in Fig.~\ref{fig.setup}(a).
A single-layer 30 $\mu$m-thick SS was also employed for reference.
The spatial-intensity distributions of the accelerated protons were measured by stacks of radiochromic film (RCF). The energy spectra of the sampled beam were measured by a Thomson parabola spectrometer (TPS) with an acceptance angle of $1.57\times 10^{-6}$ sr.
Calibrated image plate (IP) was used as detector in the TPS.
\begin{figure}
\centering
\includegraphics[width=0.21\textwidth]{./figure1a.pdf}
\hspace{0em}
\includegraphics[width=0.26\textwidth]{./figure1b.eps} \\
\hspace{-2em}
\includegraphics[width=0.2\textwidth,trim=0 0 0 0]{./figure1c.pdf}
\hspace{0.5em}
\includegraphics[width=0.2\textwidth,trim=0 0 0 0]{./figure1d.pdf}
\caption{(a) Schematic of the experimental setup. The laser beam was focused on a 30 $\mu$m-thick stainless steel (SS) foil to generate protons by the TNSA mechanism. Accelerated protons hit a 0.9 mm-thick LiF foil 3 mm away and initiate the $\mathrm{^7Li(p,n)^7Be}$ reactions.
(b) Proton energy spectra measured from the Thomson parabola spectrometer (red line) and RCF stack (blue dots).
(c)-(d) Selected RCF images showing the spatial distribution of 5.5 MeV and 10 MeV protons, respectively. The grid structures were shadows of a mesh inserted between proton reference target and the RCF stack.}
\label{fig.setup}
\end{figure}
Two types of TOF detectors equipped with different scintillators were used for neutron detection. The first type is an EJ-301 liquid scintillator with dimensions of $12.5\times \pi \times (12.5/2)^2$ cm$^3$.
The EJ-301 has excellent pulse shape discrimination (PSD) properties which allow neutrons to be discriminated from gamma-rays.
Six of these scintillators were placed around the target along different directions and shielded with lead brick houses of either 10 cm- or 5 cm-thick wall, respectively.
The second type is two BC-420 plastic scintillators ($10\times10\times40$ cm$^3$), which were also shielded in 5 cm-thick lead. The decay time of BC-420 is 1.5 ns, smaller than that of EJ-301 by a factor of two, giving rise to a higher temporal resolution.
All the scintillators were coupled with fast photomultiplier tubes (PMTs),
and the signals were recorded with oscilloscopes of 1 GHz bandwidth.
\section{Experimental Results and Discussion}
The energy spectra and spatial-intensity distributions of protons from the reference target were measured.
In Fig.~\ref{fig.setup}(b), the energy spectrum of protons sampled in a small solid angle by TPS is shown (red line), along with that of integrated whole beam recorded by the RCF stack (blue dots).
Both of them have nearly exponential distributions and the maximum proton energy is about 21 MeV.
Figure~\ref{fig.setup}(c)-(d) show two example spatial-intensity distributions of protons from the $\mathrm{5^{th}}$ and $\mathrm{13^{th}}$ RCF layers, corresponding to proton energies of 5.5 MeV and 10 MeV, respectively.
The beams are well collimated along the target rear normal direction.
They have very high quality, as seen from the shadowgraph of the crossed meshes.
Further data analyses show that the transverse emittance of the beams is less than 0.1 $\pi\, \mathrm{mm}\cdot\mathrm{mrad}$ and the virtual source size is less than 8 $\mu\mathrm m$.
In combination with the spectral shape, it suggests the dominant proton acceleration mechanism is the so-called target normal sheath acceleration (TNSA)~\cite{wilks-energetic-2001}.
These protons then impinge onto the LiF target and induce nuclear reactions
$\mathrm{p + {}^7Li \rightarrow {}^7Be + n}$.
It has an energy threshold of 1.88 MeV and a notable cross-section up to 8 MeV.
The cross-section of the reaction
$\mathrm{p + {}^6Li \rightarrow {}^6Be + n}$
for $^6\mathrm{Li}$ ($7.5\%$ natural abundance), another stable isotope of lithium, is too low to contribute to neutron generation.
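As a consistency check on the 1.88 MeV threshold quoted above, the $Q$-value of $\mathrm{^7Li(p,n)^7Be}$ can be computed from standard mass-table values; the masses and the nonrelativistic threshold formula below are textbook inputs, not taken from this experiment:

```python
U = 931.494  # MeV per atomic mass unit

# Atomic masses in u (standard mass-table values)
m_p, m_n = 1.007825, 1.008665
m_li7, m_be7 = 7.016003, 7.016929

# Endothermic reaction: Q < 0
Q = (m_p + m_li7 - m_be7 - m_n) * U

# Nonrelativistic lab-frame threshold for a target at rest
E_th = -Q * (m_p + m_li7) / m_li7
```

This reproduces a $Q$-value of about $-1.64$ MeV and the 1.88 MeV lab threshold.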
Despite the differences in location and high-voltage setting for all the detectors, similar signal patterns were found.
A good shot-to-shot repeatability was also observed for similar laser and target parameters.
A typical TOF signal recorded by one of the scintillators is shown in Fig.~\ref{fig.tof}.
Figure~\ref{fig.tof}(a) represents the overall signal ranging from 0 to 10 ms,
while Fig.~\ref{fig.tof}(b) is a section of Fig.~\ref{fig.tof}(a) ranging from 0 to 16 $\mu$s.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{./figure2.pdf}
\caption{Example TOF signals.
(a) Overview in a time window of $0-10$ ms. Note the intense peaks within 1 ms.
(b) Detailed view in a time window of $0-16$ $\mu\mathrm s$.
The inset of Fig.~\ref{fig.tof}(b) is the zoom-in of one single peak, which has a half-width of several tens of nanoseconds.
(c) Waveform from a reference shot without a LiF catcher.
}
\label{fig.tof}
\end{figure}
It can be seen that the temporal waveform has four characteristic structures, labeled from (i) to (iv) in Fig.~\ref{fig.tof}(b):
\begin{enumerate}[(i)]
\item \label{itm:first} The signal dip at $t=20$ ns with a width of 0.1 $\mu$s.
It accounts for the copious bremsstrahlung photons generated at the time $t=0$,
i.e. the moment when the laser hits the target.
Because the prompt radiation is so intense, the signal overran the input voltage range of the oscilloscope.
This dip could be reduced by the extensive lead shielding;
\item \label{itm:second} The signal dip at $t=0.3$ $\mu$s with a width of 0.2 $\mu$s. This is the signal of fast neutrons with energies of $1-5$ MeV, which are produced from the $\mathrm{^7Li(p,n)^7Be}$ reactions;
\item \label{itm:third} The broad dip from 4 $\mu$s to 8 $\mu$s. It overlaps with the leading part of structure (\ref{itm:fourth}) and is attributed to a temporary malfunction of the PMT.
Due to the strong gamma photon shower at $t=20$ ns, the current in the PMT circuit (especially between the last dynode and the anode) is very high, which makes the voltage between the electrodes drop~\cite{PMT-2002-flyckt}.
This broad dip structure is caused by the voltage recovery, and the dip width is determined by the electrode restore time;
\item \label{itm:fourth} The intense narrow discrete peaks from approximately 4 $\mu$s to 10 ms. The typical width is 10 ns.
One zoom-in example of these peaks is shown in the inset of Fig.~\ref{fig.tof}(b).
To the best of our knowledge, this type of signal has not been reported before; we focus on it in the following.
\end{enumerate}
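The mapping between arrival time and neutron kinetic energy in structure (\ref{itm:second}) follows from relativistic kinematics. The 5 m flight path used below is a hypothetical value chosen for illustration, since the detector distances are not stated here:

```python
import math

C = 299792458.0   # speed of light, m/s
MN_C2 = 939.565   # neutron rest energy, MeV

def neutron_energy_from_tof(distance_m, time_s):
    """Relativistic kinetic energy of a neutron from its flight time."""
    beta = distance_m / (time_s * C)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return MN_C2 * (gamma - 1.0)

# With a hypothetical 5 m flight path, a 0.3 us arrival time corresponds
# to a neutron kinetic energy of roughly 1.5 MeV
E_mid = neutron_energy_from_tof(5.0, 0.3e-6)
```

For such a path, arrival times of a few hundred nanoseconds map onto the MeV energy range, consistent with the width and position of the fast-neutron dip.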
Signals of two PMTs coupled to the same plastic scintillator were compared.
The results show that the timing and amplitude of the peaks from the two PMTs are well matched.
This indicates that their signals are caused by incoming radiation events rather than by false signals due to thermal noise in the PMT circuits.
When the reference target was used, i.e. a shot without a LiF catcher, there remained a much smaller neutron background originating from $(\gamma,\mathrm n)$ reactions. The amplitude of structure (\ref{itm:second}) is reduced, and peaks in structure (\ref{itm:fourth}) are scarce, while structures (\ref{itm:first}) and (\ref{itm:third}) remain almost the same, as shown in Fig.~\ref{fig.tof}(c).
This suggests that structures (\ref{itm:second}) and (\ref{itm:fourth}) are correlated and that (\ref{itm:fourth}) is due to neutron generation.
A pulse shape discrimination (PSD) procedure was conducted to distinguish between gamma-rays and neutrons by analyzing the difference in their characteristic pulse shapes~\cite{brooks_scintillation_1959}.
Pulses in structure (\ref{itm:fourth}) were analyzed by defining a PSD parameter $P={Q_{\mathrm{tail}}}/{Q_{\mathrm{total}}}$, where $Q_{\mathrm{total}}$ and $Q_{\mathrm{tail}}$ are the charge integration over the whole pulse and the tail part only, respectively~\cite{tomanin_characterization_2014}.
The results ($P$ versus $Q_{\mathrm{total}}$) are shown in Fig.~\ref{fig.psd} as red crosses.
To determine the respective parameter regimes for neutrons and gamma-rays, the detector was calibrated with a $^{252}\mathrm{Cf}$ radiation source, with the same high-voltage setting as in the experiment.
$^{252}\mathrm{Cf}$ is a neutron emitter with strong gamma background, so that both neutrons and gamma-rays can be recorded and analyzed.
The results are shown in black dots in Fig.~\ref{fig.psd}, in which one can see neutrons and gamma-rays are well distinguished with only a minor portion of overlap.
It is clear that most of the pulses from structure (\ref{itm:fourth}) are distributed in the gamma region.
Therefore, we conclude that these pulses are predominately attributed to gamma-rays.
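The charge-integration PSD described above can be sketched in a few lines; the toy pulse and gate offsets below are illustrative assumptions, not the digitizer settings used in the experiment:

```python
import numpy as np

def psd_parameter(pulse, t_peak, tail_start=10, gate_end=100):
    """P = Q_tail / Q_total via charge integration over a digitized pulse.
    tail_start and gate_end are sample offsets from the peak (hypothetical
    gate settings; real values depend on scintillator and digitizer)."""
    total = pulse[t_peak - 5 : t_peak + gate_end].sum()
    tail = pulse[t_peak + tail_start : t_peak + gate_end].sum()
    return tail / total

# Toy pulse: fast component plus slow tail (neutron-like events in an
# organic scintillator carry a larger slow component)
t = np.arange(200)
peak = 20
pulse = np.where(
    t >= peak,
    np.exp(-(t - peak) / 3.0) + 0.1 * np.exp(-(t - peak) / 40.0),
    0.0,
)
P = psd_parameter(pulse, peak)
```

Events with a larger $P$ fall in the neutron region of Fig.~\ref{fig.psd}; gamma-like pulses, which lack the slow component, give a smaller $P$.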
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{./figure3.pdf}
\caption{Pulse shape discrimination results. Pulses from a $^{252}\mathrm{Cf}$ source (black dots) are separated into two groups representing neutrons and gamma-rays respectively. Nearly all pulses from the experiment (red crosses) fall in the gamma region. The dashed lines are drawn to guide the eye.
}
\label{fig.psd}
\end{figure}
A $^{137}$Cs and a $^{60}$Co gamma source were further used to calibrate the detector, enabling the pulse amplitude to be converted into gamma-ray energy by determining the positions of their Compton edges in the response functions~\cite{tomanin_characterization_2014}.
A peak-seeking algorithm was carried out to reconstruct the temporal and amplitude distributions of these delayed gamma-rays.
The result is shown in Fig.~\ref{fig.tof.2d}(a).
The amplitude distribution is shown in Fig.~\ref{fig.tof.2d}(b).
Gamma-ray signals with an amplitude greater than 0.82 V pile up at the maximum (corresponding to a gamma-ray energy of 6.2 MeV) due to the dynamic range of the oscilloscope.
The temporal spectrum (Fig.~\ref{fig.tof.2d}(c)) shows a double exponential decay feature, and can be fitted by function (red dashed line):
\begin{equation}
N=(1174\pm32)\cdot e^{-t/(201\pm11)}+(169\pm23)\cdot e^{-t/(1609\pm192)}.
\end{equation}
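Taking the central values of the fit, and assuming the time variable is in microseconds (the scale of Fig.~\ref{fig.tof.2d}(c); the printed fit omits units), the two components can be compared directly. The fast component dominates at early times, with a crossover at a few hundred microseconds:

```python
import math

def fast(t_us):
    """Fast component of the fitted capture-gamma time spectrum."""
    return 1174.0 * math.exp(-t_us / 201.0)

def slow(t_us):
    """Slow component of the fitted capture-gamma time spectrum."""
    return 169.0 * math.exp(-t_us / 1609.0)

def n_gamma(t_us):
    """Two-component exponential fit (central values of the parameters)."""
    return fast(t_us) + slow(t_us)

# Crossover time where the two components contribute equally:
# 1174 exp(-t/201) = 169 exp(-t/1609)
t_cross = math.log(1174.0 / 169.0) / (1.0 / 201.0 - 1.0 / 1609.0)
```

With these central values the crossover lies near 450 $\mu$s, after which the slow component sustains the long tail of discrete peaks extending to millisecond times.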
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{./figure4.pdf}
\caption{
Temporal-amplitude distribution of the gamma-ray signals.
(a) Scatter plot of gamma-ray events.
(b) Amplitude projection of the scatter plot. The maximum value ($\sim 0.82$ V) is limited by the dynamic range of the oscilloscope, corresponding to a gamma-ray energy of 6.2 MeV.
(c) Temporal projection of the scatter plot.
The experimental data is fitted by a two-component exponential decay function
and shown as a red dashed line.
The Monte-Carlo simulation is shown as a blue solid line.
}
\label{fig.tof.2d}
\end{figure}
It is theoretically possible that these gamma-rays are from radioactive isotopes or isomers directly created by the high-intensity laser.
However, after searching the chart of nuclides for a candidate,
we could not find a known isotope with a half-life of several tens of microseconds that fits our experimental conditions.
On the other hand, fast neutrons generated by $\mathrm{{}^7Li(p,n){}^7Be}$ reactions interact with the target chamber, air, as well as with the supporting infrastructures outside the chamber.
After tens of scattering events, a neutron reaches thermal equilibrium with the atoms of the medium.
As the neutron energy decreases, its capture cross-section increases.
Considering that the lifetime of a free neutron is approximately 10 minutes, almost all neutrons end up being captured.
The compound nucleus that has absorbed a neutron subsequently decays to its ground state with prompt emission of one or more characteristic gamma-rays~\cite{foderaro_elements_1971}.
The process could be formulated as
${}^A\mathrm{X}_Z + \mathrm{n} \rightarrow {}^{A+1}\mathrm{X}^{\ast}_Z \rightarrow {}^{A+1}\mathrm{X}_Z + \gamma$.
Neutron capture cross-sections for some isotopes present in our experimental environment are listed in Table~\ref{tab.n.CS}.
One of them
$\mathrm{{}^{56}Fe+n \rightarrow {}^{57}Fe^{\ast}}$
has a fairly large capture cross-section.
According to the evaluated nuclear database~\cite{reedy_prompt_2002}, a considerable portion of the gamma-rays from ${}^{57}\mathrm{Fe}^{\ast}$ decay have energies of $6-8$ MeV.
This is in agreement with our observation in Fig.~\ref{fig.tof.2d}(b).
The reaction
$\mathrm{{}^{1}H+n \rightarrow {}^{2}H^{\ast}}$
is another candidate, since hydrogen also has a large neutron capture cross-section
and is widely distributed in the concrete infrastructure of the experimental hall.
However, it makes no contribution to the higher gamma-ray energies,
since the only gamma-ray line emitted by hydrogen neutron capture has an energy of 2.2 MeV~\cite{reedy_prompt_2002}.
\begin{table}
\caption{\label{tab.n.CS}Thermal neutron absorption cross sections $\sigma_n$
and corresponding maximum gamma-ray energy $E_\gamma^{\mathrm{max}}$
for isotopes present in relatively large abundance in our experimental environment.
Taken from Ref.~\cite{reedy_prompt_2002}.}
\begin{tabular}{r d d}
\toprule
Isotope &\multicolumn{1}{c}{$\sigma_n\ (\mathrm b)$} &\multicolumn{1}{c}{$E_\gamma^{\mathrm{max}}\ \mathrm{(MeV)}$} \\
\colrule
$^1$H & 0.332 & 2.22\\
$^{12}$C & 0.0035 & 4.95\\
$^{14}$N & 0.080 & 10.83\\
$^{16}$O & 0.00019 & 4.14\\
$^{27}$Al & 0.230 & 7.73\\
$^{28}$Si & 0.17 & 8.47\\
$^{56}$Fe & 2.6 & 7.65\\
$^{63}$Cu & 4.5 & 7.92\\
\botrule
\end{tabular}
\end{table}
In the case of a point neutron source in an infinite moderator,
the temporal profile of the neutron captures can be described by exponential functions~\cite{bowden_note_2012},
whose slopes depend on the material and initial energy of the neutrons.
In a real experiment, the geometry and materials can be very complicated.
Therefore, we take a Monte-Carlo (MC) toolkit Geant4~\cite{agostinelli_geant4simulation_2003}
to simulate the neutron transport and moderation under our experimental conditions.
The geometry for simulation includes the main structure of the target chamber with an isotropic point neutron source at its center, diagnostic apparatus, along with large objects inside and outside the chamber, ceiling, floor, and walls of the experimental hall.
Physics processes involved in neutron-matter interactions, including elastic and inelastic scattering and neutron capture, are considered.
The simulated temporal distribution of neutron captures (Fig.~\ref{fig.tof.2d}(c)) agrees with the first decay component of the experimental data but overestimates the second.
We suspect that the disagreement may be a result of the simplification in MC geometry modeling.
To quantitatively investigate the correlation between the number of radiative-capture gamma-rays and the fast neutron yield,
we conducted another neutron-generation experiment.
In this particular experiment, two sets of laser beams were used,
each delivering 4$\times$250 J energy onto two targets separated by 4.4 mm.
Each target has a $0.5\times 0.5\ \mathrm{mm}^2$ copper base and was coated with a 10 $\mu$m-thick deuterated hydrocarbon (CD) layer.
Each laser had a typical pulse width of 1 ns and a focal spot diameter of 150 $\mu$m.
Ablated plasmas expand and collide with each other in the middle area between the targets,
inducing $\mathrm{D(d,n){}^3He}$ reactions, and generating monoenergetic neutrons with an energy of 2.4 MeV.
Similar liquid and plastic scintillation detectors were used for neutron detection.
A detailed description of the experiment can be found in Ref.~\cite{zhao_neutron_2015}.
Compared with picosecond-duration laser drivers, the backgrounds caused by bremsstrahlung gamma-ray bursts and EMP noise with nanosecond pulse lasers are much smaller.
Thus the neutron yields can be determined more accurately.
The dependence of the number of delayed gamma-rays $N_{\gamma}$ and the neutron yield $N_{\mathrm n}$ is shown in Fig.~\ref{fig.n-gamma-ratio}.
One can find a linear relationship between $N_\gamma$ and $N_{\mathrm n}$.
The linear-fit correlation coefficient is $R=0.951$.
A slight underestimation of $N_\gamma$ for small neutron yields was observed.
This may be due to insufficient statistics of the gamma-ray numbers and the large uncertainty of the neutron yields.
With these results, we conclude that $N_\gamma$ can be seen as a prompt and convenient parameter for the estimation of the fast neutron yield.
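The correlation analysis reduces to a least-squares line and a Pearson coefficient. A sketch on synthetic $(N_{\mathrm n}, N_\gamma)$ pairs, invented for illustration (the measured values are those of Fig.~\ref{fig.n-gamma-ratio}):

```python
import numpy as np

def linear_correlation(x, y):
    """Least-squares slope/intercept and Pearson correlation coefficient."""
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, intercept, r

# Synthetic (neutron yield, gamma-ray count) pairs with a linear trend
n_yield = np.array([1e7, 2e7, 4e7, 6e7, 8e7])
n_gamma_count = np.array([120.0, 260.0, 480.0, 630.0, 850.0])
slope, intercept, r = linear_correlation(n_yield, n_gamma_count)
```

A correlation coefficient close to unity, as found in the experiment, justifies using the observed gamma-ray number as a proxy for the neutron yield.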
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./figure5.eps}
\caption{
Observed gamma-ray number versus neutron yield. The dashed line is a linear fit to the experimental data.
}
\label{fig.n-gamma-ratio}
\end{figure}
\section{Summary}
In summary, delayed discrete gamma-rays with energies of MeV level were observed in a time window from several microseconds to milliseconds using a TOF diagnostic.
The number of gamma-rays has a linear correlation with the fast neutron yield on a shot-by-shot basis.
A diagnostic of pulsed fast neutrons can benefit from measuring the thermal-neutron-related gamma-rays with commonly used TOF systems.
Fast neutron signals from a scintillation detector in high-intensity laser experiments are often obscured by bremsstrahlung X-rays and EMP interference,
while the delayed (n,~$\gamma$) signals provide a quick reference on whether, and how many, neutrons are generated,
because these gamma signals are temporally separated from the initial X-ray and EMP bursts.
A high sensitivity and signal-to-noise ratio are expected by detecting both fast- and thermal-neutron-related signals with the same detector.
This method can also be used in other fundamental physics studies under extreme conditions which cannot be provided by conventional accelerators.
For instance, in a hot and dense plasma produced by high-intensity lasers,
new nuclear excitation states, isomers, and other exotic atomic states can be created
but are difficult to detect~\cite{savelev-direct-2017, andreev_excitation_2000}.
The method shown here can detect those states with lifetimes in the range from about 100 ns to 1 s.
\section{Acknowledgements}
We acknowledge financial supports from 985-III grant from Shanghai Jiao Tong University, National Basic Research Program of China (Grant Nos. 2013CBA01502 and 2010CB833005), National Natural Science Foundation of China (Grant Nos. 11375114 and 11205100), Doctoral Fund of Ministry of Education of China (Grant No. 20120073110065), Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos. XDB16010200 and XDB07030300), and Shanghai Municipal Science and Technology Commission (Grant No. 11DZ2260700).
The authors thank the staff of the SG-II laser facility for operating the laser and target area.
\section{Introduction}
Since Hawking radiation was first discovered \cite{swh74,swh75}, its
inconsistency with quantum theory has been widely noticed.
Irrespective of what initial state a black hole starts with before
collapsing, it evolves eventually into a thermal state after being
completely exhausted into emitted radiations. Such a scenario
violates the principle of unitarity as required for quantum
mechanics and brings a serious challenge to the foundation of modern
physics. Many groups
\cite{swh76,swh05,acn87,jp92,kw89,jdb93,hm04,sl06,bp07} have
attempted to address this puzzle of the so-called paradox of black
hole information loss. None has been successful. Most discussions
treat Hawking radiation as thermally distributed without considering
energy conservation or self-gravitation effect. Recently, Parikh and
Wilczek point out that Hawking radiation is completely non-thermal
when energy conservation is enforced \cite{pw00}. Making use of
their result, we discover the existence of non-trivial correlations
amongst Hawking radiations. A sequence of correlated radiation can
transmit encoded information. By carefully counting the entropy
embedded in the sequentially emitted (tunneled out)
radiations/particles, we show that the process of Hawking radiation
is entropy conserved, contrary to entropy growth by the thermal
spectrum \cite{ZW82}. While information is carried away by Hawking
radiation, the total entropy of the black hole and the radiation is
conserved. Our work thus implies that the black hole evaporation
process, whereby Hawking radiation is emitted, is unitary.
In the past few decades, several approaches have been suggested for
resolving the paradox of black hole information loss. Hawking
initially proposed to accept information loss when quantum theory is
unified with gravity \cite{swh76}. He has since renounced this
proposal and admits \textquotedblleft elementary quantum gravity
interactions do not lose information or quantum
coherence\textquotedblright\ \cite{swh05}. A second approach focuses
on the black hole remnant \cite{acn87}, stemming from the idea of
correlation or entanglement of the radiation and the black hole. It
also fails because of the infinite degeneracy, which is hard to
reconcile with causality and unitarity \cite{jp92}. A third idea is
related to
\textquotedblleft quantum hair\textquotedblright\ on a black hole \cite%
{kw89} that is found to be capable of storing more information than one expects.
To resolve the paradox with this approach,
a projection onto local
quantum fields of low energies is required, and no one knows how this can be done.
A fourth approach is from Bekenstein \cite{jdb93} who suggests that
if the radiation spectrum is analyzed in detail,
enough non-thermal features might exist to encode all lost information.
Recently a new approach was
brought forward along the lines of quantum teleportation and the so-called
final state projection \cite{hm04}. The quantum information is estimated
to be capable of escaping with a fidelity of about $85\%$ on average \cite{sl06},
although whether the final state projection exists, and how it
can be justified,
remains a mystery. Finally, another recent work attracted serious attention after it
ruled out the possibility that information about the infallen matter could
hide in the correlations between the Hawking radiation and the internal
states of a black hole \cite{bp07}. The current state of affairs is a
direct confrontation: either unitarity or Hawking radiation being thermal
must break down.
In the original treatment, Hawking considered a fixed background geometry
without enforcing energy conservation \cite{swh74,swh75}. In contrast,
energy conservation is crucial in an improved treatment
by Parikh and Wilczek that considers s-wave outgoing particles,
or the Hawking radiation, as due to quantum tunneling,
and obtains a non-thermal spectrum for the Schwarzschild black hole \cite{pw00}.
The non-thermal probability distribution is
related directly to the change of entropy in a black hole \cite{pw00}.
In this Letter, we show that the non-thermal distribution
implies information can be coded into the correlations of sequential
emissions. We find that entropy remains conserved in the radiation
process, which leads naturally to the conclusion that
the process of Hawking radiation is unitary, and no
information loss occurs.
This implies that
even in a semiclassical treatment of the Hawking radiation process,
unitarity is not violated. The so-called black hole information
loss paradox arises from the neglect of energy conservation or the
self-gravitational effect.
We start with a brief review of Hawking radiation as due to quantum
tunneling \cite{pw00}. Unlike the Schwarzschild coordinates,
the Painlev\'{e} coordinate system is regular at
the horizon, and a derivation in these coordinates is thus
particularly convenient for the tunneling calculation.
Particles are supplied by considering the geometrical limit because of the
infinite blueshift of the outgoing wave-packets near the horizon. The
barrier is created by the outgoing particle itself, which is ensured by
energy conservation. The radial null geodesic motion is considered, and the
WKB approximation is adopted to arrive at the tunneling probability
\begin{eqnarray}
\Gamma &\sim & \exp \left[ -2\,\mathrm{Im}(I)\right]\nonumber\\
& =&\exp \left[ -8\pi E\left(
M-\frac{E}{2}\right) \right]\equiv\exp \left( \Delta S\right), \label{tp}
\end{eqnarray}%
where $\mathrm{Im}(I)$ is the imaginary part of the action; on the
second line the probability is related to the change of the black
hole's Bekenstein-Hawking entropy, as was shown in Ref. \cite{pw00}.
This result, Eq. (\ref{tp}), is clearly
distinct from the thermal distribution $\Gamma (E)=\exp \left( -8\pi
EM\right)$; thus subsequent Hawking radiation emissions must be correlated
and capable of carrying away information encoded within. Further insight can
be gained if we compare with the general form of a quantum transition
probability \cite{mkp04}, expressed as
\begin{equation*}
\Gamma \sim \frac{e^{S_{\mathrm{final}}}}{e^{S_{\mathrm{initial}}}}=\exp
\left( \Delta S\right) ,
\end{equation*}%
in terms of the entropy change $\Delta S= S_f-S_i$
between the final and initial entropies $S_f$ and $S_i$.
This is in agreement with the tunneling probability, up to a factor
containing the square of the amplitude of the process. In other words,
the non-thermal Hawking radiation Eq. (\ref{tp}) reveals the
possibility of unitarity and no information loss.
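As an illustrative numerical check (a Python sketch; the values of $M$ and $E$ are arbitrary and in natural units $G=\hbar =c=k_{B}=1$), the tunneling probability of Eq. (\ref{tp}) coincides with $\exp(\Delta S)$ computed from the Bekenstein-Hawking entropy $S_{\mathrm{BH}}=4\pi M^{2}$, and exceeds the thermal value:

```python
import math

def s_bh(M):
    # Bekenstein-Hawking entropy of a Schwarzschild black hole, S = 4*pi*M^2
    return 4 * math.pi * M ** 2

def gamma_nonthermal(M, E):
    # Parikh-Wilczek tunneling probability, Eq. (tp)
    return math.exp(-8 * math.pi * E * (M - E / 2))

def gamma_thermal(M, E):
    # Thermal spectrum, for comparison
    return math.exp(-8 * math.pi * E * M)

M, E = 1.0, 0.1  # arbitrary illustrative values in natural units
delta_s = s_bh(M - E) - s_bh(M)  # entropy change of the hole after the emission
print(gamma_nonthermal(M, E), math.exp(delta_s), gamma_thermal(M, E))
```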
We will find out whether or not there exist statistical correlations
between quanta of Hawking radiation. This was first discussed
in Refs. \cite{mkp04,amv05} by considering two emissions
with energies $E_{1}$ and $E_{2}$, or one emission
with energy $E_{1}+E_{2}$. The function
\begin{eqnarray}
C\left( E_{1}+E_{2};\,E_{1},E_{2}\right) =\ln \Gamma \left(
E_{1}+E_{2}\right) -\ln [\Gamma \left( E_{1}\right) \Gamma \left(
E_{2}\right) ]\nonumber
\label{cf}
\end{eqnarray}%
was used to measure statistical correlation between the two
emissions. With the following function forms
\begin{eqnarray}
\Gamma (E_{1}) &=& \exp \left[ -8\pi E_{1}\left( M-\frac{E_{1}}{2}\right) %
\right], \nonumber \\
\Gamma (E_{2}) &=& \exp \left[ -8\pi E_{2}\left( M-E_{1}-\frac{E_{2}}{2}%
\right) \right], \label{w1} \\
\Gamma (E_{1}+E_{2}) &=& \exp \left[ -8\pi (E_{1}+E_{2})\left( M-\frac{%
E_{1}+E_{2}}{2}\right) \right], \nonumber
\end{eqnarray}
$C\left( E_{1}+E_{2};E_{1},E_{2}\right) =0$ is found, and
Refs. \cite{mkp04,amv05} wrongly conclude that no
correlation exists, including the
case of tunneling through a quantum horizon \cite{amv05}.
This conclusion is incorrect.
The notations used in the above (adopted from Refs. \cite{mkp04,amv05})
for $\Gamma (E_{1})$, $%
\Gamma (E_{2})$, and $\Gamma (E_{1}+E_2)$ are incorrect.
In particular, the form of the function $\Gamma (E_{2})$ [Eq. (\ref{w1})]
is misleading because it is different from Eq. (\ref{tp}).
To properly evaluate statistical correlation \cite{gs92}, it is
important to distinguish between statistical dependence and independence.
If the probability of two events arising simultaneously is identical to
the product of the probabilities of each event occurring independently,
the two events are independent or non-correlated. Otherwise, they are
dependent or correlated.
Because of the non-thermal nature, the probability $\Gamma \left(
E_{2}\right)$ used in Eq. (\ref{w1}) is not independent; instead, it is
conditioned on the emission with energy $E_{1}$.
The proper forms for the probabilities $\Gamma (E_{1})$ and $%
\Gamma (E_{2})$ are derived in the appendix using the
standard approach: $\Gamma (E_{1})=\int \Gamma (E_{1},E_{2})dE_{2}$ and $%
\Gamma (E_{2})=\int \Gamma (E_{1},E_{2})dE_{1}$, where the probability for
two simultaneous emissions with energies $E_1$ and $E_2$ is $\Gamma
(E_{1},E_{2})=\Gamma (E_{1}+E_{2})$. We find both independent
probabilities take the expected functional form of Eq. (\ref{tp}),
\begin{eqnarray}
\Gamma (E_{2}) =\exp \left[ -8\pi E_{2}\left( M-\frac{E_{2}}{2}\right) %
\right] , \label{2tp}
\end{eqnarray}
which then gives
\begin{eqnarray}
\ln \Gamma (E_{1}+E_{2})-\ln \left[ \Gamma (E_{1})\,\Gamma (E_{2})\right]
=8\pi E_{1}E_{2}\neq 0, \label{co}
\end{eqnarray}%
unlike what was concluded previously \cite{mkp04,amv05}.
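The correlation of Eq. (\ref{co}) can be checked numerically; the following Python sketch (with arbitrary illustrative energies in natural units) evaluates the correlation between two emissions using the marginal distributions of Eq. (\ref{2tp}):

```python
import math

def gamma(M, E):
    # Marginal (independent) emission probability, Eqs. (tp) and (2tp)
    return math.exp(-8 * math.pi * E * (M - E / 2))

M, E1, E2 = 1.0, 0.2, 0.3  # arbitrary illustrative values
correlation = (math.log(gamma(M, E1 + E2))
               - math.log(gamma(M, E1)) - math.log(gamma(M, E2)))
print(correlation, 8 * math.pi * E1 * E2)  # the two numbers agree
```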
Equation (\ref{co}) is the central result of this work.
To better understand its
implications we can make connection to a closely related topic in quantum
information.
Our result Eq. (\ref{co}) shows that subsequent emissions are
statistically dependent, and correlations must exist between them.
For sequential emissions of energies $%
E_{1}$ and $E_{2}$, the tunneling probability for the
second emission with energy $%
E_{2}$ should be understood as conditional probability given
the occurrence of tunneling of the particle with energy $E_{1}$. Thus,
instead of the misleading Eq. (\ref{w1}),
a proper notation is
\begin{eqnarray}
\Gamma (E_{2}|E_{1})=\exp \left[ -8\pi E_{2}\left( M-E_{1}-\frac{E_{2}}{2}%
\right) \right] , \label{cp}
\end{eqnarray}%
defined according to $\Gamma
(E_{1},E_{2})=\Gamma (E_{1})\cdot \Gamma (E_{2}|E_{1})$. Bayes' rule, $%
\Gamma (E_{2}|E_{1})\,\Gamma (E_{1})=\Gamma (E_{1}|E_{2})\,\Gamma (E_{2})$,
then self-consistently connects the different probabilities.
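This self-consistency can be verified directly; in the Python sketch below (arbitrary illustrative energies, natural units), the conditional probability of Eq. (\ref{cp}) combined with the marginal of Eq. (\ref{2tp}) reproduces the joint probability $\Gamma(E_{1}+E_{2})$ in either factorization order:

```python
import math

def gamma(M, E):
    # Marginal emission probability, Eq. (2tp)
    return math.exp(-8 * math.pi * E * (M - E / 2))

def gamma_cond(M, E, E_prior):
    # Conditional probability of emitting E after a total energy E_prior, Eq. (cp)
    return math.exp(-8 * math.pi * E * (M - E_prior - E / 2))

M, E1, E2 = 1.0, 0.2, 0.3  # arbitrary illustrative values
joint = gamma(M, E1 + E2)
print(gamma(M, E1) * gamma_cond(M, E2, E1), joint)  # Gamma(E1) * Gamma(E2|E1)
print(gamma(M, E2) * gamma_cond(M, E1, E2), joint)  # Gamma(E2) * Gamma(E1|E2)
```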
Analogously, the conditional probability $\Gamma (E_{i}|E_{f})=\exp \left[
-8\pi E_{i}\left( M-E_{f}-\frac{E_{i}}{2}\right) \right] $ corresponds to
the tunneling probability of a particle with energy $E_{i}$ conditional on
radiations with a total energy of $E_f$. The entropy taken away by the
tunneling particle with energy $E_i$ after the black hole has emitted
particles with a total energy $E_{f}$ is then given by
\begin{eqnarray}
S\left( E_{i}|E_{f}\right) =-\ln \Gamma (E_{i}|E_{f}). \label{te}
\end{eqnarray}%
In quantum information theory \cite{nc00}, $S\left(E_{i}|E_{f}\right)$
denotes conditional entropy, and it measures the entropy of $E_{i}$ given
that the values of all the emitted particles with a total energy $E_{f}$ are
known. Quantitatively, it is equal to the decrease of the entropy of a black
hole with mass $M-E_{f}$ upon the emission of a particle with energy $E_i$. Such
a result is consistent with the thermodynamic second law of a black hole \cite{swh760}:
the emitted particles must carry entropies
in order to balance the total entropy of the black hole and the
radiation. In what follows we show that the amount of correlation of Eq. (\ref{co})
hidden inside Hawking radiation is precisely equal to the mutual information.
The mutual information \cite{nc00} in a composite quantum system composed of
sub-systems $A$ and $B$ is defined as%
\begin{equation*}
S(A:B)\equiv S(A)+S(B)-S(A,B)=S(A)-S(A|B),
\end{equation*}%
where $S(A|B)$ is the conditional entropy. It is a legitimate measure of
the total amount of correlation in any bipartite system. For
sequential emission of two particles with energies $E_{1}$ and $E_{2}$,
we find
\begin{eqnarray}
S(E_{2}:E_{1}) &\equiv & S(E_{2})-S(E_{2}|E_{1})\nonumber\\
&=& -\ln \Gamma (E_{2})+\ln \Gamma
(E_{2}|E_{1}). \label{tmi}
\end{eqnarray}%
Using Eqs. (\ref{2tp}) and (\ref{cp}), we obtain $S(E_{2}:E_{1})=8\pi
E_{1}E_{2}$, \textit{i.e.}, the correlation of Eq. (\ref{co}) is
exactly equal to the mutual information between the two sequential
emissions.
We now count the entropy carried away by Hawking radiations. The
entropy of the first emission with an energy $E_{1}$ from a black
hole of mass $M$ is
\begin{eqnarray}
S(E_{1})=-\ln \Gamma (E_{1})=8\pi E_{1}\left( M-\frac{E_{1}}{2}\right).
\label{tp1}
\end{eqnarray}%
The conditional entropy of a second emission with an energy $E_{2}$ after
the $E_{1}$ emission is
\begin{eqnarray}
S(E_{2}|E_{1})=-\ln \Gamma (E_{2}|E_{1})=8\pi E_{2}\left( M-E_{1}-\frac{E_{2}%
}{2}\right) . \label{ctp}
\end{eqnarray}%
The total entropy for the two emissions $E_{1}$ and $E_{2}$ then becomes
\begin{equation*}
S(E_{1},E_{2})=S(E_{1})+S(E_{2}|E_{1}),
\end{equation*}%
and the mass of the black hole reduces to $M-E_{1}-E_{2}$ while
it proceeds with the emission of energy $E_{3}$ with an entropy $%
S(E_{3}|E_{1},E_{2})=-\ln \Gamma (E_{3}|E_{1},E_{2})$. The total entropy of
three emissions at energies $E_{1} $, $E_{2}$, and $E_{3}$ is
\begin{equation*}
S(E_{1},E_{2},E_{3})=S(E_{1})+S(E_{2}|E_{1})+S(E_{3}|E_{1},E_{2}).
\end{equation*}%
Repeating the process until the black hole is completely exhausted, we find
\begin{eqnarray}
S(E_{1},E_{2},\cdots ,E_{n})=\sum\limits_{i=1}^{n}S(E_{i}|E_{1},E_{2},\cdots
,E_{i-1}), \label{bhe}
\end{eqnarray}%
where $M=\sum_{i=1}^{n}E_{i}$ equals the initial black hole mass
due to energy conservation and $S(E_{1},E_{2},...,E_{n})$ denotes
the joint entropy of all emissions while $S(E_{i}|E_{1},E_{2},\cdots
,E_{i-1}) $ is the conditional entropy. Equation (\ref{bhe}) then
corresponds to nothing but the chain rule of conditional entropies
in quantum information theory \cite{nc00}. In the appendix, we find
the total entropy $S(E_{1},E_{2},...,E_{n})=4\pi M^{2}$ exactly
equals the black hole's Bekenstein-Hawking entropy. This result is
independently verified by counting of microstates of Hawking
radiations as shown in the appendix.
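The telescoping sum in Eq. (\ref{bhe}) can also be verified numerically. The Python sketch below (with a randomly chosen partition of $M$ into emission energies, natural units) sums the conditional entropies of Eq. (\ref{te}) and recovers $4\pi M^{2}$:

```python
import math
import random

def cond_entropy(M, E, E_prior):
    # S(E_i | E_1,...,E_{i-1}) = -ln Gamma(E_i | E_prior), Eqs. (te) and (cp)
    return 8 * math.pi * E * (M - E_prior - E / 2)

random.seed(0)
M, n = 1.0, 20
cuts = sorted(random.uniform(0, M) for _ in range(n - 1))
energies = [b - a for a, b in zip([0.0] + cuts, cuts + [M])]  # random partition of M

total, emitted = 0.0, 0.0
for E in energies:  # chain rule: accumulate conditional entropies
    total += cond_entropy(M, E, emitted)
    emitted += E
print(total, 4 * math.pi * M ** 2)  # total entropy equals S_BH = 4*pi*M^2
```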
The reason information can be carried away by black hole radiation is
the probabilistic nature of the emission itself. Given the emission rate $%
\Gamma (E)\sim \exp \left[ -8\pi E\left( M-\frac{E}{2}\right) \right] $, one
knows that a radiation of energy $E$ occurs with a
probability $\Gamma (E)$. In other words, the uncertainty of the event (an
emission with an energy $E$) or the information we can gain, on
average, from the event is $S(E)=-\ln \Gamma (E)$. When an emission with an
energy $E_{1}$ is received, the potential gain in information is $%
S(E_{1})=-\ln \Gamma (E_{1})$. When the next emission with an
energy $E_{2}$ is received, an additional information $S(E_{2}|E_{1})=-\ln
\Gamma (E_{2}|E_{1})$ can be gained, which is conditional on already
receiving the emission of an energy $E_{1}$. Continuing on, we compute the
information gained from all emissions until the black hole
is exhausted. The total entropy carried out by radiations is then found to
be $S(E_{1},E_{2},...,E_{n})=4\pi M^{2}$, which means all the entropy of the
black hole is taken out by its Hawking radiations. Putting together our
earlier result that the entropy carried away by an emission is
the same as the entropy reduction of the accompanying black hole
during each emission, we conclusively show that entropies of Hawking
radiations and their accompanying black holes are conserved during black
hole radiation. According to quantum mechanics, a unitary process does not
change the entropy of a closed system. This implies that the process
of Hawking radiation
is unitary in principle, and no information loss is expected.
In conclusion, through a careful reexamination of Hawking radiation, we
discover and quantify correlations amongst radiated particles in terms of
Eq. (\ref{co}). Our result for the first time provides a clear picture of
how and how much information can be carried away by Hawking radiation from a
black hole. Although the prospect for information hidden inside Hawking
radiation has been discussed time and again, earlier works did not enforce
energy conservation strictly and assumed a thermal distribution for the
radiated particles (please see \cite{jp92} and references therein). In
contrast, our study is built on the principle of energy conservation, where
the effect of self-gravitation plays a crucial role, and the spectrum of
radiated particles is non-thermal. Making connection with information
theory, we find that entropy is strictly conserved during Hawking radiation,
\textit{i.e.}, the entropy of a black hole is coded completely in the
correlations of the emitted radiations upon its exhaustion.
Our conclusions show that information is
not lost and that unitarity holds in the process of Hawking radiation,
although our results are based on a semiclassical treatment of s-wave emissions
where energy conservation is enforced \cite{pw00}. For more elaborate
treatments, {\it e.g.}, those involving coding information in the correlations,
a complete quantum gravity theory may still be needed. However, our
analysis confirms that the energy conservation or self-gravitational
effect remains crucial for approaches
based on self-consistent quantum gravity theories.
Finally, we hope to point out that our analysis can be extended to
charged black holes, Kerr black holes, and Kerr-Newman black holes.
Even for the situations involving quantum gravity effects or the
noncommutative black holes, our method remains effective in
providing consistent resolutions \cite{zcyz}. We show that due to
self-gravitation effect, information can come out in the form of
correlated emissions from a black hole, and our work thus resolves
the black hole information loss paradox.
This work is supported by National Basic Research Program of China
(NBRPC) under Grant No. 2006CB921203.
\section{Appendix}
This appendix contains some details for
a few key steps supporting our
results as given in the main text.
The joint probability distribution of two simultaneous emissions
of energies $E_{1}$ and $E_{2}$ is
\begin{eqnarray}
\Gamma (E_{1},E_{2}) &=& \Gamma (E_{1}+E_{2})\nonumber\\
&=&\exp \left[ -8\pi
(E_{1}+E_{2})\left( M-\frac{E_{1}+E_{2}}{2}\right) \right],\nonumber
\end{eqnarray}
subject to a normalization factor $\Lambda$, determined by
$\Lambda\int_{0}^{M}\exp [-8\pi E(M-\frac{E}{2})]dE=1$. The
independent probability distributions for a single emission $\Gamma (E_{1})$
or $\Gamma (E_{2})$ are $\Gamma (E_{1})=\Lambda
\int_{0}^{M-E_{1}}\Gamma (E_{1},E_{2})dE_{2}=\exp [-8\pi E_{1}(M-\frac{E_{1}%
}{2})]$ and $\Gamma (E_{2})=\Lambda \int_{0}^{M-E_{2}}\Gamma
(E_{1},E_{2})dE_{1}=\exp [-8\pi E_{2}(M-\frac{E_{2}}{2})]$ and are
identical in their functional forms.
In the main text, our result Eq. (\ref{te}) reveals that Hawking
radiations are correlated and specifies how much entropy they carry away
from the black hole. We now show that the initial entropy
of a black hole is the same as the entropy of all emitted radiations upon
its exhaustion.
Assuming the tunneling/emission probability is given by Eq. (\ref{tp}),
when the black hole is
exhausted due to emissions, we can find the entropy of our system by counting the number of
its microstates. For example, one of the microstates is $\left(
E_{1},E_{2},\cdots ,E_{n}\right) $ and $\sum_{i}E_{i}=M$. Within such a
description, the order of $E_{i}$ cannot be changed, the distribution of
each $E_{i}$ is consistent with the discussion in the
main text. The probability for the specific microstate $\left(
E_{1},E_{2},\cdots ,E_{n}\right) $ to occur is given by
\begin{equation*}
P=\Gamma (M;E_{1})\times \Gamma (M-E_{1};E_{2})\times \cdots \times \Gamma
(M-\sum_{j=1}^{n-1}E_{j};E_{n}),
\end{equation*}%
with
\begin{eqnarray}
\Gamma (M;E_{1}) &=&\exp \left[ -8\pi E_{1}(M-E_{1}/2)\right] , \notag \\
\Gamma (M-E_{1};E_{2}) &=&\exp \left[ -8\pi E_{2}(M-E_{1}-E_{2}/2)\right] ,
\notag \\
&&\cdots , \notag \\
\Gamma (M-\sum_{j=1}^{n-1}E_{j};E_{n}) &=&\exp \left[ -8\pi
E_{n}(M-\sum_{j=1}^{n-1}E_{j}-E_{n}/2)\right] \nonumber\\
&=&\exp (-4\pi E_{n}^{2}),
\notag
\end{eqnarray}%
where $\Gamma (M;E_{1})$ denotes the probability Eq. (\ref{tp})
for an emission with energy $E_{1}$ by a black hole with mass $M$. Proceeding
with a detailed calculation, we find that $P=\exp (-4\pi M^{2})=\exp (-S_{%
\mathrm{BH}})$, where $S_{\mathrm{BH}}$ is the entropy of the black hole.
According to the fundamental postulate of statistical mechanics that all
microstates of an isolated system are equally likely, we find the number of
microstates $\Omega =\frac{1}{P}=\exp (S_{\mathrm{BH}})$. On the other hand,
according to Boltzmann's definition, the entropy of a system is given by
$S=\ln \Omega =S_{\mathrm{BH}}$ (taking the Boltzmann constant $k=1$).
Thus we prove that after a black hole is exhausted due to
Hawking radiation, the entropy carried away by all emissions
is precisely equal to the entropy in the original black hole.
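The equiprobability of microstates invoked above can be checked directly; in this Python sketch (natural units, with the mass and the partitions chosen arbitrarily), the product of sequential tunneling factors equals $\exp (-4\pi M^{2})$ regardless of how the mass is split among the emissions:

```python
import math
import random

def microstate_prob(M, energies):
    # Probability of a specific emission sequence (E_1, ..., E_n):
    # the product of the sequential factors Gamma(M - sum_j E_j; E_i)
    p, remaining = 1.0, M
    for E in energies:
        p *= math.exp(-8 * math.pi * E * (remaining - E / 2))
        remaining -= E
    return p

random.seed(1)
M = 0.5  # small mass keeps exp(-4*pi*M^2) well above floating-point underflow
for n in (2, 5, 10):
    cuts = sorted(random.uniform(0, M) for _ in range(n - 1))
    energies = [b - a for a, b in zip([0.0] + cuts, cuts + [M])]
    print(n, microstate_prob(M, energies))  # same value for every partition
```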
\section{Introduction}
Autonomous systems and humans have different strengths. \citet{moravec1988mind} claimed in 1988 that ``It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.'' Thirty years later, computers have achieved superhuman level performance on many games like Go and StarCraft II \citep{schrittwieser2020mastering, vinyals2019grandmaster}. Despite major strides in perception, algorithms still exhibit brittleness and struggle with consistency in tasks that most would consider trivial for a young adult, like reliably generating a plausible sentence from a list of words \citep{Lin2020CommonGenAC, su2019one, akhtar2021advances, Korteling2021HumanVA}.
Humans often make suboptimal decisions; however, they are exceptionally good at finding ways to accomplish tasks, balance risk, reason through unstructured problems, and incorporate new information into decisions without prior experience in a situation. Systems integrating autonomy often take advantage of human strengths by using a human-on-the-loop structure. This framework allows for humans to use their experience to recognize situations where the autonomy might struggle or is failing and to take control \citep{teslaAutopilot, korber2018introduction}. We hypothesize that instead of requiring the human to assume control, an agent can use occasional control inputs from the human-on-the-loop in the form of action suggestions to improve its understanding of the environment.
In this work, we explore a method of collaboration that enables combining benefits of heterogeneous systems like humans and machines through action suggestions versus direct control. Our approach was inspired by the way that we see humans incorporate action suggestions in daily life. A real world example is a pilot flying an aircraft while coordinating with air traffic control (ATC). In the United States, there are various services ATC provides to ensure safety. Many of these services provide suggestions to pilots, but do not mandate an action or remove responsibilities from the pilot (e.g. radar assistance to VFR aircraft) \citep{faraim}. Consider a situation where a pilot is flying at low altitude, has knowledge of another aircraft to the left, and an ATC controller recommends a turn to the left. This action suggestion would cause the pilot to consider situations where a left turn towards traffic would be optimal (e.g. terrain ahead) and reassess their belief about the environment.
In our problem of interest, we are assuming the agent has the same objective as a suggester and use action suggestions as additional information about the environment to increase the quality of the action selection. We avoid having to share exact belief states between agents by treating the suggested action as an observation. Using the assumption that the suggester is collaborative, we use the agent's policy to infer a distribution over suggested actions and use this distribution to update the agent's belief. This approach allows collaboration between different types of systems and avoids explicit translation of complex belief spaces.
In this paper, we first review related work in \cref{sec: prev work} and discuss how our approach differs from previous methods. Our primary contributions are in \cref{sec: primary section}. We first provide an overview of a partially observable Markov decision process (POMDP) and then outline the problem setting. \Cref{sec: incorporating recs} demonstrates how we can view action suggestions as independent observations and modify the belief update process. We then provide two methods to estimate the distribution over suggested actions using the agent's policy in order to update the belief in \cref{sec: methods}. \Cref{sec: experiments} presents the experiments and discusses key results demonstrating the effectiveness of our approach.
\section{Previous Work} \label{sec: prev work}
Collaborative sequential decision making between two agents is a form of shared autonomy. The exact definition of shared autonomy and control varies in the literature \citep{Abbink2018ATO}. A common theme among the approaches is combining inputs from two agents to achieve performance better than either agent acting alone. The problems often involve one of two main categories: $1$) assistance from an autonomous agent to a human or $2$) assistance from a human to the autonomous system. The problem framework we are considering aligns with the second category.
In the first category, an autonomous agent is seeking to perform actions to assist the other agent (often a human). \citet{Dragan2013APF} discuss key aspects of this category and decompose it into prediction of the user's intent and arbitration between the user's inputs and the action chosen by the autonomous system. \citet{Nguyen2011CAPIRCA} propose a method that decomposes a game into subtasks that can be modelled as a Markov decision process. The autonomous agent then infers the subtask the other agent is attempting to accomplish and selects actions to assist. Others model this category as a POMDP where the uncertainty is around the goal of the assisted agent \citep{Macindoe2012POMCoPBS, Javdani2015SharedAV, Javdani2018SharedAV}.
The second category often involves forms of corrections. \citet{LoseyDylan2020LearningTC} use aspects of optimal control to learn from physical human interventions. \citet{Nemec2018AnEP} first learn from human demonstrations and then refine the policy through incremental learning from kinesthetic guidance of the autonomous agent. Other approaches perform the corrections in an iterative cycle during a planning process. \citet{Reardon2018Shaping} present a method that allows a human to interactively provide feedback during planning. The feedback is in the form of hints or suggestions and is used to modify the optimization process. This approach is similar to the concept of reward shaping in reinforcement learning.
\citet{Cognetti2020PerceptionAwareHN} provide a method that allows for real time modifications of a path while \citet{Hagenow2021CorrectiveSA} present a method that permits an outside agent to modify key robot state variables. These changes are then blended with the original control. The concept of following guidance versus acting independently and how to blend the two was explored by \citet{Evrard2009HomotopybasedCF} where they use two controllers and alternate roles of following and leading. \citet{Medina2013DynamicSS} present a method that selects between two strategies depending on the level of disagreement between agents.
Another related area of research is imitation learning. Within this field, there are methods to learn from feedback by querying the expert during training or expert intervention real-time \citep{Ross2011ARO, Spencer2021ExpertIL}. These methodologies are a form of human-machine collaboration by using the expert knowledge of the human to train or correct the system and then letting the system perform the task autonomously.
These approaches generally rely on strict assumptions or an explicit model of the other agent so the autonomous system can interpret inputs in a way to reason how to integrate them. In the first category, \citet{Jeon2020SharedAW} propose a model structure to map human inputs to different actions based on the robot's confidence in the goal. \citet{Reddy2018SharedAV} deviate from the use of model based methods by using human-in-the-loop deep reinforcement learning to map from observations and user inputs to an agent action.
Our approach builds on the concept of mapping user inputs to an agent action. However, we differ from other methods in that we do not build an explicit model of the suggester nor learn a mapping that might be suggester dependent. We achieve this mapping by assuming the suggester is collaborative and shares the same objective when providing inputs. This assumption enables the use of the key insight from \citet{Spencer2021ExpertIL}, ``... any amount of expert feedback ... provides information about the quality of the current state, the quality of the action, or both'', and we build an implicit model of the suggester using only the agent's knowledge of the environment and the agent's policy.
\section{Action Suggestions as Observations} \label{sec: primary section}
\subsection{Background} \label{sec: background}
A partially observable Markov decision process (POMDP) is a mathematical framework to model sequential decision making problems under uncertainty \citep{Smallwood1973TheOC}. A POMDP is represented as a tuple $\left( \mathcal{S}, \mathcal{A}, \mathcal{O}, T, O, R, \gamma \right)$, where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, and $\mathcal{O}$ is a set of observations. At each time step, an agent starts in state $s \in \mathcal{S}$ and chooses an action, $a \in \mathcal{A}$. The agent transitions from state $s$ to state $s^\prime$ based on the transition function $T(s, a, s^\prime) = p(s^\prime \mid s, a)$, which represents the conditional probability of transitioning to state $s^\prime$ from state $s$ after choosing action $a$. The agent does not directly observe the state, but receives an observation $o \in \mathcal{O}$ based on the observation function $O(s^\prime, a, o)=p(o \mid s^\prime, a)$, which represents the conditional probability of observing observation $o$ given the agent chose action $a$ and transitioned to state $s^\prime$.
At each time step the agent receives a reward, $R(s,a) \in \mathbb{R}$ for choosing action $a$ from state $s$. For infinite horizon POMDPs, the discount factor $\gamma \in [0, 1)$ is applied to the reward at each time step. The goal of an agent is to maximize the total expected reward $\mathbb{E} \left[ \sum_{t=0}^\infty \gamma^tR \left( s_t, a_t \right) \right]$, where $s_t$ and $a_t$ are the state and action at time $t$. One method to solve a POMDP is to infer a belief distribution $b \in \mathcal{B}$ over $\mathcal{S}$ and then solve for a policy $\pi$ that maps the belief to an action where $\mathcal{B}$ is the set of beliefs over $\mathcal{S}$ \citep{Kochenderfer2022}. Executing with this type of policy requires maintaining $b$ through updates after each time step.
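For concreteness, the belief update described above can be sketched for a toy two-state problem; the transition and observation probabilities below are hypothetical illustrative numbers, not from any benchmark:

```python
import numpy as np

# Toy two-state belief update for a fixed action a:
# b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) b(s)
T = np.array([[0.9, 0.1],   # T[s, s']: transition probabilities for the chosen action
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],   # O[s', o]: observation likelihoods
              [0.4, 0.6]])

def belief_update(b, o):
    b_pred = b @ T               # prediction step through the transition model
    b_new = b_pred * O[:, o]     # correction by the observation likelihood
    return b_new / b_new.sum()   # normalize to a probability distribution

b = belief_update(np.array([0.5, 0.5]), o=0)
print(b)  # observation 0 shifts belief weight toward state 0
```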
Policies can be generated offline or computed online during execution. In this work, we focus on applying our method to policies generated offline and leave the application to online solvers for future work. Many approximate offline solvers involve point-based value iteration. The idea is to sample the belief space and perform backup operations on the sampled points of the belief space, iteratively applying the Bellman equation until the value function converges. PBVI \citep{Pineau2003PointbasedVI}, HSVI2 \citep{Smith2005PointBasedPA}, FSVI \citep{Shani2007ForwardSV}, and SARSOP \citep{Kurniawati2008SARSOPEP} are examples of such an approach, though they differ in the selection of initial belief points, the generation of points at each iteration, and the choice of which points to backup. These algorithms represent the policy as a set of alpha vectors. In this work, we used SARSOP to generate the policies; however, any algorithm that produces a policy where the utility of a belief can be calculated could be implemented with little change to our methods.
\subsection{Problem Formulation} \label{sec: problem overview}
For a given problem of sequential decision making under uncertainty, we choose to model the problem as a POMDP and use a point-based solution method. In this work, we assume discrete state, action, and observation spaces but the methods can be generalized to continuous spaces. Our problem involves two entities: an autonomous system using the POMDP policy to perform actions and interact with the environment that we will refer to as the agent, and a suggester providing action suggestions to the agent. The suggester can observe the environment, but not interact or affect the state except to provide recommended actions to the agent. The suggester is not required to provide suggestions but can provide a maximum of one action suggestion at each time step.
The agent and suggester are collaborative and share the same objective of maximizing the total expected reward. The agent and suggester can receive different information and maintain separate beliefs of the environment. The separation of the actors allows each to process and receive information independently and capitalize on strengths differently. An example problem is an expert human receiving observations not modeled by an autonomous robot and providing intermittent suggestions.
\subsection{Incorporating Action Suggestions} \label{sec: incorporating recs}
There are different ways for the agent to use action suggestions. One approach would be to model the suggester in the original POMDP, similar to previous shared autonomy work (\cref{sec: prev work}). Modeling the suggester within the POMDP would provide a principled way of incorporating suggestions, but would require a priori knowledge of the suggester to build an explicit model when solving for the policy, and would also require maintaining a belief over that model during execution. A simple method that does not require a model of the suggester would be for the agent to naively follow each suggestion. A similar but more robust method would be to follow a suggestion only if it meets some defined criteria, thus potentially disregarding suboptimal suggestions (e.g., only follow a suggestion if it is a specific action). These approaches are simple and can incorporate suggestions; however, they do not benefit from the implied information contained in each suggested action.
Applying the intuition from the flight example to our problem, each action suggestion conveys information about the suggester's belief of the environment. We propose treating each action suggestion as an observation of the state and using it to update the agent's belief. This idea enables the suggester to influence the agent while the agent remains autonomous. Our problem assumes the suggester and the agent maintain independent beliefs. If we further assume the suggested action is not influenced by the agent's previous action, the suggested action depends only on the state. This independence allows a modification of the belief update process that involves only the probability of receiving the suggested action given the current state. The suggested action is not always independent of the agent's actions, but the assumption that it depends only on the current state is often reasonable.
Our belief at time $t$ over state $s \in \mathcal{S}$ with observations $o \in \mathcal{O}$ and action suggestions $o^s \in \mathcal{A}$ is $p(s_t \mid a_{0:t-1}, o_{0:t}, o_{0:t}^s)$. Using Bayes' theorem, we can rewrite this expression as
\begin{align}
p(s_t \mid a_{0:t^-}, o_{0:t}, o_{0:t}^s) \propto{}& p(o_t^s \mid s_t, a_{0:t^-}, o_{0:t^-}, o_{0:t^-}^s) \, p(o_t \mid s_t, a_{0:t^-}, o_{0:t^-}, o_{0:t}^s) \nonumber \\
& p(s_t \mid a_{0:t^-}, o_{0:t^-}, o_{0:t^-}^s)
\end{align}%
where the subscript $0\!\!:\!\!t$ refers to all instances of that variable from $0$ to $t$, and $t^- = t-1$. This expression can be simplified using the independence assumption, the law of total probability, and the Markov property to
\begin{align} \label{eq: belief update}
p(s_t \mid a_{0:t^-}, o_{0:t}, o_{0:t}^s) &\propto p(o_t^s \mid s_t) \, p(o_t \mid s_t, a_{t^-}) \sum_{s_{t^-} \in \mathcal{S}} p(s_t \mid s_{t^-}, a_{t^-}) \, p(s_{t^-} \mid a_{0:t-2}, o_{0:t^-}, o_{0:t^-}^s).
\end{align}
\Cref{eq: belief update} is a simple modification to our standard belief update procedure with POMDPs and is an expression of updating a belief with two independent observations.
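\Cref{eq: belief update} can be sketched directly for tabular models passed in as functions. This is an illustrative implementation with names of our choosing, not the paper's code: $T(s, a, s')$ is the transition probability, $O(o, s', a)$ the observation likelihood, and $S(o^s, s')$ the suggestion likelihood from either rational model.

```python
def update_belief(belief, action, obs, suggestion, states, T, O, S):
    # belief: dict mapping state -> probability
    # T(s, a, sp): transition probability; O(o, sp, a): observation likelihood
    # S(a_s, sp): likelihood of the suggested action a_s in state sp
    unnormalized = {}
    for sp in states:
        # Prediction step: propagate the prior belief through the transition model.
        prior = sum(T(s, action, sp) * belief[s] for s in states)
        # Correction step: weight by both independent observation likelihoods.
        unnormalized[sp] = O(obs, sp, action) * S(suggestion, sp) * prior
    total = sum(unnormalized.values())
    return {sp: p / total for sp, p in unnormalized.items()}
```

Setting $S(\cdot, \cdot) = 1$ recovers the standard POMDP belief update, which reflects that the suggestion enters the filter as just another independent observation.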
\subsection{Inferring the Distribution over Suggested Actions} \label{sec: methods}
We can update the agent's belief based on $p(o_t^s \mid s_t)$. However, the agent cannot calculate this distribution directly. We propose leveraging the assumption that the agents are collaborative and estimating $p(o_t^s \mid s_t)$ using the agent's own policy. In our flight example, when the pilot receives the suggestion to turn left, they would use their experience to reason about the scenarios in which they would have provided that same suggestion.
\paragraph{Scaled Rational.}
One approach is to assume the suggester is perfectly rational and has a policy identical to the agent. The suggester would only give actions that would maximize the total expected reward using the agent's policy $\pi$. In other words, $p(o_t^s \mid s_t) \approx \mathbf{1}(o_t^s = \pi(s_t))$ where $\mathbf{1}(\cdot)$ is the indicator function. To relax the assumption of a perfectly rational agent and an identical policy, we introduce a scaling factor, $\tau \in (0, 1]$. With a scaling factor, we assume the suggester acts rationally and with the agent's policy a fraction $\tau$ of the time and uses a random policy otherwise. The scaled rational update can be expressed as
\begin{align} \label{eq: scaled}
p(o_t^s \mid s_t) \approx
\begin{cases}
\tau, & \text{if}\ o_t^s = \pi(s_t) \\
\frac{1-\tau}{ | \mathcal{A} | - 1}, & \text{otherwise}.
\end{cases}
\end{align}
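\Cref{eq: scaled} is straightforward to implement; a sketch with illustrative names:

```python
def scaled_rational(suggestion, state, policy, actions, tau):
    # Eq. (3): probability tau on the policy's action at this state,
    # with the remaining mass spread uniformly over the other actions.
    if suggestion == policy(state):
        return tau
    return (1.0 - tau) / (len(actions) - 1)
```

Note that the returned values form a valid probability distribution over the action space for any $\tau \in (0, 1]$, and $\tau = 1$ recovers the indicator model of a perfectly rational suggester with an identical policy.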
\paragraph{Noisy Rational.}
Another approach is to assume the likelihood of the suggested action is related to the total expected reward of choosing that action. \citet{shepard1957} developed a noisy rational model from a psychological perspective, and this model has been widely used in robotics to model suboptimal decision making \citep{kwon2020humans, basu2019active, holladay2016}. With this model, the suggester is most likely to choose the action with the highest expected return and less likely to choose suboptimal actions. This model requires calculating the total expected return of each action.
The action value function $Q(s,a)$ returns the expected value of performing action $a$ in state $s$ and then executing optimally thereafter. We can use the agent's policy and the reward function to calculate the Q-function with a one-step lookahead \citep{Kochenderfer2022}. Using a policy represented with alpha vectors, we can calculate $Q(s,a)$ by performing a one-step lookahead from a belief concentrated on state $s$. The noisy rational model has the same form as the Boltzmann distribution and the softmax function. We can express this model as
\begin{align} \label{eq: noisy}
p(o_t^s \mid s_t) \approx \frac{\exp{ \left( \lambda Q(s_t,o_t^s) \right) }}{\sum_{a \in \mathcal{A}} \exp{ \left( \lambda Q(s_t,a) \right) }}
\end{align}%
where $\lambda \in [0, \infty) $ is a hyperparameter often referred to as the rationality coefficient. The distribution approaches a uniform distribution as $\lambda$ approaches $0$. As $\lambda$ increases, the model approaches a perfectly rational agent.
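A simplified sketch of the noisy rational model of \cref{eq: noisy}: it computes $Q(s,a)$ with a one-step lookahead against a state-value function $V$ (standing in for the alpha-vector belief value used in our implementation), then applies a max-shifted softmax for numerical stability. Names and the discount default are illustrative.

```python
import math

def q_lookahead(s, a, states, T, R, V, gamma=0.95):
    # One-step lookahead: Q(s, a) = R(s, a) + gamma * sum_s' T(s, a, s') V(s')
    return R(s, a) + gamma * sum(T(s, a, sp) * V(sp) for sp in states)

def noisy_rational(suggestion, s, states, actions, T, R, V, lam):
    # Eq. (4): softmax over Q values with rationality coefficient lambda.
    qs = {a: q_lookahead(s, a, states, T, R, V) for a in actions}
    m = max(qs.values())  # subtract the max before exponentiating for stability
    weights = {a: math.exp(lam * (q - m)) for a, q in qs.items()}
    return weights[suggestion] / sum(weights.values())
```

As the text notes, `lam = 0` yields a uniform distribution over actions, while large `lam` concentrates the mass on the value-maximizing action.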
\section{Experiments} \label{sec: experiments}
The proposed methodology to incorporate action suggestions as observations was evaluated on two classic POMDP problems, Tag \citep{Pineau2003PointbasedVI} and RockSample \citep{Smith2004HeuristicSV}. These domains are relatively simple but provide a way to evaluate the merits of the approach by removing domain-specific variables that might influence performance. The simulations were constructed to first evaluate the effectiveness and efficiency of the proposed approach and then to test the robustness to suboptimal action suggestions.
\subsection{Environments} \label{sec: scenarios}
The two environments chosen to evaluate our approach have discrete action, state, and observation spaces. The structure of the problems allows for visualization of the belief space while also providing a modest scaling challenge.
\paragraph{Tag.} The Tag environment was first introduced by \citet{Pineau2003PointbasedVI}. The layout of the environment can be seen in \cref{fig: tag belief change}. The agent and an opponent are initialized randomly in the grid. The goal of the agent is to tag the opponent by performing the \textit{tag} action while in the same square as the opponent. The agent can move in the four cardinal directions or perform the \textit{tag} action. The movement of the agent is deterministic based on its selected action. A reward of $-1$ is imposed for each motion action, and the \textit{tag} action results in $+10$ for a successful tag and $-10$ otherwise. The agent's position is fully observable, but the opponent's position is unobserved unless both actors are in the same cell. The opponent moves stochastically away from the agent according to a fixed policy: it moves away \SI{80}{\percent} of the time and stays in the same cell otherwise. Our implementation of the opponent's movement policy varies slightly from the original paper by allowing more movement away from the agent, making the scenario slightly more challenging. \Cref{sec: tag diff} provides more details of the differences.
\paragraph{RockSample.}
The RockSample environment consists of a robot that must explore an environment and sample rocks of scientific value \citep{Smith2004HeuristicSV}. Each rock can either be good or bad and the robot receives rewards accordingly. The robot also receives a reward for departing the environment by entering an exit region. The robot knows the positions of every rock and its own location exactly, but does not know whether each rock is good or bad. The robot has a noisy sensor to check if a rock is good or bad and the accuracy of the sensor depends on the distance to the rock. Upon each use of the sensor, the robot receives a negative reward. In the following sections, a RockSample problem will be designated as RockSample$(n,k,sr,sp)$ where $n$ designates a grid size of $n \times n$, $k$ is the number of rocks, $sr$ is the sensor range efficiency, and $sp$ is the penalty for using the sensor. The reward for sampling a good rock and exiting the environment is $+10$ and the penalty for sampling a bad rock is $-10$.
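For reference, a common parameterization of the RockSample sensor has the probability of a correct reading decay from $1$ at the rock toward $0.5$ (a coin flip) with distance, governed by the efficiency parameter. The exact constants vary by implementation, so treat this as an illustrative sketch rather than our exact model:

```python
def sensor_correct_prob(distance, half_efficiency_distance):
    # Probability the noisy check returns the true rock label.
    # Efficiency eta decays exponentially with distance: eta = 2^(-d / d0).
    # Accuracy is 1.0 when adjacent to the rock and approaches 0.5 far away.
    eta = 2.0 ** (-distance / half_efficiency_distance)
    return 0.5 + 0.5 * eta
```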
\subsection{Simulation Details} \label{sec: implement details}
The simulation environment was built using the POMDPs.jl framework \citep{egorov2017pomdps}. The Tag environment used the standard parameters and was simulated until the agent tagged the opponent or $100$ steps elapsed, whichever came first. The RockSample environment was simulated until the agent exited the environment. The SARSOP algorithm was used to generate policies for the agent. All of the agents used the same policies for the environment but incorporated the action suggestions differently. If the suggested action was the same as the action selected with the current belief, no modification of the belief was performed. This implementation decision potentially prevents valuable information from being passed when actions align. However, simulations with the RockSample and Tag environments showed the differences were negligible, and the slight decrease in computation time allowed more simulations to be performed. Performing belief updates with aligned suggestions could be critical in other scenarios.
In RockSample, the rocks were randomly initialized based on a uniform distribution. The agent's belief was initialized with a uniform distribution over the state space in Tag and a uniform distribution over the rocks in RockSample. The number of simulations for a given scenario varied and we provide the \SI{95}{\percent} confidence interval for each value reported in the results section. The rewards reported are the mean value over all simulations for a given scenario. The number of suggestions refers to the number of times the suggested action differed from the action initially selected by the agent before considering the suggestion. To simulate a suggester that was not always present or could not communicate actions reliably, the simulation also supported varying the percentage of suggestions passed to the agent.
One suggester was used but the quality and consistency of the action suggestions varied. Different agents were simulated to evaluate our proposed approach and establish baselines in each scenario. Details of each agent are provided below.%
\begin{itemize}
\item \textit{Suggester}. The suggester is a combination of an all-knowing agent and a purely random suggester. The rate of randomness is adjusted to scale from purely random to completely all-knowing. If the suggester is sending a non-random suggestion, the action is selected from the POMDP policy using the true state of the environment.
\item \textit{Normal Agent}. The normal agent executes in the environment using the policy without considering any action suggestions. This agent provides a baseline when action suggestions are not considered.
\item \textit{Perfect Agent.} The perfect agent is initialized and executed with perfect knowledge of the state. At each time step, an action is selected from the policy given the true state of the system. This agent provides an upper bound on the total expected reward when executing the given policy.
\item \textit{Random Agent.} The random agent chooses an action from a uniform distribution over the action space at each time step. This agent provides a lower bound on the total expected reward when investigating robustness to suboptimal and random action suggestions.
\item \textit{Naive Agent.} The naive agent executes in the environment using the POMDP policy. When action suggestions are received, it follows the action suggestions naively, and performs no modifications to its belief state. The rate at which the naive agent follows the suggestions was adjusted to investigate robustness. The rate of following the suggestion is depicted by $\nu$ in the results.
\item \textit{Scaled Agent.} The scaled agent incorporates the methodology outlined in \cref{sec: methods} and uses \cref{eq: scaled} to estimate $p(o_t^s \mid s_t)$. The hyperparameter $\tau$ is kept constant for each simulation and the value used is shown with the presented results. The value for $\tau$ was not adjusted to fine-tune performance. It was broadly changed to show overall effects of the hyperparameter in different situations.
\item \textit{Noisy Agent.} The noisy agent also incorporates the ideas from \cref{sec: methods}. This agent uses \cref{eq: noisy} to estimate $p(o_t^s \mid s_t)$. The hyperparameter $\lambda$ is kept constant for each simulation and the value is shown with the respective results. Like the scaled agent, the parameter $\lambda$ was not fine-tuned for performance. Coarse adjustments were made to depict how the hyperparameter influenced the algorithm.
\end{itemize}
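The suggester described above can be sketched as a mixture of the policy evaluated at the true state and a uniformly random action (names are ours):

```python
import random

def suggest(true_state, policy, actions, random_rate, rng=random):
    # With probability random_rate, suggest a uniformly random action;
    # otherwise suggest the policy's action at the true state.
    # random_rate = 0 is all-knowing, random_rate = 1 is purely random.
    if rng.random() < random_rate:
        return rng.choice(actions)
    return policy(true_state)
```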
\subsection{Effectiveness and Efficiency Results} \label{sec: effectiveness}
The different agents were compared using an all-knowing suggester with a \SI{100}{\percent} message reception rate. Simulations were run on Tag, RockSample$(7,8,20,0)$, and RockSample$(8,4,10,-1)$. RockSample$(8,4,10,-1)$ was designed with the rocks near the corners of the environment. This layout emphasized the importance of each movement direction. The results are summarized in \cref{tab: results}. We also compared the different agents using suggesters with partial views of the state and provide those results in \cref{sec: diff qual sugg}.
As expected, incorporating action suggestions from an all-knowing suggester improves performance across all scenarios. Despite not directly following the suggested action, the Scaled and Noisy agents were able to reach a near perfect reward. A key metric to note in these results is the number of suggestions. The naive agent was able to achieve perfect scores by simply following the suggestions; however, it did not integrate the information contained in each suggestion and therefore required more suggestions to achieve similar scores. While the scores for all assisted agents decrease as the hyperparameters become less trusting of the suggester, the naive agent's score decreases at a faster rate. Again, this difference reflects the benefit of updating the agent's belief with each suggestion.
\begin{table}[tb]
\footnotesize
\centering
\ra{1.3}
\caption{Simulation results of various agents using an all-knowing suggester.}
\begin{tabular}{@{}lrrrcrrcrr@{}}
\toprule
\multicolumn{2}{l}{\multirow{2}{*}{Agent Type}} & \multicolumn{2}{c}{Tag} & \phantom{ } & \multicolumn{2}{c}{RS$(7,8,20,0)$} & \phantom{} & \multicolumn{2}{c}{RS$(8,4,10,-1)$} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10}
&& Reward & $\#$ Sugg && Reward & $\#$ Sugg && Reward & $\#$ Sugg \\
\midrule
\multicolumn{2}{l}{Normal} & $-10.7 \pm 0.3$ & -- && $21.5 \pm 0.6$ & -- && $10.1 \pm 0.1$ & -- \\
\multicolumn{2}{l}{Perfect} & $-1.7 \pm 0.2$ & -- && $28.4 \pm 0.5$ & -- && $16.7 \pm 0.1$ & -- \\
\multicolumn{2}{l}{Naive} \\
& $\nu=1.00$ & $-1.6 \pm 0.2$ & $3.7 \pm 0.1$ && $28.5 \pm 0.6$ & $15.3 \pm 0.3$ && $16.9 \pm 0.1$ & $8.4 \pm 0.1$ \\
& $\nu=0.75$ & $-3.8 \pm 0.2$ & $6.1 \pm 0.3$ && $26.0 \pm 0.2$ & $15.3 \pm 0.2$ && $14.6 \pm 0.1$ & $7.7 \pm 0.1$ \\
& $\nu=0.50$ & $-6.8 \pm 0.3$ & $15.2\pm 0.9$ && $23.8 \pm 0.3$ & $15.1 \pm 0.2$ && $12.7 \pm 0.2$ & $7.8 \pm 0.2$ \\
\multicolumn{2}{l}{Scaled} \\
& $\tau=0.99$ & $-1.8 \pm 0.2$ & $3.1 \pm 0.1$ && $27.4 \pm 0.5$ & $6.4 \pm 0.1$ && $16.4 \pm 0.1$ & $2.8 \pm 0.1$ \\
& $\tau=0.75$ & $-2.4 \pm 0.2$ & $3.3 \pm 0.1$ && $27.3 \pm 0.5$ & $6.8 \pm 0.1$ && $16.2 \pm 0.1$ & $2.8 \pm 0.1$ \\
& $\tau=0.50$ & $-3.6 \pm 0.2$ & $3.9 \pm 0.1$ && $27.0 \pm 0.4$ & $7.8 \pm 0.1$ && $16.3 \pm 0.1$ & $3.2 \pm 0.1$ \\
\multicolumn{2}{l}{Noisy} \\
& $\lambda=5.0$ & $-1.8 \pm 0.2$ & $3.2 \pm 0.1$ && $27.5 \pm 0.4$ & $7.8 \pm 0.1 $ && $16.4 \pm 0.1$ & $4.6 \pm 0.1$ \\
& $\lambda=2.0$ & $-2.0 \pm 0.2$ & $3.3 \pm 0.1$ && $27.8 \pm 0.6$ & $9.1 \pm 0.2$ && $16.3 \pm 0.1$ & $4.5 \pm 0.1$ \\
& $\lambda=1.0$ & $-2.4 \pm 0.2$ & $3.6 \pm 0.1$ && $26.8 \pm 0.6$ & $10.6 \pm 0.2$ && $16.2 \pm 0.2$ & $5.2 \pm 0.1$ \\
\bottomrule
\end{tabular}
\label{tab: results}
\end{table}
\Cref{fig: tag belief change} depicts the change in the belief state of an agent in the Tag environment after incorporating an action suggestion. \Cref{fig: before up} shows the agent's belief of where the target is located. With that belief, the best action according to the agent's policy is to move north. However, the agent receives a suggestion to move west. \Cref{fig: after up} shows the updated belief after applying the noisy rational approach with $\lambda=1$. There are multiple states west of the agent where the target might be located, and the update process incorporates the implied information of the \textit{west} suggestion by shifting the distribution toward the west side of the grid.
\begin{figure*}[tb]
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{before_update}
\end{scaletikzpicturetowidth}
\caption{\small Before received action suggestion.}%
\label{fig: before up}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{after_update}
\end{scaletikzpicturetowidth}
\caption{\small After belief update with $o^s = \text{west}$.}%
\label{fig: after up}
\end{subfigure}
\caption{\small Changes in the belief state of an agent after incorporating a recommended action. Using the original belief $b$ the agent's policy $\pi$ returns an action of \textit{north}. The recommended action was to move west. \Cref{fig: after up} depicts the updated belief $b^\prime$ after incorporating the suggested action. After the update, the policy produces an action of \textit{west}. The belief update used a noisy rational approach with $\lambda=1$.}
\label{fig: tag belief change}
\end{figure*}
Collaborators are often not capable of continuously providing suggestions to an agent. The percentage of suggestions received was varied to investigate the effectiveness of the agents in the presence of intermittent action suggestions. A suggester with perfect state knowledge was used, but the message reception rate was varied. The results for the Tag environment are shown in \cref{fig: msg rate}. The change in reward is shown in \cref{fig: msg rate rew}. \Cref{fig: msg rate step} shows how the ratio of the number of suggestions to the number of steps required to tag changes. The increased reward and lower suggestion ratio further emphasize the benefit of our approach.
\begin{figure*}[tb]
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{mr_rew_tag}
\end{scaletikzpicturetowidth}
\caption{\small Message rate versus mean reward.}%
\label{fig: msg rate rew}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{mr_sug_tag}
\end{scaletikzpicturetowidth}
\caption{\small Message rate versus suggestions per step.}%
\label{fig: msg rate step}
\end{subfigure}
\caption{\small Tag performance with varying message reception rates.}
\label{fig: msg rate}
\end{figure*}
\subsection{Robustness Results}
The previous results assumed a perfectly rational suggester, but this is not always realistic. We evaluated the robustness of the different agents by adjusting the randomness of the suggester. The chance of a random action suggestion (picked uniformly from the action space) was varied from $0$ to $1$, ranging from a perfect suggester to a completely random one. The results from these simulations are shown for both the Tag environment and RockSample$(8,4,10,-1)$ in \cref{fig: robustness}.
As expected, the naive agent that follows all suggestions performs poorly as the randomness of the suggester increases. Decreasing $\nu$ increases the robustness, but sacrifices performance. The scaled and noisy approaches perform well even with high trust parameter settings ($\tau$ and $\lambda$). These results demonstrate the value of not naively following suggestions and the benefit of balancing the agent's initial belief state with the information received from the action suggestion.
\begin{figure*}[tb]
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{random_tag}
\end{scaletikzpicturetowidth}
\caption{\small Tag}%
\label{fig: placeholder3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{random_rs}
\end{scaletikzpicturetowidth}
\caption{\small RockSample$(8,4,10,-1)$}%
\label{fig: placebolder4}
\end{subfigure}
\caption{\small Robustness to suboptimal action suggestions.}
\label{fig: robustness}
\end{figure*}
\section{Conclusion} \label{sec: conclusion}
We developed a new method to increase collaboration between heterogeneous agents through the use of action suggestions. Based on the idea that a suggested action conveys information about the suggester's belief, we used the agent's current policy to transform the suggested action into a distribution over states and used that distribution to update the agent's belief. We applied this approach in simulation and demonstrated increased efficiency, requiring both fewer suggestions and a lower suggestion rate. We further demonstrated that our approach is robust to suboptimal decisions and can still outperform an unassisted agent even when more than \SI{50}{\percent} of the suggestions are random. This methodology does not rely on knowledge of other agents. The only requirements are a shared objective and communication through the agent's set of actions. This low threshold of coordination increases collaboration opportunities with other systems, including humans.
Integrating action suggestions as observations requires a change to the belief update process of the agent. The independence of the observations allows the updates to occur simultaneously or in any order, which accommodates asynchronous and unreliable suggestions without added complexity. The approach also scales linearly with the number of suggesters because the agent must only approximate the distribution over action suggestions for each suggester.
A critical assumption that enables this process with no knowledge of the suggester is that the suggester is collaborative. The scaled and noisy rational approaches do account for suboptimal decisions, but ultimately assume the suggester is providing actions in the best interest of the agent. If a suggester were malicious, the belief would be skewed towards distributions that would explain the suggestions. The proposed approach is designed to be robust to suboptimal suggestions, but not resilient to bad actors.
There are many opportunities for future work expanding on these ideas. As previously discussed, this approach was formalized and applied to offline methods, but the same concept can be extended to online solvers. Online approaches also offer unique opportunities to use a suggested action to help balance exploration and exploitation while searching for an action. The experimental results demonstrated a trade-off between performance and robustness when changing the hyperparameters. Finding a way to learn the quality of the suggester and adjust the parameters in real time is a promising direction. The benefit of modifying the agent's belief through an action suggestion is not limited to situations in which the updated belief is more accurate. A modified belief could be beneficial if the resulting actions are optimal regardless of the accuracy of the belief. This idea opens up possibilities of using action suggestions to increase performance when an agent's policy does not match the environment.
\printbibliography
\pagebreak
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{\Cref{sec: incorporating recs} and \cref{sec: methods} provide the key contributions and \cref{sec: experiments} documents the experimental results.}
\item Did you describe the limitations of your work?
\answerYes{Key assumptions were documented when applying them and a key limitation of being robust and not resilient was revisited in \cref{sec: conclusion}.}
\item Did you discuss any potential negative societal impacts of your work?
\answerNo{This paper provides a general approach to enable collaboration. Based on the guidelines and it being more of a foundational research paper, we did not discuss societal impacts as the list would include most applications of collaboration using a POMDP framework.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{The step to integrate the action suggestion as an independent observation was shown in \cref{sec: incorporating recs}. The assumption of dependence only on the state was mentioned in that section as well.}
\item Did you include complete proofs of all theoretical results?
\answerNA{The derivation of an independent update in a Bayes' filter is well established, but the key steps were shown in \cref{sec: incorporating recs}.}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{The link to the code is provided in the footnote on page 1. All policies for the environments were not provided based on file size, but the policies can be generated with a provided script.}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{Experiment details are documented in \cref{sec: experiments}.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{See \cref{sec: experiments}.}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerNo{Our approach does not require a lot of computation. Computation time and amount was not a concern and not provided.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{Experiments were built on the POMDPs.jl framework and documented in \cref{sec: implement details}.}
\item Did you mention the license of the assets?
\answerNo{POMDPs.jl is available free to use, copy, modify, merge, publish, distribute, and/or sublicense and is distributed with an MIT "Expat" License.}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNA{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\pagebreak
\begin{appendices}
\section{Different Quality Suggester Results} \label{sec: diff qual sugg}
This section presents results on RockSample$(8,4,10,-1)$ when the suggester is not always all-knowing. In our approach, we formulated the belief update based on assuming the suggester observed the environment. These results demonstrate that our approach extends beyond an all-knowing suggester and can incorporate information from suggestions developed from different beliefs of the state.
\Cref{tab: results var sugg reward} contains the mean rewards and \cref{tab: results var sugg sugg} contains the mean number of suggestions considered by the agent. The details of the agents are provided in \cref{sec: implement details}. In these simulations, instead of having an all-knowing suggester, the suggester maintained a belief over the state of the rocks and provided suggestions based on that belief. The suggester's initial belief varied and was determined by two parameters. We represent a suggester with different parameters as $(\text{G} \mid \text{B})$, where G is the initial probability that a truly good rock is believed to be good and B is the initial probability that a truly bad rock is believed to be good. For example, a suggester with parameters $(1.0 \mid 0.0)$ has perfect knowledge of the environment, while $(0.5 \mid 0.5)$ corresponds to an initial belief that is uniform over rock configurations. Examples of initial beliefs for different parameters are provided in \cref{tab: init belief table}, based on a true state of $[1, 1, 0, 0]$ where one represents a \textit{good} rock and zero represents a \textit{bad} rock.
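The initial belief induced by a parameter pair $(\text{G} \mid \text{B})$ is a product over independent per-rock priors; a sketch of the construction (function and variable names are ours):

```python
from itertools import product

def initial_belief(true_rocks, g, b):
    # true_rocks: tuple of 1 (good) / 0 (bad) for each rock
    # g: prior probability a truly good rock is believed good
    # b: prior probability a truly bad rock is believed good
    belief = {}
    for config in product([0, 1], repeat=len(true_rocks)):
        p = 1.0
        for truth, guess in zip(true_rocks, config):
            p_good = g if truth == 1 else b
            p *= p_good if guess == 1 else 1.0 - p_good
        belief[config] = p
    return belief
```

With $(1.0 \mid 0.0)$ all the mass lands on the true configuration, and with $(0.75 \mid 0.25)$ the true state $[1,1,0,0]$ receives $0.75^4 \approx 0.3164$, the largest entry in the distribution.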
\begin{table}[h!]
\scriptsize
\centering
\ra{1.3}
\caption{Example initial beliefs for different suggesters. The true state for this example is $[1, 1, 0, 0]$. Beliefs for $(0.75 \mid 0.25)$ are rounded to ease display.}
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{State} & \multicolumn{4}{c}{Suggester Initial Belief Parameters} \\
\cmidrule{2-5}
& $(1.0 \mid 0.0)$ & $(1.0 \mid 0.5)$ & $(0.75 \mid 0.25)$ & $(0.5 \mid 0.5)$ \\
\midrule
$[0, 0, 0, 0]$ & $0.00$ & $0.00$ & $0.0352$ & $0.0625$ \\
$[0, 0, 0, 1]$ & $0.00$ & $0.00$ & $0.0117$ & $0.0625$ \\
$[0, 0, 1, 0]$ & $0.00$ & $0.00$ & $0.0117$ & $0.0625$ \\
$[0, 0, 1, 1]$ & $0.00$ & $0.00$ & $0.0039$ & $0.0625$ \\
$[0, 1, 0, 0]$ & $0.00$ & $0.00$ & $0.1055$ & $0.0625$ \\
$[0, 1, 0, 1]$ & $0.00$ & $0.00$ & $0.0352$ & $0.0625$ \\
$[0, 1, 1, 0]$ & $0.00$ & $0.00$ & $0.0352$ & $0.0625$ \\
$[0, 1, 1, 1]$ & $0.00$ & $0.00$ & $0.0117$ & $0.0625$ \\
$[1, 0, 0, 0]$ & $0.00$ & $0.00$ & $0.1055$ & $0.0625$ \\
$[1, 0, 0, 1]$ & $0.00$ & $0.00$ & $0.0352$ & $0.0625$ \\
$[1, 0, 1, 0]$ & $0.00$ & $0.00$ & $0.0352$ & $0.0625$ \\
$[1, 0, 1, 1]$ & $0.00$ & $0.00$ & $0.0117$ & $0.0625$ \\
$[1, 1, 0, 0]$ & $1.00$ & $0.25$ & $0.3164$ & $0.0625$ \\
$[1, 1, 0, 1]$ & $0.00$ & $0.25$ & $0.1055$ & $0.0625$ \\
$[1, 1, 1, 0]$ & $0.00$ & $0.25$ & $0.1055$ & $0.0625$ \\
$[1, 1, 1, 1]$ & $0.00$ & $0.25$ & $0.0352$ & $0.0625$ \\
\bottomrule
\end{tabular}
\label{tab: init belief table}
\end{table}
\begin{table}[htb]
\tiny
\centering
\ra{1.3}
\setlength{\tabcolsep}{3.1pt}
\caption{Mean reward for various suggester agents on RockSample$(8,4,10,-1)$.}
\begin{tabular}{@{}lcccccccccc@{}}
\toprule
\multicolumn{2}{l}{\multirow{2}{*}{Agent Type}} & \multicolumn{9}{c}{Suggester Initial Belief Parameters} \\
\cmidrule{3-11}
&& $(1.0 \mid 0.0)$ & $(1.0 \mid 0.25)$ & $(1.0 \mid 0.5)$ & $(0.75 \mid 0.0)$ & $(0.75 \mid 0.25)$ & $(0.75 \mid 0.5)$ & $(0.5 \mid 0.0)$ & $(0.5 \mid 0.25)$ & $(0.5 \mid 0.5)$ \\
\midrule
\multicolumn{2}{l}{Naive} \\
& $\nu=1.00$ & $16.8 \pm 0.1$ & $16.5 \pm 0.1$ & $16.0 \pm 0.1$ & $13.7 \pm 0.1$ & $13.7 \pm 0.1$ & $13.0 \pm 0.1$ & $13.3 \pm 0.1$ & $12.9 \pm 0.1$ & $10.1 \pm 0.1$ \\
& $\nu=0.75$ & $14.6 \pm 0.1$ & $14.5 \pm 0.1$ & $14.0 \pm 0.1$ & $13.1 \pm 0.1$ & $13.0 \pm 0.1$ & $12.4 \pm 0.1$ & $12.7 \pm 0.1$ & $12.4 \pm 0.1$ & $10.3 \pm 0.1$ \\
& $\nu=0.50$ & $12.9 \pm 0.1$ & $12.9 \pm 0.1$ & $12.5 \pm 0.1$ & $12.2 \pm 0.1$ & $12.3 \pm 0.1$ & $11.9 \pm 0.1$ & $11.9 \pm 0.1$ & $11.8 \pm 0.1$ & $10.1 \pm 0.1$ \\
\multicolumn{2}{l}{Scaled} \\
& $\tau=0.99$ & $16.5 \pm 0.1$ & $16.3 \pm 0.1$ & $15.2 \pm 0.2$ & $14.7 \pm 0.1$ & $14.6 \pm 0.1$ & $14.7 \pm 0.1$ & $14.1 \pm 0.1$ & $12.7 \pm 0.1$ & $10.1 \pm 0.1$ \\
& $\tau=0.75$ & $16.3 \pm 0.1$ & $16.2 \pm 0.1$ & $15.1 \pm 0.2$ & $14.7 \pm 0.1$ & $14.5 \pm 0.1$ & $12.8 \pm 0.1$ & $13.9 \pm 0.1$ & $12.7 \pm 0.1$ & $10.1 \pm 0.1$ \\
& $\tau=0.50$ & $16.3 \pm 0.1$ & $16.2 \pm 0.1$ & $15.3 \pm 0.2$ & $14.3 \pm 0.1$ & $14.4 \pm 0.1$ & $12.9 \pm 0.1$ & $13.7 \pm 0.1$ & $12.4 \pm 0.1$ & $10.1 \pm 0.1$ \\
\multicolumn{2}{l}{Noisy} \\
& $\lambda=5.0$ & $16.4 \pm 0.1$ & $16.2 \pm 0.1$ & $15.3 \pm 0.1$ & $13.7 \pm 0.1$ & $13.4 \pm 0.1$ & $12.4 \pm 0.1$ & $12.8 \pm 0.1$ & $12.7 \pm 0.1$ & $10.2 \pm 0.1$ \\
& $\lambda=2.0$ & $16.4 \pm 0.1$ & $15.9 \pm 0.1$ & $15.1 \pm 0.1$ & $14.1 \pm 0.1$ & $13.4 \pm 0.1$ & $12.7 \pm 0.1$ & $13.6 \pm 0.1$ & $12.9 \pm 0.1$ & $10.1 \pm 0.1$ \\
& $\lambda=1.0$ & $16.4 \pm 0.1$ & $16.2 \pm 0.1$ & $15.4 \pm 0.1$ & $13.8 \pm 0.1$ & $13.7 \pm 0.1$ & $12.8 \pm 0.1$ & $13.4 \pm 0.1$ & $13.2 \pm 0.1$ & $10.1 \pm 0.1$ \\
\bottomrule
\end{tabular}
\label{tab: results var sugg reward}
\end{table}
\begin{table}[htb]
\tiny
\centering
\ra{1.3}
\setlength{\tabcolsep}{2.5pt}
\caption{Mean number of differing suggestions for various suggesters on RockSample$(8,4,10,-1)$.}
\begin{tabular}{@{}lrccccccccc@{}}
\toprule
\multicolumn{2}{l}{\multirow{2}{*}{Agent Type}} & \multicolumn{9}{c}{Suggester Initial Belief Parameters} \\
\cmidrule{3-11}
&& $(1.0 \mid 0.0)$ & $(1.0 \mid 0.25)$ & $(1.0 \mid 0.5)$ & $(0.75 \mid 0.0)$ & $(0.75 \mid 0.25)$ & $(0.75 \mid 0.5)$ & $(0.5 \mid 0.0)$ & $(0.5 \mid 0.25)$ & $(0.5 \mid 0.5)$ \\
\midrule
\multicolumn{2}{l}{Naive} \\
& $\nu=1.00$ & $8.40 \pm 0.10$& $6.96 \pm 0.08$& $6.39 \pm 0.09$& $4.00 \pm 0.08$& $2.97 \pm 0.04$& $1.84 \pm 0.03$& $3.20 \pm 0.08$& $1.39 \pm 0.03$ & $0.0 \pm 0.0$ \\
& $\nu=0.75$ & $7.67 \pm 0.10$& $6.36 \pm 0.09$& $5.86 \pm 0.10$& $3.99 \pm 0.08$& $2.88 \pm 0.04$& $1.93 \pm 0.04$& $3.17 \pm 0.07$& $1.59 \pm 0.04$ & $0.0 \pm 0.0$ \\
& $\nu=0.50$ & $7.58 \pm 0.15$& $6.25 \pm 0.12$& $5.54 \pm 0.12$& $4.38 \pm 0.10$& $2.94 \pm 0.05$& $2.06 \pm 0.04$& $3.36 \pm 0.09$& $1.68 \pm 0.05$ & $0.0 \pm 0.0$ \\
\multicolumn{2}{l}{Scaled} \\
& $\tau=0.99$ & $2.77 \pm 0.02$& $2.69 \pm 0.02$& $2.54 \pm 0.03$& $3.55 \pm 0.04$& $3.17 \pm 0.04$& $2.97 \pm 0.04$& $2.70 \pm 0.04$& $2.20 \pm 0.03$ & $0.0 \pm 0.0$ \\
& $\tau=0.75$ & $2.78 \pm 0.02$& $2.72 \pm 0.02$& $2.57 \pm 0.03$& $3.31 \pm 0.04$& $3.02 \pm 0.04$& $2.72 \pm 0.04$& $2.65 \pm 0.04$& $2.18 \pm 0.03$ & $0.0 \pm 0.0$ \\
& $\tau=0.50$ & $3.14 \pm 0.02$& $2.95 \pm 0.03$& $2.84 \pm 0.03$& $2.97 \pm 0.04$& $2.82 \pm 0.04$& $2.50 \pm 0.05$& $2.48 \pm 0.04$& $1.97 \pm 0.03$ & $0.0 \pm 0.0$ \\
\multicolumn{2}{l}{Noisy} \\
& $\lambda=5.0$ & $4.50 \pm 0.05$& $4.12 \pm 0.05$& $4.05 \pm 0.06$& $3.39 \pm 0.04$& $3.27 \pm 0.05$& $2.94 \pm 0.05$& $2.79 \pm 0.04$& $2.16 \pm 0.05$ & $0.0 \pm 0.0$ \\
& $\lambda=2.0$ & $4.49 \pm 0.04$& $4.23 \pm 0.03$& $3.85 \pm 0.04$& $3.05 \pm 0.04$& $2.52 \pm 0.03$& $2.20 \pm 0.04$& $3.04 \pm 0.04$& $3.04 \pm 0.05$ & $0.0 \pm 0.0$ \\
& $\lambda=1.0$ & $5.22 \pm 0.04$& $4.41 \pm 0.04$& $3.98 \pm 0.05$& $3.21 \pm 0.06$& $2.31 \pm 0.04$& $1.59 \pm 0.03$& $2.77 \pm 0.05$& $2.10 \pm 0.05$ & $0.0 \pm 0.0$ \\
\bottomrule
\end{tabular}
\label{tab: results var sugg sugg}
\end{table}
\section{Environment Differences}
\subsection{Tag Differences} \label{sec: tag diff}
Our implementation of the Tag environment differs slightly from the original implementation. The difference is in the transition function and results in our scores differing from those presented in other papers \cite{Pineau2003PointbasedVI, Kurniawati2008SARSOPEP}. The original problem described the opponent's transition as moving away from the agent \SI{80}{\percent} of the time and staying in the same cell otherwise \cite{Pineau2003PointbasedVI}. The original implementation considered all actions away from the agent as valid and did not check whether an action would result in hitting a wall. Additionally, the actions were considered in pairs (i.e. east/west and north/south) with $\SI{40}{\percent}$ allocated to each pair. This implementation choice often results in the opponent staying in the same cell more than \SI{20}{\percent} of the time despite having valid moves away from the agent.
Our implementation considered only actions that would result in a valid move away from the agent. The move-away probability was distributed equally among these valid actions. This change results in the opponent moving away from the agent more often than in the original implementation and yields a slightly more challenging scenario. An example of the differences in the opponent transition probabilities in a $3 \times 3$ environment is shown in \cref{fig: tx diff}.
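The two transition rules can be sketched as follows. This is our illustrative reconstruction, not the paper's code: positions are $(x, y)$ tuples on a grid, and the handling of an axis on which the opponent and agent coincide (no away direction in that pair) is our assumption:

```python
def _clip_move(pos, step, width, height):
    """Apply a move; a move into a wall leaves the opponent in place."""
    nx, ny = pos[0] + step[0], pos[1] + step[1]
    return (nx, ny) if 0 <= nx < width and 0 <= ny < height else pos

def legacy_transition(agent, opp, width, height):
    """Legacy rule: 0.2 base stay; 0.4 per axis pair on the away move in
    that pair, attempted without checking walls (a blocked move, or an
    axis with no away direction -- our assumption -- collapses to stay)."""
    probs = {}
    def add(cell, p):
        probs[cell] = probs.get(cell, 0.0) + p
    add(opp, 0.2)
    for axis in (0, 1):
        if opp[axis] != agent[axis]:
            step = [0, 0]
            step[axis] = 1 if opp[axis] > agent[axis] else -1
            add(_clip_move(opp, tuple(step), width, height), 0.4)
        else:
            add(opp, 0.4)
    return probs

def our_transition(agent, opp, width, height):
    """Our rule: 0.8 split equally over the *valid* moves that strictly
    increase Manhattan distance to the agent; 0.2 stay (1.0 if none)."""
    away = []
    dist = abs(opp[0] - agent[0]) + abs(opp[1] - agent[1])
    for step in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (opp[0] + step[0], opp[1] + step[1])
        if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
            continue
        if abs(nxt[0] - agent[0]) + abs(nxt[1] - agent[1]) > dist:
            away.append(nxt)
    probs = {opp: 0.2 if away else 1.0}
    for nxt in away:
        probs[nxt] = probs.get(nxt, 0.0) + 0.8 / len(away)
    return probs
```

For an opponent on the west wall of a $3 \times 3$ grid with the agent two cells east, the legacy rule can put all mass on staying, while our rule redistributes the move-away probability to the valid north/south escapes.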
\begin{figure*}[tb]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{3x3_orig}
\end{scaletikzpicturetowidth}
\caption{\small Legacy Implementation.}%
\label{fig: orig}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{scaletikzpicturetowidth}{\textwidth}
\input{3x3_new}
\end{scaletikzpicturetowidth}
\caption{\small Our Implementation.}%
\label{fig: our}
\end{subfigure}
\caption{\small Example demonstrating the differences in the transition probabilities for the opponent in Tag. The agent is depicted in orange and the opponent is shown in red. The numbers are the probability that the opponent moves to that grid square. \Cref{fig: orig} shows the transition probabilities for the legacy implementation and \cref{fig: our} shows the transition probabilities of our implementation.}
\label{fig: tx diff}
\end{figure*}
\subsection{RockSample Additions}
Our implementation of the RockSample environment is the same as in previous work except for an additional penalty for performing the \textit{sense} action (i.e. we added a penalty when the agent uses the sensor to gather information about a rock). We added this penalty when formulating the problem to emphasize the importance of each action selection. Our implementation is identical to other works when the penalty is set to zero, as can be verified by our RockSample$(7, 8, 20, 0)$ experiments.
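The penalized reward can be sketched as follows. This is a minimal reconstruction, not the paper's code: the $\pm 10$ sampling rewards and $+10$ exit reward are the values used in the standard RockSample formulation, the argument names are ours, and we read the last argument of the RockSample$(\cdot,\cdot,\cdot,p)$ notation above as the sense penalty $p$:

```python
def rocksample_reward(action, rock_is_good=False, at_exit=False,
                      sense_penalty=-1.0):
    """Sketch of the modified reward: the usual RockSample rewards of
    +10/-10 for sampling a good/bad rock and +10 for exiting, plus a
    configurable penalty for the sense action.  Setting
    sense_penalty = 0 recovers the standard formulation."""
    if action == "sample":
        return 10.0 if rock_is_good else -10.0
    if action == "sense":
        return sense_penalty
    if action == "move" and at_exit:
        return 10.0
    return 0.0
```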
\end{appendices}
\end{document}
Consider a generic slow-fast system on the two-dimensional torus
\begin{equation}\label{eq:slow_fast_system}
\left\{
\begin{aligned}
\dot{x}&=f(x,y,\varepsilon)\\
\dot{y}&=\varepsilon g(x,y,\varepsilon)\\
\end{aligned}
\right.
\hspace{40pt}
(x,y)\in\mathbb{T}^2\cong \mathbb{R}^2/(2\pi\mathbb{Z}^2),\quad \varepsilon\in(\mathbb R_+,0)
\end{equation}
Assume that $f$ and $g$ are smooth enough and $g>0$. The dynamics of this system is guided by the \emph{slow curve}:
$$M=\{(x,y)\mid f(x,y,0)=0\}.$$
It consists of equilibrium points of the fast motion (i.e. the motion determined by system~\eqref{eq:slow_fast_system} for $\varepsilon=0$). In particular, one can consider two parts of the slow curve: a stable one (consisting of attracting hyperbolic equilibrium points) and an unstable one (consisting of repelling hyperbolic equilibrium points). On the plane $\mathbb R^2$, there is a rather simple description of the generic trajectory of~\eqref{eq:slow_fast_system}: it consists of alternating phases of slow motion along stable parts of the slow curve and fast jumps along straight lines $y=const$ near the folds of the slow curve~\cite{MR}. On the two-torus, more complicated behaviour can be locally generic.
\begin{definition}
A solution (or trajectory) is called a \emph{canard} if it contains an arc of length bounded away from zero uniformly in $\varepsilon$ that stays close to the unstable part of the slow curve and simultaneously contains an arc (also of length bounded away from zero uniformly in $\varepsilon$) that stays close to the stable part of the slow curve.
\end{definition}
This definition is a bit informal; a more rigorous one will be given in \autoref{section:mainResults} (see \autoref{def:canard}). Canards are not generic on the plane: one has to introduce an additional parameter to obtain an attracting canard cycle (see e.g.~\cite{KS}). However, they are generic on the two-torus, as was conjectured in~\cite{GI} and rigorously proved in~\cite{IS1}.
Let us explain this phenomenon briefly. Assume that there exists a global cross-section $\Gamma=\{y=const\}$ transversal to the field. Then one can define the Poincar\'e map $P_\varepsilon\colon\Gamma\to\Gamma$. It is a diffeomorphism of the circle.
The rotation number $\rho(\varepsilon)$ of the map $P_\varepsilon$
continuously depends on $\varepsilon$.
For a generic system \eqref{eq:slow_fast_system}, the function $\rho(\varepsilon)$ is a Cantor function (also known as a devil's staircase) whose horizontal steps occur at rational values, which (in the general case) correspond to the existence of hyperbolic periodic points of the map $P_\varepsilon$. These in turn correspond to limit cycles of the original vector field. More precisely, if the Poincar\'e map has a rotation number with denominator $n$, then the initial vector field has a limit cycle which makes $n$ full passes along the slow direction of the torus $y$. In particular, fixed points of the Poincar\'e map correspond to limit cycles which make only one round along the slow direction of the torus.
While hyperbolic limit cycles are present, the rotation number is preserved under small perturbations. So when the rotation number increases, the limit cycles have to bifurcate through a saddle-node (parabolic) bifurcation. Near the critical value of the parameter, the derivatives of the Poincar\'e map at both colliding cycles have to be close to 1. This is possible only if the cycles spend comparable time near the stable and the unstable parts of the slow curve, and thus they are canards.
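The rotation number of a circle map can be estimated numerically from a lift $F\colon\mathbb R\to\mathbb R$ as $(F^n(x)-x)/n$. As a toy illustration of the locking on rational plateaus (using the standard Arnold circle-map family, not the slow-fast system itself):

```python
import math

def rotation_number(lift, n_iter=20000):
    """Estimate the rotation number of a circle map from a lift
    F: R -> R as (F^n(x) - x) / n."""
    x = 0.0
    for _ in range(n_iter):
        x = lift(x)
    return x / n_iter

def arnold_lift(omega, k):
    """Lift of the Arnold circle map x -> x + omega + (k/2pi) sin(2pi x)."""
    return lambda x: x + omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * x)

# With zero coupling the rotation number equals omega; with coupling it
# locks onto rational values on whole intervals of omega (devil's staircase).
rho_free = rotation_number(arnold_lift(0.3, 0.0))     # approximately 0.3
rho_locked = rotation_number(arnold_lift(0.01, 0.9))  # inside the rho = 0 plateau
```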
The next natural question is to provide an estimate for the number of canard cycles that can be born in a generic slow-fast system on the two-torus. The answer to this question for the case of integer rotation number and a rather wide class of systems was given in~\cite{IS2}.
\begin{theorem}\label{thm:old}
For generic slow-fast system on the two-torus with contractible nondegenerate connected slow curve the number of limit cycles that make one pass along the axis of slow motion is bounded by the number of fold points of the slow curve. This estimate is sharp in some open set in the space of slow-fast systems on the two-torus.
\end{theorem}
In the present paper we consider the case of non-integer rotation number that is not covered by \autoref{thm:old}. We also conjecture that our arguments can be applied to systems with unconnected slow curves (see \autoref{sec:conjectures}). The latter case is of special interest because slow-fast systems with unconnected slow curve appear naturally in physical applications, e.g. in the modelling of circuits with Josephson junction~\cite{KRS}.
Our main result states that, in contrast to~\autoref{thm:old}, for non-integer rotation number there are \emph{no geometric constraints} on the number of (canard) limit cycles.
Particularly, for any desired odd number of limit cycles $l$ we construct an open set in the space of slow-fast systems on the two-torus with a \emph{convex} slow curve (i.e. having only two fold points) with exactly $l$ canard cycles that make two passes along the axis of slow motion. (The corresponding Poincar\'e map has half-integer rotation number.) See \autoref{thm:main}.
\paragraph{Acknowledgements.} The authors are grateful to Yu. S. Ilyashenko, John Guckenheimer, Victor Kleptsyn and Alexey Klimenko for fruitful discussions and valuable comments.
\section{Main results}\label{section:mainResults}
In this section we state our main results. We are interested only in the phase curves of system~\eqref{eq:slow_fast_system}, so one can divide it by $g$ and consider, without loss of generality, the case $g\equiv 1$.
\begin{definition}
Slow curve $M$ is called \emph{simple} if it is smooth and connected, its lift to the covering coordinate plane is contained
in the interior of the fundamental square $\{|x|<\pi,\ |y|<\pi\}$ and is
convex.
\end{definition}
We will only consider simple slow curves. This, in particular, implies that there are two jump points
(forward and backward jumps), which are the far right and the far left
points of $M$ (see \autoref{fig:general-view}; here and below we will assume that fast coordinate $x$ is vertical and slow coordinate $y$ is horizontal). We denote them $G^-$ and
$G^+$ respectively.
We assume without loss of generality that $G^\pm=(0,\mp 1)$. (Here and below every equation with $\pm$'s and $\mp$'s corresponds to a couple of equations: with all top and all bottom signs.) This can be achieved by
appropriate change of coordinates respecting fibration $\{y=const\}$.
\begin{definition}\label{def:nondeg}
Simple slow curve $M$ is called \emph{nondegenerate} if and only if
\begin{enumerate}
\item The following nondegenericity assumption holds in every point $(x,y)\in
M\setminus\{G^+,G^-\}$:
\begin{equation}\label{eq-nondeg-nondeg}
\padi{f(x,y,0)}{x}\ne 0.
\end{equation}
\item \label{enum-cond-last} The following nondegenericity assumptions hold
in the jump points:
\begin{equation}\label{eq-nondeg-main}
\left.\padi{^2 f(x,y,0)}{x^2}\right|_{G^\pm} \ne 0,\quad
\left.\padi{f(x,y,0)}{y}\right|_{G^\pm} \ne 0
\end{equation}
\end{enumerate}
\end{definition}
\begin{definition}\label{def:canard}
Fix some small $\delta>0$ (to be chosen later). Denote by $I_\delta$ the segment $I_\delta=[-1+\delta, 1-\delta]=:[\widetilde \alpha^-,\widetilde \alpha^+]$. Denote also the segments
$$\Sigma^-=\{(0,y)\mid y\in I_\delta\},$$
and
$$\Sigma^+=\{(\pi,y)\mid y\in I_\delta\}.$$
Every trajectory that cross either of these two segments will be called \emph{canard}.
\end{definition}
This definition of a canard differs from those used in~\cite{GI,IS1,IS2}. Our definition is stricter than the former and more classical: a trajectory that spends almost no time near the stable part of the slow curve is not a canard according to our definition.
Now we are ready to state the main result.
\begin{ttheorem}\label{thm:main}
For every desired odd number of limit cycles $l\in 2\mathbb N+1$ there exists an open set in the space of slow-fast systems on the two-torus with the following properties.
\begin{enumerate}
\item Slow curve $M$ is simple and nondegenerate.
\item For every system from this set there exists a sequence of intervals $\{R_n\}_{n=0}^\infty\subset \{\varepsilon>0\}$, accumulating at zero, such that for every $\varepsilon\in R_n$ there exist exactly $l$ canard limit cycles.
\end{enumerate}
\end{ttheorem}
\begin{remark}
One can construct the desired example for any prescribed simple
nondegenerate slow curve. Moreover, the requirement of convexity can be
easily replaced with the less restrictive requirement: $M$ has only two fold points. This can be achieved by smooth coordinate change preserving fibration $\{y=const\}$.
\end{remark}
\begin{remark}
No upper estimate on the number of non-canard limit cycles is given. At least
one non-canard cycle exists in our settings.
\end{remark}
The paper is organized as follows. In \autoref{sec:lc} we discuss the settings
and give a heuristic proof of the main result. In \autoref{section:proof} we
construct a family with the desired properties relying on two technical lemmas
about the asymptotics of some Poincar\'e map. In \autoref{sec:techn} we prove
the technical lemmas. In \autoref{sec:conjectures} we state a conjecture on
slow-fast systems on the two-torus with unconnected slow curves.
\section{The vertical Poincar\'e map: heuristic proof}\label{sec:lc}
In this section we provide the proof of the main result on the heuristic level: only main ideas are highlighted and all the technical details are postponed to the rest of the paper. We investigate the ``vertical'' Poincar\'e map $Q_\varepsilon$ from the segment
$\Sigma^-$ to itself (see \autoref{def:canard}; the complete settings are given in
\autoref{section:notation}). When certain requirements are satisfied, this map can be
approximated by a simple function. We provide the necessary preliminary results in \autoref{section:preliminary} and describe the approximation in \autoref{ssec:heur}. Then we give the outline of the proof of \autoref{thm:main} (see \autoref{section:outline}).
\subsection{Settings and notations}\label{section:notation}
Let $M$ be a slow curve. It consists of stable ($M^-$) and unstable ($M^+$) parts and two jump points: the forward jump point~$G^-$ and the backward jump point~$G^+$, see~\autoref{fig:general-view}:
$$M=M^+\sqcup \{G^+\}\sqcup M^-\sqcup\{G^-\}.$$
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.9]{general-view.pdf}
\caption{The slow curve and jump points. Note that horizontal axis is $y$ and vertical is $x$}\label{fig:general-view}
\end{center}
\end{figure}%
Recall that $G^\pm=(0,\mp 1)$.
We will also use the notation $M^\pm(y)$ assuming that~$M^\pm$ here are functions, whose graph $x=M^\pm(y)$ defines unstable and stable parts of the slow curve.
Call $\Pi=S^1\times I_\delta$ a \emph{basic strip}, where $I_{\delta}$ is the same as in \autoref{def:canard}.
Fix a vertical segment $J^+$ (resp., $J^-$) which intersects $M^+$ ($M^-$) close enough to the jump point $G^-$ ($G^+$), and does not intersect $M^-$ ($M^+$). Let
$$y(J^\pm)=:\alpha^\pm=\pm 1\mp\delta^\pm,$$
where $\delta^\pm$ are small and $\delta^\pm<\delta$ (here $\delta$ is the same as in~\autoref{def:canard}). Note that the definition of $J^\pm$ differs from the one in~\cite{GI, IS1}: instead of placing $J^+$ near $G^+$ we place it near $G^-$, and do the opposite with $J^-$.
We now recall the notation for oriented arcs on a circle and Poincar\'e maps from~\cite{IS1}. Consider arbitrary points~$a$ and~$b$ on the oriented circle~$S^1$. They split
the circle into two arcs. Denote the arc from point $a$ to point $b$ (in the
sense of the orientation of the circle) by~$\rarc{a}{b}$. The orientation of this
arc is induced by the orientation of the circle. Also denote the same arc with
the reversed orientation by~$\larc{a}{b}$ (see~\autoref{fig:arcs}).
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.8]{arcs.pdf}
\caption{Orientation of the arcs}\label{fig:arcs}
\end{center}
\end{figure}
Denote also the Poincar\'e map along the phase curves of the main system~\eqref{eq:slow_fast_system}
from the cross-section~$y=a$ to the cross-section~$y=b$ in the forward time by $\rPeab{a}{b}$. Also, let
$\lPeab{b}{a}=(\rPeab{a}{b})^{-1}$: this is the Poincar\'e map from the
cross-section $y=b$ to the cross-section $y=a$ in the backward time. This fact is
stressed by the notation: the direction of the angle bracket shows the time
direction.
\subsection{Preliminary results}\label{section:preliminary}
Denote
$$D_\eps^+:=\lPeab{\alpha^+}{-\pi}(J^+),\quad D_\eps^-=\rPeab{\alpha^-}{\pi}(J^-)$$
It is proved in~\cite{GI,IS1} (see the Shape Lemmas there) that $|D_\eps^\pm|=O(e^{-C/\varepsilon})$. Note that as $\varepsilon$ decreases to 0, $D_\eps^+$ moves downward and $D_\eps^-$ moves upward, making infinitely many rotations (see the Monotonicity Lemmas in~\cite{GI,IS1}), and they meet each other infinitely many times. The values of $\varepsilon$ for which $D_\eps^+$ and $D_\eps^-$ have nonempty intersection form intervals $R_n$:
$$\{R_n\}_{n=0}^\infty=\{\varepsilon>0\colon D_\eps^+\cap D_\eps^-\ne \varnothing\}.$$
As was shown in~\cite{GI,IS1}, the intervals $R_n$ have exponentially small lengths and accumulate at zero. If one picks any sequence $\varepsilon_n\in R_n$, $n=1,2,\ldots$, then
$$\varepsilon_n=O\left(\frac{1}{n}\right).$$
Let $\varepsilon\in R_n$ and pick some point $q\in D_\eps^+\cap D_\eps^-$. Consider the trajectory through~$q$. In the forward time, this trajectory makes several (about $O(1/\varepsilon)$) rotations, then performs a backward jump, follows the unstable part of the slow curve $M^+$ and finally intersects $J^+$. In the backward time, this trajectory (again after several rotations) passes near the stable part of the slow curve $M^-$ and finally intersects~$J^-$. We will call this trajectory a \emph{grand canard}, despite the fact that it is not a canard according to~\autoref{def:canard}.
Let $U$ be a segment of the stable or unstable part of the slow curve $M$. Introduce the notation:
\begin{equation}\label{eq:Phi}
\Phi (U)=\int_U f'_{x} dy.
\end{equation}
Denote also for brevity
$$\Phi^\pm\rarc{y_1}{y_2}:=\Phi(U),$$
where $U$ is an arc of $M^\pm$ projected on $\rarc{y_1}{y_2}$.
Function $\Phi$ describes the expansion (if positive) or contraction (if
negative) accumulated while passing near the corresponding arc of the slow curve. Formally, the following theorem holds:
\begin{theorem}[See \cite{GI}]\label{thm:P'est}
Let $U=[A,B]\subset M^\pm$ and let $X=[x_1,x_2]\times \{y(A)\}$ be a segment that contains $A$ and does not cross $M$ at points other than $A$. Then
$$\left.\log \left(\rPeab{y(A)}{y(B)}(x)\right)'_x\right|_{X}=\frac{\Phi(U)+O(\varepsilon)}{\varepsilon}.$$
\end{theorem}
Moreover, a similar (but slightly weaker) estimate holds for trajectories extended through the jump point, even after they make $O(1/\varepsilon)$ rotations along the $x$-axis after the jump. The exact statement follows.
\begin{theorem}[See \cite{IS1,IS2}]\label{thm:P'est-full}
Let $U=[A,G^-]\subset M^-\cup\{G^-\}$, let $X$ be as in the previous theorem, and let $y_1$ be a point outside of the projection of $M$ to the $y$-axis such that there are no other points of that projection on the arc $\rarc{1}{y_1}$. Then the following holds:
$$\left.\log \left(\rPeab{y(A)}{y_1}(x)\right)'_x\right|_{X}=\frac{\Phi(U)+O(\varepsilon^\nu)}{\varepsilon},$$
where $\nu\in (0,1/4]$.
\end{theorem}
Reverting the time, one can obtain a similar result for $M^+$ and $G^+$.
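As a toy illustration of~\eqref{eq:Phi} (our model example, not part of the construction), take locally the fast field $f(x,y,0)=x^2+y^2-1$, so that the slow curve is the unit circle, $f'_x=2x$ on it, and the branches are $M^\pm(y)=\pm\sqrt{1-y^2}$. Then $\Phi$ can be evaluated numerically:

```python
import math

def phi(branch_sign, y1, y2, n=20000):
    """Midpoint-rule evaluation of Phi(U) = integral of f'_x dy along the
    arc x = branch_sign * sqrt(1 - y^2) of the model slow curve, where
    f(x, y, 0) = x^2 + y^2 - 1 gives f'_x = 2x on the curve."""
    h = (y2 - y1) / n
    total = 0.0
    for i in range(n):
        y = y1 + (i + 0.5) * h
        x = branch_sign * math.sqrt(max(0.0, 1.0 - y * y))
        total += 2.0 * x * h
    return total

phi_unstable = phi(+1, -1.0, 1.0)  # expansion over all of M^+: approx  pi
phi_stable = phi(-1, -1.0, 1.0)    # contraction over all of M^-: approx -pi
```

In this symmetric model the full unstable arc exactly compensates the full stable arc.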
\subsection{Approximation of the Poincar\'e map}\label{ssec:heur}
\begin{definition}\label{def:def-of-beta}
Choose arbitrary $y\in I_\delta$ and assume that $|\Phi^-\rarc{y}{1}|<\Phi^+\rarc{-1}{1}$. Then, due to the nondegenericity assumptions \eqref{eq-nondeg-nondeg}, there exists a unique \emph{singular release point} $\beta=\beta(y)\in\rarc{-1}{1}$ such that
\begin{equation}\label{eq:def-of-beta}
\Phi^-\rarc{y}{1}+\Phi^+\rarc{-1}{\beta(y)}=0.
\end{equation}
\end{definition}
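For the same toy model as before (the slow curve is the unit circle, so the expansion density along $M^+$ is $2\sqrt{1-y^2}$ and the contraction density along $M^-$ is its negative; this model is our assumption, not part of the construction), the defining relation~\eqref{eq:def-of-beta} can be solved for $\beta(y)$ by bisection, and by symmetry $\beta(y)=-y$:

```python
import math

def expansion_integral(y1, y2, n=5000):
    """Midpoint-rule value of the integral of 2*sqrt(1 - t^2) over [y1, y2]:
    the expansion accumulated along the unstable branch M^+ of the model
    slow curve (the stable branch M^- contributes the same with a minus)."""
    h = (y2 - y1) / n
    return sum(2.0 * math.sqrt(max(0.0, 1.0 - (y1 + (i + 0.5) * h) ** 2)) * h
               for i in range(n))

def beta(y, tol=1e-8):
    """Solve Phi^-<y,1> + Phi^+<-1,beta> = 0 for beta by bisection."""
    target = expansion_integral(y, 1.0)  # contraction accumulated along M^-
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expansion_integral(-1.0, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```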
\begin{lemma}\label{prop:Q-decomp}
Let $\varepsilon\in R_n$ for some $n$, so that a grand canard exists.
Then the Poincar\'e map $Q_\varepsilon\colon \Sigma^-\to \Sigma^-$ can be decomposed in
the following way:
$$\Sigma^-\stackrel{Q^-_\varepsilon}{\to}\Sigma^+\stackrel{Q^+_\varepsilon}{\to}\Sigma^-,$$
and maps $Q^+_\varepsilon$ and $Q^-_\varepsilon$ both have the following asymptotics:
\begin{equation}\label{eq:QandBeta}
Q^\pm_\varepsilon(y)=\beta(y)+O(\varepsilon^\nu)
\end{equation}
for some $\nu\in (0,1/4)$. The Poincar\'e map $Q_\varepsilon$ is defined at the point $y$ for small $\varepsilon$ if and only if $\beta(y)$ and $\beta(\beta(y))$ are
defined and contained in the interior of $I_\delta$.
\end{lemma}
\begin{proof}[Heuristic proof of \autoref{prop:Q-decomp}]
Consider the map $Q^-_\varepsilon$ (the map $Q^+_\varepsilon$ can be considered in the same way). Let~$p=(0,y_0)\in \Sigma^-$ and therefore $p$ is bounded away from $M$. Consider a trajectory $\psi$ through $p$.
In the forward time, it is attracted to $M^-$ after time $O(1)$ (it ``falls'' in the negative direction, ``downwards''), see~\autoref{fig:general-view}.
After the fall, the trajectory follows~$M^-$ exponentially close to the grand canard (being ``above'' the grand canard) until the jump point.
It follows from \autoref{thm:P'est-full} that after the jump the distance between the trajectory $\psi$ and the grand canard is exponentially small and its $\log$ is approximately $\varepsilon^{-1}\Phi^-\rarc{y_0}{1}<0$.
After the jump, the trajectory follows grand canard during the rotation phase, then performs backward jump and passes near some segment of the unstable part of the slow curve $M^+$.
It is possible that the trajectory will be released from the grand canard (and thus $M^+$) at some point and then attracted to $M^-$ again before leaving the basic strip. This release will be made in the positive direction (``upward''), because the trajectory $\psi$ is above the grand canard. In this case the trajectory intersects $\Sigma^+$ and therefore $Q^-_\varepsilon(y_0)$ is defined.
We will show that the release is possible only near $\beta(y_0)$. Indeed, the release occurs at the point where the contraction rate accumulated
during the passage near the stable part of the slow curve is compensated by the
expansion accumulated near the unstable part of the curve. The latter is
exponentially large and its $\log$ is approximately $\varepsilon^{-1}\Phi^+\rarc{-1}{\beta(y_0)}>0$.
Relation~\eqref{eq:def-of-beta} says that $\beta(y_0)$ is the point for which the contraction is compensated by the expansion. Thus the value of the Poincar\'e map $Q^-_\varepsilon(y_0)$ is approximately equal to $\beta(y_0)$. The error of all calculations is of order $O(\varepsilon^\nu)$ as in~\autoref{thm:P'est-full}.
A rigorous proof will be given in \autoref{ssec:approx}.
\end{proof}
\subsection{The outline of the proof of the Main Result}\label{section:outline}
\autoref{prop:Q-decomp} allows one to control the image $Q_\varepsilon(y_0)$ of some
point $y_0$ by controlling the values of the
integrals $\Phi^+$ and $\Phi^-$ over the appropriate arcs of the slow curve: making them larger or smaller (relative to each other), one can shift the image
$Q_\varepsilon(y_0)$ to the left or to the right. Using this idea one can impose a finite
number of inequalities on the values of the integrals that guarantee the
existence of several trapping segments for the Poincar\'e map $Q_\varepsilon$ or its inverse with fixed points inside each
of them, see \autoref{lemma:lowerbound}. Moreover, one can show that the derivative of $Q_\varepsilon$ can also be
approximated by the derivative of an ``ideal'' Poincar\'e map $\beta\circ\beta$,
see \autoref{ssec:Q-deriv}. This allows one to make the Poincar\'e map
or its inverse \emph{contracting} on the trapping intervals and therefore
guarantee that every such interval contains exactly one fixed point, see
\autoref{lemma:everFailed}.
All the conditions imposed during the construction are open (strict
inequalities), and therefore one obtains an open set in the space of slow-fast
systems with the desired properties. This completes the heuristic proof of the
main result. In the following section we give all the details needed.
\section{Construction of the family}\label{section:proof}
In this section we construct a family \eqref{eq:slow_fast_system} with the desired number of two-pass limit cycles.
\autoref{lemma:lowerbound} constructs a function $f$ in \eqref{eq:slow_fast_system} and estimates the number of the closed trajectories from below. \autoref{lemma:everFailed} proves that all constructed closed trajectories are hyperbolic limit cycles with known stability type and there is no other canard cycles except constructed ones.
The family \eqref{eq:slow_fast_system} thus constructed satisfies both assertions of \autoref{thm:main}. During the construction we only impose a finite number of strict inequalities on the function $f$ defining the family. Therefore we obtain an open set of such families in the space of all slow-fast systems on the two-torus. Thus \autoref{thm:main} will be proved.
As the dynamics is governed mostly by the values of integrals $\Phi^{\pm}$ of the derivative $f'_x$ over the segments of the slow curve, denote for brevity
$\lambda^{\pm}(y):=\left.f'_x\right|_{M^{\pm}}(y)$.
\subsection{Lower estimate}
\begin{lemma}\label{lemma:lowerbound}
For any given $2n+1$ segments
$\omega^a_i=[a_{2i-1},a_{2i}]$,
$\omega^b_i=[b_{2i},b_{2i-1}]$, $i=1,\ldots,n$, and
$\omega^{a}_{n+1}=[a_{2n+1},b_{2n+1}]$
such that
$$
-1<a_1<a_{2}<a_{3}\ldots<a_{2n+1}<b_{2n+1}<\ldots<b_1<1
$$
there exist a smooth function $f\colon\mathbb{T}^2\to\mathbb{R}$ and $\varepsilon_{0}>0$ such that for any $\varepsilon\in(0,\varepsilon_0)$ system \eqref{eq:slow_fast_system} with this $f$ and $g\equiv 1$ satisfies the following properties:
\begin{enumerate}
\item For every $i=1,\ldots,n$ there exist at least two closed trajectories: the first one intersects
$\{\pi\}\times\omega^a_i$ and $\{0\}\times\omega^b_i$ and the second one intersects
$\{0\}\times\omega^a_i$ and $\{\pi\}\times\omega^b_i$ respectively.
\item At least one closed trajectory intersects segments $\{0,\pi\}\times\omega^{a}_{n+1}$.
\item
No closed trajectories intersect
$$
\{0,\pi\}\times
\left(
[a_0,b_0]
\setminus
\left(
\bigcup\limits_{i=0}^n
(\omega^a_i\cup\omega^b_i)
\cup \omega^a_{n+1}
\right)
\right)
$$
\item The derivative $\lambda^\pm$ is constant on the segments $\omega^a_i$, $\omega^b_i$, $i=1,\ldots,n$, and $\omega^{a}_{n+1}$, and is monotonic between them.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of \autoref{lemma:lowerbound}]
Denote $a_{-1}=-1$.
Fix arbitrary points $a_{0}$, $b_{0}$, $b_{-1}$, such that $-1<a_{0}<a_{1}$, $b_{1}<b_{0}<b_{-1}<1$.
For uniformity of notation, denote
\begin{equation}\label{eq:conventions}
b_{2n+2}=a_{2n+1},~a_{2n+2}=b_{2n+1},~a_{2n+3}=b_{2n},~\omega^b_{n+1}= \omega^a_{n+1}.
\end{equation}
So indices from $-1$ to $2n+3$ are in use.
\begin{proposition}\label{prop:reduceToBeta}
Consider inclusions for $i=1,\ldots,n+1$
\begin{equation}\label{eq:betaInclusions}
\begin{aligned}
\text{For odd $i$:}~
\beta([a_{2i-2},a_{2i+1}]) \Subset \omega^b_i,\\
\beta([b_{2i+1},b_{2i-2}]) \Subset \omega^a_i,\\
\text{For even $i$:}~
[a_{2i-2},a_{2i+1}] \Subset \beta(\omega^b_i),\\
[b_{2i+1},b_{2i-2}] \Subset \beta(\omega^a_i).\\
\end{aligned}
\end{equation}
The notation $A\Subset B$ here means that the closure of $A$ is a subset of the interior of $B$.
Inclusions \eqref{eq:betaInclusions} imply the first three assertions of \autoref{lemma:lowerbound}.
\end{proposition}
\begin{proof}[Proof of \autoref{prop:reduceToBeta}]
Due to the notation $a_{2n+2} = b_{2n+1}$, $a_{2n+3}=b_{2n}$, the segment $\omega^a_{n+1}$ can be rewritten as $[a_{2n+1},a_{2n+2}]$ or as $[b_{2n+2},b_{2n+1}]$.
Note that due to conventions \eqref{eq:conventions} the inclusions provided by \eqref{eq:betaInclusions} coincide for $i=n+1$.
Recall that $\beta$ does not depend on $\varepsilon$.
Due to \eqref{eq:QandBeta} there exists $\varepsilon_0>0$ such that for any $\varepsilon\in(0,\varepsilon_0)$ inclusions \eqref{eq:betaInclusions} hold for $Q_{\varepsilon}^{\pm}$ instead of $\beta$.
Recall that $Q_{\varepsilon} = Q_{\varepsilon}^+\circ Q_{\varepsilon}^-$.
Therefore for odd $i$ the following inclusions take place:
\begin{equation}\nonumber
Q_{\varepsilon}\left([a_{2i-2},a_{2i+1}]\right)
\Subset
\omega^a_i,~
Q_{\varepsilon}([b_{2i+1},b_{2i-2}])
\Subset
\omega^b_i,
\end{equation}
and for even $i$:
\begin{equation}\nonumber
[a_{2i-2},a_{2i+1}]
\Subset
Q_{\varepsilon}(\omega^a_i),~
[b_{2i+1},b_{2i-2}]
\Subset
Q_{\varepsilon}(\omega^b_i).
\end{equation}
For any $i$ the following inclusions hold:
\begin{equation}\nonumber
\omega^a_i\Subset[a_{2i-2},a_{2i+1}],~
\omega^b_i\Subset[b_{2i+1},b_{2i-2}].
\end{equation}
Hence there exists at least one fixed point of the map $Q_{\varepsilon}$ in each of the segments $\omega^a_i$ for $i=1,\ldots,n+1$ and $\omega^b_i$ for $i=1,\ldots,n$. Moreover, no fixed points belong to the segments $[a_{2i},a_{2i+1}]$, $[b_{2i+1},b_{2i}]$ for $i=0,\ldots,n$.
Any fixed point of the Poincar\'e map $Q_{\varepsilon}$ corresponds to a closed trajectory.
This implies the first and the third assertions of \autoref{lemma:lowerbound}.
Note that due to the notation $[a_{2n+1},a_{2n+2}]=[b_{2n+2},b_{2n+1}]$ the second assertion of \autoref{lemma:lowerbound} is also proved (see the inclusions for $i=n+1$ and the segment $\omega^a_{n+1}$).
\end{proof}
Now consider the following systems of inequalities:
\begin{equation}
\label{eq:attractingCycleStrong}
\begin{aligned}
\text{for odd $i=1,3\ldots,n+1$}&:\\
|\Phi^+\rarc{-1}{b_{2i}}|&<|\Phi^-\rarc{a_{2i+1}}{a_{2i+2}}|,\\
|\Phi^+\rarc{-1}{a_{2i-1}}|&<|\Phi^-\rarc{b_{2i-2}}{b_{2i-3}}|,\\
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:repellingCycleStrong}
\begin{aligned}
\text{for even $i=0,2,4\ldots,n+1$}&:\\
|\Phi^-\rarc{a_{2i}}{1}|&<|\Phi^+\rarc{b_{2i+2}}{b_{2i+1}}|,\\
|\Phi^-\rarc{b_{2i-1}}{1}|&<|\Phi^+\rarc{a_{2i-3}}{a_{2i-2}}|.\\
\end{aligned}
\end{equation}
We skip any inequality that contains an index outside the range $[-1,2n+2]$.
\begin{proposition}\label{prop:reduceToInequalities}
Inequalities \eqref{eq:attractingCycleStrong} and \eqref{eq:repellingCycleStrong} imply inclusions \eqref{eq:betaInclusions}.
\end{proposition}
\begin{proof}[Proof of \autoref{prop:reduceToInequalities}]
Indeed, $[a_{2i+1},a_{2i+2}]\subset[a_{2i+1},1]$ and $[b_{2i+2},b_{2i+1}]\subset[-1,b_{2i+1}]$, and therefore the first inequalities in the systems \eqref{eq:attractingCycleStrong} and \eqref{eq:repellingCycleStrong} imply:
\begin{equation}
\label{eq:1a1,2}
\text{for odd $i$:~}
|\Phi^+\rarc{-1}{b_{2i}}| <
|\Phi^-\rarc{a_{2i+1}}{a_{2i+2}}| <
|\Phi^-\rarc{a_{2i+1}}{1}|,
\end{equation}
\begin{equation}
\label{eq:2a1,2}
\text{for even $i$:~}
|\Phi^-\rarc{a_{2i}}{1}| <
|\Phi^+\rarc{b_{2i+2}}{b_{2i+1}}| <
|\Phi^+\rarc{-1}{b_{2i+1}}|.
\end{equation}
Similarly, the last inequalities in the systems \eqref{eq:attractingCycleStrong} and \eqref{eq:repellingCycleStrong} imply:
\begin{equation}
\label{eq:1b1,2}
\text{for odd $i$:}~
|\Phi^+\rarc{-1}{a_{2i-1}}| <
|\Phi^-\rarc{b_{2i-2}}{b_{2i-3}}| <
|\Phi^-\rarc{b_{2i-2}}{1}|,
\end{equation}
\begin{equation}
\label{eq:2b1,2}
\text{for even $i$:}~
|\Phi^-\rarc{b_{2i-1}}{1}| <
|\Phi^+\rarc{a_{2i-3}}{a_{2i-2}}| <
|\Phi^+\rarc{-1}{a_{2i-2}}|.
\end{equation}
Consider an odd index $i$.
Due to \eqref{eq:def-of-beta},
$|\Phi^-\rarc{y}{1}|= \Phi^+\rarc{-1}{\beta(y)}$.
Rewrite inequalities
\eqref{eq:1a1,2}, \eqref{eq:2a1,2},
\eqref{eq:1b1,2}, \eqref{eq:2b1,2}
accordingly, with $\rarc{-1}{\beta(y)}$ in place of $\rarc{y}{1}$, and
substitute the odd index $i$ into \eqref{eq:1a1,2},
the even index $i-1$ into \eqref{eq:2a1,2}, the odd index $i$ into \eqref{eq:1b1,2}, and the even index $i+1$ into~\eqref{eq:2b1,2}. We get
\begin{equation}
\label{eq:beta-b-compare-integrals}
\begin{aligned}
|\Phi^+\rarc{-1}{b_{2i}}| &<
|\Phi^+\rarc{-1}{\beta(a_{2i+1})}|, \\
%
|\Phi^+\rarc{-1}{\beta(a_{2i-2})}| &<
|\Phi^+\rarc{-1}{b_{2i-1}}|, \\
%
|\Phi^+\rarc{-1}{a_{2i-1}}| &<
|\Phi^+\rarc{-1}{\beta(b_{2i-2})}| ,\\
%
|\Phi^+\rarc{-1}{\beta(b_{2i+1})}| &<
|\Phi^+\rarc{-1}{a_{2i}}|.
\end{aligned}
\end{equation}
The function $\Phi^+\rarc{-1}{y}$ depends monotonically on the right endpoint $y$ of the given segment.
Therefore system \eqref{eq:beta-b-compare-integrals} implies the desired inclusions for odd $i$:
\begin{equation}\label{eq:b-beta-compare}
\begin{aligned}
&b_{2i}<\beta(a_{2i+1}),~
\beta(a_{2i-2}) < b_{2i-1},~
\beta([a_{2i-2},a_{2i+1}]) \Subset \omega^b_i,\\
&a_{2i-1}<\beta(b_{2i-2}),~\beta(b_{2i+1})<a_{2i},~
\beta([b_{2i+1},b_{2i-2}]) \Subset \omega^a_i.
\end{aligned}
\end{equation}
The argument for an even index $i$ is literally the same; one should substitute the odd index $i-1$ into \eqref{eq:1a1,2}, the even index $i$ into \eqref{eq:2a1,2}, the odd index $i+1$ into \eqref{eq:1b1,2}, and the even index $i$ into \eqref{eq:2b1,2}.
\end{proof}
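For instance, for an odd index $i$ the first line of \eqref{eq:beta-b-compare-integrals} is obtained by the following chain (a restatement of the argument above, given for the reader's convenience):

```latex
% First inequality of the system for odd i, then the definition of beta
% (eq:def-of-beta) with y = a_{2i+1}:
\begin{equation*}
|\Phi^+\rarc{-1}{b_{2i}}|
  < |\Phi^-\rarc{a_{2i+1}}{a_{2i+2}}|
  < |\Phi^-\rarc{a_{2i+1}}{1}|
  = |\Phi^+\rarc{-1}{\beta(a_{2i+1})}|,
\end{equation*}
% and monotonicity of \Phi^+ in the right endpoint yields
% b_{2i} < \beta(a_{2i+1}).
```

The remaining three lines are obtained in the same manner from the other inequalities.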
\begin{proposition}\label{prop:constructF}
There exists a smooth function $f\colon \mathbb{T}^2\to \mathbb{R}$ that satisfies inequalities \eqref{eq:attractingCycleStrong}, \eqref{eq:repellingCycleStrong} and the fourth assertion of \autoref{lemma:lowerbound}.
\end{proposition}
\begin{proof}[Proof of \autoref{prop:constructF}]
Below we construct the desired function $f$.
We successively define the values of its derivative on some arcs of $M^\pm$, and then extend it smoothly to the whole torus.
The elementary \autoref{prop:bumpTrick} below describes the main tool of the construction.
\begin{proposition}\label{prop:bumpTrick}
For any segment $[A,B]\subset (-1,1)$, any large constant $I>0$ and any small $\Delta>0$ there exist $\delta_{0}>0$ and a function $\lambda$ defined in a neighbourhood of the segment $[A,B]$ such that $\lambda|_{[A,B]}$ is constant and the following two properties hold for any positive $\delta<\delta_{0}$:
$$\left|\int\limits_{A}^{B}\lambda(y) dy\right| > I,$$
$$
\left|{}
\left(
\int\limits_{A-\delta/2}^{A} +
\int\limits_{B}^{B+\delta/2}
\right)
\lambda
\right|
<\Delta.
$$
\end{proposition}
\begin{proof}[Proof of \autoref{prop:bumpTrick}]
Put $\left.|\lambda|\right|_{[A,B]} = \frac{I+1}{|[A,B]|}$ and $\delta_{0}=\frac{|[A,B]|\cdot \Delta}{I+1}$. Put also
$$
\left.|\lambda|\right|_{[A-\delta,A-\delta/2]}
=
\left.|\lambda|\right|_{[B+\delta/2,B+\delta]}
=2
$$
and take $\lambda$ smooth and monotone on the remaining two segments $[A-\delta/2,A]$, $[B,B+\delta/2]$. See \autoref{bump}.
A direct calculation proves the proposition.
\begin{figure}[ht!]
\centering
\includegraphics[ width=70mm]{bumpFunction.pdf}
\caption{Bump function: $S_1+S_2<\Delta$.}
\label{bump}
\end{figure}
\end{proof}
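For completeness, the direct calculation reads as follows (assuming, as we may for large $I$, that $I+1\ge 2|[A,B]|$, so that the plateau value bounds $|\lambda|$ on the monotone pieces):

```latex
% First property: the integral over [A,B] exceeds I by construction.
% Second property: on [A-\delta/2,A] and [B,B+\delta/2] the function
% |\lambda| is bounded by its plateau value (I+1)/|[A,B]|.
\begin{align*}
\left|\int_{A}^{B}\lambda(y)\,dy\right|
  &= \frac{I+1}{|[A,B]|}\cdot|[A,B]| = I+1 > I,\\
\left|\left(\int_{A-\delta/2}^{A}+\int_{B}^{B+\delta/2}\right)\lambda\right|
  &\le 2\cdot\frac{\delta}{2}\cdot\frac{I+1}{|[A,B]|}
   < \delta_{0}\cdot\frac{I+1}{|[A,B]|} = \Delta.
\end{align*}
```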
We now proceed with the construction of $\lambda^{\pm}$.
First, put $\lambda^{\pm}$ equal to $\pm2$ in some neighborhood of the segment $[a_0,b_{-1}]$.
On the remaining segments $[-1,a_0]$, $[b_{-1},1]$ make it monotone and smooth enough (except at the jump points) such that at the jump points $\lambda^+=\lambda^-=f'_x=0$, $f_x''\ne 0$, and $f$ can be extended to the whole torus.
We satisfy inequalities \eqref{eq:attractingCycleStrong}, \eqref{eq:repellingCycleStrong} in two phases, redefining the derivative $\lambda^{\pm}$ on some subsegments of $(-1,1)$, see \autoref{fig:solvingOrder}.
During the first phase we satisfy by induction the \emph{last} inequalities in \eqref{eq:attractingCycleStrong} and \eqref{eq:repellingCycleStrong} for $i=1,\ldots,n+1$.
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{solvingOrder}
\caption{Choice of the segments in the first phase of the inductive proof
of~\autoref{prop:constructF}. For $i=1,\ldots, n+1$ we choose two segments on different parts of the slow curve (one is bold in the figure and the other is thin) and put large values of the
derivative $\lambda^\pm$ on the bold segment in such a
way that the integral $\Phi^\pm$ over this segment is greater in
absolute value than the integral over the corresponding thin segment. Every step does not affect the
previous ones. The first five steps of the inductive proof are shown.}\label{fig:solvingOrder}
\end{figure}
Base of induction: $i=1$; the desired inequality is
$\Phi^+\rarc{-1}{a_1}<|\Phi^-\rarc{b_0}{b_{-1}}|$.
(The second inequality of \eqref{eq:repellingCycleStrong} does not exist for $i=0$ due to the index constraints.)
Redefine the value of $\lambda^{-}$ in a neighborhood of the segment $[b_0,b_{-1}]$: apply \autoref{prop:bumpTrick} to that segment with $I=\Phi^+\rarc{-1}{a_1}$ and any small $\Delta$.
Step of induction.
Let the last inequalities in \eqref{eq:attractingCycleStrong} and \eqref{eq:repellingCycleStrong} be satisfied for $i=1,\ldots,k-1$.
We satisfy them for $i=k$.
Let $k$ be even.
The desired inequality is
$|\Phi^-\rarc{b_{2k-1}}{1}| < \Phi^+\rarc{a_{2k-3}}{a_{2k-2}}$.
The value of its left-hand side is determined by the initial definition $(f^{-})'_x=-2$ and by the previous steps of the induction (for odd $i=k-1,k-3,\ldots$).
Now apply \autoref{prop:bumpTrick} to the function $\lambda^+$, the segment $\omega^a_{k-1}=[a_{2k-3},a_{2k-2}]$, $I=|\Phi^-\rarc{b_{2k-1}}{1}|$ and some $\Delta$ (to be chosen later).
Thus the current inequality is satisfied.
Let us check that the inequalities satisfied at the previous steps of the induction are preserved.
Any change of the derivative $\lambda^{\pm}$ in the $\delta(\Delta)$-neighborhood of the segment $\omega^a_{k-1}$ has no influence on the inequalities for $i=1,\ldots,k-2$.
Consider $i=k-1$.
We change the derivative on the segment $[a_{2k-3}-\delta,a_{2k-3}]$, which is a subsegment of $[-1,a_{2k-3}]$.
The integral $\Phi^+\rarc{-1}{a_{2k-3}}$ enters the left-hand side of the second inequality in \eqref{eq:attractingCycleStrong} for the odd index $i=k-1$.
We change its value by no more than $\Delta$.
The inequality for $i=k-1$ is strict; therefore there exists a sufficiently small $\Delta$ such that the inequality for $i=k-1$ is also preserved.
Let $k$ be odd.
Apply \autoref{prop:bumpTrick} to the function $\lambda^{-}$, the segment $\omega^b_{k-1}=[b_{2k-2},b_{2k-3}]$, $I=\Phi^+\rarc{-1}{a_{2k-1}}$ and (as in the previous case) a sufficiently small $\Delta$.
The induction step is complete.
During the second phase we satisfy the \emph{first} inequalities in
\eqref{eq:attractingCycleStrong} and \eqref{eq:repellingCycleStrong}.
This can be done in a similar way, now successively decreasing the index $i=n,\ldots,1,0$.
For an odd index $i$ apply \autoref{prop:bumpTrick} to the function $\lambda^{-}$, the segment $\omega^a_{i+1}=[a_{2i+1},a_{2i+2}]$ and $I=\Phi^{+}\rarc{-1}{b_{2i}}$.
For an even index $i$ apply \autoref{prop:bumpTrick} to the function $\lambda^{+}$, the segment $\omega^b_{i+1}=[b_{2i+2},b_{2i+1}]$ and $I=|\Phi^{-}\rarc{a_{2i}}{1}|$.
Thus the inequalities are satisfied and the derivative $f'_x$ is defined on the whole of $M$.
By construction $\lambda^{\pm}$ is constant on the chosen segments, so the fourth assertion of \autoref{lemma:lowerbound} is also satisfied.
\end{proof}
Due to \autoref{prop:reduceToInequalities} and \autoref{prop:reduceToBeta}, the function $f$ constructed above satisfies all assertions of \autoref{lemma:lowerbound}.
\end{proof}
\subsection{Sharp estimate}
\autoref{lemma:lowerbound} provides a system \eqref{eq:slow_fast_system} with some closed trajectories.
Below we prove that every closed trajectory is a limit cycle.
We also prove that there are no canard limit cycles other than those constructed.
To this end we estimate the derivative of the Poincar\'e map $Q_{\varepsilon}$ and show that it (or its inverse) is strictly contracting on the segments where the fixed points are located.
\begin{lemma}
\label{lemma:everFailed}
For every $i=1,\ldots,n$ there exist exactly two limit cycles: the first one intersects the segments $\{0\}\times\omega^a_i$ and $\{\pi\}\times\omega^b_i$, the second one intersects $\{\pi\}\times\omega^a_i$ and $\{0\}\times\omega^b_i$. For $i=n+1$ there exists one limit cycle, which intersects $\{0\}\times\omega^a_{n+1}$ and $\{\pi\}\times\omega^a_{n+1}$.
\end{lemma}
\begin{proof}[Proof of \autoref{lemma:everFailed}]
Due to inclusions \eqref{eq:betaInclusions} and approximation \eqref{eq:QandBeta}, if for some $i=1,\ldots,n$ a closed trajectory intersects one of the segments $\{0,\pi\}\times\omega^a_{i}$, then it also intersects one of $\{\pi,0\}\times \omega^b_{i}$ respectively;
for $i=n+1$ a closed trajectory intersects both segments $\{0,\pi\}\times\omega^a_{n+1}$.
The following \autoref{lemma:first_derivative} estimates the derivative of the Poincar\'e map $Q_{\varepsilon}$ in these segments.
\begin{lemma}\label{lemma:first_derivative}
There exist constants $c^a_i$, $c^b_i$ such that the following estimates hold:
\begin{equation}\nonumber
\left.
(Q_{\varepsilon})'
\right|_{\omega^a_i}
=
c^a_i\cdot(1+o(1)),~i=1,\ldots,n+1
\end{equation}
\begin{equation}\nonumber
\left.
(Q_{\varepsilon})'
\right|_{\omega^b_i}
=
c^b_i\cdot(1+o(1)),~i=1,\ldots,n
\end{equation}
\end{lemma}
We prove \autoref{lemma:first_derivative} later: see \autoref{ssec:Q-deriv}.
Consider a fixed point $y_0$ of the map $Q_{\varepsilon}$ that corresponds to some closed trajectory.
Due to \autoref{lemma:first_derivative} the derivative $Q_{\varepsilon}'$ tends to some constant in a neighborhood of $y_0$ as $\varepsilon$ tends to zero.
Due to \autoref{prop:Q-decomp} the following approximation holds: $Q_{\varepsilon} = \beta\circ\beta + O(\varepsilon^\nu)$.
The map $\beta$ does not depend on $\varepsilon$.
If for some segment $\sigma$ from the set
$$
\bigcup\limits_{i=1}^{n}\{\omega^a_i, \omega^b_i\}\cup \{\omega^a_{n+1}\}
$$ one of the corresponding transversal segments $\{0,\pi\}\times\sigma$ intersects a closed trajectory, then by construction (see inclusions \eqref{eq:betaInclusions}) the map $\beta\circ\beta$ sends the corresponding segment \emph{strictly} into or onto itself.
So for such segments the derivative $Q_{\varepsilon}'$ does not tend to $1$ and is bounded away from $1$ for sufficiently small $\varepsilon$.
More precisely, there exists a constant $c>1$ such that
$\left.Q'_{\varepsilon}\right|_{\sigma}>c$ or $\left.Q'_{\varepsilon}\right|_{\sigma}<c^{-1}$.
Then either $Q_{\varepsilon}$ or $Q_{\varepsilon}^{-1}$ is a contracting map of a segment, and it possesses a unique hyperbolic fixed point.
Therefore the closed trajectory intersecting $\omega^a_i$ for $i=1,\ldots,n+1$ and $\omega^b_i$ for $i=1,\ldots,n$ is an attracting or repelling limit cycle depending on the parity of the index $i$. Moreover, due to \autoref{lemma:lowerbound} there are no other closed trajectories.
(The same argument applies verbatim to the segment $[a_{2n+1},b_{2n+1}]$.)
Thus the function $f$ satisfies all the assertions of \autoref{thm:main}, and the main result is proved modulo \autoref{prop:Q-decomp} and \autoref{lemma:first_derivative}.
\end{proof}
\section{Proof of approximations of the Poincar\'e map}\label{sec:techn}
\subsection{Asymptotics}\label{ssec:approx}
In this section we provide a rigorous proof of \autoref{prop:Q-decomp}. See the
statement and the heuristic proof in \autoref{ssec:heur}.
\begin{proof}
The proof goes in three steps.
\step{No too early releases}
First we show that the trajectory $\psi$ with the initial condition $\psi(y_0)=0$ cannot detach from the grand canard
at any fixed point to the left of $\beta(y_0)$.
Indeed, consider an arbitrary point
$\beta_1\in (\widetilde\alpha^-,\beta(y_0))$.
Due to the definition of the map $\beta$ (see \eqref{eq:def-of-beta}) and the monotonicity of $\Phi^+\rarc{y_1}{y_2}$ with respect to~$y_2$, the following estimate holds:
\begin{equation}\label{eq:beta_1}
\Phi^-\rarc{y_0}{1}+\Phi^+\rarc{-1}{\beta_1}<0.
\end{equation}
Consider a vertical segment $J_0=\{y_0\}\times(x_1,x_2)$ that intersects $M^-$ and $\Sigma^-$ but does not intersect $M^+$. The trajectory $\psi$ intersects $J_0$.
Let the segments of the grand canard in the basic strip be given by the functions $x^{gc}_+(y)$
(for the part near~$M^+$) and $x^{gc}_-(y)$ (for the part near $M^-$).
It follows from
the geometric singular perturbation theory~\cite{Fe} that outside of some
neighbourhood of the jump points, $$x^{gc}_\pm(y)=M^\pm(y)+O(\varepsilon).$$ Therefore,
the grand canard intersects $J_0$ at the point $x^{gc}_-(y_0)<0$. Let
$$J_0'=\rPeab{y_0}{\pi}([x^{gc}_-(y_0),0]).$$
It follows from~\autoref{thm:P'est-full} that
\begin{equation}\label{eq:J_0}
|J_0'|\le C_1e^{\frac{\Phi^-\rarc{y_0}{\pi}+o(1)}{\varepsilon}},
\end{equation}
for some constant $C_1$.
Consider now an arbitrary fixed vertical segment
$J_1=\{\beta_1\}\times(x_1',x_2')$
that intersects $M^+$ and does not intersect~$M^-$. As previously, the grand
canard intersects $J_1$ at some point $x^{gc}_+(\beta_1)$ for $\varepsilon$ small enough. We will prove that the
trajectory $\psi$ intersects~$J_1$ as well. Let
$$J_1'=\lPeab{\beta_1}{-\pi}([x^{gc}_+(\beta_1),x_2']).$$
It follows
from~\autoref{thm:P'est-full} that
\begin{equation}\label{eq:J_1}
|J_1'|\ge C_2 e^{\frac{\Phi^+\rarc{-1}{\beta_1}+o(1)}{\varepsilon}}
\end{equation}
The lower endpoints of the segments $J_0'$ and $J_1'$ coincide: it is the point of
intersection of the grand canard with the cross-section $\{y=\pm \pi\}$. It
follows from~\eqref{eq:beta_1}, \eqref{eq:J_0} and~\eqref{eq:J_1} that $|J_1'|\gg
|J_0'|$ and therefore $J_0'\subset J_1'$. Since the trajectory $\psi$ intersects $J_0'$,
it also intersects $J_1'$ and hence $J_1$.
\step{No too late releases}
We prove that $Q^-_\varepsilon(y_0)$ exists and, moreover, $Q^-_\varepsilon(y_0)<\beta_2$ for any fixed $\beta_2\in(\beta(y_0),\widetilde \alpha^+)$ and sufficiently small $\varepsilon$. In the previous step we proved that the trajectory $\psi$ intersects a vertical segment $J_1$ to the left of $\beta(y_0)$. Now consider a segment $J_3=\{\beta_2\}\times(0, \pi)$ to the right of $\beta(y_0)$. The trajectory $\psi$ does not intersect $J_3$: this can be proved by applying the previous step to the system with time reversed.
Now consider the region bounded by the circles $y=\beta_1$, $y=\beta_2$, the grand canard $x=x^{gc}_+(y)$ and
the segment $\Sigma^+$. The trajectory $\psi$ enters this region at some point of the segment $J_1$. As $y$ monotonically increases, the trajectory has to leave the region. It cannot cross $y=\beta_2$ or the grand canard. Therefore, it intersects $\Sigma^+$ at some point $(Q^-_\varepsilon(y_0),\pi)$.
\step{Asymptotics of $Q^-(y_0)$}
Assume that $Q^-(y_0)$ is defined. Let
$$R^2_\varepsilon=\rPeab{-\pi}{Q^-(y_0)}\circ\rPeab{y_0}{\pi}.$$
Then
$$\frac{R^2_\varepsilon(0)-R^2_\varepsilon(x^{gc}_-(y_0))}{0-x^{gc}_-(y_0)}=\frac{\pi-M^+(y_0)+O(\varepsilon)}{0-M^-(y_0)+O(\varepsilon)}.$$
By the Mean Value Theorem, the derivative of $R^2_\varepsilon$ at some point~$x^*$ has to be of constant order. However, \autoref{thm:P'est-full} and the chain rule imply that
\begin{equation}\label{eq:Reps2}
\log (R^2_\varepsilon)'_x=\frac{\Phi^+\rarc{y_0}{1}+\Phi^-\rarc{-1}{Q^-(y_0)}+O(\varepsilon^\nu)}{\varepsilon}.
\end{equation}
The derivative $(R^2_\varepsilon)'_x$ is bounded away from zero and infinity only if the following estimate holds:
$$\Phi^+\rarc{y_0}{1}+\Phi^-\rarc{-1}{Q^-(y_0)}=O(\varepsilon^\nu),$$
which differs from equation~\eqref{eq:def-of-beta} only by $O(\varepsilon^\nu)$. The Inverse Function Theorem finishes the proof.\end{proof}
\subsection{Estimate of the derivative}\label{ssec:Q-deriv}
In this section we prove \autoref{lemma:first_derivative}.
Denote the coordinates on $\Sigma^{\pm}$ as $y_{\pm}$.
The Poincar\'e map $Q_{\varepsilon}$ will be decomposed in the following way:
\begin{equation}\label{eq:Q-decomp2}
Q_{\varepsilon}\colon y_{-}^{(1)}\mapsto y_{+}^{(2)}\mapsto y_{-}^{(3)}.
\end{equation}
Consider a trajectory that intersects $\Sigma^{-}$ at the point $y^{(1)}_{-}$, then $\Sigma^{+}$ at the point $y^{(2)}_{+}$, and then $\Sigma^{-}$ at the point $y^{(3)}_{-}$.
Instead of the statement of \autoref{lemma:first_derivative} given above we will prove the following estimates:
\begin{equation}\nonumber
(Q_{\varepsilon}^{-})'
=
\dfrac {f'_x(x^{gc}_{-}(y_{-}^{(1)}),y_{-}^{(1)})}
{f'_x(x^{gc}_{+}(y_{+}^{(2)}),y_{+}^{(2)})}
\cdot (1+O(\varepsilon))
\end{equation}
\begin{equation}\nonumber
(Q_{\varepsilon}^{+})'
=
\dfrac {f'_x(x^{gc}_{-}(y_{+}^{(2)}),y_{+}^{(2)})}
{f'_x(x^{gc}_{+}(y_{-}^{(3)}),y_{-}^{(3)})}
\cdot (1+O(\varepsilon))
\end{equation}
Due to the decomposition \eqref{eq:Q-decomp2} and assertion $4$ of \autoref{lemma:lowerbound}, these estimates imply \autoref{lemma:first_derivative}.
Let us introduce some notation.
Fix a parameter $\varepsilon\in R_n$ such that a grand canard exists.
Fix the corresponding grand canard. Consider its part bounded by the intersections with $J^+$ and $J^-$. It can be described as a double-valued function $y\mapsto x$; denote it by $x=x^{gc}(y)$. We will also use the notation $x=x^{gc}_+(y)$ and $x=x^{gc}_-(y)$ in the neighborhood of $M^+$ and $M^-$ respectively.
Recall Theorem 3 from \cite{GI} and equation (3.5) from \cite{IS2}:
\begin{theorem}
\label{thm:linearization}
Family \eqref{eq:slow_fast_system} for small $\varepsilon>0$, in a neighborhood of the slow curve (and outside any small fixed neighborhood of the jump points), is smoothly orbitally equivalent to the family
$$
\dot{x}=f'_x(s(y,\varepsilon),y,\varepsilon)\,x,~
\dot{y}=\varepsilon.
$$
\end{theorem}
Due to \cite{IS2} the function $x=s(y,\varepsilon)$ above represents a true slow curve; it is defined in a non-unique way, but all true slow curves are exponentially close to each other and one can pick an arbitrary one.
We choose the segments of the grand canard bounded away from the jump points as this true slow curve, i.e.\ we put $s(y,\varepsilon)=x^{gc}_{\pm}(y,\varepsilon)$.
Below we omit $\varepsilon$ in $x^{gc}_{\pm}(y,\varepsilon)$ for brevity.
In \autoref{thm:linearization} take the neighborhood of the jump points sufficiently small such that $M^{\pm}(a_0)$ and $M^{\pm}(b_{-1})$ are strictly inside the linearized area.
Consider two vertical transversal segments $\hat{J}^{\pm}$ which intersect $M^+$ and $M^-$ respectively in the neighbourhood of the jump points and inside the linearized area. Denote their $y$-coordinates by $\hat{\alpha}^{+}$ and $\hat{\alpha}^{-}$, $\hat{\alpha}^{\pm} = \hat{\alpha}^{\pm}(\varepsilon)$.
\begin{proposition}
\label{prop:alpha-exists}
One can take the parameters
$\hat{\alpha}^{\pm}(\varepsilon)$
such that
$\hat{\alpha}^{+}<a_0$,
$b_{-1}<\hat{\alpha}^{-}$,
and the following equality holds:
\begin{equation}\label{eq:int-zero-gc}
\int_{\hat{\alpha}^-}^{\hat{\alpha}^+}
f'_x
(x^{gc}(y),y,\varepsilon)
dy
= 0.
\end{equation}
\end{proposition}
\begin{proof}[Proof of \autoref{prop:alpha-exists}]
The function $\Phi^{\pm}\rarc{A}{B}=\int_{A}^{B}\lambda^{\pm}(M^{\pm}(y),y)\,dy$ is continuous and monotone in $A$ and $B$, and tends to zero as $B$ tends to $A$.
Take any $y$ close to $-1$, $-1<y<a_0$.
Take any $\hat{\alpha}^{-}$ close to $1$ such that
$b_{-1}<\hat{\alpha}^{-}<1$
and
$\Phi^{+}\rarc{-1}{y} > \left|\Phi^{-}\rarc{\hat{\alpha}^{-}}{1}\right|$.
By continuity there exists a function
$
\hat{\alpha}^{+}_{0}=\hat{\alpha}^{+}_{0}(\hat{\alpha}^{-})<y
$
such that
$
\Phi^{+}\rarc{-1}{\hat{\alpha}^{+}_{0}} = \left|\Phi^{-}\rarc{\hat{\alpha}^{-}}{1}\right|
$.
Therefore, due to the equation in variations and \autoref{thm:P'est-full},
\begin{equation}\label{eq:alphaIntEst}
\int_{\hat{\alpha}^{-}}^{\hat{\alpha}^+_{0}}
f'_x(x^{gc}(y),y,\varepsilon){}
dy
=
\varepsilon\cdot\log
\left(
P_{\varepsilon}^{\rarc{\hat{\alpha}^{-}}{\hat{\alpha}^{+}_{0}}}
\right)'_x
=
O(\varepsilon^{\nu}),\quad\nu\in(0,1).
\end{equation}
Consider an interval $\hat{U}$ in the domain of $\hat{\alpha}^-$.
Take $\hat{\delta}$ sufficiently small such that for every $\hat{\alpha}^-$ from this interval the $\hat{\delta}$-neighborhood satisfies $U_{\hat{\delta}}(\hat{\alpha}^+_0)\subset(-1,a_0)$.
Take the linearized area large enough that it contains $\hat{U}$ and contains $U_{\hat{\delta}}(\hat{\alpha}^+_0)$ for every $\hat{\alpha}^-\in\hat{U}$.
The derivative $f'_x$ is bounded away from zero inside the linearized area. Therefore there exists a constant $\hat{c}>0$ such that the following holds as $\varepsilon\to 0$:
\begin{equation}
\max
\left(
\int
\limits _{\hat{\alpha}^+_0}
^{\hat{\alpha}^+_0 + \hat{\delta}}
f'_x(x^{gc}(y),y,\varepsilon)
dy,
\int
\limits _{\hat{\alpha}^+_0 - \hat{\delta}}
^{\hat{\alpha}^+_0}
f'_x(x^{gc}(y),y,\varepsilon)
dy
\right)
> \hat{c}.
\end{equation}
Therefore there exists $\hat{\varepsilon} = \hat{\varepsilon}(\hat{\delta})>0$ such that for any $\varepsilon\in(0,\hat{\varepsilon})$ the absolute value of the left-hand side of \eqref{eq:alphaIntEst} is smaller than $\hat{c}$.
So for $\varepsilon\in(0,\hat{\varepsilon})$ there exists a function $\hat{\alpha}^{+} = \hat{\alpha}^{+}(\hat{\alpha}^{-})$ such that $\hat{\alpha}^{+}\in U_{\hat{\delta}}(\hat{\alpha}^+_0)$ and the following equality holds:
\begin{equation}\label{eq:hatAlphaDef}
\int
\limits _{\hat{\alpha}^+_{0}}
^{\hat{\alpha}^+}
f'_x(x^{gc}(y),y,\varepsilon)
dy
=
-
\int
\limits _{\hat{\alpha}^{-}}
^{\hat{\alpha}^+_{0}}
f'_x(x^{gc}(y),y,\varepsilon)
dy,
\end{equation}
therefore equality \eqref{eq:int-zero-gc} holds. By construction $\hat{\alpha}^+$ satisfies all assertions of \autoref{prop:alpha-exists}.
\end{proof}
Denote by $\eta_{\pm}$ the vertical component of the linearizing charts in the neighborhood of $M^{\pm}$ respectively.
Moreover, due to the choice of $s(y,\varepsilon)$ in \autoref{thm:linearization}, the grand canard $x^{gc}(y)$ is given by the equation $\eta_{\pm}=0$ in the neighborhood of $M^{\pm}$.
Take a small constant $c$ and let $\xi_{\pm}$ be the $y$-coordinate on $\{\eta_{\pm}=c\}$.
Decompose $Q_{\varepsilon}^-$ as
$$
Q_{\varepsilon}^-\colon
\Sigma^-\to \{\eta_-=c\}\to \hat{J}^- \to \hat{J}^+
\to \{\eta_+=c\}\to \Sigma^+,
$$
$$
Q_{\varepsilon}^-\colon
y_{-}\mapsto\xi_{-}\mapsto\eta_{-}\mapsto
\eta_{+}\mapsto\xi_{+}\mapsto y_{+}.
$$
\begin{figure}[t!]
\centering
\includegraphics{coordinates.pdf}
\caption{Coordinates in \autoref{lemma:first_derivative}}
\end{figure}
\begin{proof}[Proof of \autoref{lemma:first_derivative}]
Below we prove only the first estimate from the statement of \autoref{lemma:first_derivative}; the proofs of the remaining ones are similar.
We have to estimate the derivative
\begin{equation} \label{eq:Q-'}
{Q_{\varepsilon}^{-}}'=
\dfrac{dy_{+}}{dy_{-}} =
\dfrac{dy_{+}}{d\xi_{+}}\cdot
\dfrac{d\xi_{+}}{d\eta_{+}}\cdot
\dfrac{d\eta_{+}}{d\eta_{-}}\cdot
\dfrac{d\eta_{-}}{d\xi_{-}} \cdot
\dfrac{d\xi_{-}}{dy_{-}}.
\end{equation}
The constant $c$ in the definition of the transversal $\{\eta_{\pm}=c\}$ does not depend on $\varepsilon$, and the trajectory spends a bounded time between the intersections with $\Sigma^{\pm}$ and $\{\eta_{\pm}=c\}$; therefore
\begin{equation}\label{eq:xi-y}
\xi_{\pm}(y_{\pm}) = y_{\pm} + \varepsilon K_{\pm}(\varepsilon,y_{\pm}),~
\text{$K_{\pm}$ is smooth.}
\end{equation}
Denote
$$
\Phi^{\pm}_{\varepsilon}\rarc{A}{B} = \int_{A}^{B}f'_x(x^{gc}_{\pm}(y),y)\,dy.
$$
The charts $(\eta_{\pm},\xi_{\pm})$ are linearizing; therefore, due to \autoref{thm:linearization}, the following holds:
\begin{equation}
\eta_{-}(\xi_{-})
=
c
\exp
\frac
{\Phi^-_{\varepsilon}\rarc{\xi_{-}}{\hat{\alpha}^{-}}}
{\varepsilon},~
\eta_{+}(\xi_{+})
=
c
\exp\left(
-\frac
{\Phi^+_{\varepsilon}\rarc{\hat{\alpha}^{+}}{\xi_{+}}}
{\varepsilon}
\right)
\end{equation}
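For the reader's convenience we recall where these formulas come from (a sketch, not needed for the formal argument): in the linearizing chart of \autoref{thm:linearization} the fast variable satisfies a linear equation, so they follow by direct integration.

```latex
% In the linearizing chart the fast variable satisfies
%   \dot{\eta} = f'_x(x^{gc}(y),y,\varepsilon)\,\eta, \qquad \dot{y}=\varepsilon,
% hence along a trajectory
\begin{equation*}
\eta(y_2)
  = \eta(y_1)\exp\left(\frac{1}{\varepsilon}\int_{y_1}^{y_2}
      f'_x(x^{gc}_{\pm}(y),y)\,dy\right)
  = \eta(y_1)\exp\frac{\Phi^{\pm}_{\varepsilon}\rarc{y_1}{y_2}}{\varepsilon}.
\end{equation*}
% Substituting \eta(\xi_-)=c and evaluating at y=\hat{\alpha}^-,
% resp. \eta(\xi_+)=c evaluated at y=\hat{\alpha}^+, yields the two
% formulas above.
```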
Calculate the derivatives:
\begin{equation}
\label{eq:eta_1}
\begin{aligned}
%
\frac{d\eta_{-}}{d\xi_{-}} & =
-\frac{c}{\varepsilon}
f'^{-}_x(x^{gc}_{-}(\xi_{-}),\xi_{-})
\exp{
\frac
{\Phi^-_{\varepsilon}\rarc{\xi_{-}}{\hat{\alpha}^{-}}}
{\varepsilon}
}, \\
%
\frac{d\eta_{+}}{d\xi_{+}} & =
-\frac{c}{\varepsilon}
f'^{+}_x(x^{gc}_{+}(\xi_{+}),\xi_{+})
\exp{\left(
-\frac
{\Phi^+_{\varepsilon}\rarc{\hat{\alpha}^{+}}{\xi_{+}}}
{\varepsilon}
\right)
}. \\
\end{aligned}
\end{equation}
Below we omit $\xi_{\pm}$ in the argument of $x^{gc}_{\pm}$ for brevity.
Now we estimate the factors in the decomposition \eqref{eq:Q-'} for some trajectory $x(y)$ using \eqref{eq:xi-y} and \eqref{eq:eta_1}:
\begin{multline}\label{eq:Q'decomp}
{Q_{\varepsilon}^{-}}' =
\dfrac{1+\varepsilon K_{-}}{1+\varepsilon K_{+}}\cdot
\dfrac {f'^{-}_x(x^{gc}_{-},\xi_{-})}
{f'^{+}_x(x^{gc}_{+},\xi_{+})}\cdot
\exp{
\dfrac{1}{\varepsilon}
\left(
\Phi^-_{\varepsilon}\rarc{\xi_{-}}{\hat{\alpha}^{-}} +
\Phi^+_{\varepsilon}\rarc{\hat{\alpha}^{+}}{\xi_{+}}
\right)
}\cdot
\dfrac{d\eta_{+}}{d\eta_{-}}
\end{multline}
Estimate the last factor:
\begin{equation}
\label{eq:eta_factor}
\left.
\dfrac
{d\eta_{+}}
{d\eta_{-}}
\right|_{(x(y),y)}
=
\exp{
\left(
\frac
{1}
{\varepsilon}
\int_{\hat{\alpha}^-}^{\hat{\alpha}^+}
f'_x(x(y),y,\varepsilon)
dy
\right)
}
\end{equation}
Consider the integral in the argument of the exponential.
By construction the trajectory $x(y)$ intersects $\{\eta_-=c\}$ at a point $\xi_-$ inside the segment $[a_0,b_0]$, and $\xi_-<\hat{\alpha}^-$.
Due to \autoref{thm:P'est-full}, for any $y\in\rarc{1}{-1}$
the following approximations hold:
\begin{equation}\label{eq:693}
\left(
P^{\rarc{\xi_-}{y}}_{\varepsilon}
\right)_x'
=
\exp
\frac
{\Phi^-([\xi_-,1])+O(\varepsilon^{\nu})}
{\varepsilon},
\end{equation}
$$
\left(
P^{\rarc{y}{\hat{\alpha}^+}}_{\varepsilon}
\right)_x'
=
\exp
\left(
\frac
{\Phi^+([-1,\hat{\alpha}^+])+O(\varepsilon^{\nu})}
{\varepsilon}
\right).
$$
Due to the definition of $\hat{\alpha}^{\pm}$, there exists a constant $M>0$ such that
\begin{equation}\label{eq:694}
\Phi^-([\xi_-,\hat{\alpha}^-]) +
\Phi^-([\hat{\alpha}^-,1]) +
\Phi^+([-1,\hat{\alpha}^+]) <-M.
\end{equation}
Therefore for any $y\in(\hat{\alpha}^-,\hat{\alpha}^+)$ the following estimate holds:
$$
x(y)-x^{gc}(y) <
\exp\left(-\frac{M}{\varepsilon}\right).
$$
Indeed, for $y\in(\xi_-,1)$ it follows from \eqref{eq:693}, and for $y\in(1,\hat{\alpha}^+)$ it follows from \eqref{eq:694}.
Further,
\begin{equation}\label{eq:o-eps-compare}
\int_{\hat{\alpha}^-}^{\hat{\alpha}^+}
\left(
f'_x(x(y),y,\varepsilon)-
f'_x(x^{gc}(y),y,\varepsilon)
\right)
dy
<
\int_{\hat{\alpha}^-}^{\hat{\alpha}^+}
\int_0^{\exp\left({-\frac{M}{\varepsilon}}\right)}
f''_{xx}
dx
dy
=
o(\varepsilon).
\end{equation}
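The final bound here is straightforward: denoting by $C$ an upper bound for $|f''_{xx}|$ on the torus ($C$ is our auxiliary notation), one gets

```latex
% |f''_{xx}| \le C on the compact torus, and the arc of integration
% has length at most 2\pi, hence
\begin{equation*}
\left|
\int_{\hat{\alpha}^-}^{\hat{\alpha}^+}
\int_0^{\exp\left(-\frac{M}{\varepsilon}\right)}
f''_{xx}\,dx\,dy
\right|
\le 2\pi C\, e^{-\frac{M}{\varepsilon}} = o(\varepsilon),
\end{equation*}
% since e^{-M/\varepsilon} decays faster than any power of \varepsilon.
```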
Due to \autoref{prop:alpha-exists}, the left-hand side of the inequality \eqref{eq:o-eps-compare} equals
$
\int_{\hat{\alpha}^-}^{\hat{\alpha}^+}
f'_x(x(y),y,\varepsilon)\,
dy
$.
Therefore due to \eqref{eq:eta_factor} the
following estimate holds:
$$
\left.
\dfrac
{d\eta_{+}}
{d\eta_{-}}
\right|_{(x(y),y)}
=
1+o(1).
$$
Now we estimate the exponential factor in \eqref{eq:Q'decomp}:
\begin{equation}\label{eq:thirdFactor}
\exp{
\dfrac{1}{\varepsilon}
\left(
\Phi^-_{\varepsilon}\rarc{\xi_{-}}{\hat{\alpha}^{-}} +
\Phi^+_{\varepsilon}\rarc{\hat{\alpha}^{+}}{\xi_{+}}
\right)
}
\end{equation} We use the following obvious statement:
\begin{proposition}
\label{prop:x*}
Let a trajectory $x(y)$ pass through the points $(\xi_{-},c)$ and $(\xi_{+},c)$. Then there exists a trajectory $x^*(y)$ such that
\begin{equation}\label{eq:int-zero-*}
\int_{\xi_{-}}^{\xi_{+}}
f'_x
(x^*(y),y,\varepsilon)
dy
= 0.
\end{equation}
\end{proposition}
\begin{proof}[Proof of \autoref{prop:x*}]
The grand canard passes through $(\xi_{-},0)$ and $(\xi_{+},0)$.
Therefore, by the Mean Value Theorem applied to the Poincar\'e map from the segment $[(\xi_{-},0),(\xi_{-},c)]$ to the segment $[(\xi_{+},0),(\xi_{+},c)]$, there exists a trajectory $x^*(y)$ at which the derivative of the Poincar\'e map equals $1$. The equation in variations implies \eqref{eq:int-zero-*}.
\end{proof}
Let us decompose \eqref{eq:int-zero-*} into the sum of three integrals:
$$
\int_{\xi_{-}}^{\xi_{+}} f'_x (x^*(y),y,\varepsilon) dy =
\int_{\xi_{-}}^{\hat{\alpha}^-} +
\int_{\hat{\alpha}^-}^{\hat{\alpha}^+}+
\int_{\hat{\alpha}^+}^{\xi_{+}} =0.
$$
Exactly as in \eqref{eq:o-eps-compare}, the second integral is of order $o(\varepsilon)$.
The charts in the neighbourhood of the slow curve are linearizing, so one can replace $x^*(y)$ by
$x^{gc}(y)$. Therefore
$$
\int\limits_{\xi_{-}}^{\hat{\alpha}^{-}}f'^-_x(x^{gc}_{-},y)dy +
o(\varepsilon)+
\int\limits_{\hat{\alpha}^{+}}^{\xi_{+}}f'^+_x(x^{gc}_{+},y)dy = 0
$$
therefore the sum of the integrals in \eqref{eq:thirdFactor} is of order $o(\varepsilon)$.
Therefore the third factor is of order $\exp(\frac{o(\varepsilon)}{\varepsilon})=1+o(1)$.
The derivative $f'_{x}$ is smooth; therefore, due to \eqref{eq:xi-y}, the following holds:
$f'_{x}(x^{gc}_{\pm}(y_{\pm}), y_{\pm})=
f'_{x}(x^{gc}_{\pm}(\xi_{\pm}),\xi_{\pm}) + O(\varepsilon)$.
Finally
$$
\dfrac{dy_{+}}{dy_{-}}
=
\dfrac {f'_x(x^{gc}_{-},y_{-})}
{f'_x(x^{gc}_{+},y_{+})}
\cdot (1+o(1))
\cdot (1+O(\varepsilon)).
$$
\end{proof}
\section{Conjectures on unconnected slow curves}\label{sec:conjectures}
Previously, only the case of a connected slow curve was studied. We believe that the
methods of the present paper can also be applied to the case of an unconnected
slow curve. In this section we propose a related conjecture and outline its
proof.
Consider system \eqref{eq:slow_fast_system} on the two-torus with the slow curve $M$ that satisfies the following properties:
\begin{enumerate}
\item The curve $M$ has two connected components $M_1$ and $M_2$. The projection of $M$ onto the $y$-axis along the $x$-axis consists of two disjoint arcs.
\item Each of the components $M_1$, $M_2$ has two jump points $G_1^{\pm}$ and $G_2^{\pm}$.
\item Components $M_1$, $M_2$ are nondegenerate in the sense
of~\autoref{def:nondeg}.
\end{enumerate}
We will call such a slow curve \emph{simple unconnected}.
One can state the following conjecture:
\begin{conjecture}\label{conj1}
For every number $l\in\mathbb{N}$ there exists a locally topologically generic set in the space of slow-fast systems on the two-torus with the following properties:
\begin{enumerate}
\item The slow curve is nondegenerate and simple unconnected.
\item For every system from this set there exists a sequence of intervals
$\{R_n\}_{n=0}^{\infty}\subset\{\varepsilon>0\}$,
accumulating at zero,
such that for every $\varepsilon\in R_n$
there exist at least $l$ canard limit cycles, each of which makes one pass along the slow direction.
\end{enumerate}
\end{conjecture}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm]{unconnected_slow_curve_first.pdf}
{\vskip 0.3cm}
\includegraphics[width=10cm]{unconnected_slow_curve_second.pdf}
\caption{Simple unconnected slow curve: contractible (top) and noncontractible (bottom) cases}\label{fig:unconnected_slow_curve}
\end{center}
\end{figure}%
Components of a simple unconnected slow curve may be both contractible or both noncontractible, see \autoref{fig:unconnected_slow_curve}. Consider first the contractible case, see \autoref{fig:unconnected_slow_curve}, top. Note that this figure is very similar to the one studied in the present paper (see \autoref{fig:general-view}) but contains two copies of the slow curve. In fact, one can obtain such a picture as a two-leaf cover of the original system~\eqref{eq:slow_fast_system}: the fundamental domain should be extended twice along the slow direction. If we obtain the new system from the original one in this way, all the results about two-pass limit cycles for the original system can be rewritten in terms of one-pass limit cycles for the new system. This is not a huge success because of the strong nongenericity of the new system (shift symmetry). However, it is easy to see that the only requirement needed for our proof to work in the new setting is the existence of two grand canards for the same value of $\varepsilon$. For a shift-symmetric system we obtain them ``for free''; for a generic system this seems to be a much rarer event. Nevertheless, if a generic system possesses two grand canards, as shown in the figure, the same arguments we used to prove \autoref{thm:main} can be applied verbatim to the proof of \autoref{conj1}.
We also note that the same arguments work for the case of noncontractible components, again, provided that two grand canards exist. This
can be of particular interest because slow-fast systems with
noncontractible slow curve (like in \autoref{fig:unconnected_slow_curve}, bottom),
appear naturally in physical applications~\cite{KRS}.
Thus the key question is: can two grand canards coexist for the same value of
$\varepsilon$ for a generic system? We conjecture that for \emph{topologically generic}
systems the answer is affirmative.
\begin{conjecture}
For a \emph{locally topologically generic} slow-fast system on the two-torus there
exists a sequence of intervals on the ray $\{\varepsilon >0 \}$ accumulating at 0
such that for every $\varepsilon$ from these intervals there exist two grand
canards.
\end{conjecture}
\begin{proof}[Outline of the proof]
Each component $M_{1}$, $M_{2}$ has repelling and attracting parts. Denote them as $M_{1}^{\pm}$ and $M_{2}^{\pm}$ respectively.
Take $\alpha_1^{\pm}$ close to $y(G_1^{\pm})$, $\alpha_1^{\pm}\in \left(y(G_1^+),y(G_1^-)\right)$ and
$\alpha_2^{\pm}$ close to $y(G_2^{\pm})$, $\alpha_2^{\pm}\in \left(y(G_2^+),y(G_2^-)\right)$.
As before (see \autoref{section:notation}) consider corresponding vertical transversal segments $J_{1}^{\pm}$, $J_{2}^{\pm}$ with the following properties: for $i=1,2$ segment $J_i^{\pm}$ intersects $M_i^{\pm}$ respectively and does not intersect $M_i^{\mp}$; moreover,
$$
y(J_i^{\pm}) = \alpha_i^{\pm},~i=1,2.
$$
Denote $y_{1} = (y(G_1^-) + y(G_2^+))/2$,
$y_{2} = (y(G_2^-) + y(G_1^+))/2$ and consider
vertical global transversal circles
$y_1 \times \mathbb{S}^1$ and
$y_2 \times \mathbb{S}^1$.
As before (see \autoref{section:preliminary}) denote
$$
D_{1,\varepsilon}^+ = P_{\varepsilon}^{\rarc{\alpha_1^+}{y_1}} J_1^+,~
D_{1,\varepsilon}^- = P_{\varepsilon}^{\larc{y_2}{\alpha_1^-}} J_1^-,
$$
$$
D_{2,\varepsilon}^+ = P_{\varepsilon}^{\rarc{\alpha_2^+}{y_2}} J_2^+,~
D_{2,\varepsilon}^- = P_{\varepsilon}^{\larc{y_1}{\alpha_2^-}} J_2^-.
$$
If the intersection $D_{1,\varepsilon}^+\cap D_{2,\varepsilon}^-$ is not empty, then there exists a grand canard which intersects both $J_1^+$ and $J_2^-$. The same holds for the intersection $D_{1,\varepsilon}^-\cap D_{2,\varepsilon}^+$ and a grand canard which intersects both $J_1^-$ and $J_2^+$.
Further, these intersections form two series of segments accumulating at zero:
$$
\mathcal R^1=\{R_n^1\} _{n=0}^{\infty},\quad \mathcal R^2= \{R_n^2\}_{n=0}^{\infty}
$$
$$
\bigcup_{n=0}^\infty R_n^1=\{\varepsilon\mid D_{1,\varepsilon}^+\cap D_{2,\varepsilon}^- \ne \emptyset\},\quad
\bigcup_{n=0}^\infty R_n^2= \{\varepsilon\mid D_{1,\varepsilon}^-\cap D_{2,\varepsilon}^+ \ne \emptyset\}.
$$
\begin{definition}
System \eqref{eq:slow_fast_system} is called \emph{good} if for any $\varepsilon>0$ there exist $n,m\in \mathbb{N}$ such that
$R_n^1\cap R_m^2 \ne \emptyset$ and $R_n^1,~R_m^2\subset (0,\varepsilon)$. Otherwise the system is called \emph{bad}.
\end{definition}
We conjecture that \emph{good} systems are topologically generic in the space of slow-fast systems\footnote{The authors are grateful to Victor
Kleptsyn for these arguments.}. Indeed, for a couple of intervals $R_k^1$ and $R_l^2$ one can perturb the system slightly (increasing or decreasing the fast component of the vector field in the ``rotation phase'') to make them intersect each other in a segment. If $k$ and $l$ are large enough, the perturbation is small enough. Therefore, the set of systems with one intersection between $\mathcal R^1$ and $\mathcal R^2$ is open and dense. In a similar way one can show that the set of systems with any finite number of intersections between these two series of segments is open and dense. It follows that the set of systems with \emph{infinitely many} intersections is residual and the corresponding property is topologically generic.
\end{proof}
\section{Introduction}
Planar structures with periodically arranged subwavelength elements, coined metasurfaces~\cite{Tretyakov,Holloway,Glybovski,bian_metasurfaces,chenreview}, have attracted considerable attention in the last decade (see e.g.~\cite{Estakhri,FranciscoCuesta,Eleftheriades101,XWangAalto,mirmoosaaalto,FuAalto,kivshar2015,asadchy2016,yang2014,capasso2011}). One important application of such artificial surfaces is in controlling surface waves~\cite{Tcvetkova,sun,Smith,engheta}, using, for example, a single metasurface~\cite{Tretyakov} or a metasurface over a grounded slab~\cite{grbic2,Elek}. This guiding property arises because the surface impedance of such structures can be engineered~\cite{bilow,hessel,sievenpiper1,sievenpiper2}. Analogous to these planar waveguides, new designs for open waveguide structures can be introduced, formed by two parallel and penetrable metasurfaces separated by a finite distance~\cite{xin}. Such open waveguides confine the electromagnetic energy while the corresponding fields are attenuated away from the structure (note that these guiding structures can, intriguingly, be invisible under plane-wave illumination~\cite{FranciscoCuesta}). An interesting special case of such waveguides is the pair of two penetrable metasurfaces which are Babinet-complementary. This structure is intriguing since the product of the surface impedances of two complementary sheets must be nondispersive and equal to $\eta_0^2/4$, where $\eta_0$ is the impedance of the background isotropic medium (free space, in this work). Moreover, in the limit of zero distance between the sheets, they combine into a continuous PEC sheet. In this paper, we study this special case of two complementary sheets and investigate the corresponding guided waves.
For microwave waveguides, complementary inclusions are realized by interchanging metal and vacuum regions for a given planar structure, assuming the perfect electric conductor model for metals. The incorporation of such inclusions has also been studied and applied in planar waveguide structures~\cite{Zentgraf,Cui,Kumar,Bitzer}. However, earlier works studied planar waveguide structures which contain complementary inclusions only in a single layer~\cite{Dong,Landy,Pulido-Mancera,Gonzalez-Ovejero,Sievenpiper3}. Here, we instead impose the condition that the two metasurfaces are complementary with respect to each other. It is worth noting that some attention has been paid to the scattering characteristics of two parallel complementary sheets placed in close proximity to each other~\cite{Lockyer,Bukhari,Bukhari2,our_comp}. However, to the best of our knowledge, little is known about the eigenmodes of coupled complementary metasurfaces operating as a single waveguiding structure.
The proposed guiding structure is shown in Fig.~\ref{fig:cs}. Two complementary metasurfaces are separated by the distance $d$, and the space between them is filled by air (vacuum). The surface impedances of the two metasurfaces are denoted as $Z_{\rm{s}1}$ and $Z_{\rm{s}2}$, respectively, and we assume that these values do not depend on the spatial coordinates in the sheet planes. Based on the Babinet principle, we have
\begin{equation}
Z_{\rm{s}1} \cdot Z_{\rm{s}2}=\frac{\eta_0^2}{4}.
\label{eq:babprin}
\end{equation}
We orient the $z$-axis along the direction of the wave propagation. Here, we classify the proposed structure into two different categories: non-resonant and resonant structures. For each category, we analyze the corresponding guided modes and study the extreme case when the distance between the two metasurfaces tends to zero. Furthermore, we also reveal a possibility of exciting two modes of different polarizations propagating with the same phase velocity (degeneracy state).
The paper is organized as follows: Sections~\ref{sec:nresis} and \ref{sec:rdis} study two non-resonant and resonant structures. Section~\ref{sec:PI} illustrates the polarization insensitivity, and finally Section~\ref{sec:conclusion} concludes the paper.
\section{Non-resonant dispersive impedance sheets}
\label{sec:nresis}
\begin{figure*}
\begin{minipage}[b]{\columnwidth}
\subfigure[]{\includegraphics[width=1.2\columnwidth,height=0.9\columnwidth]{1.pdf}\label{fig:cs}}
\end{minipage}
\hfill
\begin{minipage}[b]{\columnwidth}
\subfigure[]{\includegraphics[width=0.5\columnwidth,height=0.425\columnwidth]{2.pdf}\label{fig:ESnonresonant}}
\subfigure[]{\includegraphics[width=0.5\columnwidth,height=0.425\columnwidth]{17.pdf}\label{fig:ESresonant}}
\end{minipage}
\caption{(a)--The structure under study: Two complementary impedance sheets. (b)--Equivalent circuit model for two non-resonant dispersive impedance sheets. (c)--Equivalent circuit model for two resonant dispersive impedance sheets.}
\end{figure*}
In this category, the surface reactances are far from resonances (we assume a lossless structure). In other words, they correspond to the reactance of a single capacitance or inductance. If the two metasurfaces, shown in Fig.~\ref{fig:cs}, are complementary to each other, one of the metasurfaces should be inductive and the other capacitive. Under this condition, the product of the two surface impedances stays constant (the frequency $\omega$ cancels out) and the Babinet principle is satisfied. As a consequence, there are two modes propagating along the waveguide, having transverse magnetic (TM) and transverse electric (TE) polarizations. Note that one single metasurface can support guided waves (surface waves) that possess only transverse electric or transverse magnetic polarization~\cite{Collin,Tretyakov}. However, in waveguides which consist of two parallel metasurfaces~\cite{xin}, there are always two simultaneous modes whose polarizations depend on the surface impedances. Two inductive (capacitive) sheets correspond to guiding of two TM modes (TE modes), and one inductive and one capacitive sheet correspond to guiding of co-existing TM and TE modes. The equivalent circuit model of two non-resonant dispersive impedance sheets is shown in Fig.~\ref{fig:ESnonresonant}. Such a structure can be practically realized in the form of a dense mesh of metallic strips for the inductive sheet and an array of small metallic patches for the capacitive sheet. Let us denote the effective inductance of the inductive sheet by $L$, and the effective capacitance of the other sheet by $C$. From Eq.~(\ref{eq:babprin}), we can immediately see that these effective parameters are related to each other as (see also Eq.~(\ref{eq:LC}) of Appendix):
\begin{equation}
C=\frac{4L}{\eta_0^2}.
\label{eq:cccc}
\end{equation}
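As a quick numerical sanity check (a standalone sketch, not part of the paper's derivations; the inductance value is the illustrative $L=2.5$~nH used later in the text), one can verify that with $C=4L/\eta_0^2$ the product of the sheet impedances $Z_{\rm{s}1}=j\omega L$ and $Z_{\rm{s}2}=1/(j\omega C)$ equals $\eta_0^2/4$ at every frequency, as required by Eq.~(\ref{eq:babprin}):

```python
import math

# Free-space constants
mu0 = 4e-7 * math.pi            # H/m
eps0 = 8.8541878128e-12         # F/m
eta0 = math.sqrt(mu0 / eps0)    # ~376.73 Ohm

L = 2.5e-9                      # inductive sheet, H (example value from the text)
C = 4 * L / eta0**2             # capacitive sheet fixed by the Babinet relation

for f in (1e9, 2.7e9, 5e9, 10e9):
    w = 2 * math.pi * f
    Zs1 = 1j * w * L            # inductive sheet impedance
    Zs2 = 1 / (1j * w * C)      # capacitive sheet impedance
    product = Zs1 * Zs2         # frequency-independent: L/C = eta0^2/4
    assert abs(product - eta0**2 / 4) < 1e-6 * eta0**2
print(f"C = {C*1e12:.3f} pF")   # ~0.070 pF
```

The product is simply $L/C$, which the Babinet relation pins to $\eta_0^2/4$ independently of frequency.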
Based on the above expression, the dispersion relation of the guided modes can be written in terms of $L$ only:
\begin{equation}
\frac{\alpha^2}{\epsilon_0}(1-e^{-2\alpha d})+\frac{\alpha}{2L}\eta_0^2=\omega^2(\mu_0+2\alpha L),
\label{eq:TMNonResonantCom}
\end{equation}
for the TM-polarized wave and
\begin{equation}
\frac{\alpha^2}{\epsilon_0}+\frac{\alpha}{2L}\eta_0^2=\omega^2\Big[\mu_0(1-e^{-2\alpha d})+2\alpha L\Big],
\label{eq:TENonResonantCom}
\end{equation}
for the TE-polarized wave. Here $\alpha$ denotes the field attenuation constant for fields outside the waveguide. Examining Eq.~(\ref{eq:TENonResonantCom}), we see that there is a cut-off frequency for the TE wave, which is associated with the distance between the two metasurfaces. Taking the limit of $\alpha$ approaching zero and using l'H{\^{o}}pital's rule, we find the cut-off frequency:
\begin{equation}
f_{\rm{cut-off}}={{\frac{\eta_0}{4\pi}}\frac{1}{\sqrt{L(L+\mu_0 d)}}}.
\label{eq:NonResonantComCutOff}
\end{equation}
\begin{figure}[t!]
\centerline{\includegraphics[width=0.8\columnwidth]{3.pdf}}
\caption{Dispersion curves of two non-resonant sheets, when the distance between the sheets is $d=10\, {\rm{mm}}$, $d=1\, {\rm{mm}}$, and $d=0.00001\, {\rm{mm}}$.}
\label{fig:NonResonantDispersion}
\end{figure}
As an example, taking $L=2.5$~nH, we can readily find the corresponding value of $C$ according to Eq.~(\ref{eq:cccc}). Suppose that $\beta$ is the phase constant along the wave propagation direction. Figure~\ref{fig:NonResonantDispersion} shows the analytically calculated dispersion curves (frequency versus $\beta$) for different distances between the two non-resonant sheets. As expected, both TM and TE modes can propagate simultaneously along such a waveguide. When $d=10$~mm, the TE and TM modes have approximately the same phase velocity within a certain frequency range, which we will explain later in Section~\ref{sec:PI}. Decreasing the distance causes the dispersion curve of the TE mode (red line) to move counterclockwise towards the light line at high frequencies, while the curve of the TM mode (blue line) moves clockwise, away from the light line. Interestingly, the limit of $d\rightarrow 0$ leads to the appearance of a new resonance frequency for the TM mode, which can be analytically found as
\begin{equation}
f_{\rm{mix}}\approx{{\frac{1}{2\pi}}\frac{1}{\sqrt{LC}}}.
\label{eq:NonResonantfmix}
\end{equation}
This new resonance is exactly the same as the cut-off frequency of the TE-polarized wave (Eq.~(\ref{eq:NonResonantComCutOff})) when $d=0$. The dispersion curve for this extreme case is illustrated in Fig.~\ref{fig:NonResonantExtreme}.
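These closed-form limits are easy to check numerically. The short sketch below (using the example value $L=2.5$~nH; a very small $\alpha$ emulates the $\alpha\to 0$ limit) evaluates the TE dispersion relation Eq.~(\ref{eq:TENonResonantCom}) near cut-off and confirms that it reproduces Eq.~(\ref{eq:NonResonantComCutOff}), and that at $d=0$ the TE cut-off coincides with $f_{\rm{mix}}$ of Eq.~(\ref{eq:NonResonantfmix}):

```python
import math

mu0 = 4e-7 * math.pi
eps0 = 8.8541878128e-12
eta0 = math.sqrt(mu0 / eps0)

L = 2.5e-9                     # H, example value from the text
C = 4 * L / eta0**2            # F, Babinet-complementary capacitance

def f_te(alpha, d):
    """Frequency solving the TE dispersion relation for a given alpha and gap d."""
    num = alpha**2 / eps0 + alpha * eta0**2 / (2 * L)
    den = mu0 * (1 - math.exp(-2 * alpha * d)) + 2 * alpha * L
    return math.sqrt(num / den) / (2 * math.pi)

def f_cutoff(d):
    """Analytic TE cut-off frequency for gap d."""
    return (eta0 / (4 * math.pi)) / math.sqrt(L * (L + mu0 * d))

d = 1e-3                                            # 1 mm gap
assert abs(f_te(1e-6, d) / f_cutoff(d) - 1) < 1e-3  # alpha -> 0 limit

f_mix = 1 / (2 * math.pi * math.sqrt(L * C))        # new TM resonance as d -> 0
assert abs(f_cutoff(0.0) / f_mix - 1) < 1e-9        # coincides with TE cut-off at d = 0
print(f"f_cutoff(1 mm) = {f_cutoff(d)/1e9:.2f} GHz, f_mix = {f_mix/1e9:.2f} GHz")
```

The second assertion is just the algebraic identity $\sqrt{LC}=2L/\eta_0$ evaluated in floating point.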
\begin{figure}[t!]
\centerline{\includegraphics[width=0.8\columnwidth]{4.pdf}}
\caption{Dispersion curves for two non-resonant sheets in which the distance between two sheets tends to zero.}
\label{fig:NonResonantExtreme}
\end{figure}
In order to get better physical insight into the electromagnetic behavior of the waveguide structure, Fig.~\ref{fig:NonResonantfield} plots the electric field distribution, and the following Figs.~\ref{fig:NonResMagCur}~and~\ref{fig:NonResPhaCur} show the surface current density. For the TM mode, the operating frequency is assumed to be $2.7$~GHz and two different distances are considered. For the TE modes, we assume that the operating frequency is $5$~GHz. In Fig.~\ref{fig:NonResonantfield}, note that the top metasurface is inductive and the bottom one is capacitive. Also, the two black solid lines show the positions of the two metasurfaces.
One can see that for the TM polarization the field is strongly tied to the inductive sheet and attenuates as the vertical distance from that sheet increases, while for the TE polarization the field is strongly confined to the capacitive sheet. When the two sheets are close to each other, strong coupling can be observed: due to the small distance, the magnitude of the electric field changes little between the two sheets. The surface current corresponding to the TE-polarized mode flows along the $x$-axis, while it flows along the $z$-axis for the TM polarization. The unequal magnitudes of the surface current density on the two sheets result from the asymmetric impedances. Also, as Fig.~\ref{fig:NonResPhaCur} shows, the TM-polarized mode is characterized by out-of-phase surface currents. In contrast, for the TE-polarized mode, the surface currents are in phase.
\begin{figure}[t!]\centering
\subfigure[]{\includegraphics[width=0.11\textwidth]{5.pdf}}
\subfigure[]{\includegraphics[width=0.11\textwidth]{6.pdf}}
\subfigure[]{\includegraphics[width=0.11\textwidth]{7.pdf}}
\subfigure[]{\includegraphics[width=0.11\textwidth]{8.pdf}}
\caption{The distribution of the electric field of two non-resonant sheets. Here, (a) TM polarization, $d=1{~\rm{mm}}$, $\beta=117.4$ 1/m, (b) TM polarization, $d=0.00001{~\rm{mm}}$, $\beta=274$ 1/m, (c) TE polarization, $d=1{~\rm{mm}}$, $\beta=167.7$ 1/m, and (d) TE polarization, $d=0.00001{~\rm{mm}}$, $\beta=153.1$ 1/m. The operational frequency is $f=2.7$ GHz for TM modes and $f=5$ GHz for TE modes. (a) and (b) are the longitudinal component of the electric field. The vertical and horizontal axes are $y$- and $z$-axes, respectively. The metasurface positions are shown by solid black lines.}
\label{fig:NonResonantfield}
\end{figure}
\begin{figure*}[ht!]\centering
\subfigure[]{\includegraphics[width=0.19\textwidth]{9.pdf}}
\subfigure[]{\includegraphics[width=0.19\textwidth]{10.pdf}}
\subfigure[]{\includegraphics[width=0.19\textwidth]{11.pdf}}
\subfigure[]{\includegraphics[width=0.19\textwidth]{12.pdf}}
\caption{Magnitude of surface currents for two non-resonant sheets. Here, (a) TM polarization, $d=1{~\rm{mm}}$, $\beta=117.4$ 1/m, (b) TM polarization, $d=0.00001{~\rm{mm}}$, $\beta=274$ 1/m, (c) TE polarization, $d=1{~\rm{mm}}$, $\beta=167.7$ 1/m, and (d) TE polarization, $d=0.00001{~\rm{mm}}$, $\beta=153.1$ 1/m. The operational frequency is $f=2.7$ GHz for TM modes and $f=5$ GHz for TE modes. }
\label{fig:NonResMagCur}
\end{figure*}
\begin{figure*}[ht!]\centering
\subfigure[]{\includegraphics[width=0.19\textwidth]{13.pdf}}
\subfigure[]{\includegraphics[width=0.19\textwidth]{14.pdf}}
\subfigure[]{\includegraphics[width=0.19\textwidth]{15.pdf}}
\subfigure[]{\includegraphics[width=0.19\textwidth]{16.pdf}}
\caption{Phase of surface currents for two non-resonant sheets. Here, (a) TM polarization, $d=1{~\rm{mm}}$, $\beta=117.4$ 1/m, (b) TM polarization, $d=0.00001{~\rm{mm}}$, $\beta=274$ 1/m, (c) TE polarization, $d=1{~\rm{mm}}$, $\beta=167.7$ 1/m, and (d) TE polarization, $d=0.00001{~\rm{mm}}$, $\beta=153.1$ 1/m. The operational frequency is $f=2.7$ GHz for TM modes and $f=5$ GHz for TE modes.}
\label{fig:NonResPhaCur}
\end{figure*}
\section{Resonant dispersive impedance sheets}
\label{sec:rdis}
In this category, the waveguide under study consists of metasurfaces having resonant properties. In other words, the surface impedance of each metasurface can be expressed as the equivalent impedance of a series or parallel connection of an inductance and a capacitance. The unit-cell size of the structure is much smaller than the free-space wavelength (at the resonant frequency), which allows the structure to be modeled by homogenized impedances. From the circuit-theory point of view, the equivalent impedance of a series connection of an inductance and a capacitance is capacitive below and inductive above the resonance frequency. In contrast, the equivalent impedance of a parallel connection is inductive below the resonance frequency and capacitive above it. Hence, following these considerations, one of the metasurfaces must be realized as a series connection and the other one as a parallel connection of two reactances, as shown in Fig.~\ref{fig:ESresonant}. Here, the surface impedances are characterized by effective inductances and capacitances denoted as $L_1$, $C_1$ and $L_2$, $C_2$ for each metasurface. Let us assume that $L_1$ and $C_1$ are in series connection and $L_2$, $C_2$ are in parallel connection. These values are obtained from the full-wave solution of a plane-wave reflection problem in the quasi-static limit~\cite{self-resonant,self-resonant2} for self-resonant structures. Note that, as shown in Fig.~\ref{fig:ESresonant}, the resonance frequencies of the two complementary metasurfaces must be the same in order to realize opposite reactances at all frequencies.
\begin{figure*}[ht!]\centering
\subfigure[]{\includegraphics[width=0.25\textwidth]{18.pdf}}
\subfigure[]{\includegraphics[width=0.25\textwidth]{19.pdf}}
\subfigure[]{\includegraphics[width=0.25\textwidth]{20.pdf}}
\caption{Dispersion curves of two resonant sheets, when the distances between the sheets is (a) $d=\lambda_{\rm{6GHz}}$, (b) $d=\lambda_{\rm{6GHz}}/50$, and (c) $d=\lambda_{\rm{6GHz}}/5000$.}
\label{fig:ResonantDispersion}
\end{figure*}
For the two complementary metasurfaces, the impedance of one metasurface is determined by the other metasurface impedance using the Babinet principle (the relations can be found in Eq.~(\ref{eq:LCresonat}) of Appendix). Accordingly, the dispersion relation for two complementary resonant sheets can be written in terms of $L_1$ and $C_1$ as
\begin{equation}
\begin{split}
&2\epsilon_0\alpha+\Big(1-e^{-2\alpha d}\Big)C_1\alpha^2+\Big(2\epsilon_0\alpha+\frac{\epsilon_0\mu_0}{L_1}\Big)\Big(\frac{\omega}{\omega_0}\Big)^4-\cr
&\Big[\frac{\epsilon_0\mu_0}{L_1}+(4+\frac{\eta_0^2C_1}{2L_1})\epsilon_0\alpha+\Big(1-e^{-2\alpha d}\Big)C_1\alpha^2 \Big]\Big(\frac{\omega}{\omega_0}\Big)^2=0,
\end{split}
\end{equation}
for the TM polarization. Analogously, the dispersion relation for the TE polarization reads
\begin{equation}
\begin{split}
&2\mu_0\alpha+\eta_0^2C_1\alpha^2+\Big[2\mu_0\alpha +\frac{\mu_0^2}{L_1}\Big(1-e^{-2\alpha d}\Big)\Big]\Big(\frac{\omega}{\omega_0}\Big)^4-\cr
&\Big[\frac{\mu_0^2}{L_1}\Big(1-e^{-2\alpha d}\Big)+(4+\frac{\eta_0^2C_1}{2L_1})\mu_0\alpha+\eta_0^2C_1\alpha^2)\Big]\Big(\frac{\omega}{\omega_0}\Big)^2=0,\cr
\end{split}
\end{equation}
where $\omega_0$ is the resonant angular frequency of both the series and parallel connections. Below and above the resonance frequency, there are always two modes, with TE and TM polarizations. For the TM polarization, one of the two modes has a cut-off frequency due to the resonance. For the TE polarization, both modes have cut-off frequencies, which can be calculated as (see Eq.~(\ref{eq:fLCresonatcut}) of Appendix)
\begin{equation}
f_{\rm{cut-off}}\approx{f_0\sqrt{\frac{L_1+\frac{\eta_0^2C_1}{8}+\frac{\mu_0d}{2} \pm \sqrt{\bigtriangleup}}{L_1+\mu_0d}}},
\end{equation}
where $\bigtriangleup=(\frac{\eta_0}{2\omega_0})^2+(\frac{\eta_0^2C_1}{8}+\frac{\mu_0d}{2})^2$. As an example, we take $f_0=6$ GHz as the resonance frequency of the resonant metasurfaces, and additionally $L_1=10$~nH and $C_1=0.07$~pF. The values of $L_2$ and $C_2$ are obtained by employing Eq.~(\ref{eq:LCresonat}) of Appendix, which gives $L_2=2.5$~nH and $C_2=0.28$~pF. Figure~\ref{fig:ResonantDispersion} presents the dispersion curves for different distances between the two resonant sheets. As expected, one TE-polarized mode and one TM-polarized mode exist simultaneously below and above the resonance frequency. When $d=\lambda_{\rm{6GHz}}$, the TE and TM modes have approximately the same phase velocity. However, as the distance decreases, the modes separate. The limit of $d\rightarrow 0$ brings about two new resonance frequencies, $f_{\rm{mix1}}$ and $f_{\rm{mix2}}$, which are given by
\begin{equation}
f_{\rm{mix1}}=\frac{1}{2 \pi}\sqrt{\frac{\eta_0^2C_1+8L_1-\sqrt{\big(\eta_0^2C_1\big)^2+16\eta_0^2C_1L_1}}{8C_1L_1^2}},
\label{eq:fmix1}
\end{equation}
\begin{equation}
f_{\rm{mix2}}=\frac{1}{2 \pi}\sqrt{\frac{\eta_0^2C_1+8L_1+\sqrt{\big(\eta_0^2C_1\big)^2+16\eta_0^2C_1L_1}}{8C_1L_1^2}}.
\label{eq:fmix2}
\end{equation}
The dispersion curves of this extreme case are organized by the three frequencies $f_{\rm{mix1}}$, $f_0$, and $f_{\rm{mix2}}$, as illustrated in Fig.~\ref{fig:ResonantExtreme}.
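The quoted element values can be checked with a short script. This is a sketch, not part of the paper's derivations: the Babinet mapping $L_2=\eta_0^2 C_1/4$, $C_2=4L_1/\eta_0^2$ is assumed here (the Appendix relations are not reproduced in this section), but it is consistent with the quoted values $L_2=2.5$~nH, $C_2=0.28$~pF and with the requirement of equal resonance frequencies. The script also verifies the ordering $f_{\rm{mix1}}<f_0<f_{\rm{mix2}}$ from Eqs.~(\ref{eq:fmix1}) and (\ref{eq:fmix2}):

```python
import math

mu0 = 4e-7 * math.pi
eps0 = 8.8541878128e-12
eta0 = math.sqrt(mu0 / eps0)

L1, C1 = 10e-9, 0.07e-12          # series-connected sheet (example values)
L2 = eta0**2 * C1 / 4             # assumed Babinet mapping -> ~2.5 nH
C2 = 4 * L1 / eta0**2             # assumed Babinet mapping -> ~0.28 pF

f0_series = 1 / (2 * math.pi * math.sqrt(L1 * C1))
f0_parallel = 1 / (2 * math.pi * math.sqrt(L2 * C2))
assert abs(f0_series / f0_parallel - 1) < 1e-9    # equal resonance frequencies

# New resonances appearing in the d -> 0 limit
disc = math.sqrt((eta0**2 * C1)**2 + 16 * eta0**2 * C1 * L1)
f_mix1 = math.sqrt((eta0**2 * C1 + 8 * L1 - disc) / (8 * C1 * L1**2)) / (2 * math.pi)
f_mix2 = math.sqrt((eta0**2 * C1 + 8 * L1 + disc) / (8 * C1 * L1**2)) / (2 * math.pi)
assert f_mix1 < f0_series < f_mix2                # ordering of the three frequencies
print(f"L2 = {L2*1e9:.2f} nH, C2 = {C2*1e12:.2f} pF")
print(f"f_mix1 = {f_mix1/1e9:.2f} GHz, f0 = {f0_series/1e9:.2f} GHz, "
      f"f_mix2 = {f_mix2/1e9:.2f} GHz")
```

Since $L_2C_2=L_1C_1$ under this mapping, both circuits resonate at the same $f_0\approx 6$~GHz, as stated above.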
\begin{figure}[t!]
\centerline{\includegraphics[width=0.8\columnwidth]{21.pdf}}
\caption{Dispersion curves for two resonant sheets when the distance between the two sheets tends to zero.}
\label{fig:ResonantExtreme}
\end{figure}
\section{Polarization Insensitivity}
\label{sec:PI}
\begin{figure}[t!]\centering
\subfigure[]{\includegraphics[width=0.6\columnwidth]{22.pdf}}
\subfigure[]{\includegraphics[width=0.6\columnwidth]{23.pdf}}
\caption{Dispersion curves of practical realization for the square metal patches and their complementary grid, when the distance between two sheets is: (a) $d=10\, {\rm{mm}}$ and (b) $d=50\, {\rm{mm}}$. Here the length of square patch is 9.8 mm, the width of grid is 0.2 mm and period of unit cell is 10 mm.}
\label{fig:PracticalRealization10}
\end{figure}
Metasurface-based waveguides have been exploited for applications in leaky-wave radiation and field focusing~\cite{grbic,holographic}. However, polarization sensitivity limits their applications. Designing a surface-wave waveguide which is insensitive to polarization is therefore a significant challenge. Different kinds of waveguide topologies have been explored to propagate waves regardless of polarization at certain frequencies~\cite{Sievenpiper}. Let us define polarization insensitivity as follows: both TM-polarized and TE-polarized waves can propagate inside the waveguide with the same phase velocity (degeneracy condition). For our proposed waveguide structure, we can find this interesting property from the dispersion relations written for the TE and TM modes. For each value of $\alpha$, the value of $\omega$ is calculated from the general dispersion relations. It is then enough to require that the solutions be equal to each other (see Appendix~\ref{sec:InseneitiveSupplement}) in order to find the condition which provides the property of polarization insensitivity. After some algebraic manipulations, we find the following condition:
\begin{equation}
Z_{\rm{s}1}\cdot Z_{\rm{s}2}=\frac{\eta_0^2}{4}(1-e^{-2\alpha d}).
\label{eq:SamePhaVel}
\end{equation}
However, according to Eq.~(\ref{eq:babprin}), the product of the surface impedances must be equal to $\eta_0^2/4$. Therefore, in the above equation, the expression in parentheses must be close to unity, meaning that the distance between the two metasurfaces must be considerable compared to the operating wavelength. This was explicitly shown in Figs.~\ref{fig:NonResonantDispersion} and~\ref{fig:ResonantDispersion}. When the distance between the two metasurfaces is not small, the dispersion curves of the two modes match each other (they have the same phase velocity, meaning that power is transferred regardless of the polarization).
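The role of the electrical distance is easy to quantify. In the illustrative sketch below, the value $\alpha=300$~1/m is a representative attenuation constant chosen for illustration (it is not taken from the figures): the correction factor $1-e^{-2\alpha d}$ in Eq.~(\ref{eq:SamePhaVel}) is already close to unity for $d=10$~mm but far from it for $d=1$~mm, which is why the TM and TE curves merge only for the larger separations:

```python
import math

def degeneracy_factor(alpha, d):
    """Correction factor (1 - e^{-2 alpha d}) entering the degeneracy condition."""
    return 1 - math.exp(-2 * alpha * d)

alpha = 300.0   # 1/m, illustrative attenuation constant
f10 = degeneracy_factor(alpha, 10e-3)   # d = 10 mm
f1 = degeneracy_factor(alpha, 1e-3)     # d = 1 mm
print(f"d = 10 mm: {f10:.3f};  d = 1 mm: {f1:.3f}")
# Larger separation -> factor close to 1 -> TM and TE phase velocities nearly equal
assert f10 > 0.99 > f1
```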
For the practical realization, small square metal patches and their complementary grids are chosen as the metasurface topology. For different distances, numerical simulations are performed using CST Microwave Studio. The simulation setup is similar to that used for electromagnetic band gap structures, for which a practical rule for choosing the background material and boundary settings for the correct computation of surface-wave dispersion diagrams was established by comparing numerical results with analytical and experimental ones~\cite{Yang}. Regarding the simulations, the key parameter in the computer model is the height of the air space outside the waveguide, which emulates infinite free space outside the structure (in ``Background Setting'' this space is ``Upper Z-distance'' and ``Lower Z-distance''). Based on numerous computer simulations, an air box with a height of more than 10 times the distance between the two metasurfaces has to be placed over the unit cell to maintain high accuracy.
\begin{figure}[t!]\centering
\subfigure[]{\includegraphics[width=0.6\columnwidth]{24.pdf}}
\subfigure[]{\includegraphics[width=0.6\columnwidth]{25.pdf}}
\caption{Dispersion curves of practical realization for the square metal patches and their complementary grid, when the distance between two sheets is: (a) $d=1\, {\rm{mm}}$ and (b) $d=5\, {\rm{mm}}$. Here the length of square patch is 0.9 mm, the width of grid is 0.1 mm and the period is 1 mm.}
\label{fig:PracticalRealization1}
\end{figure}
Figures~\ref{fig:PracticalRealization10}~and~\ref{fig:PracticalRealization1} show the dispersion curves of complementary metasurfaces with 10 mm and 1 mm unit-cell sizes, respectively. To examine the accuracy of the results, the simulated curves are compared in those figures with the analytical ones; they are in good agreement. One can also see that the dispersion curves of the TM and TE modes overlap in a broad frequency range for larger distances between the two metasurfaces.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have investigated a waveguiding structure formed by two complementary metasurfaces. We have studied the supported guided modes and derived dispersion equations for non-resonant and resonant dispersive impedance sheets. Furthermore, we have investigated the extreme scenarios when the two complementary sheets get close to each other and provided physical insight into the behavior of the surface modes. In the limit of zero distance between the two sheets, two new resonance frequencies emerge, and the four modes are classified by three resonance frequencies. In addition, we have theoretically studied the condition on the sheet impedances for supporting both TM and TE modes with the same phase velocity. Waveguides with such a characteristic have great potential to be used in many applications, such as holographic surfaces or leaky-wave antennas.
\section{Introduction}
This paper derives in full detail one of two kinds of asymptotic formulas for the $12j$ symbol, in the limit of one small and 11 large angular momenta. We will first briefly review some of the previous works on the Wigner $12j$ symbol. Its definition and exact formula are described in the textbooks on angular momentum theory \cite{edmonds1960, brink1968, biedenharn1981, yutsis1962}. Although it is used less often than the Wigner $3j$-, $6j$ symbols, it has applications in the theory of x-ray absorption branching ratios \cite{thole1988}, two-photons absorption spectroscopy \cite{ceulemans1993}, and loop quantum gravity \cite{carfora1993}. The $12j$ symbol was first defined in the paper by Jahn and Hope \cite{jahn1954} in 1954. In that paper, they listed two kinds of formulas for the special values of the $12j$ symbol when any one of its 12 arguments is zero. See Eq.\ (A8) and Eq.\ (A9) in \cite{jahn1954}. The complete symmetries of the $12j$ symbol are given in \cite{ord-smith1954}, as a result of the observation that the graphical representation of the triangular conditions of the $12j$ symbol is a M\"{o}bius strip. This M\"{o}bius strip is illustrated in Fig.\ \ref{ch7: fig_diagram_12j} below.
We note that Eq.\ (A8) in \cite{jahn1954} corresponds to placing the zero argument at the edge of this M\"{o}bius strip, and Eq.\ (A9) in \cite{jahn1954} corresponds to placing the zero argument at the center of the M\"{o}bius strip. Thus, we expect that, by placing the small angular momentum at the edge and center, respectively, of the M\"{o}bius strip, we will have two kinds of asymptotic formulas for the $12j$ symbol with one small angular momentum.
The main theoretical tool we use is a generalization of the Born-Oppenheimer approximation, in which the small angular momenta are the fast degrees of freedom and the large angular momenta are the slow degrees of freedom. The necessary generalization falls under the topic of multicomponent WKB theory. See \cite{littlejohn1991, littlejohn1992, weinstein1996} for the relevant background. The new techniques used in this paper were recently developed in the semiclassical analysis of the Wigner $9j$ symbol with small and large quantum numbers \cite{yu2011}. This paper makes extensive use of the results from that paper, and assumes a familiarity with it.
In analogy with the setup for the $9j$ symbol in \cite{yu2011}, we use exact linear algebra to represent the small angular momentum, and use Schwinger's model to represent the large angular momenta. Each wave-function consists of a spinor factor and a factor in the form of a scalar WKB solution. For the $9j$ symbol, the scalar WKB solutions are represented by Lagrangian manifolds associated with a $6j$ symbol, which have been analyzed in \cite{littlejohn2010b} to reproduce the Ponzano-Regge action \cite{ponzano-regge-1968}. In the problem of the $12j$ symbol with one small quantum number, the scalar WKB solutions are represented by Lagrangian manifolds associated with a $9j$ symbol. The actions for the $9j$ symbol are presented in \cite{littlejohn2010a}. We will quote their results in some of the semiclassical analysis of the $9j$ symbol in this paper.
We now give an outline of this paper. In section \ref{ch7: sec_12j_spin_network}, we display the spin network of the $12j$ symbol in the form of a M\"{o}bius strip, and decompose it into a scalar product of a bra and a ket. In section \ref{ch7: sec_12j_def}, we define the $12j$ symbol as a scalar product of two multicomponent wave-functions, whose WKB forms are derived in section \ref{ch7: sec_12j_wave_fctn}. By following the procedure in \cite{yu2011}, we rewrite the multicomponent WKB wave-functions into their gauge-invariant forms in section \ref{ch7: sec_12j_gauge_inv_fctn}. In section \ref{ch7: sec_lag_mfd_actions}, we describe the path used in a semiclassical analysis of the Lagrangian manifolds associated with the $9j$ symbol by generalizing the paths used in \cite{littlejohn2010b}. We then obtain the action integral associated with this path by quoting the results from \cite{littlejohn2010a}. Finally, we calculate the spinor inner products at the intersections of the Lagrangian manifolds in section \ref{ch7: sec_spinor_products}. Putting the pieces together, we derive an asymptotic formula for the $12j$ symbol in section \ref{ch7: sec_12j_formula}, and display plots of this formula against exact values of the $12j$ symbol in section \ref{ch7: sec_plots}. The last section contains comments and discussions.
\section{\label{ch7: sec_12j_spin_network}Spin Network of the $12j$ Symbol}
The spin network \cite{yutsis1962} for the Wigner $12j$ symbol
\begin{equation}
\label{ch7: eq_12j_labeling}
\left\{
\begin{array}{cccc}
j_1 & j_2 & j_{12} & j_{125} \\
j_3 & j_4 & j_{34} & j_{135} \\
j_{13} & j_{24} & j_5 & j_6 \\
\end{array}
\right \}
\end{equation}
is illustrated in Fig.\ \ref{ch7: fig_diagram_12j}. In the spin network, each triangular condition of the $12j$ symbol is represented by a trivalent vertex. The spin network has the shape of a M\"{o}bius strip.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.33\textwidth]{ch7_fig1.eps}
\caption{The spin network of the Wigner $12j$ symbol.}
\label{ch7: fig_diagram_12j}
\end{center}
\end{figure}
The symmetries of the $12j$ symbol are associated with the symmetries of the M\"{o}bius strip, which are given by sliding along the M\"{o}bius strip and reflecting it about the vertical center of Fig.\ \ref{ch7: fig_diagram_12j}. Using these symmetries, any position in the center can be moved to any other position in the center, and any position on the edge can be moved to any other position on the edge. However, a center position cannot be moved to an edge position, or vice versa. Thus, there are two inequivalent asymptotic limits of the $12j$ symbol with one small angular momentum, corresponding to placing the small angular momentum at the center or edge of the strip, respectively. In other words, we could either place the small angular momentum at $j_1, j_4, j_5$, or $j_6$ across the center of the strip, or place it at $j_2, j_3, j_{12}, j_{34}, j_{13}, j_{24}, j_{125}$, or $j_{135}$ along the edge of the strip. In this paper, we will focus on the case where the small angular momentum is placed at a center position at $j_5$.
One can decompose the spin network of the $12j$ symbol into two spin network states by cutting $j_2, j_3$ at the twist, and cutting $j_1, j_4, j_5, j_6$ along the center of the strip. Using this decomposition, illustrated in Fig.\ \ref{ch7: fig_12j_decomposition}, the $12j$ symbol is expressed as a scalar product between a bra and a ket, in the Hilbert space represented by the six angular momenta $j_1, \dots, j_6$. This is explicitly expressed in Eq.\ (\ref{ch7: eq_12j_definition}).
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.33\textwidth]{ch7_fig2.eps}
\caption{A decomposition of the spin network of the $12j$ symbol.}
\label{ch7: fig_12j_decomposition}
\end{center}
\end{figure}
\section{\label{ch7: sec_12j_def}Defining the $12j$ Symbol}
We use the decomposition of the spin network for the $12j$ symbol in Fig.\ \ref{ch7: fig_12j_decomposition} to write it as a scalar product. This is equivalent to Eq.\ (A4) in \cite{jahn1954}. We have
\begin{equation}
\label{ch7: eq_12j_definition}
\left\{
\begin{array}{cccc}
j_1 & j_2 & j_{12} & j_{125} \\
j_3 & j_4 & j_{34} & j_{135} \\
j_{13} & j_{24} & s_5 & j_6 \\
\end{array}
\right \}
= \frac{\braket{ b | a } }{ \{ [j_{12}][j_{34}][j_{13}][j_{24}] [j_{125}] [j_{135}] \}^{\frac{1}{2}}} \, ,
\end{equation}
where the square bracket notation $[\cdot]$ denotes $[c] = 2c + 1$, and $\ket{a}$ and $\ket{b}$ are normalized simultaneous eigenstates of lists of operators with certain eigenvalues. We will ignore the phase conventions of $\ket{a}$ and $\ket{b}$ for now, since we did not use them to derive our formula. In our notation, the two states are
\begin{equation}
\label{ch7: eq_a_state}
\ket{a} = \left|
\begin{array} { @{\,}c@{\,}c@{\,}c@{\,}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{}}
\hat{I}_1 & \hat{I}_2 & \hat{I}_3 & \hat{I}_4 & {\bf S}_5^2 & \hat{I}_6 & \hat{\bf J}_{12}^2 & \hat{\bf J}_{34}^2 & \hat{\bf J}_{125}^2 & \hat{\bf J}_{\text{tot}} \\
j_1 & j_2 & j_3 & j_4 & s_5 & j_6 & j_{12} & j_{34} & j_{125} & {\bf 0}
\end{array} \right> \, ,
\end{equation}
\begin{equation}
\label{ch7: eq_b_state}
\ket{b} = \left|
\begin{array} { @{\,}c@{\,}c@{\,}c@{\,}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{}}
\hat{I}_1 & \hat{I}_2 & \hat{I}_3 & \hat{I}_4 & {\bf S}_5^2 & \hat{I}_6 & \hat{\bf J}_{13}^2 & \hat{\bf J}_{24}^2 & \hat{\bf J}_{135}^2 & \hat{\bf J}_{\text{tot}} \\
j_1 & j_2 & j_3 & j_4 & s_5 & j_6 & j_{13} & j_{24} & j_{135} & {\bf 0}
\end{array} \right> \, .
\end{equation}
In the above notation, the large ket lists the operators on the top row, and the corresponding quantum numbers are listed on the bottom row. The hat is used to distinguish differential operators from their symbols, that is, the associated classical functions.
The states $\ket{a}$ and $\ket{b}$ live in a total Hilbert space of six angular momenta ${\mathcal H}_1 \otimes {\mathcal H}_2 \otimes {\mathcal H}_3 \otimes {\mathcal H}_4 \otimes {\mathcal H}_6 \otimes {\mathcal H}_s$. Each large angular momentum ${\bf J}_r$, $r = 1,2,3,4,6$, is represented by a Schwinger Hilbert space of two harmonic oscillators, namely, ${\mathcal H}_r = L^2 ({\mathbb R}^2)$ \cite{littlejohn2007}. The small angular momentum ${\bf S}$ is represented by the usual $(2s+1)$-dimensional representation of $\mathrm{SU}(2)$, that is, ${\mathcal H}_s = {\mathbb C}^{2s+1}$, where $s = s_5$.
Let us now define the lists of operators in Eqs.\ (\ref{ch7: eq_a_state}) and (\ref{ch7: eq_b_state}). First we look at the operators $\hat{I}_r$, $r = 1,2,3,4,6$, ${\bf J}_{12}^2$, ${\bf J}_{34}^2$, ${\bf J}_{13}^2$, ${\bf J}_{24}^2$, which act only on the large angular momentum spaces ${\mathcal H}_r$, each of which can be viewed as a space of wave-functions $\psi(x_{r1}, x_{r2})$ for two harmonic oscillators of unit frequency and mass. Let $\hat{a}_{r\mu} = (\hat{x}_{r\mu} + i \hat{p}_{r\mu})/\sqrt{2}$, and $\hat{a}_{r\mu}^\dagger = (\hat{x}_{r\mu} - i \hat{p}_{r\mu})/\sqrt{2}$, $\mu = 1,2$, be the usual annihilation and creation operators. The operators $\hat{I}_r$ and $\hat{J}_{ri}$ are constructed from these differential operators $\hat{a}$ and $\hat{a}^\dagger$ as follows,
\begin{equation}
\hat{I}_r = \frac{1}{2} \, \hat{a}_r^\dagger \hat{a}_r \, , \quad \quad \hat{J}_{ri} = \frac{1}{2} \, \hat{a}^\dagger_r \sigma_i \hat{a}_r \, ,
\end{equation}
where $i = 1,2,3$, and $\sigma_i$ are the Pauli matrices. The quantum numbers $j_r$, $r = 1,2,3,4,6$ specify the eigenvalues of both $\hat{I}_r$ and $\hat{\bf J}_r^2$, to be $j_r$ and $j_r(j_r + 1)$, respectively.
The operators $\hat{\bf J}_{12}^2$, $\hat{\bf J}_{34}^2$, $\hat{\bf J}_{13}^2$, and $\hat{\bf J}_{24}^2$ that define intermediate coupling of the large angular momenta are defined by partial sums of $\hat{\bf J}_r$,
\begin{equation}
\label{ch7: eq_J12_J34_vector}
\hat{\bf J}_{12} = \hat{\bf J}_1 + \hat{\bf J}_2 \, , \quad \quad \hat{\bf J}_{34} = \hat{\bf J}_3 + \hat{\bf J}_4 \, .
\end{equation}
\begin{equation}
\label{ch7: eq_J13_J24_vector}
\hat{\bf J}_{13} = \hat{\bf J}_1 + \hat{\bf J}_3 \, , \quad \quad \hat{\bf J}_{24} = \hat{\bf J}_2 + \hat{\bf J}_4 \, .
\end{equation}
The quantum numbers $j_{i}$, $i = 12, 34, 13, 24$, specify the eigenvalues of the operators $\hat{\bf J}_{i}^2$ to be $j_{i} (j_{i}+1)$. See \cite{littlejohn2007} for more detail on the Schwinger model.
Now we turn our attention to the operator ${\bf S}^2$ that acts only on the small angular momentum space ${\mathbb C}^{2s+1}$. Let ${\bf S}$ be the vector of dimensionless spin operators represented by $(2s+1)$-dimensional matrices that satisfy the $\mathrm{SU}(2)$ commutation relations
\begin{equation}
[S_i, S_j] = i \, \epsilon_{ijk} \, S_k \, .
\end{equation}
The Casimir operator ${\bf S}^2$ is $s(s+1)$ times the identity operator, so its eigenvalue equation is trivially satisfied.
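As a concrete check of these conventions, the following Python sketch (an illustrative aside, not part of the derivation) builds the standard spin matrices for $s = 1$ in the $\ket{s,m}$ basis and verifies both the commutation relations and the Casimir value $s(s+1)$:

```python
import numpy as np

def spin_matrices(s):
    """Standard (2s+1)-dim spin matrices in the |s, m> basis, m = s, ..., -s."""
    m = np.arange(s, -s - 1, -1)
    Sz = np.diag(m).astype(complex)
    # <m+1| S+ |m> = sqrt(s(s+1) - m(m+1))
    Sp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):
        mm = m[k]
        Sp[k - 1, k] = np.sqrt(s*(s + 1) - mm*(mm + 1))
    Sx = 0.5*(Sp + Sp.T)
    Sy = -0.5j*(Sp - Sp.T)
    return np.array([Sx, Sy, Sz])

S = spin_matrices(1)
# SU(2) commutation relations: [S_x, S_y] = i S_z
comm = S[0] @ S[1] - S[1] @ S[0]
assert np.allclose(comm, 1j*S[2])
# Casimir: S^2 = s(s+1) times the identity
S2 = sum(Si @ Si for Si in S)
assert np.allclose(S2, 1*(1 + 1)*np.eye(3))
```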
The remaining operators ${\bf \hat{J}}_{125}^2$, ${\bf \hat{J}}_{135}^2$, and ${\bf \hat{J}}_{\text{tot}}$ are non-diagonal matrices of differential operators. They are defined in terms of the operators $\hat{I}_r$, $\hat{\bf J}_{ri}$, and ${\bf S}_i$ as follows,
\begin{eqnarray}
\label{ch7: eq_J125_square}
({\hat{J}}_{125}^2)_{\alpha \beta} &=& [ \hat{J}_{12}^2 + \hbar^2 s(s+1) ] \delta_{\alpha \beta} + 2 \hbar \, {\bf \hat{J}}_{12} \cdot {\bf S}_{\alpha \beta} , \quad \quad \\
\label{J135_sq}
({\hat{J}}_{135}^2)_{\alpha \beta} &=& [ \hat{J}_{13}^2 + \hbar^2 s(s+1) ] \delta_{\alpha \beta} + 2 \hbar \, {\bf \hat{J}}_{13} \cdot {\bf S}_{\alpha \beta} , \\
\label{ch7: eq_Jtot_vector}
({\bf \hat{J}}_{\text{tot}})_{\alpha \beta} &=& ( {\bf \hat{J}}_1 + {\bf \hat{J}}_2 + {\bf \hat{J}}_3 + {\bf \hat{J}}_4 + {\bf \hat{J}}_6 ) \delta_{\alpha \beta} + \hbar \, {\bf S}_{\alpha \beta} .
\end{eqnarray}
These three operators act nontrivially on both the large and small angular momentum Hilbert spaces.
\section{\label{ch7: sec_12j_wave_fctn}Multicomponent Wave-functions}
We follow the approach used in paper \cite{yu2011} to find a gauge-invariant form of the multicomponent wave-functions $\psi_{\alpha}^a (x) = \braket{x, \alpha | a}$ and $\psi_\alpha^b(x) = \braket{x, \alpha | b}$. Let us focus on $\psi_\alpha^a(x)$, since the treatment for $\psi^b$ is analogous. We will drop the index $a$ for now.
Let $\hat{D}_i$, $i = 1, \dots, 12$ denote the operators listed in the definition of $\ket{a}$ in Eq.\ (\ref{ch7: eq_a_state}). We seek a unitary operator $\hat{U}$, such that $\hat{D}_i$ for all $i=1, \dots, 12$ are diagonalized when conjugated by $\hat{U}$. In other words,
\begin{equation}
\hat{U}^\dagger_{\alpha \, \mu} (\hat{D}_i)_{\alpha \, \beta} \, \hat{U}_{\beta \, \nu} = (\hat{\Lambda}_i)_{\mu \, \nu} \, ,
\end{equation}
where $\hat{\Lambda}_i$, $i =1, \dots, 12$ is a list of diagonal matrix operators. Let $\phi^{(\mu)}$ be the simultaneous eigenfunction for the $\mu^{\text{th}}$ diagonal entries $\hat{\lambda}_i$ of the operators $\hat{\Lambda}_i$, $i = 1, \dots, 12$. Then we obtain a simultaneous eigenfunction $\psi_\alpha^{(\mu)}$ of the original list of operators $\hat{D}_i$ from
\begin{equation}
\psi_\alpha^{(\mu)} = \hat{U}_{\alpha \, \mu} \, \phi^{(\mu)} \, .
\end{equation}
Since we are interested in $\psi_\alpha$ only to first order in $\hbar$, all we need are the zeroth order Weyl symbol matrix $U$ of $\hat{U}$, and the first order symbol matrix $\Lambda_i$ of $\hat{\Lambda}_i$. The resulting asymptotic form of the wave-function $\psi(x)$ is a product of a scalar WKB part $B e^{iS}$ and a spinor part $\tau$, that is,
\begin{equation}
\label{ch7: eq_general_wave-function}
\psi_\alpha^{(\mu)} (x) = B(x) \, e^{i \, S(x) / \hbar} \, \tau_{\alpha}^{(\mu)} (x, p) \, .
\end{equation}
Here the action $S(x)$ and the amplitude $B(x)$ are simultaneous solutions to the Hamilton-Jacobi and the transport equations, respectively, that are associated with the Hamiltonians $\lambda^{(\mu)}_i$. The spinor $\tau^{(\mu)}$ is the $\mu^{\text{th}}$ column of the matrix $U$,
\begin{equation}
\label{ch7: eq_U_and_tau}
\tau_{\alpha}^{(\mu)} (x, p) = U_{\alpha \mu} (x, p) \, ,
\end{equation}
where $p = \partial S(x) / \partial x$.
Now let us apply the above strategy to the $12j$ symbol. The Weyl symbols of the operators $\hat{I}_r$ and $\hat{J}_{ri}$, $r = 1,2,3,4,6$, are $I_r - 1/2$ and $J_{ri}$, respectively, where
\begin{equation}
\label{symbol_I_J}
I_r = \frac{1}{2} \, \sum_\mu \overline{z}_{r\mu} z_{r\mu} \, , \quad \quad J_{ri} = \frac{1}{2} \, \sum_{\mu\nu} \overline{z}_{r\mu} (\sigma^i)_{\mu\nu} z_{r\nu} \, ,
\end{equation}
and where $z_{r\mu} = x_{r\mu} + ip_{r\mu}$ and $\overline{z}_{r\mu} = x_{r\mu} - ip_{r\mu}$ are the symbols of $\hat{a}$ and $\hat{a}^\dagger$, respectively. The symbols of the remaining operators have the same expressions as Eqs.\ (\ref{ch7: eq_J12_J34_vector}), (\ref{ch7: eq_J13_J24_vector}), (\ref{ch7: eq_J125_square})-(\ref{ch7: eq_Jtot_vector}), but without the hats.
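As a sanity check on these symbols, note that they satisfy the identity $J_{r1}^2 + J_{r2}^2 + J_{r3}^2 = I_r^2$, that is, $|{\bf J}_r| = I_r$, so fixing $I_r$ fixes the length of ${\bf J}_r$. The following Python sketch (illustrative only, using the convention $z_\mu = x_\mu + i p_\mu$ stated above) verifies this at a random phase space point:

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(1)
x = rng.normal(size=2)
p = rng.normal(size=2)
z = x + 1j*p                                  # symbols z_mu = x_mu + i p_mu

I = 0.5*np.vdot(z, z).real                    # I = (1/2) sum_mu zbar_mu z_mu
J = np.array([0.5*np.vdot(z, s @ z).real for s in sigma])

# |J| = I, a consequence of the Fierz identity for the Pauli matrices
assert np.isclose(np.linalg.norm(J), I)
```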
Among the operators $\hat{D}_i$, $\hat{J}_{125}^2$ and the vector of the three operators $\hat{\bf J}_{\rm tot}$ are non-diagonal. By looking at Eq.\ (\ref{ch7: eq_J125_square}), the expression for $\hat{J}_{125}^2$, we see that the zeroth order term of the symbol matrix $J_{125}^2$ is already proportional to the identity matrix, so the spinor $\tau$ must be an eigenvector for the first order term ${\bf J}_{12} \cdot {\bf S}$. Let $\tau^{(\mu)}({\bf J}_{12})$ be the eigenvector of the matrix ${\bf J}_{12} \cdot {\bf S}$ with eigenvalue $\mu J_{12}$, that is, it satisfies
\begin{equation}
\label{ch7: eq_JS_eigenvector}
({\bf J}_{12} \cdot {\bf S})_{\alpha \beta} \, \tau^{(\mu)}_\beta = \mu J_{12} \, \tau^{(\mu)}_\alpha \, ,
\end{equation}
where $\mu = -s,\, \dots\, , \,+s$. In order to preserve the diagonal symbol matrices $J_{12}$ through the unitary transformation, we must choose the spinor $\tau^{(\mu)}$ to depend only on the direction of ${\bf J}_{12}$. One possible choice of $\tau^{(\mu)}$ is the north standard gauge, (see Appendix A of \cite{littlejohn1992}), in which the spinor $\delta_{\alpha\, \mu}$ is rotated along a great circle from the $z$-axis to the direction of ${\bf J}_{12}$. Explicitly,
\begin{equation}
\label{tau_north_standard_gauge}
\tau^{(\mu)}_\alpha ({\bf J}_{12}) = e^{i (\mu - \alpha) \phi_{12}} \, d^{(s)}_{\alpha \, \mu} (\theta_{12}) \, ,
\end{equation}
where $(\theta_{12}, \phi_{12})$ are the spherical coordinates that specify the direction of ${\bf J}_{12}$. Note that this is not the only choice, since Eq.\ (\ref{ch7: eq_JS_eigenvector}) is invariant under local $\mathrm{U}(1)$ gauge transformations. In other words, any other spinor $\tau' = e^{i g({\bf J}_{12})} \, \tau$ that is related to $\tau$ by a $\mathrm{U}(1)$ gauge transformation satisfies Eq.\ (\ref{ch7: eq_JS_eigenvector}). This local gauge freedom is parametrized by the vector potential,
\begin{equation}
\label{def_A_vec}
{\bf A}^{(\mu)}_{12} = i (\tau^{(\mu)} )^\dagger \, \frac{\partial \tau^{(\mu)}}{\partial {\bf J}_{12}} \, ,
\end{equation}
which transforms as ${\bf A}^{(\mu)'} = {\bf A}^{(\mu)} - \nabla_{{\bf J}_{12}} (g)$ under a local gauge transformation. Moreover, the gradient of the spinor can be expressed in terms of the vector potential, (Eq.\ (A.22) in \cite{littlejohn1992}), as follows,
\begin{equation}
\label{ch7: eq_tau_derivative}
\frac{\partial \tau^{(\mu)}}{\partial {\bf J}_{12}} = i \left( - {\bf A}_{12}^{(\mu)} + \frac{{\bf J}_{12} \times {\bf S}}{J_{12}^2} \right) \, \tau^{(\mu)} \, .
\end{equation}
Once we obtain the complete set of spinors $\tau^{(\mu)}$, $\mu = -s, \dots, s$, we can construct the zeroth order symbol matrix $U$ of the unitary transformation $\hat{U}$ from Eq.\ (\ref{ch7: eq_U_and_tau}).
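For the lowest case $s = 1/2$, where $d^{(1/2)}$ is elementary, the eigenvector property Eq.\ (\ref{ch7: eq_JS_eigenvector}) of the north standard gauge spinor can be checked directly. The following Python sketch is illustrative only; it takes ${\bf S} = \boldsymbol{\sigma}/2$ and arbitrary test values for $(\theta, \phi)$ and $J_{12}$:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def tau_north_standard(theta, phi, mu):
    """North standard gauge spinor tau^(mu) for s = 1/2:
    tau_alpha = exp(i (mu - alpha) phi) d^(1/2)_{alpha mu}(theta)."""
    d = np.array([[np.cos(theta/2), -np.sin(theta/2)],
                  [np.sin(theta/2),  np.cos(theta/2)]])   # rows alpha, cols mu = +1/2, -1/2
    col = 0 if mu > 0 else 1
    alphas = np.array([0.5, -0.5])
    return np.exp(1j*(mu - alphas)*phi) * d[:, col]

# direction and length of J_12 (arbitrary test values)
theta, phi, J12 = 0.7, 1.9, 3.5
n = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
JdotS = J12 * sum(ni*si/2 for ni, si in zip(n, sigma))    # J_12 . S, with S = sigma/2

# tau^(mu) is an eigenvector of J_12 . S with eigenvalue mu * J_12
for mu in (+0.5, -0.5):
    tau = tau_north_standard(theta, phi, mu)
    assert np.allclose(JdotS @ tau, mu*J12*tau)
```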
Now let us show that all the transformed symbol matrices of the operators in Eq.\ (\ref{ch7: eq_a_state}), namely, the $\Lambda_i$, are diagonal to first order. Let us write $\hat{\Lambda} [ \hat{D} ]$ to denote the operator $\hat{U}^\dagger \hat{D} \hat{U}$, and write $\Lambda [ \hat{D} ]$ for its Weyl symbol. First, consider the operators $\hat{I}_r$, $r = 1,2,3, 4, 6$, which are proportional to the identity matrix. Using the operator identity
\begin{equation}
[\hat{\Lambda}(\hat{I}_r)]_{\mu\nu} = \hat{U}^\dagger_{\alpha \mu} ( \hat{I}_r \delta_{\alpha \beta} ) \hat{U}_{\beta \nu} = \hat{I}_r \delta_{\mu \nu} - \hat{U}^\dagger_{\alpha \mu} [ \hat{U}_{\alpha \nu} , \, \hat{I}_r] \, ,
\end{equation}
we find
\begin{equation}
\label{ch7: eq_symbol_trick}
[\Lambda(\hat{I}_r)]_{\mu\nu} = (I_r - 1/2) \delta_{\mu \nu} - i \hbar U_{0\alpha \mu}^* \, \{ U_{0 \alpha \nu} , \, I_r \} \, ,
\end{equation}
where we have used the fact that the symbol of a commutator is a Poisson bracket. Since $U_{\alpha \mu} = \tau^{(\mu)}_\alpha$ is a function only of ${\bf J}_{12}$, and since the Poisson brackets $\{ {\bf J}_{12}, I_r \} = 0$ vanish for all $r = 1,2,3, 4, 6$, the second term in Eq.\ (\ref{ch7: eq_symbol_trick}) vanishes. We have
\begin{equation}
[\Lambda(\hat{I}_r)]_{\mu\nu} = (I_r - 1/2) \, \delta_{\mu\nu} \, .
\end{equation}
Similarly, because $\{ {\bf J}_{12}, J_{12}^2 \} = 0$ and $\{ {\bf J}_{12}, J_{34}^2 \} = 0$, we find
\begin{equation}
[\Lambda(\hat{J}_{12}^2 ) ]_{\mu\nu} = J_{12}^2 \, \delta_{\mu\nu} \, , \quad \quad
[\Lambda(\hat{J}_{34}^2 ) ]_{\mu\nu} = J_{34}^2 \, \delta_{\mu\nu} \, .
\end{equation}
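The vanishing brackets used above can be confirmed symbolically. The following sympy sketch (illustrative; it treats a single Schwinger pair, with the symbols written out in real variables using $z_\mu = x_\mu + i p_\mu$) checks that $I$ Poisson-commutes with every component of ${\bf J}$, and that $J^2$ does as well:

```python
import sympy as sp

# one Schwinger pair: canonical coordinates (x1, x2, p1, p2)
x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2', real=True)

def pb(f, g):
    """Poisson bracket on the 4-dimensional phase space of one Schwinger pair."""
    return sum(sp.diff(f, x)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, x)
               for x, p in [(x1, p1), (x2, p2)])

# symbols of I and J_i, written out with z_mu = x_mu + i p_mu
I  = (x1**2 + p1**2 + x2**2 + p2**2)/2
Jx = x1*x2 + p1*p2
Jy = x1*p2 - x2*p1
Jz = (x1**2 + p1**2 - x2**2 - p2**2)/2
J2 = Jx**2 + Jy**2 + Jz**2

# I Poisson-commutes with every component of J, and so does J^2
assert all(sp.expand(pb(Ji, I)) == 0 for Ji in (Jx, Jy, Jz))
assert all(sp.expand(pb(Ji, J2)) == 0 for Ji in (Jx, Jy, Jz))
```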
Now we find the symbol matrices $\Lambda({\bf \hat{J}}_{125})$ for the vector of operators ${\bf \hat{J}}_{125}$, where
\begin{equation}
[\hat{\Lambda}({\bf \hat{J}}_{125})]_{\mu \nu} = \hat{U}^\dagger_{\alpha \mu} ( {\bf \hat{J}}_{12} \delta_{\alpha \beta}) \hat{U}_{\beta \nu} + \hbar \, \hat{U}^\dagger_{\alpha \mu} {\bf S}_{\alpha \beta} \hat{U}_{\beta \nu} \, .
\end{equation}
After converting the above operator equation to Weyl symbols, we find
\begin{eqnarray}
\label{ch7: eq_J_125_Lambda}
&& [\Lambda({\bf \hat{J}}_{125})]_{\mu\nu} \\ \nonumber
&=& {\bf J}_{12} \delta_{\mu \nu} - i \hbar U_{\alpha \mu}^* \{ U_{\alpha \nu} , \, {\bf J}_{12} \} + \hbar \, U_{\alpha \mu}^* {\bf S}_{\alpha \beta} U_{\beta \nu} \\ \nonumber
&=& {\bf J}_{12} \delta_{\mu \nu} - i \hbar \tau^{(\mu)*}_\alpha \{ \tau^{(\nu)}_\alpha , \, {\bf J}_{12} \} + \hbar \, \tau^{(\mu)*}_\alpha {\bf S}_{\alpha \beta} \tau^{(\nu)}_\beta \, .
\end{eqnarray}
Let us denote the second term above by $T^i_{\mu \nu}$, and use Eq.\ (\ref{ch7: eq_tau_derivative}) together with the orthogonality of the spinors,
\begin{equation}
\tau^{(\mu)*}_\alpha \, \tau^{(\nu)}_\alpha = \delta_{\mu \nu} \, ,
\end{equation}
to get
\begin{eqnarray}
\label{ch7: eq_T_i}
T^i_{\mu \nu} &=& - i \hbar \tau^{(\mu)*}_\alpha \{ \tau^{(\nu)}_\alpha , \, J_{12i} \} \\ \nonumber
&=& - i \hbar \tau^{(\mu)*}_\alpha [ \{ \tau^{(\nu)}_\alpha , \, J_{1i} \} + \{ \tau^{(\nu)}_\alpha , \, J_{2i} \} ] \\ \nonumber
&=& - i \hbar \tau^{(\mu)*}_\alpha \epsilon_{kji} \left( J_{1k} \frac{\partial \tau^{(\nu)}_\alpha}{ \partial J_{1j} } + J_{2k} \frac{\partial \tau^{(\nu)}_\alpha}{ \partial J_{2j} } \right) \\ \nonumber
&=& - i \hbar \tau^{(\mu)*}_\alpha \epsilon_{kji} J_{12k} \frac{\partial \tau^{(\nu)}_\alpha}{ \partial J_{12j} } \\ \nonumber
&=& \hbar ({\bf A}_{12}^{(\mu)} \times {\bf J}_{12})_i \, \delta_{\mu \nu} + \hbar \frac{\mu {J}_{12i}}{J_{12}} \delta_{\mu \nu} - \hbar \, \tau^{(\mu)*}_\alpha {S}_{\alpha \beta} \tau^{(\nu)}_\beta \, ,
\end{eqnarray}
where in the third equality we have used the reduced Lie-Poisson bracket (Eq.\ (30) in \cite{littlejohn2007}) to evaluate the Poisson brackets $\{ \tau, {\bf J}_1 \}$ and $\{ \tau, {\bf J}_2 \}$, in the fourth equality we used $\partial \tau / \partial {\bf J}_1 = \partial \tau / \partial {\bf J}_{12}$ and $\partial \tau / \partial {\bf J}_2 = \partial \tau / \partial {\bf J}_{12}$, which follow from the chain rule, and in the fifth equality we used Eq.\ (\ref{ch7: eq_tau_derivative}) for $\partial \tau / \partial {\bf J}_{12}$. Notice that the term involving ${\bf S}$ in $T^i_{\mu\nu}$ in Eq.\ (\ref{ch7: eq_T_i}) cancels out the same term in $\Lambda({\bf \hat{J}}_{125})$ in Eq.\ (\ref{ch7: eq_J_125_Lambda}), leaving us with a diagonal symbol matrix
\begin{equation}
\label{ch7: eq_J125_vector}
[\Lambda({\bf \hat{J}}_{125})]_{\mu\nu} = \left[ {\bf J}_{12} \left( 1 + \frac{\mu \hbar}{J_{12}} \right) + \hbar \, {\bf A}_{12}^{(\mu)} \times {\bf J}_{12} \right] \delta_{\mu \nu} \, .
\end{equation}
Taking the square, we obtain
\begin{equation}
[ \Lambda({\bf \hat{J}}_{125}^2) ]_{\mu\nu}
= ( J_{12} + \mu \hbar)^2 \delta_{\mu \nu} \, .
\end{equation}
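To spell out the squaring step, note that ${\bf J}_{12} \cdot ( {\bf A}_{12}^{(\mu)} \times {\bf J}_{12} ) = 0$ by the scalar triple product, so the cross term drops out, and the term quadratic in the vector potential is of order $\hbar^2$. Thus,
\begin{eqnarray}
[ \Lambda({\bf \hat{J}}_{125}) ]^2_{\mu\nu} &=& J_{12}^2 \left( 1 + \frac{\mu \hbar}{J_{12}} \right)^2 \delta_{\mu \nu} + O(\hbar^2) \nonumber \\
&=& ( J_{12} + \mu \hbar )^2 \, \delta_{\mu \nu} + O(\hbar^2) \, ,
\end{eqnarray}
which gives the result above to the order we require.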
Finally, let us look at the last three remaining operators ${\bf \hat{J}}_{\text{tot}}$ in Eq.\ (\ref{ch7: eq_Jtot_vector}). Since each of the symbols ${\bf J}_r$ for $r = 3,4,6$ defined in Eq.\ (\ref{symbol_I_J}) Poisson commutes with ${\bf J}_{12}$, that is, $\{ {\bf J}_{12}, {\bf J}_r \} = 0$, we find $\Lambda({\bf \hat{J}}_r) = {\bf J}_r - i \hbar U_0^\dagger \{ U_0({\bf J}_{12}), {\bf J}_r \} = {\bf J}_r$, for $r = 3, 4, 6$. Using $\Lambda({\bf \hat{J}}_{125})$ from Eq.\ (\ref{ch7: eq_J125_vector}), we obtain
\begin{eqnarray}
&& [\Lambda({\bf \hat{J}}_{\text{tot}})]_{\mu\nu} \\ \nonumber
&=& \left[ {\bf J}_{12} \left( 1 + \frac{\mu \hbar}{J_{12}} \right) + \hbar \, {\bf A}_{12}^{(\mu)} \times {\bf J}_{12} + ({\bf J}_3 + {\bf J}_4 + {\bf J}_6 ) \right] \delta_{\mu \nu} \, .
\end{eqnarray}
Therefore, all $\Lambda_i$, $i = 1,\dots, 12$, are diagonal.
The analysis above is completely analogous to those in \cite{yu2011}, except that the spinor is diagonalized in the direction of the intermediate angular momentum vector ${\bf J}_{12}$. We see that the procedure in \cite{yu2011} generalizes to the case of the $12j$ symbol wave-functions without any complication. This is because of the chain rule for differentiation and Poisson brackets. See the calculations in Eq.\ (\ref{ch7: eq_T_i}).
Not counting the trivial eigenvalue equation for ${\bf S}^2$, we have $11$ Hamilton-Jacobi equations associated with the $\Lambda_i$ for each polarization $\mu$ in the $20$ dimensional phase space ${\mathbb C}^{10}$. It turns out that not all of them are functionally independent. In particular, the Hamilton-Jacobi equations $\Lambda(\hat{J}_{12}^2) = J_{12}^2 = ( j_{12} + 1/2 )^2 \hbar^2$ and $\Lambda(\hat{J}_{125}^2) = (J_{12} + \mu \hbar)^2 = ( j_{125} + 1/2 )^2 \hbar^2$ are functionally dependent. For them to be consistent, we must pick out the polarization $\mu = j_{125} - j_{12}$. This reduces the number of independent Hamilton-Jacobi equations for $S(x)$ from $11$ to $10$, half of the dimension of the phase space ${\mathbb C}^{10}$. These ten equations define the Lagrangian manifold associated with the action $S(x)$.
Now let us restore the index $a$. We express the multicomponent wave-function $\psi^a_\alpha(x)$ in the form of Eq.\ (\ref{ch7: eq_general_wave-function}),
\begin{equation}
\label{ch7: eq_general_wave-function_a}
\psi^a_\alpha (x) = B_a(x) \, e^{i S_a(x) / \hbar} \, \tau^a_\alpha(x, p) \, .
\end{equation}
Here the action $S_a(x)$ is the solution to the ten Hamilton-Jacobi equations associated with the $\mu^{\text{th}}$ entries $\lambda_i^a$ of ten of the symbol matrices $\Lambda_i^a$, given by
\begin{eqnarray}
\label{HJ_S_a}
I_1 &=& (j_1 + 1/2) \hbar \, , \\ \nonumber
I_2 &=& (j_2 + 1/2) \hbar \, , \\ \nonumber
I_3 &=& (j_3 + 1/2) \hbar \, , \\ \nonumber
I_4 &=& (j_4 + 1/2) \hbar \, , \\ \nonumber
I_6 &=& (j_6 + 1/2) \hbar \, , \\ \nonumber
J_{12}^2 &=& (j_{12} + 1/2)^2 \hbar^2 \, , \\ \nonumber
J_{34}^2 &=& (j_{34} + 1/2)^2 \hbar^2 \, , \\ \nonumber
{\bf J}_{\text{tot}}^{(a)} &=& {\bf J}_{12} \left[ 1 + \frac{\mu \hbar}{J_{12}} \right] + \hbar \, {\bf A}_{12} \times {\bf J}_{12} + ({\bf J}_3 + {\bf J}_4 + {\bf J}_6 ) = {\bf 0} \, ,
\end{eqnarray}
and $\tau^a = \tau^{(\mu)}$ with $\mu = j_{125}-j_{12}$. Note that all the Hamiltonians except the last three, ${\bf J}_{\text{tot}}^{(a)}$, preserve the value of ${\bf J}_{12}$ and ${\bf J}_6$ along their Hamiltonian flows.
We carry out an analogous analysis for $\psi^b(x)$. The result is
\begin{equation}
\label{ch7: eq_general_wave-function_b}
\psi^b_\alpha (x) = B_b(x) \, e^{i S_b(x) / \hbar} \, \tau^b_\alpha(x, p) \, ,
\end{equation}
where $S_b(x)$ is the solution to the following $10$ Hamilton-Jacobi equations:
\begin{eqnarray}
I_1 &=& (j_1 + 1/2) \hbar \, , \\ \nonumber
I_2 &=& (j_2 + 1/2) \hbar \, , \\ \nonumber
I_3 &=& (j_3 + 1/2) \hbar \, , \\ \nonumber
I_4 &=& (j_4 + 1/2) \hbar \, , \\ \nonumber
I_6 &=& (j_6 + 1/2) \hbar \, , \\ \nonumber
J_{13}^2 &=& (j_{13} + 1/2)^2 \hbar^2 \, , \\ \nonumber
J_{24}^2 &=& (j_{24} + 1/2)^2 \hbar^2 \, , \\ \nonumber
{\bf J}_{\text{tot}}^{(b)} &=& {\bf J}_{13} \left[ 1 + \frac{\nu \hbar}{J_{13}} \right] + \hbar \, {\bf A}_{13} \times {\bf J}_{13} + ({\bf J}_2 + {\bf J}_4 + {\bf J}_6 ) = {\bf 0} \, .
\end{eqnarray}
Here the spinor $\tau^b = \tau_b^{(\nu)}$ satisfies
\begin{equation}
\label{ch7: eq_JS_eigenvector_b}
({\bf J}_{13} \cdot {\bf S})_{\alpha \beta} \, ( \tau^{(\nu)}_b)_\beta = \nu J_{13} \, ( \tau^{(\nu)}_b)_\beta \, ,
\end{equation}
where $\nu = j_{135}-j_{13}$.
The vector potential ${\bf A}_{13}$ is defined by
\begin{equation}
{\bf A}_{13} = i ( \tau^b )^\dagger \, \frac{\partial \tau^b }{\partial {\bf J}_{13}} \, .
\end{equation}
Again, note that all the Hamiltonians except the last three, ${\bf J}_{\text{tot}}^{(b)}$, preserve the value of ${\bf J}_{13}$ and ${\bf J}_6$ along their Hamiltonian flows.
\section{\label{ch7: sec_12j_gauge_inv_fctn}The Gauge-Invariant Form of the Wave-functions}
We follow the procedure described by the analysis preceding Eq.\ (69) in \cite{yu2011} to transform the wave-functions into their gauge-invariant form. The result is a gauge-invariant representation of the wave-function,
\begin{equation}
\label{wave_fctn_factorization_a}
\psi^a(x) = B_a(x) \, e^{i S_a^{9j}(x) / \hbar} \, \left[ U_a(x) \, \tau^a(x_0) \right] \, ,
\end{equation}
where the action $S_a^{9j}(x)$ is the integral of $p \, dx$ starting at a point $z_0$, which is the lift of a reference point $x_0$ in the Lagrangian manifold ${\mathcal L}_a^{9j}$. The Lagrangian manifold ${\mathcal L}_a^{9j}$ is defined by the following equations:
\begin{eqnarray}
\label{HJ_S_a_9j}
I_1 &=& (j_1 + 1/2) \hbar \, , \\ \nonumber
I_2 &=& (j_2 + 1/2) \hbar \, , \\ \nonumber
I_3 &=& (j_3 + 1/2) \hbar \, , \\ \nonumber
I_4 &=& (j_4 + 1/2) \hbar \, , \\ \nonumber
I_6 &=& (j_6 + 1/2) \hbar \, , \\ \nonumber
J_{12}^2 &=& (j_{12} + 1/2)^2 \hbar^2 \, , \\ \nonumber
J_{34}^2 &=& (j_{34} + 1/2)^2 \hbar^2 \, , \\ \nonumber
{\bf J}_{\text{tot}} &=& {\bf J}_1 + {\bf J}_2 + {\bf J}_3 + {\bf J}_4 + {\bf J}_6 = {\bf 0} \, .
\end{eqnarray}
The rotation matrix $U_a(x)$ that appears in Eq.\ (\ref{wave_fctn_factorization_a}) is determined by the $\mathrm{SO}(3)$ rotation that transforms the shape configuration of ${\bf J}_{12}$ and ${\bf J}_6$ at the reference point $z_0 = (x_0, p(x_0))$ on ${\mathcal L}_a^{9j}$ to the shape configuration of ${\bf J}_{12}$ and ${\bf J}_6$ at the point $z = (x, p(x))$ on ${\mathcal L}_a^{9j}$. Here ${\bf J}_{12}$ and ${\bf J}_6$ are functions of $z$ and are defined in Eq.\ (\ref{symbol_I_J}).
Similarly, the multicomponent wave-function for the state $\ket{b}$ has the following form,
\begin{equation}
\label{wave_fctn_factorization_b}
\psi^b(x) = B_b(x) \, e^{i S_b^{9j}(x) / \hbar} \, \left[ U_b(x) \, \tau^b(x_0) \right] \, ,
\end{equation}
where the action $S_b^{9j}(x)$ is the integral of $p \, dx$ starting at a point that is the lift of $x_0$ onto the Lagrangian manifold ${\mathcal L}_b^{9j}$. The Lagrangian manifold ${\mathcal L}_b^{9j}$ is defined by the following equations:
\begin{eqnarray}
\label{HJ_S_b_9j}
I_1 &=& (j_1 + 1/2) \hbar \, , \\ \nonumber
I_2 &=& (j_2 + 1/2) \hbar \, , \\ \nonumber
I_3 &=& (j_3 + 1/2) \hbar \, , \\ \nonumber
I_4 &=& (j_4 + 1/2) \hbar \, , \\ \nonumber
I_6 &=& (j_6 + 1/2) \hbar \, , \\ \nonumber
J_{13}^2 &=& (j_{13} + 1/2)^2 \hbar^2 \, , \\ \nonumber
J_{24}^2 &=& (j_{24} + 1/2)^2 \hbar^2 \, , \\ \nonumber
{\bf J}_{\text{tot}} &=& {\bf J}_1 + {\bf J}_2 + {\bf J}_3 + {\bf J}_4 + {\bf J}_6 = {\bf 0} \, .
\end{eqnarray}
The rotation matrix $U_b(x)$ that appears in Eq.\ (\ref{wave_fctn_factorization_b}) is determined by the $\mathrm{SO}(3)$ rotation that transforms the shape configuration of ${\bf J}_{13}$ and ${\bf J}_6$ at the reference point $z_0 = (x_0, p(x_0))$ on ${\mathcal L}_b^{9j}$ to the shape configuration of ${\bf J}_{13}$ and ${\bf J}_6$ at the point $z = (x, p(x))$ on ${\mathcal L}_b^{9j}$.
Taking the inner product of the wave-functions, and treating the spinors as part of the slowly varying amplitudes, we find
\begin{eqnarray}
\braket{b|a}
&=& e^{i\kappa} \sum_k \Omega_k \, \text{exp} \{i [S_a^{9j}(z_k) - S_b^{9j}(z_k) - \mu_k \pi /2 ] / \hbar \} \nonumber \\
&& \left( U_b^{0k} \tau^b(z_0)\right)^\dagger \left(U_a^{0k} \tau^a(z_0)\right) .
\label{ch7: eq_general_formula}
\end{eqnarray}
In the above formula, the sum is over the components of the intersection set ${\mathcal M}_k$ between the two Lagrangian manifolds ${\mathcal L}_a^{9j}$ and ${\mathcal L}_b^{9j}$. The point $z_k$ is any point in the $k$th component. The amplitude $\Omega_k$ and the Maslov index $\mu_k$ are the results of doing the stationary phase approximation of the inner product without the spinors. Each rotation matrix $U_a^{0k}$ is determined by a path $\gamma^{a (0k)}$ that goes from $z_0$ to $z_k$ along ${\mathcal L}_a^{9j}$, and $U_b^{0k}$ is similarly defined. The formula Eq.\ (\ref{ch7: eq_general_formula}) is independent of the choice of $z_k$, because any other choice $z_k'$ will multiply both $U_a^{0k}$ and $U_b^{0k}$ by the same additional rotation matrix, which cancels out in the product $(U_b^{0k})^\dagger U_a^{0k}$.
The above analysis is a straightforward application of the theoretical result developed in \cite{yu2011}. We present the detail of this analysis to show that the procedure outlined in \cite{yu2011} does generalize to higher $3nj$ symbols, such as the $12j$ symbol.
\section{\label{ch7: sec_lag_mfd_actions}The Lagrangian Manifolds and Actions}
We now analyze the Lagrangian manifolds ${\mathcal L}_a^{9j}$ and ${\mathcal L}_b^{9j}$, defined by the Hamilton-Jacobi equations Eq.\ (\ref{HJ_S_a_9j}) and Eq.\ (\ref{HJ_S_b_9j}), respectively. We focus on ${\mathcal L}_a^{9j}$ first, since the treatment for ${\mathcal L}_b^{9j}$ is analogous. Let $\pi: \Phi_{5j} \rightarrow \Lambda_{5j}$ denote the projection of the large phase space $\Phi_{5j} = ({\mathbb C}^2)^5$ onto the angular momentum space $\Lambda_{5j} = ({\mathbb R}^3)^5$, through the functions ${\bf J}_{ri}$, $r = 1,2,3,4,6$. The first five equations, $I_r = (j_r + 1/2) \hbar$, $r = 1,2,3,4,6$, fix the lengths of the five vectors, $|{\bf J}_r| = J_r$. The three equations for the total angular momentum,
\begin{equation}
\label{ch7: eq_total_J_0}
{\bf J}_{\text{tot}} = {\bf J}_1 + {\bf J}_2 + {\bf J}_3 + {\bf J}_4 + {\bf J}_6 = {\bf 0} \, ,
\end{equation}
constrain the five vectors ${\bf J}_r$, $r = 1,2,3,4,6$, to form a closed polygon. The remaining two equations
\begin{eqnarray}
J_{12}^2 &=& (j_{12} + 1/2)^2 \hbar^2 \, , \\
J_{34}^2 &=& (j_{34} + 1/2)^2 \hbar^2 \, ,
\end{eqnarray}
put the vectors ${\bf J}_1, {\bf J}_2$ into a 1-2-12 triangle, and put the vectors ${\bf J}_3, {\bf J}_4$ into a 3-4-34 triangle. Thus, the vectors form a butterfly shape, illustrated in Fig.\ \ref{ch7: fig__9j_config_a}. This shape has two wings $(J_1, J_2, J_{12})$ and $(J_3, J_4, J_{34})$ that are free to rotate about the $J_{12}$ and $J_{34}$ edges, respectively. Moreover, the Hamilton-Jacobi equations are also invariant under an overall rotation of the vectors. Thus the projection of ${\mathcal L}_a^{9j}$ onto the angular momentum space is diffeomorphic to $\mathrm{U}(1)^2 \times \mathrm{O}(3)$.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.33\textwidth]{ch7_fig3.eps}
\caption{The configuration of a point on ${\mathcal L}_a^{9j}$, projected onto the angular momentum space $\Lambda_{5j}$, and viewed in a single ${\mathbb R}^3$.}
\label{ch7: fig__9j_config_a}
\end{center}
\end{figure}
The orbit of the group $\mathrm{U}(1)^5$ generated by $I_r$, $r = 1,2,3,4,6$ is a five-torus. Thus ${\mathcal L}_a^{9j}$ is a five-torus bundle over a sub-manifold described by the butterfly configuration in Fig.\ \ref{ch7: fig__9j_config_a}. Altogether there is a $\mathrm{U}(1)^7 \times \mathrm{SU}(2)$ action on ${\mathcal L}_a^{9j}$. If we denote coordinates on $\mathrm{U}(1)^7 \times \mathrm{SU}(2)$ by $(\psi_1, \psi_2, \psi_3, \psi_4, \psi_6, \theta_{12}, \theta_{34}, u)$, where $u \in \mathrm{SU}(2)$ and where the seven angles are the $4\pi$-periodic evolution variables corresponding to $(I_1, I_2, I_3, I_4, I_6, {\bf J}_{12}^2, {\bf J}_{34}^2)$, respectively, then the isotropy subgroup is generated by three elements, say $x=(2 \pi, 2 \pi, 2 \pi, 2 \pi, 2 \pi, 0, 0, -1)$, $y = (0, 0, 2 \pi, 2 \pi, 2 \pi, 2 \pi, 0, -1)$, and $z = (2 \pi, 2 \pi, 0, 0, 2 \pi, 0, 2 \pi, -1)$. The isotropy subgroup itself is an Abelian group of eight elements, $({\mathbb Z}_2)^3 = \{ e, x, y, z, xy, xz, yz, xyz \}$. Thus the manifold ${\mathcal L}_a^{9j}$ is topologically $\mathrm{U}(1)^7 \times \mathrm{SU}(2) / ({\mathbb Z}_2)^3$. The analysis for ${\mathcal L}_b^{9j}$ is the same.
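The structure of this eight-element isotropy subgroup is easy to confirm with a short finite-group computation. In the sketch below (an illustration, not part of the derivation), a group element is encoded as seven angles in units of $2\pi$, so that $4\pi$-periodicity becomes addition mod 2, together with a sign $\pm 1$ for the center of $\mathrm{SU}(2)$:

```python
# Elements of U(1)^7 x SU(2) relevant to the isotropy subgroup:
# seven angles in units of 2*pi (4*pi-periodicity = "mod 2"),
# plus +-1 for the center of SU(2).

def compose(g, h):
    """Group law: add angles mod 4*pi, multiply the SU(2) center signs."""
    angles = tuple((a + b) % 2 for a, b in zip(g[:7], h[:7]))
    return angles + (g[7] * h[7],)

e = (0, 0, 0, 0, 0, 0, 0, 1)
x = (1, 1, 1, 1, 1, 0, 0, -1)   # (2pi, 2pi, 2pi, 2pi, 2pi, 0, 0, -1)
y = (0, 0, 1, 1, 1, 1, 0, -1)
z = (1, 1, 0, 0, 1, 0, 1, -1)

# close the set generated by x, y, z under composition
group = {e}
while True:
    bigger = group | {compose(g, h) for g in group for h in (x, y, z)}
    if bigger == group:
        break
    group = bigger

print(len(group), all(compose(g, g) == e for g in group))   # 8 True
```

The loop reproduces the eight elements $\{e, x, y, z, xy, xz, yz, xyz\}$, each of which squares to the identity, as expected for $({\mathbb Z}_2)^3$.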
Now it is easy to find the invariant measure on ${\mathcal L}_a^{9j}$ and ${\mathcal L}_b^{9j}$. It is $d \psi_1 \wedge d \psi_2 \wedge d\psi_3 \wedge d\psi_4 \wedge d \psi_6 \wedge d\theta_{12} \wedge d \theta_{34} \wedge du$, where $du$ is the Haar measure on $\mathrm{SU}(2)$. The volumes $V_A$ of ${\mathcal L}_a^{9j}$ and $V_B$ of ${\mathcal L}_b^{9j}$ with respect to this measure are
\begin{equation}
V_A = V_B = \frac{1}{8} \, (4 \pi)^7 \times 16 \pi^2 = 2^{15} \pi^9 \, ,
\end{equation}
where the $1/8$ factor compensates for the eight-element isotropy subgroup.
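The arithmetic $(1/8)(4\pi)^7 \times 16\pi^2 = 2^{15}\pi^9$ can be checked in one line:

```python
import math

# (1/8) * (4*pi)^7 * 16*pi^2 should equal 2^15 * pi^9
V = (1 / 8) * (4 * math.pi) ** 7 * (16 * math.pi ** 2)
print(math.isclose(V, 2 ** 15 * math.pi ** 9))   # True
```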
We now examine the intersections of ${\mathcal L}_a^{9j}$ and ${\mathcal L}_b^{9j}$ in detail. Because the two lists of Hamilton-Jacobi equations (\ref{HJ_S_a_9j}) and (\ref{HJ_S_b_9j}) share the common equations $I_r = j_r + 1/2$, $r = 1,2,3,4,6$, the intersection in the large phase space $\Phi_{5j}$ is a five-torus fiber bundle over the intersection of the projections in the angular momentum space $\Lambda_{5j}$. The intersections of the projections in $\Lambda_{5j}$ require the five vectors ${\bf J}_r$, $r = 1,2,3,4,6$, to satisfy
\begin{eqnarray}
\label{ch7: eq_vector_equations}
| {\bf J}_r | = J_r \, , \quad\quad\quad \sum_r {\bf J}_r = {\bf 0} \, , \\ \nonumber
|{\bf J}_1 + {\bf J}_2 | = J_{12} \, , \quad \quad |{\bf J}_3 + {\bf J}_4 | = J_{34} \, , \\ \nonumber
|{\bf J}_1 + {\bf J}_3 | = J_{13} \, , \quad \quad |{\bf J}_2 + {\bf J}_4 | = J_{24} \, .
\end{eqnarray}
A nice way of constructing the vectors satisfying Eq.\ (\ref{ch7: eq_vector_equations}) follows the procedure given in the appendix of \cite{littlejohn2009}, which was generalized to apply to the symmetric treatment of the $9j$ symbol in \cite{littlejohn2010a}. For completeness, in the next few paragraphs we summarize the construction of \cite{littlejohn2010a}, using the unsymmetrical labeling of the $9j$ symbol adopted in this paper.
The construction uses the Gram matrix $G$ of dot products among the four vectors ${\bf J}_i$, $i = 1,2,3,4$. Some of the dot products are determined by the lengths of the vectors $J_i$, $i = 1,2,3,4,6$, and by the intermediate couplings $J_i$, $i = 12,34,13,24$. In particular, the diagonal elements are $J_i^2$, $i = 1,2,3,4$, and some of the off-diagonal elements are given by
\begin{eqnarray}
{\bf J}_1 \cdot {\bf J}_2 &=& \frac{1}{2} ( J_{12}^2 - J_1^2 - J_2^2 ) \, , \\
{\bf J}_3 \cdot {\bf J}_4 &=& \frac{1}{2} ( J_{34}^2 - J_3^2 - J_4^2 ) \, , \\
{\bf J}_1 \cdot {\bf J}_3 &=& \frac{1}{2} (J_{13}^2 - J_1^2 - J_3^2) \, , \\
{\bf J}_2 \cdot {\bf J}_4 &=& \frac{1}{2} (J_{24}^2 - J_2^2 - J_4^2 ) \, .
\end{eqnarray}
Let us denote the remaining two unknown dot products by $x = {\bf J}_1 \cdot {\bf J}_4$ and $y = {\bf J}_2 \cdot {\bf J}_3$. We have
\begin{widetext}
\begin{eqnarray}
G
&=& \left(
\begin{array}{cccc}
J_1^2 & \frac{1}{2} ( J_{12}^2 - J_1^2 - J_2^2 ) & \frac{1}{2} (J_{13}^2 - J_1^2 - J_3^2) & x \\
\frac{1}{2} ( J_{12}^2 - J_1^2 - J_2^2 ) & J_2^2 & y & \frac{1}{2} (J_{24}^2 - J_2^2 - J_4^2 ) \\
\frac{1}{2} (J_{13}^2 - J_1^2 - J_3^2) & y & J_3^2 & \frac{1}{2} ( J_{34}^2 - J_3^2 - J_4^2 ) \\
x & \frac{1}{2} (J_{24}^2 - J_2^2 - J_4^2 ) & \frac{1}{2} ( J_{34}^2 - J_3^2 - J_4^2 ) & J_4^2 \\
\end{array}
\right) \, . \label{ch7: eq_gram_matrix}
\end{eqnarray}
\end{widetext}
The unknown dot products $x$ and $y$ can be solved from a system of two equations. The first equation follows from Eq.\ (\ref{ch7: eq_total_J_0}). Moving ${\bf J}_6$ to the other side, and taking the square, it becomes
\begin{eqnarray}
J_6^2 &=& ({\bf J}_1 + {\bf J}_2 + {\bf J}_3 + {\bf J}_4)^2 \\ \nonumber
&=& J_1^2 + J_2^2 + J_3^2 + J_4^2 + 2 {\bf J}_1 \cdot {\bf J}_2 + 2 {\bf J}_3 \cdot {\bf J}_4 \\ \nonumber
&& \, + 2 {\bf J}_1 \cdot {\bf J}_3 + 2 {\bf J}_2 \cdot {\bf J}_4 + 2x + 2y \, ,
\end{eqnarray}
which gives us a linear relation between $x$ and $y$,
\begin{equation}
\label{ch7: eq_x_y_relation}
x + y = \frac{1}{2} (J_1^2 + J_2^2 + J_3^2 + J_4^2 + J_6^2 - J_{12}^2 - J_{34}^2 - J_{13}^2 - J_{24}^2) \, .
\end{equation}
This is the same equation as Eq.\ (6) in \cite{littlejohn2010a}, except for the relabelling of the vectors. The second equation comes from the fact that the Gram matrix of the dot products between any four vectors in ${\mathbb R}^3$ has a zero determinant. That is,
\begin{equation}
\label{ch7: eq_G_det_eq_0}
P(x, y)\equiv |G| = 0 \, .
\end{equation}
This constitutes a second equation for $x$ and $y$. Substituting the linear relation Eq.\ (\ref{ch7: eq_x_y_relation}) into Eq.\ (\ref{ch7: eq_G_det_eq_0}) leads to a quartic equation $Q(x) = 0$, which we can solve for $x$. Then we can use Eq.\ (\ref{ch7: eq_x_y_relation}) to solve for $y$. In general, we find two sets of real solutions, $(x, y) = (x_1, y_1)$ and $(x, y) = (x_2, y_2)$. See \cite{littlejohn2010a} for more details.
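As a concrete numerical illustration of this step (a sketch, not the method of \cite{littlejohn2010a}), the snippet below builds $|G|$ with $y = c - x$ substituted, recovers the quartic $Q(x)$ by interpolating through five sample points, and checks that the dot product $x = {\bf J}_1 \cdot {\bf J}_4$ of a reference configuration is among its real roots. The lengths here are generic random values, not quantized angular momenta:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

# reference configuration: four random vectors; J6 closes the polygon
J = {i: rng.normal(size=3) for i in (1, 2, 3, 4)}
J[6] = -(J[1] + J[2] + J[3] + J[4])

L = {i: np.linalg.norm(v) for i, v in J.items()}              # lengths J_r
L[12] = np.linalg.norm(J[1] + J[2]); L[34] = np.linalg.norm(J[3] + J[4])
L[13] = np.linalg.norm(J[1] + J[3]); L[24] = np.linalg.norm(J[2] + J[4])

# known off-diagonal dot products, e.g. J1 . J2 = (J12^2 - J1^2 - J2^2)/2
d12 = (L[12]**2 - L[1]**2 - L[2]**2) / 2
d34 = (L[34]**2 - L[3]**2 - L[4]**2) / 2
d13 = (L[13]**2 - L[1]**2 - L[3]**2) / 2
d24 = (L[24]**2 - L[2]**2 - L[4]**2) / 2

# the linear relation x + y = c of Eq. (x_y_relation)
c = (L[1]**2 + L[2]**2 + L[3]**2 + L[4]**2 + L[6]**2
     - L[12]**2 - L[34]**2 - L[13]**2 - L[24]**2) / 2

def detG(x):
    """|G| with y = c - x substituted; a quartic polynomial in x."""
    y = c - x
    G = np.array([[L[1]**2, d12,      d13,      x],
                  [d12,     L[2]**2,  y,        d24],
                  [d13,     y,        L[3]**2,  d34],
                  [x,       d24,      d34,      L[4]**2]])
    return np.linalg.det(G)

# a quartic is fixed by five samples; recover Q(x) and find its real roots
xs = np.linspace(-2.0, 2.0, 5)
Q = Polynomial.fit(xs, [detG(x) for x in xs], 4)
roots = [r.real for r in Q.roots() if abs(r.imag) < 1e-8]

x_true = float(J[1] @ J[4])
print(any(abs(r - x_true) < 1e-6 for r in roots))   # True
```

Since the reference vectors actually realize the constraints, $x_{\rm true}$ is an exact root of $Q$, and the interpolation through five points recovers the quartic exactly.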
For each solution $(x, y)$, we obtain all the dot products among the first four vectors. Assuming the $3 \times 3$ diagonal submatrices of the Gram matrix in Eq.\ (\ref{ch7: eq_gram_matrix}) are positive definite, we can follow the procedure outlined in the appendix of \cite{littlejohn2009} to obtain the vectors. Let $G_3$ be the first diagonal $3 \times 3$ sub-matrix of $G$. We use its singular value decomposition to determine the vectors ${\bf J}_1, {\bf J}_2, {\bf J}_3$. We can then find ${\bf J}_4$ from the known dot products between ${\bf J}_i$, $i=1,2,3$, and ${\bf J}_4$. Finally, we obtain ${\bf J}_6$ from
\begin{equation}
{\bf J}_6 = - ({\bf J}_1 + {\bf J}_2 + {\bf J}_3 + {\bf J}_4 ) \, .
\end{equation}
Once we have ${\bf J}_i$, $i=1,2,3,4,6$, we add them up pairwise to find the intermediate vectors ${\bf J}_i$, $i = 12, 34, 13, 24$. This completes the construction of all nine vectors in ${\mathbb R}^3$.
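The reconstruction can be sketched in a few lines of Python. Here we use a Cholesky factorization of $G_3$ rather than the decomposition of \cite{littlejohn2009}; the two realizations differ only by an overall rotation, which is immaterial. We start from a random reference configuration, keep only its dot products, rebuild the vectors, and confirm that all dot products are reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
ref = {i: rng.normal(size=3) for i in (1, 2, 3, 4)}            # reference configuration
dots = {(i, j): float(ref[i] @ ref[j]) for i in ref for j in ref}

# G3 = Gram matrix of J1, J2, J3; a Cholesky factor G3 = A A^T realizes
# the three vectors as the rows of A
G3 = np.array([[dots[i, j] for j in (1, 2, 3)] for i in (1, 2, 3)])
A = np.linalg.cholesky(G3)
J = {i: A[i - 1] for i in (1, 2, 3)}

# J4 is fixed by its dot products with J1, J2, J3
M = np.array([J[1], J[2], J[3]])
J[4] = np.linalg.solve(M, [dots[1, 4], dots[2, 4], dots[3, 4]])

# J6 closes the polygon
J[6] = -(J[1] + J[2] + J[3] + J[4])

# the reconstruction agrees with the reference up to an overall
# rotation/reflection, so all pairwise dot products must match
ok = all(abs(J[i] @ J[j] - dots[i, j]) < 1e-8
         for i in (1, 2, 3, 4) for j in (1, 2, 3, 4))
print(ok)   # True
```

The intermediate vectors ${\bf J}_{12}, {\bf J}_{34}, {\bf J}_{13}, {\bf J}_{24}$ then follow from pairwise sums, exactly as in the text.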
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.33\textwidth]{ch7_fig4.eps}
\caption{The configuration of a point on the intersection $I_{11}$ set, projected onto the angular momentum space $\Lambda_{5j}$, and viewed in a single ${\mathbb R}^3$.}
\label{ch7: fig_9j_config_I_11}
\end{center}
\end{figure}
The construction of the vector configuration above not only gives explicit solutions for all the vectors at the intersection of ${\mathcal L}_a^{9j}$ and ${\mathcal L}_b^{9j}$, but also shows that there are generally two distinct solutions for the dot products, $(x_1, y_1)$ and $(x_2, y_2)$. This implies that the solution set consists of two families of vector configurations that are not related by an $O(3)$ symmetry. Thus the solution set of Eq.\ (\ref{ch7: eq_vector_equations}) in $\Lambda_{5j}$ consists of four disconnected subsets, each diffeomorphic to $SO(3)$. These four subsets can be grouped into two pairs according to the values of the dot products $(x_1, y_1)$ and $(x_2, y_2)$.
The intersections in $\Phi_{5j}$ are the lifts of the intersections in $\Lambda_{5j}$. Therefore, the intersection of ${\mathcal L}_a^{9j}$ and ${\mathcal L}_b^{9j}$ consists of four disconnected subsets, where each subset is a five-torus bundle over $\mathrm{SO}(3)$. Let us denote the two sets corresponding to $(x_1, y_1)$ by $I_{11}, I_{12}$, and the two sets corresponding to $(x_2, y_2)$ by $I_{21}, I_{22}$. The vector configuration for a typical point in $I_{11}$ is illustrated in Fig.\ \ref{ch7: fig_9j_config_I_11}. Each intersection set is an orbit of the group $\mathrm{U}(1)^5 \times \mathrm{SU}(2)$, where $\mathrm{U}(1)^5$ represents the phases of the five spinors and $\mathrm{SU}(2)$ is the diagonal action generated by ${\bf J}_{\rm tot}$.
The isotropy subgroup of this group action is ${\mathbb Z}_2$, generated by the element $(2 \pi, 2 \pi, 2\pi, 2\pi, 2\pi, -1)$ in the coordinates $(\psi_1, \psi_2, \psi_3, \psi_4, \psi_6, u)$ for the group $\mathrm{U}(1)^5 \times \mathrm{SU}(2)$, where $u \in \mathrm{SU}(2)$. The volume of the intersection manifold $I_{11}$, $I_{12}$, $I_{21}$, or $I_{22}$, with respect to the measure $d \psi_1 \wedge d \psi_2 \wedge d\psi_3 \wedge d\psi_4 \wedge d \psi_6 \wedge du$, is
\begin{equation}
V_I = \frac{1}{2} (4 \pi)^5 \times 16 \pi^2 = 2^{13} \pi^7 \, ,
\end{equation}
where the $1/2$ factor compensates for the two-element isotropy subgroup.
The amplitude determinant is given by the determinant of Poisson brackets among the Hamiltonians that differ between the two lists of Hamilton-Jacobi equations, Eqs.\ (\ref{HJ_S_a_9j}) and (\ref{HJ_S_b_9j}). In this case, these are $(J_{12}, J_{34})$ from Eq.\ (\ref{HJ_S_a_9j}) and $(J_{13}, J_{24})$ from Eq.\ (\ref{HJ_S_b_9j}). Thus the determinant of Poisson brackets is
\begin{eqnarray}
&& \left|
\begin{array}{cc}
\{J_{12}, \, J_{13} \} & \{J_{12}, \, J_{24} \} \\
\{J_{34}, \, J_{13} \} & \{J_{34}, \, J_{24} \} \\
\end{array}
\right| \nonumber \\ \nonumber
&=& \frac{1}{J_{12} J_{34} J_{13} J_{24} } \,
\left|
\begin{array}{cc}
V_{123} & V_{214} \\
V_{341} & V_{432} \\
\end{array}
\right| \\
&=& \frac{1}{J_{12} J_{34} J_{13} J_{24} } | V_{123} V_{432} - V_{214} V_{341} | \, ,
\end{eqnarray}
where
\begin{equation}
V_{ijk} = {\bf J}_i \cdot ({\bf J}_j \times {\bf J}_k) \, .
\end{equation}
The amplitude $\Omega_k$ in Eq.\ (\ref{ch7: eq_general_formula}) can be inferred from Eq.\ (10) in \cite{littlejohn2010a}. In the present case, each $\Omega_k$ has the same expression $\Omega$. It is
\begin{eqnarray}
\Omega &=& \frac{(2 \pi i ) V_I}{\sqrt{V_A V_B} } \, \frac{\sqrt{J_{12} J_{34} J_{13} J_{24}} }{\sqrt{| V_{123} V_{432} - V_{214} V_{341} |} } \nonumber \\ \nonumber
&=& \frac{(2 \pi i ) 2^{13} \pi^7 }{ 2^{15} \pi^9 } \, \frac{\sqrt{J_{12} J_{34} J_{13} J_{24}} }{\sqrt{| V_{123} V_{432} - V_{214} V_{341} |} } \\
&=& \frac{i \sqrt{J_{12} J_{34} J_{13} J_{24}}}{2 \pi \sqrt{| V_{123} V_{432} - V_{214} V_{341} |} } \, .
\label{ch7: eq_amplitude}
\end{eqnarray}
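For a concrete vector configuration, the amplitude is straightforward to evaluate numerically; the sketch below uses generic random vectors (in units with $\hbar = 1$) and the triple products $V_{ijk} = {\bf J}_i \cdot ({\bf J}_j \times {\bf J}_k)$ defined above:

```python
import numpy as np

rng = np.random.default_rng(3)
J = {i: rng.normal(size=3) for i in (1, 2, 3, 4)}

def V(i, j, k):
    """Triple product V_ijk = J_i . (J_j x J_k)."""
    return float(J[i] @ np.cross(J[j], J[k]))

# intermediate coupling lengths J12, J34, J13, J24
J12 = np.linalg.norm(J[1] + J[2]); J34 = np.linalg.norm(J[3] + J[4])
J13 = np.linalg.norm(J[1] + J[3]); J24 = np.linalg.norm(J[2] + J[4])

# |Omega| from the amplitude formula above
det = abs(V(1, 2, 3) * V(4, 3, 2) - V(2, 1, 4) * V(3, 4, 1))
omega_abs = np.sqrt(J12 * J34 * J13 * J24) / (2 * np.pi * np.sqrt(det))

print(omega_abs > 0, np.isclose(V(1, 2, 3), -V(1, 3, 2)))   # True True
```

The second printed check is the antisymmetry of the triple product under exchange of its last two vectors.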
We now outline the calculation of the relative phase between the exponents $S_a(z_{12}) - S_b(z_{12})$ and $S_a(z_{11}) - S_b(z_{11})$, which can be written as an action integral
\begin{equation}
S^{(1)} = (S_a(z_{12}) - S_b(z_{12}) ) - (S_a(z_{11}) - S_b(z_{11}) ) = \oint \, p \, dx \,
\end{equation}
around a closed loop that goes from $z_{11}$ to $z_{12}$ along ${\mathcal L}_a^{9j}$ and then back along ${\mathcal L}_b^{9j}$.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.45\textwidth]{ch7_fig5.eps}
\caption{The loop from a point $p \in I_{11}$ to $q \in I_{12}$ along ${\mathcal L}_a^{9j}$, and then to $q' \in I_{12}$ along $I_{12}$, and then to $p' \in I_{11}$ along ${\mathcal L}_b^{9j}$, and finally back to $p$ along $I_{11}$.}
\label{ch7: fig_loop_large_space}
\end{center}
\end{figure}
We shall construct the closed loop giving the relative phase $S^{(1)}$ by following the Hamiltonian flows of various observables. This loop consists of four paths, and it is illustrated in the large phase space $\Phi_{5j}$ in Fig.\ \ref{ch7: fig_loop_large_space}. The loop projects onto a loop in the angular momentum space $\Lambda_{5j}$, which is illustrated in Fig.\ \ref{ch7: fig_loop_small_space}. We take the starting point $p \in I_{11}$ of Fig.\ \ref{ch7: fig_loop_large_space} to lie in the five-torus fiber above a solution of Eq.\ (\ref{ch7: eq_vector_equations}). Its vector configuration is illustrated in Fig.\ \ref{ch7: fig_loop_small_space}(a).
First we follow the ${\bf J}_{12}^2$ flow and then the ${\bf J}_{34}^2$ flow to trace out a path that takes us along ${\mathcal L}_a^{9j}$ from a point $p$ in $I_{11}$ to a point $q$ in $I_{12}$. Let the angles of rotation be $2 \phi_{12}$ and $2 \phi_{34}$, respectively, where $\phi_{12}$ is the angle between the triangles 1-2-12 and 12-34-6, and $\phi_{34}$ is the angle between the triangles 3-4-34 and 12-34-6. These rotations effectively reflect the triangles 1-2-12 and 3-4-34 across the triangle 12-34-6, as illustrated in Figs.\ \ref{ch7: fig_loop_small_space}(a) and \ref{ch7: fig_loop_small_space}(b). In addition, the triangle 13-24-6 is also reflected across its own plane. Thus, all five vectors ${\bf J}_r$, $r = 1,2,3,4,6$, are reflected across the triangle 12-34-6.
Next, we follow the Hamiltonian flow generated by $-{\bf j}_6 \cdot {\bf J}_{\rm tot}$ along $I_{12}$, which generates an overall rotation of all the vectors around $- {\bf j}_6$. Let the angle of rotation be $2 \phi_6$, where $\phi_6$ is the angle between the triangles 12-34-6 and 13-24-6. This brings the triangle 13-24-6 back to its original position. However, the triangle 12-34-6 is now rotated to the other side of triangle 13-24-6, as illustrated in Fig.\ \ref{ch7: fig_loop_small_space}(c). This corresponds to the point $q'$ in Fig. \ref{ch7: fig_loop_large_space}.
To bring the triangle 12-34-6 back to its original position, we follow the ${\bf J}_{13}^2$ flow and ${\bf J}_{24}^2$ flow along ${\mathcal L}_b^{9j}$. Let the angles of rotation be $2\phi_{13}$ and $2\phi_{24}$, respectively, where $\phi_{13}$ is the angle between the triangle 1-3-13 and the triangle 13-24-6, and $\phi_{24}$ is the angle between the triangle 2-4-24 and the triangle 13-24-6. These rotations effectively reflect all the vectors across the triangle 13-24-6. Thus we arrive at a point $p' \in I_{11}$, where the points $p$ and $p'$ have the same projection in the angular momentum space $\Lambda_{5j}$. This is illustrated in Figs.\ \ref{ch7: fig_loop_small_space}(a) and \ref{ch7: fig_loop_small_space}(d). Thus the two points $p$ and $p'$ differ only by the phases of the five spinors, which can be restored by following the Hamiltonian flows of $(I_1, I_2, I_3, I_4, I_6)$. This constitutes the last path from $p'$ to $p$.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.45\textwidth]{ch7_fig6.eps}
\caption{The loop from Fig.\ \ref{ch7: fig_loop_large_space} projected onto a loop in $\Lambda_{5j}$, as viewed in a single ${\mathbb R}^3$.}
\label{ch7: fig_loop_small_space}
\end{center}
\end{figure}
To summarize the rotational history in the angular momentum space, we have applied the rotations
\begin{eqnarray}
\label{ch8: eq_rotations}
&& R_{13} ({\bf j}_{13}', 2 \phi_{13}) R_{24}({\bf j}_{24}', 2 \phi_{24}) R( - {\bf j}_6, 2 \phi_6) \\ \nonumber
&& R_{34} ({\bf j}_{34}, 2 \phi_{34}) R_{12}({\bf j}_{12}, 2 \phi_{12}) \, ,
\end{eqnarray}
where $R_{12}$ acts only on ${\bf J}_1$ and ${\bf J}_2$, $R_{34}$ acts only on ${\bf J}_3$ and ${\bf J}_4$, $R_{13}$ acts only on ${\bf J}_1$ and ${\bf J}_3$, $R_{24}$ acts only on ${\bf J}_2$ and ${\bf J}_4$, and $R( - {\bf j}_6, 2 \phi_6 )$ acts on all five vectors. The corresponding $SU(2)$ rotations, with the same axes and angles, take us from point $p$ in Fig.\ \ref{ch7: fig_loop_large_space} to another point $p'$ along the sequence $p \rightarrow q \rightarrow q' \rightarrow p'$.
To compute the final five phases required to close the loop, we use the Hamilton-Rodrigues formula \cite{whittaker1960}, in the same way as Eq.\ (46) in \cite{littlejohn2010b}. Let us start with vector ${\bf J}_1$. The action of the rotations on this vector can be written
\begin{equation}
\label{ch7: eq_J1_rotation}
R({\bf j}_{13}, 2 \phi_{13}) R(- {\bf j}_6, 2 \phi_6) R({\bf j}_{12}, 2 \phi_{12}) {\bf J}_1 = {\bf J}_1 \, .
\end{equation}
By inserting an edge ${\bf J}_{16} = {\bf J}_1 + {\bf J}_6$ as in part (c) of Fig.\ \ref{ch7: fig_angle_split_tetrahedra}, we split the angle $\phi_6$ that appears in the middle rotation in Eq.\ (\ref{ch7: eq_J1_rotation}) into two internal dihedral angles $\phi_{6a}$ and $\phi_{6b}$, of the tetrahedra in Figs.\ \ref{ch7: fig_angle_split_tetrahedra}(a) and \ref{ch7: fig_angle_split_tetrahedra}(b), respectively.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.45\textwidth]{ch7_fig7.eps}
\caption{Decomposition of the angles $\phi_1$ and $\phi_6$ into dihedral angles in two tetrahedra.}
\label{ch7: fig_angle_split_tetrahedra}
\end{center}
\end{figure}
Then the rotations in Eq.\ (\ref{ch7: eq_J1_rotation}) become
\begin{eqnarray}
&& R({\bf j}_{13}, 2 \phi_{13}) R(- {\bf j}_6, 2 \phi_6) R({\bf j}_{12}, 2 \phi_{12}) \nonumber \\ \nonumber
&=& [R({\bf j}_{13}, 2 \phi_{13}) R(- {\bf j}_6, 2 \phi_{6a})] [R(- {\bf j}_6, 2 \phi_{6b}) R({\bf j}_{12}, 2 \phi_{12}) ] \\ \nonumber
&=& R({\bf j}_1, 2 \phi_{1a}) R({\bf j}_1, 2 \phi_{1b}) \\
&=& R({\bf j}_1, 2 \phi_1) \, ,
\label{ch7: eq_J1_holonomy}
\end{eqnarray}
where we have used the Hamilton-Rodrigues formula twice in the second equality. In the third equality, we used the fact that $\phi_1 = \phi_{1a} + \phi_{1b}$, where the angles $\phi_{1a}$ and $\phi_{1b}$ are internal dihedral angles for the tetrahedra in Figs.\ \ref{ch7: fig_angle_split_tetrahedra}(a) and \ref{ch7: fig_angle_split_tetrahedra}(b), respectively. Thus, we find that the product of the three rotations in Eq.\ (\ref{ch7: eq_J1_rotation}) is $R({\bf j}_1, 2 \phi_1)$, where $\phi_1$ is the angle between the triangle 1-2-12 and the triangle 1-3-13. We can lift the rotation Eq.\ (\ref{ch7: eq_J1_holonomy}) up to $SU(2)$ with the same axis and angle. Its action on the spinor at $p$ is a pure phase. To undo this pure phase, we follow the Hamiltonian flow of $I_1$ by an angle $-2 \phi_1$, modulo $2 \pi$.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.30\textwidth]{ch7_fig8.eps}
\caption{The angle $\phi_r$ is the angle between the normals of the two adjacent triangles sharing the edge $J_r$, where the normals are defined by the orientations of the triangles shown. This is essentially Fig.\ 2 in \cite{littlejohn2010a}, with the unsymmetrical labeling of the $9j$ symbol.}
\label{ch7: fig_outer_normals}
\end{center}
\end{figure}
Similarly, we can find the rotations acting on ${\bf J}_2, {\bf J}_3, {\bf J}_4$, and ${\bf J}_6$, and proceed to calculate the action integral as in \cite{littlejohn2010b}. Instead of completing the derivation of the action integral using our unsymmetrical labeling of the $9j$ symbol, we will quote the result from Eq.\ (12) in \cite{littlejohn2010a}. It is given by
\begin{equation}
\label{ch7: eq_action_integral}
S^{(1)} = 2 \sum_r \, J_r \psi_r^{(1)} \, ,
\end{equation}
where $\psi_r^{(1)} = \pi - \phi_r$ is the external dihedral angle between the normals of the two triangles adjacent to $J_r$. The orientations of the triangles are defined in Fig.\ \ref{ch7: fig_outer_normals}. The sum is over $r = 1,2,3,4,6, 12, 34, 13, 24$. The relative action integral that corresponds to the other solution $(x_2, y_2)$ of Eq.\ (\ref{ch7: eq_vector_equations}) is
\begin{equation}
\label{ch7: eq_action_integral_2}
S^{(2)} = 2 \sum_r \, J_r \psi_r^{(2)} \, ,
\end{equation}
which has the same expression as Eq.\ (\ref{ch7: eq_action_integral}), but we should note that the angles $\psi_r^{(2)}$ are different from $\psi_r^{(1)}$, because the vector configuration has a different set of dot products. As in \cite{littlejohn2010a}, we pick $S^{(1)}$ to correspond to the root in which $- \pi \le \psi_r \le \pi$, and pick $S^{(2)}$ to correspond to the root in which $0 \le \psi_r \le \pi$.
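The angles entering the actions can be computed from the triangle normals. The helper below (an illustrative sketch) returns the unsigned angle between the planes spanned by a shared edge and two other vectors; the orientation conventions of Fig.\ \ref{ch7: fig_outer_normals}, which fix the signs of the $\psi_r$ and select the two branches, are not encoded here and would have to be supplied separately:

```python
import numpy as np

def dihedral(edge, a, b):
    """Unsigned angle between the planes (edge, a) and (edge, b),
    computed from the normals edge x a and edge x b."""
    n1, n2 = np.cross(edge, a), np.cross(edge, b)
    cosang = n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def action(lengths, psis):
    """S = 2 * sum_r J_r psi_r, for matched lists of edge lengths and angles."""
    return 2.0 * sum(Jr * psi for Jr, psi in zip(lengths, psis))

# sanity check: with the shared edge along z, the planes spanned with x
# and with y are perpendicular
ex, ey, ez = np.eye(3)
print(np.isclose(dihedral(ez, ex, ey), np.pi / 2))   # True
```

With the nine edge lengths $J_r$ and the corresponding angles $\psi_r$ in hand, `action` evaluates Eq.\ (\ref{ch7: eq_action_integral}) directly.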
Altogether, the asymptotic formula for the $9j$ symbol when all $j$'s are large is given by Eq.\ (1) in \cite{littlejohn2010a}, which we reproduce here:
\begin{eqnarray}
\label{ch7: eq_9j_all_j_large}
&& \left\{
\begin{array}{ccc}
j_1 & j_2 & j_{12} \\
j_3 & j_4 & j_{34} \\
j_{13} & j_{24} & j_6 \\
\end{array}
\right\} \\ \nonumber
&=& \frac{1}{4 \pi \sqrt{| V_{123}^{(1)} V_{432}^{(1)} - V_{214}^{(1)} V_{341}^{(1)} |}} \cos (S^{(1)}) \\ \nonumber
&& + \frac{1}{ 4 \pi \sqrt{| V_{123}^{(2)} V_{432}^{(2)} - V_{214}^{(2)} V_{341}^{(2)} |}} \sin (S^{(2)}) \, .
\end{eqnarray}
It is found from Eq.\ (17) and Eq.\ (18) in \cite{littlejohn2010a} that, when the configuration goes to its time-reversed image, that is, when all the vectors reverse their directions, the actions transform according to $S^{(1)} \rightarrow - S^{(1)}$ and $S^{(2)} \rightarrow - S^{(2)} + 2 \pi (\sum_{r=1}^9 j_r) + 9 \pi $. As a result, the two terms $\cos(S^{(1)})$ and $\sin(S^{(2)})$ in the $9j$ formula (\ref{ch7: eq_9j_all_j_large}) are invariant under time-reversal symmetry. In the asymptotic formula (\ref{ch7: eq_main_formula_12j}) for the $12j$ symbol that we will derive below, the additional phases generated from the spinor products will break this time-reversal symmetry.
Putting the amplitudes $\Omega$ from Eq.\ (\ref{ch7: eq_amplitude}) and the relative actions $S^{(1)}$ and $S^{(2)}$ into Eq.\ (\ref{ch7: eq_general_formula}), we find
\begin{widetext}
\begin{eqnarray}
\braket{b|a}
&=& e^{i\kappa_1} \frac{ \sqrt{J_{12} J_{34} J_{13} J_{24}}}{2 \pi \sqrt{| V_{123}^{(1)} V_{432}^{(1)} - V_{214}^{(1)} V_{341}^{(1)} |} } \left[ ( \tau^b(z_{11}) )^\dagger (\tau^a(z_{11})) + e^{i ( S^{(1)} - \mu_1 \pi /2) / \hbar} \left( U_b^{(1)} \tau^b(z_{11}) \right)^\dagger \left(U_a^{(1)} \tau^a(z_{11})\right) \right]
\label{ch7: eq_general_formula_2} \\ \nonumber
&& + e^{i\kappa_2} \frac{ \sqrt{J_{12} J_{34} J_{13} J_{24}}}{2 \pi \sqrt{| V_{123}^{(2)} V_{432}^{(2)} - V_{214}^{(2)} V_{341}^{(2)} |} } \left[ ( \tau^b(z_{21}) )^\dagger (\tau^a(z_{21})) + e^{i ( S^{(2)} - \mu_2 \pi /2) / \hbar} \left( U_b^{(2)} \tau^b(z_{21}) \right)^\dagger \left(U_a^{(2)} \tau^a(z_{21})\right) \right]
\end{eqnarray}
\end{widetext}
where the superscripts $(1)$ and $(2)$ are labels used to distinguish the first and the second solutions to Eq.\ (\ref{ch7: eq_vector_equations}). Here we have factored out two arbitrary phases $e^{i \kappa_1}$ and $e^{i \kappa_2}$ for the two pairs of stationary phase contributions. The rotation matrices $U_a^{(i)}$, $i = 1,2$, are determined by the paths from $z_{i1}$ to $z_{i2}$ along ${\mathcal L}_a^{9j}$. Similarly the rotation matrices $U_b^{(i)}$, $i = 1,2$, are determined by the paths from $z_{i1}$ to $z_{i2}$ along ${\mathcal L}_b^{9j}$. See Eq.\ (76) in \cite{yu2011} for a similar, but simpler, expression for the case of the $9j$ symbol.
\section{\label{ch7: sec_spinor_products}Spinor Products}
We choose the vector configuration associated with $z_{11}$ to correspond to a particular orientation of the vectors. We put ${\bf J}_{12}$ along the $z$ axis, and put ${\bf J}_6$ inside the $xz$ plane, as illustrated in Fig.\ \ref{ch7: fig_config_z11}. Let the inclination and azimuth angles $(\theta, \phi)$ denote the direction of the vector ${\bf J}_{13}$. From Fig.\ \ref{ch7: fig_config_z11}, we see that $\phi$ is the angle between the $({\bf J}_{12}, {\bf J}_6)$ plane and the $({\bf J}_{12}, {\bf J}_{13})$ plane. We denote this angle by $\phi = \phi_{12}$. The inclination angle $\theta$ is the angle between the vectors ${\bf J}_{12}$ and ${\bf J}_{13}$.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.30\textwidth]{ch7_fig9.eps}
\caption{The vector configuration at the point $z_{11}$ in $I_{11}$.}
\label{ch7: fig_config_z11}
\end{center}
\end{figure}
The gauge choices for the spinors at the reference point $z_{11}$ are arbitrary, and they only contribute a phase that can be absorbed into $e^{i \kappa_1}$. To be concrete, since ${\bf J}_{12}$ points in the $z$ direction, we choose the spinor $\tau^a(z_{11})$ to be the $\mu$th standard eigenvector of $S_z$, that is,
\begin{equation}
\tau_\alpha^a ( z_{11} ) = \delta_{\alpha \mu} \, .
\end{equation}
For the spinor $\tau^b(z_{11})$, we choose it to be an eigenvector of ${\bf J}_{13} \cdot {\bf S}$ in the north standard gauge, that is,
\begin{equation}
\tau_\alpha^b(z_{11}) = e^{i (\alpha - \nu) \phi_{12}} \, d^s_{\nu \alpha } (\theta) \, ,
\end{equation}
where $(\phi_{12}, \theta)$ are the spherical angles of ${\bf J}_{13}$ in the reference frame described above, in which ${\bf J}_{12}$ is in the $z$-direction and ${\bf J}_6$ is in the $xz$ plane; see Fig.\ \ref{ch7: fig_config_z11}.
Taking the spinor inner product, we obtain
\begin{equation}
\label{ch7: eq_spinor_prod_11}
(\tau^b(z_{11}))^\dagger (\tau^a(z_{11})) = e^{- i (\mu - \nu) \phi_{12}} \, d^s_{\nu \mu} (\theta) \, .
\end{equation}
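The reduced rotation matrix $d^s_{\nu\mu}(\theta)$ appearing here (and in the final asymptotic formula) can be evaluated from Wigner's explicit sum. The sketch below uses doubled indices so that half-integer spins are exact, adopts the common convention $d^j_{m'm}(\beta) = \langle j m'|e^{-i\beta J_y}|j m\rangle$, and verifies the inner product Eq.\ (\ref{ch7: eq_spinor_prod_11}) for $s = 1$; it illustrates the index bookkeeping only, not the gauge conventions, which are fixed by the text:

```python
import math
import numpy as np

def small_d(j2, m2, n2, beta):
    """Wigner d^j_{m n}(beta) via the explicit sum, with doubled indices
    j2 = 2j, m2 = 2m, n2 = 2n so that half-integer spins are exact."""
    jm, jmm = (j2 + m2) // 2, (j2 - m2) // 2
    jn, jnn = (j2 + n2) // 2, (j2 - n2) // 2
    pref = math.sqrt(math.factorial(jm) * math.factorial(jmm)
                     * math.factorial(jn) * math.factorial(jnn))
    total = 0.0
    for s in range(j2 + 1):
        a, b = jn - s, s                      # (j + n - s)!, s!
        c, d = (m2 - n2) // 2 + s, jmm - s    # (m - n + s)!, (j - m - s)!
        if min(a, b, c, d) < 0:
            continue
        total += ((-1) ** c * math.cos(beta / 2) ** (a + d)
                  * math.sin(beta / 2) ** (b + c)
                  / (math.factorial(a) * math.factorial(b)
                     * math.factorial(c) * math.factorial(d)))
    return pref * total

# the spinors of Eq. (spinor_prod_11) for s = 1 (doubled: s2 = 2)
s2, theta, phi12 = 2, 0.7, 1.3
mu2, nu2 = 2, 0                               # mu = 1, nu = 0

tau_a = {a2: 1.0 if a2 == mu2 else 0.0 for a2 in range(-s2, s2 + 1, 2)}
tau_b = {a2: np.exp(1j * (a2 - nu2) / 2 * phi12) * small_d(s2, nu2, a2, theta)
         for a2 in range(-s2, s2 + 1, 2)}

inner = sum(np.conj(tau_b[a2]) * tau_a[a2] for a2 in tau_a)
expected = np.exp(-1j * (mu2 - nu2) / 2 * phi12) * small_d(s2, nu2, mu2, theta)
print(np.isclose(inner, expected))   # True
```

For $s = 1/2$ the routine reproduces the familiar matrix with entries $\cos(\theta/2)$ and $\mp\sin(\theta/2)$, and the rows of $d^s(\theta)$ are normalized, so $\tau^b$ is a unit spinor.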
To evaluate the other spinor product at $z_{12}$, we need to find the rotation matrices $U_a^{(1)}$ and $U_b^{(1)}$, which are generated from paths $\gamma_a$ and $\gamma_b$ from $z_{11}$ to $z_{12}$ along ${\mathcal L}_a^{9j}$ and ${\mathcal L}_b^{9j}$, respectively.
We choose the path $\gamma_a$ to be the path from $p$ to $q$ generated by the ${\bf J}_{12}^2$ flow and the ${\bf J}_{34}^2$ flow, which is illustrated in Fig.\ \ref{ch7: fig_loop_large_space} in the large phase space, and in Figs.\ \ref{ch7: fig_loop_small_space}(a) and \ref{ch7: fig_loop_small_space}(b) in the angular momentum space. This path contains no flow generated by the total angular momentum, so
\begin{equation}
U_a^{(1)} = 1 \, .
\end{equation}
We choose the path $\gamma_b$ to be the inverse of the path from $q$ back to $p$ along ${\mathcal L}_b^{9j}$ in Fig.\ \ref{ch7: fig_loop_large_space}, which contains only one overall rotation around $- {\bf j}_6$. Thus
\begin{equation}
U_b^{(1)} = U( {\bf \hat{j}}_6, 2 \phi_6) \, .
\end{equation}
The rotation associated with $U_b^{(1)}$ is illustrated in Fig.\ \ref{ch7: fig_loop_small_space}(b). It effectively moves ${\bf J}_{13}$ to its mirror image ${\bf J}_{13}'$ across the 12-34-6 triangle in the $xz$-plane, which has the direction given by $(- \phi_{12}, \theta)$. Thus $U_b^{(1)} \, \tau^b(z_{11})$ is an eigenvector of ${\bf J}_{13}' \cdot {\bf S}$, and is, up to a phase, equal to the eigenvector of ${\bf J}_{13}' \cdot {\bf S}$ in the north standard gauge. Thus, we have
\begin{equation}
[U_b^{(1)} \, \tau^b(z_{11})]_\alpha = e^{i \nu H_{13}} \, e^{- i (\alpha - \nu) \phi_{12}} \, d^s_{\nu \alpha } (\theta) \, ,
\end{equation}
where $H_{13}$ is a holonomy phase factor equal to the area of a spherical triangle on a unit sphere; see Fig.\ \ref{ch7: fig_spherical_area}. Therefore, the spinor product at the intersection $I_{12}$ is
\begin{equation}
\label{ch7: eq_spinor_prod_12}
( U_b^{(1)} \, \tau^b(z_{11}))^\dagger (U_a^{(1)} \, \tau^a(z_{11})) = e^{i \nu H_{13}} \, e^{i (\mu - \nu) \phi_{12}} \, d^s_{\nu \mu} (\theta) \, .
\end{equation}
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.30\textwidth]{ch7_fig10.eps}
\caption{The phase difference between two gauge choices can be expressed as an area around a closed loop on the unit sphere.}
\label{ch7: fig_spherical_area}
\end{center}
\end{figure}
Let us denote the first term in Eq.\ (\ref{ch7: eq_general_formula_2}) by $T_1$. Substituting the spinor inner products of Eqs.\ (\ref{ch7: eq_spinor_prod_11}) and (\ref{ch7: eq_spinor_prod_12}) into Eq.\ (\ref{ch7: eq_general_formula_2}), we find that $T_1$ is given by
\begin{eqnarray}
\label{ch7: eq_12j_formula_3a}
T_1
&=& \frac{e^{i \kappa_1} \sqrt{J_{12} J_{34} J_{13} J_{24}} }{ \pi \sqrt{ |V_{123} V_{432} - V_{214} V_{341} |}} \, d^s_{\nu \mu} (\theta) \\ \nonumber
&& \times \cos \left[ S^{(1)} - \frac{\mu_1 \pi}{4} + \mu \phi_{12} + \nu \left( \frac{H_{13}}{2} - \phi_{12} \right) \right] \, .
\end{eqnarray}
Using a different choice of the reference point and paths, we can derive an alternative expression for the inner product, and eliminate the term $H_{13}$. Let us choose a new reference point $z_{11}$ to correspond to an orientation in which ${\bf J}_{13}$ is along the $z$-axis, and ${\bf J}_6$ lies in the $x$-$z$ plane. We choose the path $\gamma_a$ to go from $p$ to $q'$ along the first two paths in Fig.\ \ref{ch7: fig_loop_large_space}, and we choose $\gamma_b$ to be the inverse of the last two paths that go from $q'$ back to $p$ in Fig.\ \ref{ch7: fig_loop_large_space}. Through essentially the same arguments, we find
\begin{eqnarray}
\label{ch7: eq_12j_formula_3b}
T_1
&=& \frac{e^{i \kappa_1} \sqrt{J_{12} J_{34} J_{13} J_{24}} }{\pi \sqrt{ |V_{123} V_{432} - V_{214} V_{341} |}} \, d^s_{\nu \mu} (\theta) \, \\ \nonumber
&& \times \cos \left[ S^{(1)} - \frac{\mu_1 \pi}{4} + \mu \left(\frac{H_{12}}{2} - \phi_{13} \right) + \nu \phi_{13} \right] \, .
\end{eqnarray}
Here $H_{12}$ is another holonomy phase for the ${\bf J}_{12}$ vector, and the angle $\phi_{13}$ is the angle between the $({\bf J}_{13}, {\bf J}_{12})$ plane and the $({\bf J}_{13}, {\bf J}_6)$ plane. Because the quantities $\psi_r, \phi_{12}, \phi_{13}, H_{12}, H_{13}$ depend only on the geometry of the vector configuration, and are independent of $\mu$ and $\nu$, we conclude that the argument of the cosine must be linear in $\mu$ and $\nu$. Equating the arguments of the cosines in Eqs.\ (\ref{ch7: eq_12j_formula_3a}) and (\ref{ch7: eq_12j_formula_3b}), we find that this linear term is $( \mu \phi_{12} + \nu \phi_{13} )$. Using the Maslov index $\mu_1 = 0$ from \cite{littlejohn2010a}, we find
\begin{eqnarray}
\label{ch7: eq_main_formula_wo_phase1}
T_1
&=& \frac{e^{i \kappa_1} \sqrt{J_{12} J_{34} J_{13} J_{24}} }{\pi \sqrt{ | V_{123}^{(1)} V_{432}^{(1)} - V_{214}^{(1)} V_{341}^{(1)} |}} \, d^s_{\nu \mu} (\theta^{(1)}) \\ \nonumber
&& \,
\times \cos \left( S^{(1)} + \mu \phi_{12}^{(1)} + \nu \phi_{13}^{(1)} \right) \, ,
\end{eqnarray}
where we have put back the superscript $(1)$. Through an analogous calculation, we find
\begin{eqnarray}
\label{ch7: eq_main_formula_wo_phase2}
T_2
&=& \frac{e^{i \kappa_2 } \, \sqrt{J_{12} J_{34} J_{13} J_{24}} }{ \pi \sqrt{ | V_{123}^{(2)} V_{432}^{(2)} - V_{214}^{(2)} V_{341}^{(2)} |}} \, d^s_{\nu \mu} (\theta^{(2)}) \, \\ \nonumber
&& \, \times \sin \left( S^{(2)} + \mu \phi_{12}^{(2)} + \nu \phi_{13}^{(2)} \right) \, .
\end{eqnarray}
\section{\label{ch7: sec_12j_formula}Asymptotic Formula for the $12j$ Symbol}
From the definition in Eq.\ (\ref{ch7: eq_12j_definition}), we see that the factor $( [j_{12}][j_{34}] [j_{13}][j_{24}] )^{1/2}$ in the denominator of Eq.\ (\ref{ch7: eq_12j_definition}) partially cancels the factor $( J_{12} J_{34} J_{13} J_{24} )^{1/2}$ from $T_1$ and $T_2$ in Eqs.\ (\ref{ch7: eq_main_formula_wo_phase1}) and (\ref{ch7: eq_main_formula_wo_phase2}), respectively, leaving a constant factor of $1/4$. Because the $12j$ symbol is a real number, the relative phase between $e^{i \kappa_1}$ and $e^{i \kappa_2}$ must be $\pm 1$; through numerical experimentation, we found it to be $+1$. We use the limiting case $j_5 = s = 0$ of Eq.\ (A9) in \cite{jahn1954} to determine most of the overall phase convention, and fix the rest through numerical experimentation. Putting the pieces together, we obtain a new asymptotic formula for the $12j$ symbol with one small quantum number:
\begin{widetext}
\begin{eqnarray}
\label{ch7: eq_main_formula_12j}
\left\{
\begin{array}{cccc}
j_1 & j_2 & j_{12} & j_{125} \\
j_3 & j_4 & j_{34} & j_{135} \\
j_{13} & j_{24} & s & j_6 \\
\end{array}
\right\}
= \frac{(-1)^{\mu}}{ 4 \pi \, \sqrt{ (2j_{125}+1)(2j_{135}+1) }} \,
&& \left[ \frac{d^{s}_{\nu \, \mu} (\theta^{(1)})}{ \sqrt{| V_{123}^{(1)} V_{432}^{(1)} - V_{214}^{(1)} V_{341}^{(1)} | } } \cos (S^{(1)} + \mu \phi_{12}^{(1)} + \nu \phi_{13}^{(1)}) \right. \\ \nonumber
&& \left. \quad \, + \frac{d^{s}_{\nu \, \mu} (\theta^{(2)})}{ \sqrt{| V_{123}^{(2)} V_{432}^{(2)} - V_{214}^{(2)} V_{341}^{(2)} |}} \sin (S^{(2)} + \mu \phi_{12}^{(2)} + \nu \phi_{13}^{(2)}) \right] \, .
\end{eqnarray}
\end{widetext}
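The constant factor of $1/4$ quoted above follows directly from the conventions $[j] = 2j+1$ and $J = j + \tfrac{1}{2}$ (so that $[j] = 2J$), which we assume to be the conventions of this series:

```latex
\frac{ \sqrt{J_{12} J_{34} J_{13} J_{24}} }
     { \left( [j_{12}]\,[j_{34}]\,[j_{13}]\,[j_{24}] \right)^{1/2} }
= \frac{ \sqrt{J_{12} J_{34} J_{13} J_{24}} }
       { \sqrt{ 16\, J_{12} J_{34} J_{13} J_{24} } }
= \frac{1}{4} \, .
```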
As mentioned above, the additional terms from the spinor product break the time-reversal symmetry. Thus, it is essential that $S^{(1)}$ and $S^{(2)}$ are evaluated at the configurations in which $V = {\mathbf J}_6 \cdot ({\mathbf J}_{12} \times {\mathbf J}_{13}) < 0$, and not at their mirror images.
Here, the indices on the $d$-matrix are given by $\mu = j_{125}-j_{12}$ and $\nu = j_{135}-j_{13}$. They are of the same order as the small parameter $s$. The phases $S^{(1)}$ and $S^{(2)}$ are defined in (\ref{ch7: eq_action_integral}), and the $V$'s are defined by
\begin{equation}
V_{ijk} = {\bf J}_i \cdot ({\bf J}_j \times {\bf J}_k) \, .
\end{equation}
The angles $\phi_{12}$ and $\phi_{13}$ are internal dihedral angles at the edges $J_{12}$ and $J_{13}$, respectively, of a tetrahedron formed by the six vectors ${\bf J}_{12}, {\bf J}_{13}, {\bf J}_{24}, {\bf J}_{34}, {\bf J}_6$, and ${\bf J}_{2'3}$, where ${\bf J}_{2'3} = {\bf J}_3 - {\bf J}_2$. This tetrahedron is illustrated in Fig.\ \ref{ch7: fig_tetrahedron_9j}. The angle $\theta$ is the angle between the vectors ${\bf J}_{12}$ and ${\bf J}_{13}$. The explicit expressions for the angles $\phi_{12}$, $\phi_{13}$, and $\theta$ are given by the following equations:
\begin{eqnarray}
\label{ch7: eq_phi_12_def}
\phi_{12} &=& \pi - \cos^{-1} \left( \frac{ ({\bf J}_{12} \times {\bf J}_{13} ) \cdot ({\bf J}_{12} \times {\bf J}_{6} ) }{ | {\bf J}_{12} \times {\bf J}_{13} | \, | {\bf J}_{12} \times {\bf J}_{6} |} \right) \, , \\
\label{ch7: eq_phi_13_def}
\phi_{13} &=& \pi - \cos^{-1} \left( \frac{ ({\bf J}_{13} \times {\bf J}_{12} ) \cdot ({\bf J}_{13} \times {\bf J}_{6} ) }{ | {\bf J}_{13} \times {\bf J}_{12} | \, | {\bf J}_{13} \times {\bf J}_{6} |} \right) \, , \\
\label{ch7: eq_theta_def}
\theta &=& \cos^{-1} \left( \frac{ {\bf J}_{12} \cdot {\bf J}_{13} }{J_{12} J_{13} } \right) \, .
\end{eqnarray}
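All of the geometric quantities entering the asymptotic formula can be evaluated numerically from a given vector configuration. The sketch below implements the triple products $V_{ijk}$ and Eqs.\ (\ref{ch7: eq_phi_12_def})--(\ref{ch7: eq_theta_def}); the three vectors chosen at the end are an arbitrary illustrative configuration, not one taken from this paper:

```python
import numpy as np

def triple_product(a, b, c):
    # V_{ijk} = J_i . (J_j x J_k)
    return np.dot(a, np.cross(b, c))

def dihedral_angle(j_edge, j_a, j_b):
    # phi = pi - angle between the (j_edge, j_a) and (j_edge, j_b)
    # planes, cf. the definitions of phi_12 and phi_13 above
    n1 = np.cross(j_edge, j_a)
    n2 = np.cross(j_edge, j_b)
    cosang = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.pi - np.arccos(np.clip(cosang, -1.0, 1.0))

def angle_between(j_a, j_b):
    # theta = angle between two vectors
    cosang = np.dot(j_a, j_b) / (np.linalg.norm(j_a) * np.linalg.norm(j_b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Arbitrary illustrative configuration (not data from the paper):
J12 = np.array([1.0, 0.0, 0.0])
J13 = np.array([0.0, 1.0, 0.0])
J6  = np.array([0.0, 0.0, 1.0])

phi12 = dihedral_angle(J12, J13, J6)   # dihedral angle at edge J_12
phi13 = dihedral_angle(J13, J12, J6)   # dihedral angle at edge J_13
theta = angle_between(J12, J13)        # angle between J_12 and J_13
V     = triple_product(J6, J12, J13)   # sign selects the configuration
```

For this mutually orthogonal configuration all three angles come out to $\pi/2$; in an actual evaluation of the formula one would also check the sign of $V$, since the phases must be computed at configurations with $V<0$.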
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.28\textwidth]{ch7_fig11.eps}
\caption{The angles $\phi_{12}$ and $\phi_{13}$ are internal dihedral angles in the tetrahedron with the six lengths ${\bf J}_6, {\bf J}_{12}, {\bf J}_{34}, {\bf J}_{13}, {\bf J}_{24}, {\bf J}_{2'3}$, where ${\bf J}_{2'3} = {\bf J}_3 - {\bf J}_2$. The angle $\theta$ is the angle between ${\bf J}_{12}$ and ${\bf J}_{13}$. }
\label{ch7: fig_tetrahedron_9j}
\end{center}
\end{figure}
\section{\label{ch7: sec_plots}Plots}
We illustrate the accuracy of the approximation Eq.\ (\ref{ch7: eq_main_formula_12j}) by plotting it against the exact $12j$ symbol in the classically allowed region for the following values of the $j$'s:
\begin{equation}
\label{ch7: eq_12j_values_1_11_case1}
\left\{
\begin{array}{cccc}
j_1 & j_2 & j_{12} & j_{125} \\
j_3 & j_4 & j_{34} & j_{135} \\
j_{13} & j_{24} & s_5 & j_6 \\
\end{array}
\right\}
=
\left\{
\begin{array}{rrrr}
51/2 & 59/2 & 21 & 22 \\
55/2 & 53/2 & 27 & 26 \\
27 & 25 & 1 & j_6 \\
\end{array}
\right\} \, .
\end{equation}
The result is shown in Fig.\ \ref{ch7: fig_12j_plot_1_case1}. From the error plot in Fig.\ \ref{ch7: fig_12j_11_1_errors}(a), we see that the agreement is excellent, even for these relatively small values of the $j$'s.
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.50\textwidth]{ch7_fig12.eps}
\caption{Comparison of the exact $12j$ symbol (vertical sticks and dots) and the asymptotic formula (\ref{ch7: eq_main_formula_12j}) in the classically allowed region away from the caustics, for the values of $j$'s shown in Eq.\ (\ref{ch7: eq_12j_values_1_11_case1}). }
\label{ch7: fig_12j_plot_1_case1}
\end{center}
\end{figure}
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.50\textwidth]{ch7_fig13.eps}
\caption{Comparison of the exact $12j$ symbol (vertical sticks and dots) and the asymptotic formula (\ref{ch7: eq_main_formula_12j}) in the classically allowed region away from the caustics, for the values of $j$'s shown in Eq.\ (\ref{ch7: eq_12j_values_1_11}). }
\label{ch7: fig_12j_plot_1}
\end{center}
\end{figure}
\begin{figure}[tbhp]
\begin{center}
\includegraphics[width=0.50\textwidth]{ch7_fig14.eps}
\caption{Absolute value of the error of the asymptotic formula (\ref{ch7: eq_main_formula_12j}) for (a) the case shown in Eq.\ (\ref{ch7: eq_12j_values_1_11_case1}), and (b) the case shown in Eq.\ (\ref{ch7: eq_12j_values_1_11}). The error is defined as the difference between the approximate value and the exact value. }
\label{ch7: fig_12j_11_1_errors}
\end{center}
\end{figure}
Since the asymptotic formula (\ref{ch7: eq_main_formula_12j}) should become more accurate as the values of the $j$'s get larger, we plot the formula against the exact $12j$ symbol for another example,
\begin{equation}
\label{ch7: eq_12j_values_1_11}
\left\{
\begin{array}{cccc}
j_1 & j_2 & j_{12} & j_{125} \\
j_3 & j_4 & j_{34} & j_{135} \\
j_{13} & j_{24} & s_5 & j_6 \\
\end{array}
\right\}
=
\left\{
\begin{array}{rrrr}
211/2 & 219/2 & 91 & 92 \\
205/2 & 223/2 & 107 & 108 \\
99 & 93 & 2 & j_6 \\
\end{array}
\right\} \, ,
\end{equation}
in the classically allowed region away from the caustic in Fig.\ \ref{ch7: fig_12j_plot_1}. These values of the $j$'s are roughly four times those in Eq.\ (\ref{ch7: eq_12j_values_1_11_case1}). The errors for this case are displayed in Fig.\ \ref{ch7: fig_12j_11_1_errors}(b). By comparing Figs.\ \ref{ch7: fig_12j_11_1_errors}(a) and \ref{ch7: fig_12j_11_1_errors}(b), we see that the error decreases as the values of the $j$'s increase, as expected for an asymptotic approximation.
\section{Conclusions}
In this paper, we have derived an asymptotic formula for the $12j$ symbol with one small angular momentum, generalizing the special formula for the $12j$ symbol, Eq.\ (A9) in \cite{jahn1954}. By looking at the other special formula for the $12j$ symbol, Eq.\ (A8) in \cite{jahn1954}, we can guess that the other asymptotic limit of the $12j$ symbol will involve the semiclassical analysis of the trivial $9j$ symbol, which reduces to a product of two $6j$ symbols. We will present that result in a future paper.
The analysis of the $12j$ symbol in this paper is a natural extension of the analysis of the $9j$ symbol in \cite{yu2011}. Based on the calculations in these two papers, we can summarize our steps in finding asymptotic formulas for the $3nj$ symbols with small and large quantum numbers. First, we ignore the small quantum numbers and any of the large quantum numbers that involve the indices of the small ones. For instance, in this paper, $j_5 = s_5$ is small, so we ignore $j_5$, $j_{125}$, and $j_{135}$. The remaining relevant large quantum numbers determine the Lagrangian manifolds. Once we fix the Lagrangian manifolds, the scalar WKB parts of the wave-functions can be derived from a semiclassical analysis of these Lagrangian manifolds, following the procedure in \cite{littlejohn2007, littlejohn2010b}. The spinor parts of the wave-functions at the intersection points of the Lagrangian manifolds are determined by the path used to calculate the action integral in the semiclassical analysis. Finally, taking the inner product of both the scalar part and the spinor part of the wave-functions, we can derive an asymptotic formula for the $3nj$ symbol with small and large angular momenta.
In general, we note that the asymptotic limits of a $3nj$ symbol with one small angular momentum are expressed in terms of the geometry associated with the asymptotic limits of a $3mj$ symbol, where $m=n-1$. Since the Wigner $15j$ symbol is used extensively in loop quantum gravity and topological quantum field theory, we suspect that there are deeper and more geometrical interpretations of these approximate relations of the $3nj$ symbols in their various semiclassical limits.
\section{Introduction}
\label{sect-1}
The problem of variability of fundamental physical constants has a
long history starting 70 years ago with publications by
\citet{Mil35} and \citet{Dir37}. The review of its current status is
given in \cite{Uza03,GIK07}. Recent achievements in laboratory
studies of the time-variation of fundamental constants are
described, for example, in Refs.~\cite{Lea07,FK07c}.
The variability of the dimensionless physical constants is usually
considered in the framework of the theories of fundamental
interactions such as string and M theories, Kaluza-Klein theories,
quintessence theories, etc. In turn, the experimental physics and
observational astrophysics offer possibilities to probe directly the
temporal changes in the physical constants both locally and at early
cosmological epochs comparable with the total age of the Universe
($T_{\rm U} = 13.8$ Gyr for the $H_0 = 70$ km~s$^{-1}\,$ Mpc$^{-1}$,
$\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$ cosmology). Here we discuss
a possibility of using the ground state fine-structure (FS)
transitions in atoms and ions to probe the variability of
$\alpha$ at high redshifts, up to $z \sim 10$ ($\sim96$\% of $T_{\rm
U}$).
The constants which can be probed from astronomical spectra are the
proton-to-electron mass ratio, $\mu = m_{\rm p}/m_{\rm e}$, the
fine-structure constant, $\alpha = e^2/(\hbar c)$, or different
combinations of $\mu$, $\alpha$, and the proton gyromagnetic ratio
$g_{\rm p}$. The data reported in the literature concerning the
relative values of $\Delta\mu/\mu$\,\ and $\Delta\alpha/\alpha$\,\ at $z\sim$~1--3 are controversial
at the level of a few ppm (1~ppm = $10^{-6}$): $\Delta\mu/\mu$\, = $24\pm6$ ppm
\cite{RBH06} versus $0.6\pm1.9$ ppm \cite{FK07a}, and
$\Delta\alpha/\alpha$\, = $-5.7\pm1.1$ ppm \cite{MWF03} versus $-0.6\pm0.6$ ppm \cite{SCP04},
$-0.4\pm1.9$ ppm \cite{QRL04}, and $5.4\pm2.5$ ppm \cite{LML07}.
Such a spread points unambiguously to the presence of unaccounted
systematics. Some of the possible problems were studied in
\cite{MRA07,MLM07,MWF07,SCP07}, but the revealed systematic errors
cannot explain the full range of the observed discrepancies between
the $\Delta\alpha/\alpha$\,\ and $\Delta\mu/\mu$\,\ values. We can state, however, that a
conservative upper limit on the hypothetical variability of these
constants is $10^{-5}$.
Astronomical estimates of the dimensionless physical constants are
based on the comparison of the line centers in the
absorption/emission spectra of astronomical objects and the
corresponding laboratory values. In practice, in order to
disentangle the line shifts caused by the motion of the object and
by the putative effect of the variability of constants, lines with
different sensitivities to the constant variations should be
employed. However, if different elements are involved in the
analysis, an additional source of errors due to the so-called
Doppler noise arises. The Doppler noise is caused by non-identical
spatial distributions of different species. It introduces offsets
which can either mimic or obliterate a real signal. The evaluation
of the Doppler noise is a serious problem
\cite{Lev94,L04,BSS04,Car00,KCL05,LRK07}. For this reason lines
of a single element arising exactly from the same atomic or
molecular level are desired. This would provide reliable
astronomical constraints on variations of physical constants.
In the present communication we propose to use the mid- and
far-infrared FS transitions within the ground multiplets $^3\!P_J$,
$^5\!D_J$, $^6\!D_J$, $^3\!F_J$ and $^4\!F_J$ of some of the most
abundant atoms and ions, such as Si~\textsc{i}, S~\textsc{i},
Ti~\textsc{i}, Fe~\textsc{i}, Fe~\textsc{ii}, S~\textsc{iii},
Ar~\textsc{iii}, Fe~\textsc{iii}, Mg~\textsc{v}, Ca~\textsc{v},
Na~\textsc{vi}, Fe~\textsc{vi}, Mg~\textsc{vii}, Si~\textsc{vii},
Ca~\textsc{vii}, Fe~\textsc{vii}, and Si~\textsc{ix} for
constraining the variability of $\alpha$. This approach has the
following advantages. Most important is that each element provides
two or more FS lines which can be used independently~--- this
considerably reduces the Doppler noise. The mid- and far-infrared FS
transitions are typically more sensitive to the change of $\alpha$
than optical lines. For high redshifts ($z > 2$), the far-infrared
(FIR) lines are shifted into sub-mm range. The receivers at sub-mm
wavelengths are of the heterodyne type, which means that the signal
can be fixed at a high frequency stability ($\sim 10^{-12}$).
Besides, FIR lines can be observed at early cosmological epochs ($z
\gtrsim 10$) which are far beyond the range accessible to optical
observations ($z\lesssim 4$).
\section{Astronomically observed FS transitions}\label{SecAstro}
The ground state FS transitions in mid- and far-infrared are
observed in emission in the interstellar dense and cold molecular
gas clouds, diffuse ionized gas in the star forming H~\textsc{ii}
regions and in the `coronal' gas of active galactic nuclei (AGNs),
and in the warm gas envelopes of the protostellar objects. Cold
molecular gas clouds have been observed not only in our Galaxy, but
also in numerous galaxies with redshifts $z > 1$ up to $z = 6.42$
\cite{MCC05} and often around powerful quasars and radio galaxies
\cite{Omo07}. Recently the C~\textsc{ii} 158 $\mu$m line and CO low
rotational lines were used to set a limit on the variation of the
product $\mu\alpha^2$ at $z = 4.69$ and 6.42 \cite{LRK07}. The FIR
transitions in C\,{\sc i} (370, 609 $\mu$m) were detected at
$z=2.557$ towards H1413+117 \cite{WHD03,WDH05}. Four other
observations of the C\,{\sc i} 609 $\mu$m line were reported at $z =
4.120$ (PSS 2322+1944) \cite{PBC04}, at $z = 2.285$ (IRAS
F10214+4724) and $z=2.565$ (SMM J14011+0252) \cite{WDH05}, and at $z
= 3.913$ (APM 08279+5255) \cite{WWN06}.
In our Galaxy the most luminous protostellar objects are seen in the
O~\textsc{i} lines $\lambda\lambda63, 146$ $\mu$m\, \cite{CHT96} and
in the FIR lines from intermediate ionized atoms O~\textsc{iii},
N~\textsc{iii}, N~\textsc{ii} and C~\textsc{ii}, photoionized by the
stellar continuum \cite{BP03}. The lines of N~\textsc{ii} (122, 205
$\mu$m) S~\textsc{iii} (19, 34 $\mu$m), Fe~\textsc{iii} (23 $\mu$m),
Si~\textsc{ii} (35 $\mu$m), Ne~\textsc{iii} (36 $\mu$m),
O~\textsc{iii} (52, 88 $\mu$m), N~\textsc{iii} (57 $\mu$m),
O~\textsc{i} (63, 146 $\mu$m), and C~\textsc{ii} (158 $\mu$m)
\cite{GDG77, MSF80, CHE93}, as well as Ne~\textsc{ii} (13 $\mu$m),
S~\textsc{iv} (11 $\mu$m), and Ar~\textsc{iii} (9 $\mu$m)
\cite{GWD81} have been observed in the highly obscured ($A_v \simeq
21$ mag) massive star forming region G333.6--0.2. The FS transitions
of N~\textsc{iii}, O~\textsc{iii}, Ne~\textsc{iii}, S~\textsc{iii},
Si~\textsc{ii}, O~\textsc{i}, C~\textsc{ii}, and
N~\textsc{ii} are detected in numerous Galactic H~\textsc{ii}
regions \cite{SCR95, SCC97, SRC04}. Compact and ultracompact
H~\textsc{ii} regions are the sources of the FS lines of
S~\textsc{iii}, O~\textsc{iii}, N~\textsc{iii}, Ne~\textsc{ii},
Ar~\textsc{iii}, and S~\textsc{iv}\, \cite{ACW97, OKY03}. Giant
molecular clouds in the Orion Kleinmann-Low cluster \cite{LBS06},
the Sgr~B2 complex \cite{GRC03, PBS07}, the $\rho$~Oph and
$\sigma$~Sco star-forming regions \cite{OON06}, and in the Carina
Nebula \cite{MOS04, OPS06} emit the FIR lines of O~\textsc{i},
N~\textsc{ii}, C~\textsc{ii}, Si~\textsc{ii}, O~\textsc{iii}, and
N~\textsc{iii}.
Ions with low excitation potential $E_{\rm ex} < 50$ eV
(N~\textsc{ii}, Fe~\textsc{ii}, S~\textsc{iii}, Ar~\textsc{iii},
Fe~\textsc{iii}) as well as ions with high excitation potential 50
eV $ < E_{\rm ex} \leq 351$ eV (O~\textsc{iii}, Ne~\textsc{iii},
Ne~\textsc{v}, Mg~\textsc{v}, Ca~\textsc{v}, Na~\textsc{vi},
Mg~\textsc{vii}, Si~\textsc{vii}, Ca~\textsc{vii}, Fe~\textsc{vii},
Si~\textsc{ix}) are effectively produced by hard ionizing radiation
and ionizing shocks in the gas surrounding active galactic nuclei.
The FS emission lines of these ions have been detected with the {\it
Infrared Space Observatory (ISO)} and the {\it Spitzer Space
Telescope (Spitzer)} in Seyfert galaxies, 3C radio sources and
quasars, and in ultraluminous infrared galaxies in the redshift
interval from $z \sim 0.01$ up to $z = 0.994$
\cite{GC00,SLV02,DSA06,DWS07,SVH07,GCWL07}.
The infrared FS lines of the neutral atoms Si~\textsc{i},
S~\textsc{i}, and Fe~\textsc{i} have not been detected yet in
astronomical objects, but these atoms were observed in resonance
ultraviolet lines in two damped Ly$\alpha$ systems at $z = 0.452$
\cite{VD07} and $z = 1.15$ \cite{QRB08} toward the quasars HE
0000--2340 and HE 0515--4414, respectively.
The FIR lines are expected to be observed in extragalactic objects
at a new generation of telescopes such as the Stratospheric
Observatory for Infrared Astronomy (SOFIA), the Herschel Space
Observatory originally called `FIRST' for `Far InfraRed and
Submillimeter Telescope', and the Atacama Large Millimeter Array
(ALMA) which open a new opportunity of probing the relative values
of the fundamental physical constants with an extremely high
accuracy ($\delta \sim 10^{-7}$) locally and at different
cosmological epochs.
\section{Estimate of the sensitivity coefficients}
\label{estimate}
In the nonrelativistic limit and for an infinitely heavy point-like
nucleus all atomic transition frequencies are proportional to the
Rydberg constant, $\mathcal{R}$. In this approximation, the ratio of
any two atomic frequencies does not depend on any fundamental
constants. Relativistic effects cause corrections to atomic energy,
which can be expanded in powers of $\alpha^2$ and $(\alpha Z)^2$,
the leading term being $(\alpha Z)^2\mathcal{R}$, where $Z$ is the
atomic number. Corrections accounting for the finite nuclear mass
are proportional to $\mathcal{R}/(\mu Z)$, but for atoms they are
much smaller than relativistic corrections. The finite nuclear mass
effects form the basis for the molecular constraints to the $m_{\rm
p}/m_{\rm e}$ mass ratio variation
\cite{RBH06,FK07a,T75,P77,VL93,PIV98,MSIV06}.
Consider the dependence of an atomic frequency $\omega$ on $\alpha$
in the co-moving reference frame:
\begin{align}\label{qfactor1}
\omega_z = \omega + q x + \dots, \quad x \equiv
\left({\alpha_z}/{\alpha}\right)^2 - 1\, .
\end{align}
Here $\omega$ and $\omega_z$ are the frequencies corresponding to
the present-day value of $\alpha$ and to a change $\alpha
\rightarrow \alpha_z$ at a redshift $z$. The parameter $q$
(so-called $q$-factor) is individual for each atomic transition
\cite{DFK02}.
If $\alpha$ is not a constant, the parameter $x$ differs from zero
and the corresponding frequency shift, $\Delta\omega = \omega_z -
\omega$, is given by:
\begin{align}\label{qfactor2}
{\Delta\omega}/{\omega} = 2\mathcal{Q}\,({\Delta\alpha}/{\alpha})\,,
\end{align}
where ${\cal Q} = q/\omega$ is the dimensionless sensitivity
coefficient, and $\Delta\alpha/\alpha \equiv (\alpha_z -
\alpha)/\alpha$. Here we assume that $|\Delta\alpha/\alpha| \ll 1$.
If such a frequency shift takes place for a distant object observed
at a redshift $z$, then an apparent change in the redshift,
$\Delta z = \tilde{z} - z$, occurs:
\begin{align}\label{qfactor3}
{\Delta\omega}/{\omega} = -\Delta z/(1+z) \equiv {\Delta v}/{c}\, ,
\end{align}
where $\Delta v$ is the Doppler radial velocity shift. If $\omega'$
is the observed frequency from a distant object, then the true
redshift is given by
\begin{align}\label{zfactor1}
1+z = \omega_z/\omega' \, ,
\end{align}
whereas the shifted (apparent) value is
\begin{align}\label{zfactor2}
1+\tilde{z} = \omega/\omega' \, .
\end{align}
If we have two lines of the same element with the apparent redshifts
$\tilde{z}_1$ and $\tilde{z}_2$ and the corresponding sensitivity
coefficients ${\cal Q}_1$ and ${\cal Q}_2$, then
\begin{align}\label{zfactor3}
2\Delta {\cal Q}(\Delta\alpha/\alpha) = (\tilde{z}_1 -
\tilde{z}_2)/(1 + z ) = \Delta v /c\, ,
\end{align}
where $\Delta v = v_1 - v_2$ is the difference of the measured
radial velocities of these lines, and
$\Delta {\cal Q} = {\cal Q}_2 - {\cal Q}_1$.
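In practice, \Eref{zfactor3} converts a measured velocity offset between two FS lines of one element directly into a value of $\Delta\alpha/\alpha$. The following minimal sketch illustrates the conversion; the numbers $\Delta v = 0.1$ km~s$^{-1}$ and $\Delta\mathcal{Q} = -0.053$ (the Si~\textsc{i} value from \tref{tab1}) are purely illustrative:

```python
C_KMS = 299792.458  # speed of light in km/s

def delta_alpha_over_alpha(delta_v_kms, delta_Q):
    # Invert 2*DeltaQ*(Delta alpha / alpha) = Delta v / c
    return delta_v_kms / (2.0 * C_KMS * delta_Q)

# Illustrative numbers: a 0.1 km/s offset between the two Si I FS
# lines (Delta Q = -0.053, see the table of sensitivity coefficients)
x = delta_alpha_over_alpha(0.1, -0.053)   # ~ -3.1e-6
```

This shows why the heterodyne frequency stability of sub-mm receivers matters: a velocity accuracy of order 0.1 km~s$^{-1}$ already probes $\Delta\alpha/\alpha$\,\ at the few-ppm level.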
Relativistic corrections grow with atomic number $Z$, but for
optical and UV transitions in light atoms they are small, i.e.
$\mathcal{Q} \sim (\alpha Z)^2\ll 1$. For example, Fe~\textsc{ii}
lines have sensitivities $\mathcal{Q} \sim 0.03$ \citep{PKT07}.
Other atomic transitions, used in astrophysical searches for
$\alpha$-variation have even smaller sensitivities. The only
exceptions are the Zn~\textsc{ii} $\lambda 2026$ \AA\ line, where
$\mathcal{Q} \approx 0.050$ \citep{DFK02} and the Fe~\textsc{i}
resonance transitions considered in \citep{DF08} where $\mathcal{Q}$
ranges between 0.03 and 0.09. One can significantly increase the
sensitivity to $\alpha$-variation by using transitions between FS
levels of one multiplet \cite{DF05b}. In the nonrelativistic limit
$\alpha \rightarrow 0$ such levels are exactly degenerate.
Corresponding transition frequencies $\omega$ are approximately
proportional to $(\alpha Z)^2$. Consequently, for these transitions
$\mathcal{Q}\approx 1$ and
\begin{align}\label{qfactor4}
{\Delta\omega}/{\omega}
\approx
2 {\Delta\alpha}/{\alpha}\, ,
\end{align}
which implies that for any two FS transitions $\Delta {\cal Q}
\approx 0$. In this approximation $\Delta\alpha/\alpha$\,\ cannot be determined from
\Eref{zfactor3}.
We will show now that in the next order in $(\alpha Z)^2$ the
$\mathcal{Q}$-factors of the FS transitions deviate from unity and
$\Delta \mathcal{Q}$ in \Eref{zfactor3} is not equal to zero. In
fact, for heavy atoms with $\alpha Z \sim 1$ it is possible to find
FS transitions with $|\Delta \mathcal{Q}| \gg 1$ \cite{DF05b}. Here
we focus on atoms with $\alpha Z \ll 1$, which are more important
for astronomical observations. For such atoms
$|\Delta \mathcal{Q}| < 1$ and, as we will show below, there is a
simple analytical relation between $\Delta \mathcal{Q}$ and
experimentally observed FS intervals.
There are two types of relativistic corrections to atomic energy.
The first type depends on the powers of $\alpha Z$ and rapidly grows
along the periodic table. The second type of corrections depends on
$\alpha$ and does not change much from atom to atom. Such
corrections are usually negligible, except for the lightest atoms.
Expanding the energy of a level of the FS multiplet $^{2S+1}\!L_J$
into (even) powers of $\alpha Z$ we have (see \cite{Sob79},
Sec.~5.5):
\begin{align}\nonumber
E_{L,S,J} &= E_0
+\tfrac{A(\alpha Z)^2}{2}\left[J(J+1)-L(L+1)-S(S+1)\right]
\nonumber\\
&+B_J\,(\alpha Z)^4 + \dots\,,
\label{FS1}
\end{align}
where $A$ and $B_J$ are the parameters of the FS multiplet. Note,
that in general, $B_J$ depends on quantum numbers $L$ and $S$, but
we will omit $L$ and $S$ subscripts since they do not change the
following discussion. In \Eref{FS1} we keep the term of the
expansion $\sim(\alpha Z)^4$, but neglect the term $\sim\alpha^2$.
This is justified only for atoms with $Z\gtrsim 10$. Therefore, the
following discussion is not applicable to atoms of the second
period. Since these atoms are nevertheless very important for
astrophysics, we will briefly discuss them at the end of this section.
The strongest FS transitions are of {\it M1}-type. They occur between
levels with $\Delta J=1$:
\begin{align}\label{FS2}
\omega_{J,J-1}
&=E_{L,S,J}-E_{L,S,J-1}
\nonumber\\
&=AJ(\alpha Z)^2+\left(B_J-B_{J-1}\right)(\alpha Z)^4\, .
\end{align}
In the first order in $(\alpha Z)^2$ we have the well known
Land\'{e} rule: $\omega_{J,J-1} = AJ(\alpha Z)^2$, which directly
leads to \Eref{qfactor4}. In the next order we get:
\begin{align}\label{FS3}
\mathcal{Q}_{J,J-1} = 1 + \frac{B_J-B_{J-1}}{AJ}\,(\alpha Z)^2 \,.
\end{align}
Let us consider the multiplet $^3\!P_J$ (i.e. the ground multiplet
for Si~\textsc{i}, S~\textsc{i}, Ar~\textsc{iii}, Mg~\textsc{v},
Ca~\textsc{v}, Na~\textsc{vi}, Mg~\textsc{vii}, Si~\textsc{vii},
Ca~\textsc{vii}, and Si~\textsc{ix}). For two transitions
$\omega_{2,1}$ and $\omega_{1,0}$ \Eref{FS3} gives:
\begin{align}\label{FS4}
\mathcal{Q}_{2,1}-\mathcal{Q}_{1,0} = \frac{B_2-3B_1+2B_0}{2A}\,(\alpha Z)^2\,.
\end{align}
At the same time, \Eref{FS2} gives the following expression for the
frequency ratio:
\begin{align}\label{FS5}
\frac{\omega_{2,1}}{\omega_{1,0}}
&= 2 + \frac{B_2-3B_1+2B_0}{A}\,(\alpha Z)^2\,.
\end{align}
Comparison of Eqs.~\eqref{FS4} and \eqref{FS5} leads to the final result:
\begin{align}\label{FS6}
\Delta \mathcal{Q} = \mathcal{Q}_{2,1}-\mathcal{Q}_{1,0}
= \frac{1}{2}\,\left(\frac{\omega_{2,1}}{\omega_{1,0}}\right)-1\,.
\end{align}
In a general case of the $^{2S+1}\!L_J$ multiplet the difference
between the sensitivity coefficients $\mathcal{Q}_{J,J-1}$ and
$\mathcal{Q}_{J-1,J-2}$ is given by
\begin{align}\label{FS7}
\Delta \mathcal{Q} = \frac{J-1}{J}\,
\left(\frac{\omega_{J,J-1}}{\omega_{J-1,J-2}}\right)-1\,.
\end{align}
If two arbitrary FS transitions $\omega_{J_1,J_1'}$ and $\omega_{J_2,J_2'}$
of the $^{2S+1}\!L_J$ multiplet are considered, then
the difference
$\Delta\mathcal{Q}=\mathcal{Q}_{J_2,J_2'}-\mathcal{Q}_{J_1,J_1'}$
is expressed by
\begin{align}\label{FS7a}
\Delta \mathcal{Q}
= \frac{J_1(J_1+1) - J_1'(J_1'+1)}{J_2(J_2+1) - J_2'(J_2'+1)}\,
\left(\frac{\omega_{J_2,J_2'}}{\omega_{J_1,J_1'}}\right) - 1\,.
\end{align}
This equation can be used also for $E2$-transitions with $\Delta
J=2$ and for combination of $M1$- and $E2$-transitions.
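Because \Eref{FS7a} involves only quantum numbers and laboratory frequencies, the $\Delta\mathcal{Q}$ values can be recomputed directly from the measured FS intervals. A minimal sketch, reproducing the Si~\textsc{i} and S~\textsc{i} entries of \tref{tab1} (recall that for second-period atoms such as C~\textsc{i} this formula is not applicable):

```python
def delta_Q(J1, J1p, w1, J2, J2p, w2):
    """Difference of sensitivity coefficients for two FS transitions
    (J1 -> J1', frequency w1) and (J2 -> J2', frequency w2) of one
    multiplet, Eq. (FS7a); frequencies may be in any common units."""
    ratio = (J1 * (J1 + 1) - J1p * (J1p + 1)) / \
            (J2 * (J2 + 1) - J2p * (J2p + 1))
    return ratio * (w2 / w1) - 1.0

# Si I ^3P (normal multiplet): omega_{1,0} = 77.11 cm^-1,
# omega_{2,1} = 146.05 cm^-1
dq_si = delta_Q(1, 0, 77.11, 2, 1, 146.05)   # ~ -0.053

# S I ^3P (inverted multiplet): omega_{0,1} = 177.59 cm^-1,
# omega_{1,2} = 396.06 cm^-1
dq_s = delta_Q(0, 1, 177.59, 1, 2, 396.06)   # ~ +0.115
```

The same function handles half-integer $J$ (e.g.\ Fe~\textsc{ii}) and $E2$ transitions with $\Delta J = 2$, since only the combination $J(J+1) - J'(J'+1)$ enters.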
Note that the derived values of $\Delta \mathcal{Q}$ for
two FS transitions are expressed in terms of their frequencies,
which are known from the laboratory measurements. Another point is
that the right-hand side of \Eref{FS7} turns to zero when the
frequency ratio equals $J/(J-1)$, i.e. when the Land\'{e} rule is
fulfilled. Eqs.~\eqref{FS6} --~\eqref{FS7a} hold only as long as we
neglect corrections of the order of $\alpha^2$ and $(\alpha Z)^6$ to
\Eref{FS1}, which is justified for the atoms in the middle of the
periodic table, i.e. approximately from Na ($Z=11$) to Sn ($Z=50$).
\begin{table*}[tbh]
\caption{The differences of the sensitivity coefficients $\Delta
\mathcal{Q}$ of the FS emission lines within the ground multiplets
$^3\!P_J$, $^5\!D_J$, $^6\!D_J$, $^4\!F_J$, and $^3\!F_J$ for the
most abundant atoms and ions. The FS intervals for S~\textsc{i},
Fe~\textsc{i--iii}, Ar~\textsc{iii}, Mg~\textsc{v}, Ca~\textsc{v},
and Si~\textsc{vii} are inverted. The excitation temperature $T_{\rm
ex}$ for the upper level is indicated. Transition wavelengths and
frequencies (rounded) are taken from Ref.~\cite{NIST}. The values of
$\Delta \mathcal{Q}$ for the ions C~\textsc{i}, N~\textsc{ii}, and
O~\textsc{iii} are calculated using Eq.~(5.197) from
Ref.~\cite{Sob79}.} \label{tab1}
\begin{tabular}{lcdddcdddcd}
\hline\hline\\[-7pt]
\multicolumn{1}{c}{Atom/Ion}
&\multicolumn{4}{c}{Transition $a$}
&\multicolumn{4}{c}{Transition $b$}
&\multicolumn{1}{c}{$\omega_b/\omega_a$}
&\multicolumn{1}{c}{$\Delta \mathcal{Q}=$} \\
&\multicolumn{1}{c}{$(J_a,J_a')$}
&\multicolumn{1}{c}{$\lambda_a$ ($\mu$m)}
&\multicolumn{1}{c}{$\omega_a$ (cm$^{-1}$)}
&\multicolumn{1}{c}{$T_{\rm ex}$ (K)}
&\multicolumn{1}{c}{$(J_b,J_b')$}
&\multicolumn{1}{c}{$\lambda_b$ ($\mu$m)}
&\multicolumn{1}{c}{$\omega_b$ (cm$^{-1}$)}
&\multicolumn{1}{c}{$T_{\rm ex}$ (K)}
&&\multicolumn{1}{c}{$\mathcal{Q}_b-\mathcal{Q}_a$} \\
\hline\\[-5pt]
C~\textsc{i} & (1,0) &609.1& 16.40& 24& (2,1) &370.4& 27.00& 63&1.646& -0.008 \\
Si~\textsc{i} & (1,0) &129.7& 77.11& 111& (2,1) & 68.5& 146.05& 321&1.894& -0.053 \\
S~\textsc{i} & (0,1) & 56.3&177.59& 825& (1,2) & 25.3& 396.06& 570&2.230& 0.115 \\
Ti~\textsc{i} & (2,3) & 58.8&170.13& 245& (3,4) & 46.1& 216.74& 557&1.274& -0.045 \\
Fe~\textsc{i} & (2,3) & 34.7&288.07&1013& (3,4) & 24.0& 415.93& 599&1.444& 0.083 \\
& (1,2) & 54.3&184.13&1278& (2,3) & 34.7& 288.07&1013&1.565& 0.043 \\
& (0,1) &111.2& 89.94&1407& (1,2) & 54.3& 184.13&1278&2.048& 0.024 \\
N~\textsc{ii} & (1,0) &205.3& 48.70& 70& (2,1) &121.8& 82.10& 188&1.686& -0.016 \\
Fe~\textsc{ii} & (5/2,7/2) & 35.3&282.89& 961& (7/2,9/2) & 26.0& 384.79& 554&1.360& 0.058 \\
& (3/2,5/2) & 51.3&194.93&1241& (5/2,7/2) & 35.3& 282.89& 961&1.451& 0.037 \\
& (1/2,3/2) & 87.4&114.44&1406& (3/2,5/2) & 51.3& 194.93&1241&1.703& 0.022 \\
O~\textsc{iii} & (1,0) & 88.4&113.18& 163& (2,1) & 51.8& 193.00& 441&1.705& -0.027 \\
S~\textsc{iii} & (1,0) & 33.5&298.69& 430& (2,1) & 18.7& 534.39&1199&1.789& -0.105 \\
Ar~\textsc{iii}& (0,1) & 21.9&458.05&2259& (1,2) & 9.0&1112.18&1600&2.428& 0.214 \\
Fe~\textsc{iii}& (2,3) & 33.0& 302.7&1063& (3,4) & 22.9& 436.2 & 628&1.441& 0.081 \\
& (1,2) & 51.7& 193.5&1342& (2,3) & 33.0& 302.7 &1063&1.564& 0.043 \\
& (0,1) &105.4& 94.9 &1478& (1,2) & 51.7& 193.5 &1342&2.039& 0.019 \\
Mg~\textsc{v} & (0,1) & 13.5& 738.7&3628& (1,2) & 5.6&1783.1 &2566&2.414& 0.207 \\
Ca~\textsc{v} & (0,1) & 11.5& 870.9&4713& (1,2) & 4.2&2404.7 &3460&2.761& 0.381 \\
Na~\textsc{vi} & (1,0) & 14.3& 698 &1004& (2,1) & 8.6& 1161 &2675&1.663& -0.168 \\
Fe~\textsc{vi} & (5/2,3/2) & 19.6& 511.3& 736& (7/2,5/2) & 14.8& 677.0 &1710&1.324& -0.054 \\
& (7/2,5/2) & 14.8& 677.0&1710& (9/2,7/2) & 12.3& 812.3 &2879&1.200& -0.067 \\
Mg~\textsc{vii}& (1,0) & 9.0 & 1107 &1593& (2,1) & 5.5& 1817 &4207&1.641& -0.179 \\
Si~\textsc{vii}& (0,1) & 6.5 & 1535 &8007& (1,2) & 2.5& 4030 &5817&2.625& 0.313 \\
Ca~\textsc{vii}& (1,0) & 6.2 &1624.9&2338& (2,1) & 4.1&2446.5 &5858&1.506& -0.247 \\
Fe~\textsc{vii}& (3,2) & 9.5 &1051.5&1513& (4,3) & 7.8&1280.0 &3354&1.217& -0.087 \\
Si~\textsc{ix} & (1,0) & 3.9&2545.0&3662& (2,1) & 2.6&3869 &9229&1.520& -0.240 \\
\hline\hline
\end{tabular}
\end{table*}
\tref{tab1} lists the calculated $\Delta \mathcal{Q}$ values for the
most abundant atoms and ions observed in Galactic and extragalactic
gas clouds. The ions C~\textsc{i}, Si~\textsc{i}, N~\textsc{ii},
O~\textsc{iii}, Na~\textsc{vi}, Mg~\textsc{vii}, and
Ca~\textsc{vii} have configuration $ns^2 np^2$ and `normal' order
of the FS sub-levels. The ions Mg~\textsc{v}, Si~\textsc{vii},
S~\textsc{i}, and Ca~\textsc{v} have configuration $ns^2 np^4$ and
`inverted' order of the FS sub-levels. However, \Eref{FS6} is
applicable for both cases. We note that the FS lines of
N~\textsc{ii} (122, 205 $\mu$m) can be asymmetric and broadened due
to hyperfine components, as observed in \cite{LBS06,PBS07}. The
hyperfine splitting occurs also in the FS lines of Na~\textsc{vi}
(8.6, 14.3 $\mu$m).
Transition wavelengths and frequencies listed in \tref{tab1} are
approximate and are given only to identify the FS transitions.
At present, many of them have been measured with a sufficiently
high accuracy \cite{NIST}.
The iron ions Fe~\textsc{i}, Fe~\textsc{ii}, Fe~\textsc{iii},
Fe~\textsc{vi}, and Fe~\textsc{vii} have ground multiplets $^5\!D$,
$^6\!D$, $^5\!D$, $^4\!F$, and $^3\!F$, respectively. All these
multiplets, except the last one, produce more than two FS lines,
which can be used to further reduce the systematic errors. The
sensitivity coefficients for transitions in iron and titanium from
\tref{tab1} are calculated with the help of \Eref{FS7}.
According to \tref{tab1}, the absolute values of the difference
$\Delta\mathcal{Q}$ are usually quite large even for atoms with
$Z\sim 10$. The sign of $\Delta\mathcal{Q}$ is negative for atoms
with configuration $ns^2 np^2$ and positive for atoms with
configuration $ns^2 np^4$. These features are not surprising if we
consider the level structure of the respective configurations
\cite{Sob79}. Both of them have three terms: $^3\!P_{0,1,2}$,
$^1\!D_2$, and $^1\!S_0$, but for the configuration $ns^2 np^4$, the
multiplet $^3\!P_J$ is `inverted'. The splitting between these terms
is caused by the residual Coulomb interaction of $p$-electrons and
is rather small compared to the atomic energy unit $2\mathcal{R}$.
For example, the level $^1\!D_2$ for Si~\textsc{i} lies only
6299~cm$^{-1}$ above the ground state, which corresponds to
$E_D-E_P=0.029$~a.u. Relativistic corrections to the energy are
dominated by the spin-orbit interaction, which for $p$-electrons is
of the order of $0.1(\alpha Z)^2$~a.u. The diagonal part of this
interaction leads to the second term in \Eref{FS1}, i.e. $A\,(\alpha
Z)^2\sim 200$~cm$^{-1}$. In the second order the non-diagonal
spin-orbit interaction causes repulsion between the levels $^3\!P_2$
and $^1\!D_2$ and results in non-zero parameter $B_2$. We can
estimate this correction as $B_2(\alpha Z)^4\sim A^2(\alpha
Z)^4/(E_P-E_D)\,\sim -10~\mathrm{cm}^{-1}$. This estimate has the
expected order of magnitude. Note that $B_2$ is negative. For normal
multiplets it reduces the ratio $\omega_{2,1}/\omega_{1,0}$, whereas
for inverted multiplets it increases this ratio. This is
in qualitative agreement with \tref{tab1}. Iron and Titanium ions
have configurations $3d^k 4s^l$, with $k=6,\,l=2$ for Fe~\textsc{i}
and $k=2,\,l=0,2$ for Fe~\textsc{vii} and Ti~\textsc{i},
respectively. As we can see from \tref{tab1}, here also all normal
multiplets (for Ti~\textsc{i}, Fe~\textsc{vi}, and Fe~\textsc{vii})
have negative values of $\Delta\mathcal{Q}$, while inverted
multiplets for all other ions have positive values of
$\Delta\mathcal{Q}$.
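The order-of-magnitude estimate above is easy to reproduce numerically. The sketch below uses only the Si~\textsc{i} numbers quoted in the text and the rough scaling $B_2(\alpha Z)^4\sim A^2(\alpha Z)^4/(E_P-E_D)$; it is an illustration of the estimate, not a calculation of $B_2$ itself:

```python
# Order-of-magnitude check of the second-order spin-orbit correction for
# Si I discussed above: B_2 (alpha Z)^4 ~ A^2 (alpha Z)^4 / (E_P - E_D).
# The two input numbers are the ones quoted in the text.
HARTREE_IN_CM = 219474.63            # 1 a.u. of energy in cm^-1

so_diag_cm = 200.0                   # A (alpha Z)^2 ~ 200 cm^-1
e_p_minus_e_d_au = -0.029            # E_P - E_D in a.u. (1D2 lies above 3P)

so_diag_au = so_diag_cm / HARTREE_IN_CM
b2_term_cm = so_diag_au**2 / e_p_minus_e_d_au * HARTREE_IN_CM

# A few cm^-1 and negative, consistent with the ~ -10 cm^-1 quoted above
print(f"B_2 (alpha Z)^4 ~ {b2_term_cm:.1f} cm^-1")
```

The result is a few cm$^{-1}$ and negative, as expected.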
Equation~(\ref{FS4}) shows that the sensitivity to $\alpha$-variation
grows with $Z$. For heavy atoms with $\alpha Z \sim 1$, the terms
neglected in the expansion \eqref{FS1} become important. This breaks
the relation \eqref{FS7} between $\Delta \mathcal{Q}$ and the FS
intervals, and the sensitivity coefficients $\mathcal{Q}$ have to be
calculated numerically. According to \tref{tab1}, the largest coefficients
$B_J$ appear for Ca~\textsc{v} and Si~\textsc{vii}. The neglected
corrections to $\Delta \mathcal{Q}$ can be estimated as $\sim
[A(\alpha Z)^2/(E_P-E_D)]^2 $, i.e. the uncertainty in $\Delta
\mathcal{Q}$ for Ca~\textsc{v} and Si~\textsc{vii} is less than
20\%. For other elements listed in \tref{tab1} this correction
should be smaller. Note that for Iron ions, which have the largest
$Z$, the relativistic effects are suppressed, because for
$d$-electrons they are typically an order of magnitude smaller than
for $p$-electrons.
For light elements the accuracy of our estimate depends on the
neglected terms $\sim \alpha^2$. The discussion of these terms can
be found in \cite{Sob79} (see Eq.~(5.197) and Table~5.21 therein).
The corresponding correction decreases from almost 50\% for
Na~\textsc{vi} to 30\% for Mg~\textsc{vii} and to 15\% for
Si~\textsc{ix}.
For atoms with $Z\lesssim 10$ one can calculate $\Delta \mathcal{Q}$
using Eq.~(5.197) from Ref.~\cite{Sob79}. For example, for
C~\textsc{i}, N~\textsc{ii}, and O~\textsc{iii}, we get $\Delta
\mathcal{Q} = -0.008$, $-0.016$, and $-0.027$, respectively. As
expected, these values are much smaller than those for the heavier
elements. On the other hand, these ions are so important for
astrophysics, that we keep them in \tref{tab1}.
Numerical calculations for heavy many-electron atoms
are rather difficult to perform and the computed $\Delta
\mathcal{Q}$ values may not be very accurate. For atoms with $\alpha
Z \ll 1$ one can use \Eref{FS7} to check the accuracy of the
numerical results.
\begin{table}[bh]
\caption{The differences between sensitivity coefficients of the FS transitions
within ground $^6\!D_J$ multiplet of Fe~\textsc{ii},
$\Delta \mathcal{Q} \equiv \mathcal{Q}_{J,J-1}-\mathcal{Q}_{J-1,J-2}$.
In the third column
we use calculated $q$-factors from \cite{PKT07} (see Table~I from this Ref.,
basis set [7$spdf$]). In the fourth and fifth columns we apply \Eref{FS7} to
calculated and experimental FS intervals, respectively.}
\label{tab2}
\begin{tabular}{ccddd}
\hline\hline
\multicolumn{2}{c}{Transitions}
&\multicolumn{3}{c}{$\Delta\mathcal{Q}$}\\
(5/2,7/2) & (7/2,9/2) & 0.045 & 0.049 & 0.058 \\
(3/2,5/2) & (5/2,7/2) & 0.023 & 0.029 & 0.037 \\
(1/2,3/2) & (3/2,5/2) & 0.017 & 0.016 & 0.022 \\
\hline\hline
\end{tabular}
\end{table}
As an example we consider the ground $^6\!D_J$ multiplet of
the Fe~\textsc{ii} ion (\tref{tab2}). One can see that the numerical results
in Ref.~\cite{PKT07} are in good agreement with the values obtained
from \Eref{FS7} for the calculated FS intervals. However, when we
apply \Eref{FS7} to actual experimental FS intervals, the agreement
worsens noticeably. It is well known that deviations from the
Land\'{e} rule for FS intervals depend on the interplay between the
(non-diagonal) spin-orbit and the residual Coulomb interactions
\cite{Sob79}. For this reason numerical results are very sensitive
to the treatment of the effects of the core polarization and the
valence correlations. Note also that the calculated $q$-factors are
first used to find the sensitivity coefficients $\mathcal{Q}$, and
only then are the (small) differences taken. Obviously, this makes
the whole calculation rather unstable. Similarly, \Eref{FS7} can be used
to check calculations of the $q$-factors for other atoms considered
in Refs.~\cite{BFK05a,BFK06,DF08}.
Numerical calculations for light atoms with $Z\lesssim 10$ are
usually much simpler and more reliable. However, as we have pointed
out above, the differences in the sensitivity coefficients of the
light atoms depend on the relativistic corrections $\sim\alpha^2$.
This means that the Breit interaction between valence electrons
should be accurately included, while the majority of the published
results were obtained in the Dirac-Coulomb approximation.
There is a certain similarity between the present method and the
method of optical doublets, used previously to study
$\alpha$-variation (see, e.g., \cite{BSS04} and references therein).
In that method, however, the FS energy constitutes a small fraction
of the total transition energy. Therefore, the parameter
$\Delta\mathcal{Q}$ for optical transitions is much smaller. Note
that for the mid- and far-infrared FS lines, the transition energy
and the FS splitting coincide, which leads to a much larger
parameter $\Delta\mathcal{Q}$.
\section{Discussion and Conclusions}
In this paper we suggest using two or more FS lines of the same
ion to study a possible variation of $\alpha$ at early stages of the
evolution of the Universe, up to $\Delta T \sim 96$\% of $T_{\rm U}$.
The sensitivity of the suggested method is proportional to
$\Delta\mathcal{Q}$, as seen from \Eref{zfactor3}. We have deduced a
simple analytical expression to calculate $\Delta\mathcal{Q}$ for
the FS transitions in light atoms and ions within the range of
nuclear charges $11 \le Z\le 26$. We found that
$|\Delta\mathcal{Q}|$ grows with $Z$ and reaches 0.2~--~0.4 for the
ions of Ar and Ca. This is about one order of magnitude higher than
typical sensitivities in the optical and UV range.
In addition to being more sensitive, this method also provides a
considerable reduction of
the Doppler noise, which limits the accuracy of the optical
observations. Using the lines of the same element reduces the
sources of the Doppler noise to the inhomogeneity of the excitation
temperature $T_{\rm ex}$ within the cloud(s). Alternatively, when
the lines of different species are used, the Doppler noise may be
significantly higher because of the difference of the respective
spatial distributions.
At present, the precision of the existing radio observations of the
FS lines from distant objects is considerably lower than in the most
accurate optical observations. For example, the error in the line
center position for the C~\textsc{i} $J = 2 \rightarrow 1$ and $J =
1 \rightarrow 0$ lines at $z = 2.557$ was $\sigma_{v,{\rm radio}} =
8$ and 25 km~s$^{-1}$, respectively \cite{WHD03,WDH05}. This has to be
compared with the precision of the modern optical measurements of
$\sigma_{v,{\rm opt}} = 85$ m~s$^{-1}$~\cite{LML07,LCM06}. In the optical
range the error $\sigma_{v,{\rm opt}}$ includes both random and
systematic contributions. The systematic error is the wavelength
calibration error which is negligible at radio frequencies.
In the forthcoming observations with ALMA, the statistical error is
expected to be several times smaller than 85 m~s$^{-1}$. Together with the
higher sensitivity to $\alpha$-variation, this would allow one to
estimate $\Delta\alpha/\alpha$ at the level of one tenth of a ppm~--- well beyond the
limits of the contemporary optical observations and comparable to
the anticipated sensitivity of the next generations of spectrographs
for the VLT and the EELT \cite{MML06,M07}. Thus, FIR lines offer a
very promising strategy to probe the hypothetical variability of the
fine-structure constant both locally and in distant extragalactic
objects.
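The quoted sensitivity level can be illustrated with a rough numerical estimate. We assume here that the velocity offset between two FS lines scales as $\Delta v/c \sim 2\,|\Delta\mathcal{Q}|\,\Delta\alpha/\alpha$; the factor of 2 is our simplifying assumption, the exact relation being \Eref{zfactor3}:

```python
# Rough translation of the expected ALMA velocity precision into a limit on
# Delta alpha / alpha, assuming the offset between two FS lines scales as
# Delta v / c ~ 2 * |DeltaQ| * (Delta alpha / alpha). The factor of 2 is an
# assumption made here for illustration only.
C_KMS = 2.998e5           # speed of light in km/s

delta_v_kms = 20.0e-3     # ~20 m/s, several times below the optical 85 m/s
delta_q = 0.3             # |DeltaQ| ~ 0.2-0.4 for the Ar and Ca ions

delta_alpha = (delta_v_kms / C_KMS) / (2.0 * delta_q)
print(f"Delta alpha / alpha ~ {delta_alpha:.1e}")   # of order 1e-7, i.e. ~0.1 ppm
```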
\begin{acknowledgments}
MGK, SGP, and SAL gratefully acknowledge the hospitality of
Hamburger Sternwarte while visiting there. This research has been
partly supported by the DFG projects SFB 676 Teilprojekt C,
the RFBR grants No. 06-02-16489 and 07-02-00210, and by the Federal
Agency for Science and Innovations grant NSh 9879.2006.2.
\end{acknowledgments}
\section{Introduction}
As the most abundant molecular species in the universe,
molecular hydrogen (H$_{2}$) is the main constituent of giant molecular clouds,
the exclusive birthplaces of stars (e.g., \citeauthor{Kennicutt12} 2012).
Recent observations of galaxies at both low and high redshifts have shown that
star formation rates are strongly correlated with H$_{2}$ surface densities ($\Sigma_{\rm H2}$),
a relation generally known as the ``Kennicutt--Schmidt law''
(e.g., \citeauthor{Schmidt59} 1959; \citeauthor{Kennicutt89} 1989;
\citeauthor{Bigiel08} 2008; \citeauthor{Wilson09} 2009; \citeauthor{Tacconi10} 2010;
\citeauthor{Schruba11} 2011; \citeauthor{Genzel13} 2013).
This suggests that physical processes responsible for the atomic-to-molecular hydrogen (HI-to-H$_{2}$) transition
play a key role in the evolution of galaxies.
Observationally, the HI-to-H$_{2}$ transition has been studied via
ultraviolet (UV) absorption measurements along many random lines of sight through the Galaxy
(e.g., \citeauthor{Savage77} 1977; \citeauthor{Rachford02} 2002; \citeauthor{Gillmon06a} 2006).
For these measurements, either early-type stars or active galactic nuclei were used as background sources,
and HI and H$_{2}$ column densities, $N$(HI) and $N$(H$_{2}$),
were estimated from Lyman--alpha and Lyman--Werner (LW) band absorption.
The UV studies probed the H$_{2}$ mass fraction $f_{\rm H2}$ = 2$N$(H$_{2}$)/[$N$(HI) + 2$N$(H$_{2}$)]
ranging from $\sim$10$^{-6}$ to $\sim$10$^{-1}$,
and found that $f_{\rm H2}$ sharply increases at
the total gas column density $N$(H) = $N$(HI) + 2$N$(H$_{2}$) of $\sim$(3--5) $\times$ 10$^{20}$ cm$^{-2}$.
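The quantities just defined can be sketched in a couple of lines; the column densities below are illustrative values chosen near the $\sim$(3--5) $\times$ 10$^{20}$ cm$^{-2}$ transition column quoted from the UV studies, not measurements:

```python
# The H2 mass fraction defined above, f_H2 = 2 N(H2) / (N(HI) + 2 N(H2)).
# The input column densities are illustrative, not survey values.
def f_h2(n_hi, n_h2):
    """H2 mass fraction from HI and H2 column densities (cm^-2)."""
    return 2.0 * n_h2 / (n_hi + 2.0 * n_h2)

print(f"{f_h2(4.0e20, 2.0e19):.3f}")   # → 0.091, just past the sharp rise
```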
Additionally, the HI-to-H$_{2}$ transition has been indirectly inferred from the flattening
of the relation between the HI column density and a tracer of total gas column density
(e.g., far-infrared (FIR) or hydroxide (OH) emission;
\citeauthor{Reach94} 1994; \citeauthor{Meyerdierks96} 1996; \citeauthor{Douglas07} 2007;
\citeauthor{Barriault10b} 2010; \citeauthor{Liszt14b} 2014).
These studies found that the HI column density saturates to $\sim$(5--10) $\times$ 10$^{20}$ cm$^{-2}$,
suggesting the presence of H$_{2}$.
The HI saturation has also been found in extragalactic observations on $\sim$kpc scales
(e.g., \citeauthor{Wong02} 2002; \citeauthor{Blitz06} 2006; \citeauthor{Leroy08} 2008; \citeauthor{Wong09} 2009).
Theoretically, the HI-to-H$_{2}$ transition has been investigated
as a central process in photodissociation regions (PDRs).
In PDRs, the interstellar medium (ISM) is predominantly atomic,
and molecular gas is found only in well-shielded regions
where dissociating UV photons are sufficiently attenuated.
Many studies have been presented with different treatments of chemistry, geometry, and radiative transfer
(e.g., \citeauthor{Spitzer48} 1948; \citeauthor{Gould63a} 1963; \citeauthor{Glassgold74} 1974;
\citeauthor{vanDishoeck86} 1986; \citeauthor{Sternberg88} 1988; \citeauthor{Elmegreen93} 1993;
\citeauthor{Draine96} 1996; \citeauthor{Spaans97} 1997; \citeauthor{Browning03} 2003; \citeauthor{Goldsmith07} 2007;
\citeauthor{Liszt07} 2007; \citeauthor{Krumholz09} 2009; \citeauthor{Glover10} 2010; \citeauthor{Offner13} 2013;
\citeauthor{Sternberg14} 2014), and an excellent summary of these studies was recently provided by \cite{Sternberg14}.
Among the many studies, the \cite{Krumholz09} (KMT09 hereafter) model has recently been tested with
a variety of Galactic and extragalactic observations
(e.g., \citeauthor{Bolatto11} 2011; \citeauthor{Lee12} 2012; \citeauthor{Welty12} 2012;
\citeauthor{Wong13} 2013; \citeauthor{Motte14} 2014),
thanks to its simple analytic predictions that allow a comparison with direct observables
and an extrapolation of the model over a wide range of ISM environments.
In the KMT09 model, a spherical cloud is illuminated by a uniform and isotropic radiation field,
and the H$_{2}$ abundance is computed based on the balance between
the rate of formation on dust grains and the rate of dissociation by UV photons (chemical equilibrium).
The authors derived two dimensionless parameters that
determine the location of the HI-to-H$_{2}$ transition in the cloud,
resulting in the following important predictions.
First, they found that H$_{2}$ formation requires a certain amount of HI surface density ($\Sigma_{\rm HI}$)
for shielding against the dissociating radiation field.
Interestingly, this shielding surface density primarily depends on metallicity,
and is expected to be $\sim$10 M$_{\odot}$ pc$^{-2}$
(corresponding to $N$(HI) $\sim$ 1.3 $\times$ 10$^{21}$ cm$^{-2}$) for solar metallicity.
Second, the H$_{2}$-to-HI ratio, $R_{\rm H2}$ = $\Sigma_{\rm H2}$/$\Sigma_{\rm HI}$,
was predicted to linearly increase with the total gas surface density.
This is because once the minimum HI surface density is obtained for shielding H$_{2}$ against photodissociation,
all additional hydrogen is fully converted into H$_{2}$ and the HI surface density remains constant.
As a result, $R_{\rm H2}$ is simply a function of metallicity and total gas surface density.
Another interesting feature of the KMT09 model is that the ISM is ``self-regulated''
in that pressure balance between the cold neutral medium (CNM) and the warm neutral medium (WNM)
determines the ratio of the UV intensity to the HI density.
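The saturated-HI behaviour described above can be written schematically as follows; this is not the model's actual functional form, only the limiting picture in which $\Sigma_{\rm HI}$ stays fixed at the shielding value once it is reached:

```python
# Schematic version of the KMT09 behaviour described above: once the
# shielding surface density is reached, all additional gas is molecular,
# so Sigma_HI stays constant and R_H2 = Sigma_total / Sigma_shield - 1
# grows linearly with the total gas surface density.
import numpy as np

SIGMA_HI_SHIELD = 10.0    # Msun pc^-2, solar-metallicity value quoted above

def r_h2(sigma_total):
    """H2-to-HI surface density ratio in the saturated-HI picture."""
    return np.maximum(sigma_total / SIGMA_HI_SHIELD - 1.0, 0.0)

print(r_h2(np.array([5.0, 10.0, 20.0, 40.0])))   # turnover R_H2 = 1 at 20 Msun/pc^2
```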
Aiming at testing the KMT09 model on sub-pc scales,
we have recently focused on the Perseus molecular cloud (\citeauthor{Lee12} 2012).
Perseus is one of the nearby molecular clouds in the Gould's Belt,
and is located at a distance of $\sim$300 pc (\citeauthor{Herbig83} 1983; \citeauthor{Cernis90} 1990).
It has a projected angular size of $\sim$6$^{\circ}$ $\times$ 3$^{\circ}$ on the sky
(based on the CO emission)\footnote{In this paper, $^{12}$CO($J = 1 \rightarrow 0$) is quoted as CO.},
and lies at high Galactic latitude $b$ $\sim$ $-$20$^{\circ}$,
resulting in relatively simple HI spectra compared to other molecular clouds in the Galactic plane.
With a total mass of $\sim$2 $\times$ 10$^{4}$ M$_{\odot}$ (\citeauthor{Sancisi74} 1974; \citeauthor{Lada10} 2010),
Perseus is considered as a low-mass molecular cloud with an intermediate level of star formation (\citeauthor{Bally08} 2008).
To test the KMT09 model,
we derived $\Sigma_{\rm HI}$ and $\Sigma_{\rm H2}$ images
using HI data from the Galactic Arecibo L-band Feed Array HI Survey
(GALFA-HI; \citeauthor{Stanimirovic06} 2006; \citeauthor{Peek11} 2011)
and FIR data from the Improved Reprocessing of the \textit{IRAS} Survey (IRIS; \citeauthor*{MD05} 2005).
The final images were at $\sim$0.4 pc resolution,
and covered the far outskirts of the cloud as well as the main body.
We found that the HI surface density is relatively uniform with $\Sigma_{\rm HI}$ $\sim$ 6--8 M$_{\odot}$ pc$^{-2}$
for five dark and star-forming regions in Perseus (B5, B1E, B1, IC348, and NGC1333).
In addition, the relation between $R_{\rm H2}$ and $\Sigma_{\rm HI} + \Sigma_{\rm H2}$ on a log-linear scale
was remarkably consistent for all individual regions,
having a steep rise of $R_{\rm H2}$ at small $\Sigma_{\rm HI} + \Sigma_{\rm H2}$,
a turnover at $R_{\rm H2}$ $\sim$ 1, and a slow increase toward larger $R_{\rm H2}$.
All these results were in excellent agreement with the KMT09 predictions
for solar metallicity\footnote{Perseus has solar metallicity (\citeauthor{GonzalezHernandez09} 2009).
See Section 7.2.1 of \citet{Lee12} for a detailed discussion.},
suggesting that the KMT09 model captures well the fundamental physics of H$_{2}$ formation on sub-pc scales.
The observed HI saturation in Perseus, however, could alternatively result from the high optical depth HI.
When the HI emission is optically thick,
the brightness temperature ($T_{\rm B}$) becomes comparable to the kinetic temperature ($T_{\rm k}$).
As a result, the HI surface density saturates in the optically thin approximation (which we used in \citeauthor{Lee12} 2012)
since $\Sigma_{\rm HI}$ $\propto$ $T_{\rm B}$ $\sim$ $T_{\rm k}$,
and is underestimated.
As the constant HI surface density is the key prediction from KMT09,
it is critical to evaluate how much of the HI column density distribution is affected by the optically thick HI.
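The saturation argument above is easy to quantify for a single HI component, where $T_{\rm B} = T_{\rm s}\,(1 - e^{-\tau})$:

```python
# Single-component illustration of the saturation described above:
# T_B = T_s (1 - exp(-tau)), so the optically thin estimate, which is
# proportional to T_B, recovers only a fraction (1 - exp(-tau)) / tau
# of the true column density in each velocity channel.
import numpy as np

tau = np.array([0.1, 1.0, 3.0, 10.0])
recovered = (1.0 - np.exp(-tau)) / tau    # N_thin / N_true per channel
print(np.round(recovered, 2))             # falls from ~0.95 toward ~0.1
```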
In this paper, we assess the impact of high optical depth on the observed HI saturation in Perseus
by using HI emission and absorption measurements obtained toward 26 background radio continuum sources
(\citeauthor{Stanimirovic14} 2014; Paper I hereafter).
These observations provide the most direct way to measure the high optical depth HI,
allowing us to derive the ``true'' total HI column density distribution.
Specifically, we use the HI emission and absorption spectra to estimate the correction factor for high optical depth,
and apply the correction to the HI column density image computed by \cite{Lee12} in the optically thin approximation.
We take this localized approach rather than using data from existing all-sky surveys (e.g., \citeauthor{Heiles03a} 2003a; HT03a hereafter)
in order to treat all spectra uniformly for the velocity range of Perseus
and consider the possibility that CNM/WNM properties may vary with ISM environments
(e.g., metallicity, star formation rate, etc.) as expected from theoretical models
(e.g., \citeauthor{McKee77} 1977; \citeauthor{Koyama02} 2002; \citeauthor{Wolfire03} 2003; \citeauthor{Audit05} 2005;
\citeauthor{MacLow05} 2005; \citeauthor{Kim13} 2013).
This paper is organized in the following way.
We start with a summary of previous studies
where various methods have been employed to derive the correction for high optical depth (Section \ref{s:background}).
We then provide a description of the data used in this study (Section \ref{s:obs}).
In Section \ref{s:correction}, we estimate the correction factor for high optical depth
using two different methods, and compare our results with previous studies.
In Sections \ref{s:perseus-corrected} and \ref{s:revisit-saturation},
we apply the correction to the HI column density image from \cite{Lee12} on a pixel-by-pixel basis,
and revisit the HI saturation issue by rederiving the H$_{2}$ column density image
and comparing our results with the KMT09 predictions.
We then investigate whether or not the optically thick HI can explain the observed ``CO-dark'' gas in Perseus (Section \ref{s:CO-dark}),
and finally summarize our conclusions (Section \ref{s:summary}).
\section{Background: Methods to estimate the correction for high optical depth}
\label{s:background}
In most radio observations,
HI is detected in emission,
and the intensity of radiation is measured as the brightness temperature as a function of radial velocity, i.e., $T_{\rm B}$$(v)$.
Since the HI optical depth ($\tau$) can be measured
only via absorption line measurements in the direction of background radio continuum sources,
the optically thin approximation of $\tau \ll 1$ is frequently employed to estimate the HI column density:
\begin{equation}
\label{eq:N_HI}
N(\textrm{HI})~(\textrm{cm}^{-2}) = 1.823 \times 10^{18} \int T_{\textrm{B}}(v)dv~(\textrm{K km s}^{-1}).
\end{equation}
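A direct numerical version of this equation, applied to a mock Gaussian brightness-temperature profile (the amplitude and width below are illustrative, not survey values), reads:

```python
# Optically thin N(HI) from Equation (1), applied to a mock Gaussian
# brightness-temperature profile over the Perseus velocity range.
import numpy as np

v = np.linspace(-5.0, 15.0, 401)                     # km/s
dv = v[1] - v[0]
t_b = 40.0 * np.exp(-0.5 * ((v - 5.0) / 3.0) ** 2)   # K, mock HI spectrum

n_hi = 1.823e18 * np.sum(t_b) * dv                   # cm^-2, optically thin
print(f"N(HI) = {n_hi:.2e} cm^-2")
```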
Over the past three decades,
several approaches have been employed to estimate
how much of the true total HI column density is underestimated in the optically thin approximation.
Most of these approaches can be classified as ``isothermal'',
and the only multiphase approaches are by \citet{Dickey00} and HT03a.
Here we summarize the main results from some of the most important studies.
\cite{Dickey82} used 47 emission/absorption spectral line pairs in the direction of background sources,
and estimated the ratio of the HI column density from the absorption spectra
to the HI column density in the optically thin approximation.
While the ratio was $\sim$1 at high and intermediate Galactic latitudes,
it reached $\sim$1.8 at low latitudes.
There was considerable scatter in the ratio, however:
several lines of sight at low latitudes showed small ratios,
suggesting that the low latitude directions with large ratios likely intersect dense molecular clouds.
In order to compute the ratios, HI in each velocity channel was assumed to have a single temperature (``isothermal'' approximation).
In the \cite{Dickey00} study of the Small Magellanic Cloud (SMC),
HI absorption observations were obtained in the direction of 13 background radio continuum sources.
The corresponding emission spectra were derived
by averaging HI profiles from \cite{Stanimirovic99}
over a $3 \times 3$ pixel region (pixel size = 30$''$) centered on the position of each source.
The correction factor for high optical depth was calculated for each velocity channel in the isothermal approximation,
and the line of sight integrated value $f$ was expressed as a function of the uncorrected $N$(HI):
$f = 1 + 0.667(\log_{10}N(\textrm{HI}) - 21.4)$ for $N(\textrm{HI}) > 10^{21.4}$ cm$^{-2}$.
This relation was then applied to the $N$(HI) image of the SMC on a pixel-by-pixel basis,
resulting in a $\sim$10\% increase of the total HI mass
from $\sim$3.8 $\times$ 10$^8$ M$_{\odot}$ to $\sim$4.2 $\times$ 10$^8$ M$_{\odot}$.
Although negligible at $N$(HI) $<$ $3 \times 10^{21}$ cm$^{-2}$,
the correction factor increased with the uncorrected $N$(HI)
up to $\sim$1.4 at $N$(HI) $\sim$ 10$^{22}$ cm$^{-2}$.
In some cases, the correction factors for individual channels were larger than the integrated value, reaching up to $\sim$2.
However, such values covered only a narrow range of channels,
and their effect on $N$(HI) was relatively small.
Finally, the authors rederived the correction factor in the two-phase approximation,
and found that the difference between the one- and two-phase cases depends on
the relative location of cold and warm HI components along a line of sight (Section \ref{s:method2} for details).
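The \cite{Dickey00} correction factor quoted above can be sketched directly from its stated functional form:

```python
# The line-of-sight correction factor from the Dickey et al. (2000) study
# as quoted above: f = 1 + 0.667 (log10 N(HI) - 21.4) for
# N(HI) > 10^21.4 cm^-2, and f = 1 below that column density.
import numpy as np

def dickey00_f(n_hi):
    """Correction factor vs the uncorrected N(HI) in cm^-2."""
    log_n = np.log10(n_hi)
    return np.where(log_n > 21.4, 1.0 + 0.667 * (log_n - 21.4), 1.0)

# ~1 at 1e21, ~1.05 at 3e21, ~1.4 at 1e22 cm^-2, as described in the text
print(dickey00_f(np.array([1.0e21, 3.0e21, 1.0e22])))
```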
In the Millennium Arecibo 21-cm Absorption Line Survey,
HT03a obtained HI emission and absorption spectra toward 79 randomly positioned radio continuum sources,
and performed Gaussian decomposition to estimate the physical properties
of individual CNM and WNM components (column density, optical depth, spin temperature $T_{\rm s}$, etc.).
These multiphase analyses showed that two or more components with very different spin temperatures
can contribute to a single velocity channel,
implying that the isothermal treatment may not be satisfactory.
\citet{Heiles03b} (HT03b hereafter) then calculated the correction factor using the Gaussian decomposition results,
which they called $R_{\rm raw}$ = $1/f$.
There were interesting variations in $R_{\rm raw}$,
ranging from $\sim$0.3 to $\sim$1.0 ($f$ = $\sim$1.0--3.0; Appendix B for details).
Specifically, $f \sim 1.3$ was found for the Taurus/Perseus region.
A very different approach was adopted in \cite{Braun09}
to calculate the correction for high optical depth in M31.
They only used high-resolution HI emission observations for their modeling,
and assumed that a single cold component determines the brightness temperature along a line of sight.
While previous similar studies have applied a single temperature to the images of entire galaxies
(e.g., \citeauthor{Henderson82} 1982; \citeauthor{Braun92} 1992),
\cite{Braun09} estimated the spin temperature and non-thermal velocity dispersion for each pixel.
After excluding HI spectra that likely suffer from
a high blending of different components along a line of sight,
they noticed that the opaque HI is organized into filamentary complexes and isolated clouds
down to their resolution limit of $\sim$100 pc.
The spin temperature was found to increase from $\sim$20 K to $\sim$60 K with radius out to 12 kpc,
and then to decline smoothly down to $\sim$20 K beyond 25 kpc.
The estimated correction resulted in a $\sim$30\% increase of the global HI mass of M31.
Using the same methodology, \cite{Braun12} found that the correction for high optical depth
increases the HI masses of the Large Magellanic Cloud (LMC) and M33 by the same amount ($\sim$30\%).
While the main advantage of the \cite{Braun09} approach is clearly that
galactic-scale images of the opaque component can be produced solely from HI emission observations,
the method has several weaknesses.
For example, it does not consider multiple components along a line of sight
and how they self-absorb each other.
In addition, it does not take account of the possibility that
some of the brightness temperature could come from unabsorbing warm HI components.
\begin{figure*}
\centering
\includegraphics[scale=0.08]{HI_cden_src_names_display.eps}
\caption{\label{f:sources} 26 radio continuum sources overlaid on the HI column density image at 4$'$ resolution
(4C$+$32.14 excluded; Section \ref{s:data-HI-abs} for details).
The HI column density image is produced by integrating the GALFA-HI cube from
$v_{\rm LSR}$ = $-$5 km s$^{-1}$ to $+$15 km s$^{-1}$,
and the gray contours are from the CfA CO integrated intensity image at 8.4$'$ resolution.
The contour levels range from 10\% to 90\% of the peak value (69 K km s$^{-1}$) with 10\% steps.
In this figure, both the Perseus and Taurus molecular clouds are seen.}
\end{figure*}
\cite{Chengalur13} tested the optically thin and isothermal approximations
with Monte Carlo simulations of the multiphase ISM.
They varied the fraction of gas in three phases (CNM, WNM, and the thermally unstable neutral medium)
and the location of each phase along a line of sight.
A wide range of values were assumed for the HI column density (10$^{20}$--10$^{24}$ cm$^{-2}$)
and the spin temperature (20--5000 K).
They found that the optically thin approximation underestimates the true HI column density
by a factor of $\sim$1.6 when $\int \tau dv \sim 1$ km s$^{-1}$,
while the underestimate can be as high as a factor of $\sim$20 when $\int \tau dv \sim 10$ km s$^{-1}$.
On the other hand, the simulations showed that the isothermal estimate tracks the true HI column density
to better than 10\% even when $\int \tau dv \sim 5$ km s$^{-1}$.
Their conclusion that the isothermal estimate provides a good measure of the true HI column density
of up to $\sim$5 $\times$ 10$^{23}$ cm$^{-2}$ was insensitive
to the assumed gas temperature distribution and the positions of the different phases along a line of sight.
We note that Equation (1) of \cite{Chengalur13} does not include the contribution
from the cosmic microwave background (CMB) and the Galactic synchrotron emission,
which can be significant in certain cases (e.g., low Galactic latitudes).
In addition, the authors did not consider self-absorption of the WNM by the foreground CNM.
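The single-component limit behind the optically thin versus isothermal comparison can be sketched in a few lines (the spin temperature and optical depth below are illustrative values only):

```python
# Optically thin vs isothermal estimates for a single HI component with
# spin temperature T_s and optical depth tau. The true column is
# proportional to T_s * tau per channel, while T_B = T_s (1 - exp(-tau)).
# The isothermal estimate is exact here by construction; real sight lines
# mix phases, which is what the Monte Carlo simulations above test.
import numpy as np

t_s, tau = 50.0, 3.0                          # K, illustrative values
t_b = t_s * (1.0 - np.exp(-tau))

n_true = t_s * tau                            # in units of 1.823e18 * dv
n_thin = t_b                                  # optically thin estimate
n_iso = t_b * tau / (1.0 - np.exp(-tau))      # isothermal estimate

print(n_thin / n_true, n_iso / n_true)        # ~0.32 versus exactly 1.0
```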
\cite{Liszt14b} compared $N$(HI) from Galactic HI surveys with $E(B-V)$ derived by \cite{Schlegel98},
and found a strong linear relation between $N$(HI) and $E(B-V)$ $\sim$ 0.02--0.08 mag
and a flattening of the relation at $E(B-V) \gtrsim 0.08$ mag.
While this flattening, likely due to H$_{2}$ formation,
was essentially the same effect as what \cite{Lee12} found for individual regions in Perseus,
the relation derived by \cite{Liszt14b} covers a large spatial area
with randomly selected lines of sight predominantly at $|b| > 20^{\circ}$.
By using HI absorption data compiled by \cite{Liszt10},
the author then derived the correction for high optical depth, and applied it to the $N$(HI) data.
The flattening at $E(B-V) \gtrsim 0.08$ mag persisted after the correction,
confirming its origin in the onset of H$_{2}$ formation.
The derived correction factor increased from $\sim$1.0 at $E(B-V)$ $\sim$ 0.01 mag
to $\sim$1.4 at $E(B-V)$ $\sim$ 1 mag,
and was $\lesssim$ 1.2 at $E(B-V) \lesssim 0.5$ mag.
Recently, \cite{Fukui15} suggested a new approach
to estimate the correction for high optical depth by using \textit{Planck} dust continuum data.
They noticed that the dust optical depth at 353 GHz ($\tau_{353}$) correlates with the HI column density,
and the dispersion in this relation becomes much smaller
when the data points are segregated based on the dust temperature ($T_{\rm dust}$).
The highest dust temperature was assumed to be associated with the optically thin HI,
and the saturation seen in the $\tau_{353}$--$N$(HI) relation was then
solely attributed to the optically thick HI.
By coupling the $\tau_{353}$--$N$(HI) relation with radiative transfer equations,
\cite{Fukui15} calculated $T_{\rm s}$ and $\tau$ for the Galactic sky at $|b|$ $>$ 15$^{\circ}$ on a pixel-by-pixel basis.
They found that more than 70\% of the data points have $T_{\rm s} < 40$ K and $\tau > 0.5$,
and similar results were obtained for the high latitude molecular clouds MBM 53, 54, 55, and HLCG 92-35 \citep{Fukui14}.
The correction for high optical depth resulted in a factor of $\sim$2 increase in the total HI mass in the solar neighborhood,
implying that the optically thick HI may explain the ``CO-dark'' gas in the Galaxy.
\section{Data}
\label{s:obs}
\subsection{HI Emission and Absorption Observations}
\label{s:data-HI-abs}
We use the HI emission and absorption observations from Paper I.
The observations were performed with the Arecibo telescope\footnote{
The Arecibo Observatory is operated by SRI International
under a cooperative agreement with the National Science Foundation (AST-1100968),
and in alliance with Ana G. M\'{e}ndez-Universidad Metropolitana
and the Universities Space Research Association.} using the L-band wide receiver,
and were made toward 27 radio continuum sources located behind Perseus.
The target sources were selected from the NRAO VLA Sky Survey (\citeauthor{Condon98} 1998)
based on flux densities at 1.4 GHz greater than $\sim$0.8 Jy,
and are distributed over a large area of $\sim$500 deg$^{2}$ centered on the cloud (Figure \ref{f:sources})\footnote{In this paper,
we quote all velocities in the local standard of rest (LSR) frame, which is defined
based on the average velocity of stars in the solar neighborhood:
20 km s$^{-1}$ toward (R.A.,decl.) = (18$^{\rm h}$,30$^{\circ}$) in B1900.}.
The angular resolution of the Arecibo telescope at 1.4 GHz is 3.5$'$.
For the observations, a special procedure was adopted to make a ``17-point pattern'',
which includes 1 on-source measurement and 16 off-source measurements
(HT03a; \citeauthor*{Stanimirovic05} 2005).
This procedure was designed to consider HI intensity variations across the sky
and instrumental effects involving telescope gains.
The data were processed using the reduction software developed by HT03a,
and the final products for each source include
an HI absorption spectrum ($e^{-\tau(v)}$),
an ``expected'' HI emission spectrum ($T_{\rm exp}(v)$;
HI profile that we would observe at the source position if the continuum source were not present),
and their uncertainty profiles.
Among the 27 sources, 4C$+$32.14 was excluded from further analyses
because of its saturated absorption spectrum.
With an average integration time of 1 hour,
the root-mean-square (rms) noise level in the optical depth profiles was
$\sim$1 $\times$ 10$^{-3}$ per 1 km s$^{-1}$ velocity channel.
Finally, the derived optical depth and ``expected'' emission spectra
were decomposed into separate CNM and WNM components using the technique of HT03a,
and physical properties (optical depth, spin temperature, column density, etc.)
were computed for the individual components.
We refer to Paper I for details on the observations, data reduction, line fitting\footnote{Pros and cons of
our Gaussian fitting method were discussed in HT03a in detail.
In the future, we plan to compare the Gaussian fitting method with results from numerical simulations
to investigate biases that could be introduced by Gaussian fitting (Lindner et al. in prep).},
and CNM/WNM properties.
\subsection{HI Emission Data from the GALFA-HI Survey}
\label{s:data-HI-ems}
In order to evaluate different methods for deriving the correction for high optical depth,
we also use the HI emission data from the GALFA-HI survey (\citeauthor{Stanimirovic06} 2006; \citeauthor{Peek11} 2011).
GALFA-HI uses ALFA, a seven-beam array of receivers at the focal plane of the Arecibo telescope,
to map the HI emission in the Galaxy.
Each of the seven dual polarization beams has an effective beam size of 3.9$'$ $\times$ 4.1$'$.
For Perseus, \citet{Lee12} produced an HI cube
centered at (R.A.,decl.) = (03$^{\rm h}$29$^{\rm m}$52$^{\rm s}$,$+$30$^{\circ}$34$'$1$''$) in J2000\footnote{In this paper,
we quote all coordinates in J2000.} with a size of $\sim$15$^{\circ}$ $\times$ 9$^{\circ}$
by combining a number of individual GALFA-HI projects.
We use the same data here, but extend the HI cube up to $\sim$60$^{\circ}$ $\times$ 18$^{\circ}$
to include all radio continuum sources in Paper I.
The HI column density image derived from the extended HI cube is shown in Figure \ref{f:sources}
along with our continuum sources (4C$+$32.14 excluded).
\subsection{HI and H$_{2}$ Distributions of Perseus}
\label{s:data-lee}
We use the $N$(HI) and $N$(H$_{2}$) images from \citet{Lee12}.
To derive the $N$(HI) image,
we integrated the HI emission from $v_{\rm LSR}$ = $-$5 km s$^{-1}$ to $+$15 km s$^{-1}$ in the optically thin approximation.
This velocity range was determined
based on the maximum correlation between the $N$(HI) image and 2MASS-based $A_{V}$ data from the COMPLETE Survey \citep{Ridge06b}.
In order to construct the $N$(H$_{2}$) image of Perseus with a large sky coverage,
we used the 60 $\mu$m and 100 $\mu$m data from the IRIS survey (\citeauthor*{MD05} 2005),
and derived the dust optical depth at 100 $\mu$m ($\tau_{\rm 100}$)
by assuming that dust grains are in thermal equilibrium.
For this purpose, the emissivity spectral index of $\beta = 2$ was adopted,
and the contribution from very small grains (VSGs) to the intensity at 60 $\mu$m ($I_{\rm 60}$) was removed
by calibrating the derived $T_{\rm dust}$ image with DIRBE-based $T_{\rm dust}$ data from \citet{Schlegel98}.
We then converted the $\tau_{\rm 100}$ image into the $A_{V}$ image
by finding the conversion factor $X$ for $A_{V} = X \tau_{\rm 100}$
that results in the minimum difference between the derived $A_{V}$ and the COMPLETE $A_{V}$.
This calibration of $\tau_{\rm 100}$ to the COMPLETE $A_{V}$ was motivated by \citet{Goodman09}
who showed that dust extinction at near-infrared (NIR) wavelengths is the best probe of total gas column density.
Finally, we measured a local dust-to-gas ratio (D/G)
by examining the $A_{V}$--$N$(HI) relation for diffuse regions,
and derived the $N$(H$_{2}$) image by
\begin{equation}
\label{eq:H2}
N({\rm H_{2}}) = \frac{1}{2}\left[\frac{A_{V}}{\rm D/G} - N({\rm HI})\right].
\end{equation}
The derived $N$(HI) and $N$(H$_{2}$) images are at 4.3$'$ resolution
(corresponding to $\sim$0.4 pc at the distance of 300 pc),
and their median 1$\sigma$ uncertainties are
$\sim$5.6 $\times$ 10$^{19}$ cm$^{-2}$ and $\sim$3.6 $\times$ 10$^{19}$ cm$^{-2}$.
See Sections 3 and 4 of \citet{Lee12} for details
on the derivation of the $N$(HI) and $N$(H$_{2}$) images and their uncertainties.
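As a concrete illustration, the conversion in Equation (\ref{eq:H2}) amounts to one arithmetic step per pixel. The following sketch uses purely illustrative input values (not the measured Perseus quantities), with a dust-to-gas ratio of order the typical Galactic value:

```python
def n_h2(a_v, n_hi, dust_to_gas):
    """N(H2) = (A_V / (D/G) - N(HI)) / 2; column densities in cm^-2,
    A_V in mag, dust_to_gas in mag cm^2."""
    return 0.5 * (a_v / dust_to_gas - n_hi)

# illustrative inputs only
a_v = 2.0        # mag
dg = 1.0e-21     # mag cm^2, of order the typical Galactic dust-to-gas ratio
n_hi = 8.0e20    # cm^-2
n_h2_val = n_h2(a_v, n_hi, dg)   # ~ 6.0e20 cm^-2
```
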
\begin{figure}
\centering
\includegraphics[scale=0.53]{Ncube_Nexp_compare_final.eps}
\caption{\label{f:Ncube_Nexp} Comparison of the HI column densities calculated using the two different methods:
$N_{\rm exp}$ from the spatial derivative method versus $N_{\rm cube}$ from the simple averaging method.
Both quantities are estimated in the optically thin approximation, and the black solid line shows a one-to-one relation.}
\end{figure}
\subsection{CO Data from the CfA Survey}
\label{s:data-cfa}
We use CO integrated intensity ($I_{\rm CO}$) data from \citet{Dame01}.
\citet{Dame01} produced a composite CO survey of the Galaxy at 8.4$'$ resolution
by combining individual observations of the Galactic plane and local molecular clouds.
The observations were conducted with the Harvard-Smithsonian Center for Astrophysics (CfA) telescope.
The final cube had a uniform rms noise of 0.25 K per 0.65 km s$^{-1}$ velocity channel.
To estimate $I_{\rm CO}$ for Perseus, \citet{Dame01} integrated the CO emission
from $v_{\rm LSR}$ = $-$15 km s$^{-1}$ to $+$15 km s$^{-1}$.
See Section 2 of \citet{Dame01} for details on the observations, data reduction, and analyses.
\section{Correction for High Optical Depth}
\label{s:correction}
In the direction of 26 radio continuum sources,
we measured optical depth profiles, which we use to estimate the true total HI column density, $N_{\rm tot}$.
Along the same lines of sight, we also have the emission spectra
that can be used to calculate the HI column density in the optically thin approximation, $N_{\rm low-\tau}$.
This column density would be the only available information
if no HI absorption data were present.
In this section, we examine how $f = N_{\rm tot}/N_{\rm low-\tau}$,
which we call the correction factor for high optical depth, varies with $N_{\rm low-\tau}$.
Our aims are to offer an analytic estimate of $f(N_{\rm low-\tau})$
for Perseus using our HI emission and absorption measurements,
and then to apply this correction to the $N$(HI) image from \cite{Lee12} on a pixel-by-pixel basis.
In this way, we can account for the optically thick HI that was missed in the GALFA-HI emission observations.
Since there are several approaches to derive $f(N_{\rm low-\tau})$,
we first compare different methods using full line of sight information.
\begin{figure*}
\centering
\includegraphics[scale=0.55]{f_plots_CH_final.eps}
\caption{\label{f:fcarl} METHOD 1: (left) $f$ = $N_{\rm tot}/N_{\rm exp}$
($N_{\rm tot}$: derived using the Gaussian decomposition results) as a function of log$_{10}(N_{\rm exp}/10^{20})$.
The blue squares show the (1/$\sigma^{2}$)-weighted mean values in 0.2-wide bins in log$_{10}(N_{\rm exp}/10^{20})$,
and the linear fit determined for all 26 data points is indicated as the green solid line (Equation \ref{eq:f-carl}).
(right) $f$ as a function of the integrated optical depth.
The (1/$\sigma^{2}$)-weighted mean values in 3.4 km s$^{-1}$-wide bins in the integrated optical depth are presented as the blue squares,
and the green solid line shows the linear fit to all 26 data points:
$f = (0.024 \pm 0.004)\int \tau(v)dv + (1.019 \pm 0.019)$.}
\end{figure*}
\subsection{Calculating the HI Column Density in the Optically Thin Approximation ($N_{\rm low-\tau}$)}
\label{s:low-tau}
As the HI emission spectrum in the direction of a radio continuum source is affected by absorption,
it is not possible to obtain an emission profile along exactly the same line of sight
as probed by the HI absorption observation.
For this reason, we instead estimated the ``expected'' HI emission spectrum $T_{\rm exp}(v)$,
which is the profile we would observe if the continuum source were absent,
by modeling the ``17-point pattern'' measurements.
In this modeling, spatial derivatives of the HI emission (up to the second order)
were carefully taken into account (Section 2.1 of Paper I for details).
Then the HI column density in the optically thin approximation, $N_{\rm exp}$, can be calculated by
\begin{equation}
\label{eq:N_exp}
N_{\textrm{exp}}~(\textrm{cm}^{-2}) = 1.823 \times 10^{18} \int T_{\textrm{exp}}(v)dv~(\textrm{K km s}^{-1}).
\end{equation}
\noindent
We compute $N_{\rm exp}$ over the velocity range
where $T_{\rm exp}(v)$ is above its 3$\sigma$ noise level.
This velocity range covers all CNM and WNM components for each source,
and the median velocity range for all 26 sources is
$v_{\rm LSR}$ = $-$39 km s$^{-1}$ to $+$20 km s$^{-1}$.
We then estimate the uncertainty in $N_{\rm exp}$ by propagating
the $T_{\rm exp}(v)$ error spectrum through Equation (\ref{eq:N_exp}),
finding a median of $\sim$3.0 $\times$ 10$^{18}$ cm$^{-2}$.
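The computation of $N_{\rm exp}$ and its uncertainty amounts to a masked channel sum over the spectrum, with the per-channel errors added in quadrature. A minimal sketch (the helper name and the toy spectrum are ours, not our data) is:

```python
import numpy as np

C0 = 1.823e18  # cm^-2 per (K km s^-1)

def n_exp(t_exp, sigma_t, dv):
    """Optically thin N(HI): integrate T_exp(v) over channels above
    3 sigma, propagating the per-channel errors in quadrature."""
    mask = t_exp > 3.0 * sigma_t
    n = C0 * np.sum(t_exp[mask]) * dv
    err = C0 * np.sqrt(np.sum(sigma_t[mask] ** 2)) * dv
    return n, err

# toy Gaussian emission line: 5 K peak, 10 km/s 1/e width, 0.05 K noise
v = np.arange(-50.0, 50.0, 1.0)                   # km/s, 1 km/s channels
t = 5.0 * np.exp(-(v / 10.0) ** 2)
n, err = n_exp(t, np.full_like(t, 0.05), dv=1.0)  # n ~ 1.6e20 cm^-2
```
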
\begin{figure*}
\centering
\includegraphics[scale=0.5]{fchan_plots_newMC.eps}
\caption{\label{f:t_jd} METHOD 2: (left) Correction factor per velocity channel $f_{\rm chan}(v)$
(estimated in the isothermal approximation) for all 26 sources.
(right) Histogram of $f_{\rm chan}(v)$ values.}
\end{figure*}
Additionally, we use spectra from the GALFA-HI cube (pixel size = 1$'$) in the direction of our radio continuum sources
to derive the ``expected'' emission profiles.
This simpler approach has been employed
when HI absorption spectra were obtained without any special strategy such as the ``17-point pattern''
(e.g., \citeauthor{Dickey00} 2000; \citeauthor{McClure-Griffiths01} 2001; \citeauthor{Dickey03} 2003).
We extract HI emission spectra from a 9 $\times$ 9 pixel region
(roughly the ``17-point pattern'' grid size; Figure 1 of HT03a) centered on each continuum source.
As the HI spectra right around the continuum source are likely affected by absorption of the background emission,
we exclude the HI spectra from the central 3 $\times$ 3 pixel region (roughly one Arecibo beamwidth across).
By averaging the remaining 72 spectra, we then compute an average emission spectrum $T_{\rm avg}(v)$,
and estimate the corresponding HI column density $N_{\rm cube}$ by
\begin{equation}
\label{eq:N_cube}
N_{\textrm{cube}}~(\textrm{cm}^{-2}) = 1.823 \times 10^{18} \int T_{\textrm{avg}}(v)dv~(\textrm{K km s}^{-1}).
\end{equation}
\noindent
The uncertainty in $N_{\rm cube}$ is estimated
by calculating the standard deviation of the extracted 72 spectra, $\sigma_{T\rm avg}(v)$,
and propagating it through Equation (\ref{eq:N_cube}).
The median value for all 26 sources is $\sim$1.4 $\times$ 10$^{19}$ cm$^{-2}$.
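The spatial averaging described above reduces to a boolean mask over the 9 $\times$ 9 pixel grid with the central 3 $\times$ 3 pixels excluded. A sketch (the helper is hypothetical; the geometry follows the text):

```python
import numpy as np

def average_offsource_spectrum(cube):
    """cube: (9, 9, n_chan) array of HI spectra centered on a continuum
    source. Average the 72 surrounding spectra, excluding the central
    3x3 pixels (roughly one Arecibo beamwidth) affected by absorption."""
    mask = np.ones((9, 9), dtype=bool)
    mask[3:6, 3:6] = False              # drop the central 3x3 region
    spectra = cube[mask]                # shape (72, n_chan)
    return spectra.mean(axis=0), spectra.std(axis=0)

# toy cube of constant 2 K spectra
t_avg, sigma_t = average_offsource_spectrum(np.full((9, 9, 100), 2.0))
```
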
In Figure \ref{f:Ncube_Nexp}, we compare the HI column densities calculated using the two methods.
Most sources probe the HI column density of $\sim$(5--16) $\times$ 10$^{20}$ cm$^{-2}$,
and the last five sources (3C132, 3C131, 3C133, 4C+27.14, and 4C+33.10) extend this range by a factor of $\sim$2.
We find an excellent agreement between the two methods up to $\sim$3 $\times$ 10$^{21}$ cm$^{-2}$.
The ratio of $N_{\rm exp}$ to $N_{\rm cube}$ ranges from $\sim$0.9 to $\sim$1.1 with a median of $\sim$1.0,
suggesting that the two HI column density estimates are consistent within 10\%.
Because the ``17-point pattern'' modeling treats spatial variations in the HI emission more carefully,
we use $N_{\rm exp}$ as $N_{\rm low-\tau}$ in the following sections.
\subsection{METHOD 1 -- Gaussian Decomposition to Estimate $N_{\rm tot}$}
\label{s:method1}
In Paper I, we performed Gaussian decomposition of the optical depth and ``expected'' emission spectra,
and calculated the properties of individual CNM and WNM components
while considering self-absorption of both the CNM and the WNM by the CNM.
All Gaussian decomposition results
(peak brightness temperature, peak optical depth, spin temperature, etc.)
for each component are presented in Table 2 of Paper I.
These results enable us to derive the true total HI column density along a line of sight by
\begin{equation}
\label{eq:N-true}
\begin{split}
N_{\textrm{tot}}~(\textrm{cm}^{-2}) & = N_{\textrm{CNM}} + N_{\textrm{WNM}} \\
& = 1.823 \times 10^{18} \int (\sum_{n=0}^{N-1} T_{\textrm{s},n} \tau_{0,n} e^{-[(v-v_{0,n}) / \delta v_{n}]^2} \\
& \quad + \sum_{k=0}^{K-1} T_{0,k} e^{-[(v-v_{0,k})/\delta v_{k}]^2})dv~(\textrm{K km s}^{-1}),
\end{split}
\end{equation}
\noindent
where the components with subscript $n$ refer to the CNM,
the components with subscript $k$ refer to the WNM,
$\tau_{0}$ is the peak optical depth,
$v_{0}$ is the central velocity,
$T_{0}$ is the peak brightness temperature,
and $\delta v$ is the $1/e$ width of the component.
Here $N_{\rm tot}$ is calculated over the velocity range
determined as having $T_{\rm exp}(v)$ higher than its 3$\sigma$ noise.
For the uncertainty in $N_{\rm tot}$,
we use errors of the fitted parameters provided by Gaussian decomposition to perform a Monte Carlo simulation
where 1000 $N_{\rm CNM}$ and $N_{\rm WNM}$ values are computed from normally distributed parameters.
The standard deviations of the $N_{\rm CNM}$ and $N_{\rm WNM}$ distributions are then added in quadrature
to estimate the uncertainty in $N_{\rm tot}$.
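A direct numerical evaluation of Equation (\ref{eq:N-true}) can be sketched as follows; the component parameters below are illustrative, not fitted values from Paper I:

```python
import numpy as np

C0 = 1.823e18  # cm^-2 per (K km s^-1)

def n_tot(v, cnm, wnm, dv):
    """True N(HI) from Gaussian-decomposed components.
    cnm: iterable of (T_s, tau_0, v_0, delta_v) tuples;
    wnm: iterable of (T_0, v_0, delta_v) tuples."""
    t = np.zeros_like(v)
    for t_s, tau0, v0, w in cnm:
        t += t_s * tau0 * np.exp(-((v - v0) / w) ** 2)
    for t0, v0, w in wnm:
        t += t0 * np.exp(-((v - v0) / w) ** 2)
    return C0 * np.sum(t) * dv

# one illustrative CNM component (T_s = 50 K, tau_0 = 0.5) plus one WNM
v = np.arange(-50.0, 50.0, 0.5)
n = n_tot(v, cnm=[(50.0, 0.5, 0.0, 2.0)], wnm=[(1.0, 5.0, 10.0)], dv=0.5)
```
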
This method of deriving $N_{\rm tot}$ was used by HT03b for their HI absorption measurements
toward background sources randomly located over the whole Arecibo sky.
In this study, we focus on a localized group of background sources in the direction of Perseus.
The (integrated) correction factor, $f = N_{\rm tot}/N_{\rm low-\tau} = N_{\rm tot}/N_{\rm exp}$,
is shown in Figure \ref{f:fcarl} (left) as a function of $N_{\rm exp}$.
Clearly, the correction factor increases with $N_{\rm exp}$.
We then present the (1/$\sigma^{2}$)-weighted mean values as the blue squares
and the linear fit to all 26 data points as the green solid line\footnote{In attempting to be consistent
with our calculation of the uncertainty in $f$ for the isothermal method (Section \ref{s:method2}),
we run a full Monte Carlo simulation based on errors of the fitted parameters from Gaussian decomposition.
In this simulation, 1000 $N_{\rm exp}$ and $N_{\rm tot}$ values are calculated from normally distributed parameters,
and the standard deviation of 1000 $f$ values is used as the uncertainty in $f$.
We find that linear fit results from using this error estimate,
$f$ = log$_{10} (N_{\textrm{exp}}/10^{20}) (0.25 \pm 0.03) + (0.87 \pm 0.02)$,
are consistent with Equation (\ref{eq:f-carl}) within uncertainties.}:
\begin{equation}
\label{eq:f-carl}
\begin{split}
f & = \log_{10} (N_{\textrm{exp}}/10^{20}) \times a + b \\
& = \log_{10} (N_{\textrm{exp}}/10^{20}) (0.32 \pm 0.06) + (0.81 \pm 0.05).
\end{split}
\end{equation}
\noindent
In general, the correction factor ranges from $\sim$1.0 at $\sim$3.9 $\times$ 10$^{20}$ cm$^{-2}$
to $\sim$1.2 at $\sim$1.3 $\times$ 10$^{21}$ cm$^{-2}$ (maximum uncorrected HI column density in Perseus).
While $f$ and $N_{\rm exp}$ show a good correlation
(Spearman's rank correlation coefficient of 0.80),
there are two sources with relatively high correction factors
at $\sim$10$^{21}$ cm$^{-2}$, 3C092 and 3C093.1.
Interestingly, the two are located behind the main body of Perseus (Figure \ref{f:sources}).
Their high $f$ values of $\sim$1.5--1.6 could result from
an increased amount of the cold HI in the molecular cloud
relative to the surrounding diffuse ISM.
The CNM fraction is indeed $\sim$0.4 for both sources,
which is higher than the median value of $\sim$0.3 for all 26 sources (Section 4.3 of Paper I).
However, this is not the maximum CNM fraction in our measurements ($\sim$0.6).
Observing a denser grid of radio continuum sources behind Perseus
and repeating the calculations would be an interesting way to test the cold HI hypothesis.
Finally, the correction factor is also presented as a function of the integrated optical depth in Figure \ref{f:fcarl} (right).
As expected, there is a clear correlation (Spearman's rank correlation coefficient of 0.94).
We note that our results are not sensitive to HI components at $v_{\rm LSR}$ $<$ $-$20 km s$^{-1}$,
which are likely unassociated with Perseus (Section 4.2 of Paper I):
limiting the calculation of $N_{\rm exp}$ and $N_{\rm tot}$ to $v_{\rm LSR}$ $>$ $-$20 km s$^{-1}$ or
excluding the five sources showing such HI components at large negative velocities
(corresponding to the sources with log$_{10}(N_{\rm exp}/10^{20})$ $>$ 1.2 in Figure \ref{f:fcarl};
3C132, 3C131, 3C133, 4C+27.14, and 4C+33.10)
results in linear fit coefficients that are consistent with what we present here within uncertainties.\footnote{To be specific,
limiting the calculation of $N_{\rm exp}$ and $N_{\rm tot}$ to $v_{\rm LSR}$ $>$ $-$20 km s$^{-1}$
results in $f$ = log$_{10} (N_{\textrm{exp}}/10^{20}) (0.44 \pm 0.07) + (0.73 \pm 0.06)$
and $f = (0.034 \pm 0.005) \int \tau(v)dv + (1.004 \pm 0.020)$.
Similarly, excluding the five sources that show the HI components at $v_{\rm LSR}$ $<$ $-$20 km s$^{-1}$
leads to $f$ = log$_{10} (N_{\textrm{exp}}/10^{20}) (0.31 \pm 0.11) + (0.82 \pm 0.09)$
and $f = (0.040 \pm 0.011) \int \tau(v)dv + (0.988 \pm 0.027)$.}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{f_plots_JD_newMC.eps}
\caption{\label{f:fjd} METHOD 2: (left) $f$ = $N_{\rm tot}/N_{\rm exp}$
($N_{\rm tot}$: estimated in the isothermal approximation) as a function of log$_{10}(N_{\rm exp}/10^{20})$.
The blue squares show the (1/$\sigma^{2}$)-weighted mean values in 0.2-wide bins in log$_{10}(N_{\rm exp}/10^{20})$,
and the linear fit determined for all 26 data points is indicated as the green solid line (Equation \ref{eq:f-JD}).
(right) $f$ as a function of the integrated optical depth.
The (1/$\sigma^{2}$)-weighted mean values in 3.4 km s$^{-1}$-wide bins in the integrated optical depth are presented as the blue squares,
and the green solid line shows the linear fit to all 26 data points:
$f = (0.023 \pm 0.001)\int \tau(v)dv + (1.030 \pm 0.006)$.}
\end{figure*}
\subsection{METHOD 2 -- Isothermal Estimate of $N_{\rm tot}$}
\label{s:method2}
By assuming that each velocity channel represents gas at a single temperature,
\cite{Dickey00} showed that the correction factor per velocity channel
can be written as
\begin{equation}
\label{eq:f-chan1}
f_{\textrm{chan}}(v) = \frac{C_0 T_{\textrm{s}}(v) \tau(v)}{C_0 T_{\textrm{exp}}(v)},
\end{equation}
where $C_0 = 1.823 \times 10^{18}$ cm$^{-2}$/(K km s$^{-1}$).
In addition, $T_{\rm exp}(v)$ was expressed as
\begin{equation}
\label{eq:T-exp1}
T_{\textrm{exp}}(v) = T_{\textrm{s}}(v) (1-e^{-\tau(v)}).
\end{equation}
This equation assumes the absence of any radio continuum source behind the absorbing HI cloud.
As a result, $f_{\rm{chan}}(v)$ simply becomes
\begin{equation}
\label{eq:f-chan2}
f_{\textrm{chan}}(v) = \frac{\tau(v)}{1-e^{-\tau(v)}}.
\end{equation}
However, while the radio continuum source is absent in Equation (\ref{eq:T-exp1}),
some diffuse radio continuum emission is always present, and should not be ignored.
This emission includes the CMB and the Galactic synchrotron emission
that varies across the sky and becomes strong toward the Galactic plane.
We call the combination of these contributions $T_{\rm sky}$,
and Equation (\ref{eq:T-exp1}) then has to be rewritten as
\begin{equation}
\label{eq:T-exp2}
T_{\textrm{exp}}^{*}(v) = T_{\textrm{s}}(v) (1-e^{-\tau(v)}) + T_{\textrm{sky}}e^{-\tau(v)}.
\end{equation}
Considering that HI emission spectra are generally baseline subtracted during the reduction process,
$T_{\rm sky}$ can be removed from both sides of Equation (\ref{eq:T-exp2}).
Then $T_{\rm exp}(v)$ = $T_{\rm exp}^{*}(v) - T_{\rm sky}$,
the quantity we have been working with so far, can be expressed as
\begin{equation}
\label{eq:T-exp3}
\begin{split}
T_{\textrm{exp}}(v) & = T_{\textrm{exp}}^{*}(v) - T_{\textrm{sky}} \\
& = (T_{\textrm{s}}(v) - T_{\textrm{sky}}) (1-e^{-\tau(v)}).
\end{split}
\end{equation}
As a consequence, the correction factor becomes
\begin{equation}
\label{eq:f-chan3}
f_{\textrm{chan}}(v) = \frac{T_{\textrm{s}}(v)}{T_{\textrm{s}}(v) - T_{\textrm{sky}}} \frac{\tau(v)}{1-e^{-\tau(v)}},
\end{equation}
or with direct observables,
\begin{equation}
\label{eq:f-chan4}
f_{\textrm{chan}}(v) = T_{\textrm{sky}} \frac{\tau(v)}{T_{\textrm{exp}}(v)} + \frac{\tau(v)}{1-e^{-\tau(v)}}.
\end{equation}
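In terms of direct observables, Equation (\ref{eq:f-chan4}) is a one-line computation per channel. A sketch, with the Perseus-appropriate $T_{\rm sky}$ as the default, and a consistency check against the spin temperature form of Equation (\ref{eq:f-chan3}) (the $T_{\rm s}$ value is illustrative):

```python
import numpy as np

def f_chan(tau, t_exp, t_sky=2.8):
    """Per-channel correction factor from the optical depth and the
    baseline-subtracted 'expected' emission spectra."""
    return t_sky * tau / t_exp + tau / (1.0 - np.exp(-tau))

# consistency check against the T_s form: T_s = 80 K, tau = 0.5
t_s, tau = 80.0, 0.5
t_exp = (t_s - 2.8) * (1.0 - np.exp(-tau))   # Equation for T_exp(v)
f = f_chan(tau, t_exp)
```
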
\begin{figure*}
\centering
\includegraphics[scale=0.25]{HI_corr_factor_display.eps}
\caption{\label{f:corr-fact} Correction factor at 4.3$'$ resolution
estimated in Section \ref{s:method1} (Equation \ref{eq:f-carl}).
In addition, the 3$\sigma$ contour of the ``old'' $N$(H$_{2}$) from \citet{Lee12}
(before the correction for high optical depth) is overlaid in gray,
while the 3$\sigma$ contour of the CfA $I_{\rm CO}$ is shown in black.
The resolutions of the $N$(H$_{2}$) and $I_{\rm CO}$ images are 4.3$'$ and 8.4$'$.}
\end{figure*}
In order to estimate the contribution from the Galactic synchrotron emission,
we use the \cite{Haslam82} 408 MHz survey of the Galaxy.
The brightness temperature at 408 MHz is converted to 1.4 GHz using a spectral index of $-$2.7.
As the absolute Galactic latitude of our continuum sources is generally higher than 10$^{\circ}$,
the synchrotron contribution is small with $T_{\rm sky}$ ranging from 2.78 K to 2.80 K (Table 1 of Paper I).
Based on the histogram of $T_{\rm s}$ for the individual CNM components (Figure 5b of Paper I),
we can then provide a rough estimate of $T_{\rm s}(v)/[T_{\rm s}(v)-T_{\rm sky}]$:
the expected range is narrow, from $\sim$1.0 to $\sim$1.2.
Clearly, for molecular clouds located closer to the Galactic plane
the contribution from the diffuse radio continuum emission will be more significant.
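The $T_{\rm sky}$ estimate is a power-law scaling of the Haslam 408 MHz brightness temperature plus the CMB monopole; a minimal sketch (the input temperature is illustrative):

```python
def t_sky(t_408, nu_ghz=1.4, alpha=-2.7, t_cmb=2.725):
    """Scale the 408 MHz synchrotron brightness temperature to nu_ghz
    with a power-law spectral index and add the CMB."""
    return t_408 * (nu_ghz / 0.408) ** alpha + t_cmb

t = t_sky(1.5)   # ~2.78 K, in the range quoted above
```
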
In Equation (\ref{eq:f-chan1}), $f_{\rm chan}(v)$ essentially represents the correction
that needs to be applied to $T_{\rm exp}(v)$ to calculate the true brightness temperature profile.
As a result, the true total HI column density can be obtained by
\begin{equation}
\label{eq:N-true-JD}
N_{\textrm{tot}}~(\textrm{cm}^{-2}) = 1.823 \times 10^{18} \int f_{\textrm{chan}}(v) T_{\textrm{exp}}(v)dv~(\textrm{K km s}^{-1}).
\end{equation}
We derive $f_{\rm chan}(v)$ for all 26 sources (Equation \ref{eq:f-chan4}),
and present the results in Figure \ref{f:t_jd}.
About 91\% of the $f_{\rm chan}(v)$ values are between 1 and 2,
and the fraction of velocity channels with $f_{\rm chan}(v) > 2$ is very small.
We then calculate $N_{\rm tot}$ using Equation (\ref{eq:N-true-JD}),
and show the (integrated) correction factor, $f = N_{\rm tot}/N_{\rm exp}$,
as a function of $N_{\rm exp}$ in Figure \ref{f:fjd} (left).
Here the integration is done over the velocity range
where $T_{\rm exp}(v)$ is higher than its 3$\sigma$ noise.
The uncertainty in $f$ is derived by running a Monte Carlo simulation
where the optical depth and ``expected'' emission error spectra
are propagated through Equations (\ref{eq:f-chan4}) and (\ref{eq:N-true-JD}) to compute 1000 $f$ values.
The standard deviation of the $f$ distribution is used as the final uncertainty in $f$.
Similar to the Gaussian decomposition method,
we find a good correlation between $f$ and $N_{\rm exp}$ (Spearman's rank correlation coefficient of 0.84).
The linear fit determined using all 26 data points is
\begin{equation}
\label{eq:f-JD}
f = \log_{10}(N_{\textrm{exp}}/10^{20})(0.25 \pm 0.02) + (0.87 \pm 0.02).
\end{equation}
Additionally, $f$ is plotted as a function of the integrated optical depth in Figure \ref{f:fjd} (right),
again showing a clear correlation (Spearman's rank correlation coefficient of 0.97).
Both graphs in Figure \ref{f:fjd} are very similar to
those in Figure \ref{f:fcarl} for the Gaussian decomposition method.
Specifically, the linear fit coefficients are consistent within uncertainties.
This is surprising considering that the two methods are very different.
In particular, the isothermal method assigns a single spin temperature to each velocity channel,
while the Gaussian decomposition method allows a single velocity channel to have contributions
from several HI components with different spin temperatures.
In \cite{Dickey00}, the authors updated the isothermal method
by incorporating the two-phase approximation.
As input parameters, this method then required the spin temperature of the cold HI
and the fraction of the warm HI that is in front of the cold HI,
the quantity they referred to as $q$.
\cite{Dickey00} showed that for their SMC data
there is no difference between the one- and two-phase approximations regarding the correction factor if $q \gtrsim 0.5$,
while the difference becomes more pronounced when $q \lesssim 0.25$.
In the Gaussian decomposition method, this fraction ($F_k$ in Paper I)
is important as well, but is difficult to constrain.
Thus, the fitting process was repeated for $F_k$ = 0, 0.5, and 1,
and these results were used to estimate the final uncertainty in $T_{\rm s}$
(Section 3 of Paper I and HT03a for details).
To understand why the two different methods result in similar correction factors,
we compare Equations (\ref{eq:N-true}) and (\ref{eq:N-true-JD}) (Appendix A),
and find that they become comparable regardless of $F_k$ when $\tau \ll 1$ and $T_{\rm sky} \ll T_{\rm s}$.
In our observations of Perseus, the median peak optical depth for all CNM components is $\sim$0.2,
and only a small number of the components has the peak optical depth higher than 1
(10 out of 107; Section 4.1 of Paper I), satisfying the condition.
In addition, we already showed that $T_{\rm sky}$ is small with $\sim$2.8 K for Perseus.
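To first order, this agreement can be seen directly: expanding Equation (\ref{eq:f-chan3}) for $\tau \ll 1$ and $T_{\rm sky} \ll T_{\rm s}$ (a sketch of the argument; Appendix A gives the full comparison) yields
\begin{equation*}
f_{\textrm{chan}}(v) = \frac{T_{\textrm{s}}(v)}{T_{\textrm{s}}(v) - T_{\textrm{sky}}} \frac{\tau(v)}{1-e^{-\tau(v)}}
\approx \left(1 + \frac{T_{\textrm{sky}}}{T_{\textrm{s}}(v)}\right)\left(1 + \frac{\tau(v)}{2}\right)
\approx 1 + \frac{\tau(v)}{2} + \frac{T_{\textrm{sky}}}{T_{\textrm{s}}(v)},
\end{equation*}
\noindent
so the per-channel correction tends to unity and the assumed temperature structure along the line of sight becomes unimportant.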
The difference between the Gaussian decomposition and isothermal methods, however, will be more significant
for molecular clouds that have a large amount of the cold, optically thick HI and/or
a substantial contribution from the diffuse radio continuum emission.
Because the Gaussian decomposition method derives $T_{\rm exp}(v)$ in a more self-consistent way,
we continue with its results for further analyses.
Finally, we note that Equations (\ref{eq:f-carl}) and (\ref{eq:f-JD})
could be biased against very high optical depths as they result in saturated absorption spectra,
e.g., 4C$+$32.14, the source we had to exclude from our analyses due to its highly saturated absorption profile.
In addition, the equations are based on our explicit assumption of the linear relation between $f$ and log$_{10}$($N_{\rm exp}/10^{20}$).
The HI emission and absorption measurements obtained by HT03a,b along many random lines of sight through the Galaxy
show that the linear relation indeed describes the observations well up to log$_{10}$($N_{\rm exp}/10^{20}$) $\sim$ 1.5,
and the fitted coefficients are consistent with Equations (\ref{eq:f-carl}) and (\ref{eq:f-JD}) within uncertainties (Appendix B).
There is some interesting deviation from the linear relation, however,
particularly for a few sources with log$_{10}$($N_{\rm exp}$/10$^{20}$) $\gtrsim$ 1.
This deviation could suggest a non-linear relation at high column densities,
and its significance needs to be further examined with more HI absorption measurements.
In the future, it will also be important to study the dense CNM using alternative tracers
such as CI and CII fine-structure lines (e.g., \citeauthor{Pineda13} 2013).
\begin{figure*}
\centering
\includegraphics[scale=0.25]{HI_cden_display.eps}
\caption{\label{f:corrected-HI} Corrected $N$(HI) image at 4.3$'$ resolution.}
\end{figure*}
\subsection{Comparison with Previous Studies}
\label{s:comparison}
\textit{Dickey et al. (2000)}.
The correction factor calculated by \cite{Dickey00} for the SMC
using full line of sight information can be rewritten as
$f = \log_{10} (N_{\rm exp}/10^{20}) 0.667 + 0.066$ for $N_{\rm exp} > 2.5 \times 10^{21}$ cm$^{-2}$.
The range of the HI column density we probe for Perseus barely overlaps with that in \cite{Dickey00},
as the HI column density in the low-metallicity SMC is significantly higher than in the Galaxy.
While the difference with \cite{Dickey00} in $f$ values depends on $N_{\rm exp}$,
our correction factor is only $\sim$4\% higher than what \cite{Dickey00} suggests
when extrapolated to their maximum HI column density of 10$^{22}$ cm$^{-2}$.
Similarly, \cite{Dickey03} used HI emission and absorption data
from the Southern Galactic Plane Survey (\citeauthor{McClure-Griffiths05} 2005)
in combination with the isothermal method,
and estimated $f$ = $\sim$1.4--1.6 for sources
located at 326$^{\circ}$ $< l <$ 333$^{\circ}$ and $|b|$ $\lesssim$ 1$^{\circ}$.
This correction factor is comparable to what we find for Perseus.
\textit{Heiles \& Troland (2003a,b)}. HT03a,b performed Gaussian decomposition
of 79 HI emission/absorption spectral line pairs,
and derived the correction factor, which they called $R_{\rm raw} = 1/f$.
They found that $R_{\rm raw}$ ranges from $\sim$0.3 to $\sim$1.0,
corresponding to $f$ = $\sim$1.0--3.0 (Appendix B).
In particular, they estimated $f \sim 1.3$ for the Taurus/Perseus region,
which is similar to what we find for Perseus.
\textit{Braun et al. (2009)}. Our correction factor is
smaller than what \cite{Braun09} and \cite{Braun12}
claimed for M31, M33, and the LMC based on the modeling of HI emission spectra.
They found that the correction exceeds an order of magnitude in many cases,
and increases the global HI mass by $\sim$30\%.
Even when considering the correction factor per velocity channel,
we find the maximum $f_{\rm chan}(v)$ $\sim$ 4 for only one source
and $f_{\rm chan}(v) \lesssim 3$ for the rest of our sources.
\textit{Chengalur et al. (2013)}. Our correction factor
can also be compared with predictions from \cite{Chengalur13}.
This study performed Monte Carlo simulations
where observationally motivated input parameters such as the column density,
the spin temperatures of the CNM and the WNM, and the fraction of gas in each of the different phases
were provided for ISM models.
Our correction factor versus integrated optical depth plots, Figures \ref{f:fcarl} (right) and \ref{f:fjd} (right),
can be directly compared with Figure 1A in \cite{Chengalur13}.
We find that the correction factor by \cite{Chengalur13} is significantly higher than our estimate,
although the general trend of increasing correction with the integrated optical depth is similar.
For example, \cite{Chengalur13} expects $f \sim$ 20 when $\int \tau dv$ $\sim$ 10 km s$^{-1}$,
while we find only $f < 1.5$.
Similarly, our Figures \ref{f:fcarl} (left) and \ref{f:fjd} (left)
can be compared with Figure 2A in \cite{Chengalur13}.
We find that our estimate is consistent with the correction factor by \cite{Chengalur13}
for the column density less than 10$^{21}$ cm$^{-2}$,
while \cite{Chengalur13} overestimates at the high end of our column density range.
If we extrapolate our relation to 10$^{22}$ cm$^{-2}$,
we expect a $\sim$10 times lower correction factor than what \cite{Chengalur13} suggests.
The reason for their very high correction factor could be the
inclusion of extremely high column densities (10$^{23}$--10$^{24}$ cm$^{-2}$) in their ISM models,
although it is not clear why this affects the isothermal estimate of the HI column density less.
In the simulations, the median ratio of the true HI column density to the isothermal estimate was $\sim$1.1
when $\int \tau dv$ $\sim$ 5 km s$^{-1}$.
\begin{figure}
\centering
\includegraphics[scale=0.42]{HI_H2_cden_hist.eps}
\caption{\label{f:HI-H2-comp} (left) Normalized histograms of the two $N$(HI) images,
before (black) and after (gray) the correction for high optical depth.
The median of each histogram is shown as the dashed line.
(right) Same as the left panel, but for $N$(H$_{2}$).}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.25]{CO_H2_cden_display.eps}
\caption{\label{f:new-H2} Rederived $N$(H$_{2}$) image.
The blank pixels correspond to regions with possible contamination
(point sources, the Taurus molecular cloud, and the ``warm dust ring''; Section 4 of \citeauthor{Lee12} 2012 for details).
The 3$\sigma$ contour of the new $N$(H$_{2}$) is overlaid in gray,
while the 3$\sigma$ contour of the CfA $I_{\rm CO}$ is shown in black.
The resolutions of the $N$(H$_{2}$) and $I_{\rm CO}$ images are 4.3$'$ and 8.4$'$.
The red rectangular boxes show the boundaries of the selected dark (B5, B1E, and B1) and star-forming (IC348 and NGC1333) regions.}
\end{figure*}
\textit{Liszt (2014)}. Using the HI absorption data compiled by \cite{Liszt10},
\cite{Liszt14b} estimated the correction factor for radio continuum sources located at high Galactic latitudes.
While they did not provide detailed information about how exactly the derivation was done,
their correction factor was small with $f$ less than 1.2 for $E(B-V)$ $\lesssim$ 0.5 mag.
This is comparable to our finding.
\textit{Fukui et al. (2014,2015)}. Finally, \cite{Fukui15}
estimated the correction factor for the Galaxy at $|b|$ $>$ 15$^{\circ}$
by exploring the relation between $\tau_{353}$ and $N$(HI) at 33$'$ resolution.
Their Figure 13 shows that the correction factor distribution
ranges from $\sim$1.0 to $\sim$3.0, and peaks at $\sim$1.5.
Using the same methodology, \cite{Fukui14} found that the correction increases the total HI mass of
the high latitude molecular clouds MBM 53, 54, 55, and HLCG 92-35 by a factor of $\sim$2.
In general, the correction factor by Fukui et al. (2014,2015) appears higher than our estimate for Perseus.
While we do not perform a detailed comparison with Fukui et al. (2014,2015),
we test their claim of the optically thick HI as an alternative to the ``CO-dark'' gas in Section \ref{s:CO-dark}.
\section{Applying the Correction for High Optical Depth to Perseus}
\label{s:perseus-corrected}
We apply the correction derived in Section \ref{s:method1}
to the $N$(HI) image of Perseus from \cite{Lee12}
for pixels with log$_{10}$($N_{\rm exp}$/10$^{20}$) $>$ 0.6
(where the correction factor is higher than 1).
The correction factor and corrected $N$(HI) images are shown
in Figures \ref{f:corr-fact} and \ref{f:corrected-HI}.
In addition, we present the normalized histograms of the two $N$(HI) images,
before and after the correction, in Figure \ref{f:HI-H2-comp}.
As Figure \ref{f:HI-H2-comp} indicates,
the correction does not make a significant change in $N$(HI).
To be specific, the median $N$(HI) increases by a factor of $\sim$1.1
from $\sim$7.9 $\times$ 10$^{20}$ cm$^{-2}$ to $\sim$8.7 $\times$ 10$^{20}$ cm$^{-2}$,
while the maximum $N$(HI) increases by a factor of $\sim$1.2
from $\sim$1.3 $\times$ 10$^{21}$ cm$^{-2}$ to $\sim$1.6 $\times$ 10$^{21}$ cm$^{-2}$.
In terms of the total HI mass, the correction results in a $\sim$10\% increase
from $\sim$2.3 $\times$ 10$^{4}$ M$_{\odot}$ to $\sim$2.5 $\times$ 10$^{4}$ M$_{\odot}$.
This increase in the HI mass is comparable to what \citet{Dickey00} found for the SMC,
but is smaller than the value estimated by \citet{Braun09} and \citet{Braun12} for M31, M33, and the LMC.
In addition, we note that our correction factor image looks smooth
compared to a granulated appearance of the corrected $N$(HI) images by \citet{Braun09} and \citet{Braun12},
although Perseus ($\sim$80 pc $\times$ 50 pc)
would be unresolved or only marginally resolved in their studies.
Finally, the HI mass increase due to the optically thick gas in Perseus is
smaller than what Fukui et al. (2014,2015) derived for Galactic molecular clouds.
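The correction applied in this section can be sketched numerically. The coefficients of $f$ and the log$_{10}$($N_{\rm exp}$/10$^{20}$) $>$ 0.6 threshold are from this paper; the input column densities below are illustrative mock values, not the actual Perseus map.

```python
import numpy as np

# Sketch of the optical-depth correction applied in this section.
# The coefficients and the log10(N_exp/1e20) > 0.6 threshold are from
# this paper; the input column densities are illustrative mock values.
A, B = 0.32, 0.81

def correction_factor(n_exp):
    """f = 0.32 log10(N_exp/1e20) + 0.81, applied only where f > 1."""
    logn = np.log10(n_exp / 1e20)
    return np.where(logn > 0.6, A * logn + B, 1.0)

# mock optically thin column densities (cm^-2)
n_exp = np.array([3.0e20, 7.9e20, 1.3e21])
n_true = n_exp * correction_factor(n_exp)
# the median (7.9e20) and maximum (1.3e21) values quoted above give
# factors of ~1.10 and ~1.17, close to the ~1.1 and ~1.2 increases
```

The lowest value falls below the threshold and is left uncorrected, as in the text.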
\section{How Does the High Optical Depth Affect the HI Saturation in Perseus?}
\label{s:revisit-saturation}
\subsection{Rederiving $N$\textup{(H$_{2}$)}}
\label{s:H2}
To investigate the impact of high optical depth on the observed HI saturation in Perseus,
we first rederive the $N$(H$_{2}$) image using the corrected $N$(HI).
In essence, we use the same methodology as \citet{Lee12}:
the $A_{V}$ image is derived using the IRIS 60/100 $\mu$m and 2MASS $A_{V}$ data,
and a local D/G is adopted to estimate $N$(H$_{2}$).
We refer to Section 4 of \citet{Lee12} for details on the method for deriving $N$(H$_{2}$) and its limitation,
and summarize the main results here.
\begin{figure*}
\centering
\includegraphics[scale=0.33]{tot_sden_HI_plot_IC348_all.eps}
\includegraphics[scale=0.33]{tot_sden_HI_plot_IC348_all_Lee12.eps}
\includegraphics[scale=0.33]{tot_sden_HI_plot_B1E_all.eps}
\includegraphics[scale=0.33]{tot_sden_HI_plot_B1E_all_Lee12.eps}
\caption{\label{f:HI} (left) $\Sigma_{\rm HI}$ versus $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ for IC348 and B1E (this study).
All finite pixels in the rectangular boxes shown in Figure \ref{f:new-H2} are used for plotting.
The median 3$\sigma$ values of $\Sigma_{\rm HI}$ and $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ are indicated as the black dashed lines,
while the best-fit curves determined in Section \ref{s:KMT09} are overlaid in red.
The best-fit parameters are shown in the top right corner of each plot.
(right) Plots from \citet{Lee12}.}
\end{figure*}
1. Contamination from VSGs:
To estimate $T_{\rm dust}$ from the ratio of $I_{60}$ to $I_{100}$,
the contribution from stochastically heated VSGs to $I_{60}$ must be removed.
For this purpose, we compare our IRIS-based $T_{\rm dust}$ with the DIRBE-based $T_{\rm dust}$ from \citet{Schlegel98},
and find that the contribution from VSGs to $I_{60}$ is 78\%.
This is the same as what \citet{Lee12} found.
2. Zero point calibration for $\tau_{100}$:
We refine the zero point of the $\tau_{100}$ image by assuming that
the dust column density traced by $\tau_{100}$ is proportional to $N$(HI) for atomic-dominated regions.
Based on the zero point of the $\tau_{100}$--$N$(HI) relation,
we add 1.1 $\times$ 10$^{-4}$ to the $\tau_{100}$ image.
This is slightly smaller than what \cite{Lee12} added, i.e., 1.8 $\times$ 10$^{-4}$.
3. Conversion from $\tau_{100}$ to $A_{V}$:
We convert $\tau_{100}$ into $A_{V}$ by adopting $X = 740$ for $A_{V}$ = $X \tau_{100}$
that results in the best agreement between our IRIS-based $A_{V}$ and the 2MASS-based $A_{V}$ from \citet{Ridge06b}.
This is slightly higher than $X = 720$ used by \citet{Lee12}.
We compare our rederived $A_{V}$ with the $A_{V}$ image from \citet{Lee12},
and find that the ratio of the new $A_{V}$ to the old $A_{V}$ ranges from $\sim$0.93 to $\sim$1.02.
4. Deriving a local D/G and $N$(H$_{2}$):
We examine the $A_{V}$--$N$(HI) relation, and find that the slope of $A_{V}$/$N$(HI) = 1 $\times$ 10$^{-21}$ mag cm$^{2}$
is a good measure of D/G for Perseus.
This is slightly lower than what \citet{Lee12} estimated, i.e., 1.1 $\times$ 10$^{-21}$ mag cm$^{2}$,
and makes sense considering that our rederived $A_{V}$ is essentially
the same as the $A_{V}$ image from \citet{Lee12},
while the corrected $N$(HI) is slightly higher than the uncorrected $N$(HI).
Finally, we derive $N$(H$_{2}$) using Equation (\ref{eq:H2}),
and mask pixels with possible contaminations
(point sources, the Taurus molecular cloud, and the ``warm dust ring''),
following what \citet{Lee12} did.
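As a minimal sketch of this final step, we assume Equation (\ref{eq:H2}) has the standard form used by \citet{Lee12}, $N$(H$_{2}$) = [$A_{V}$/(D/G) $-$ $N$(HI)]/2; the equation itself is referenced but not reproduced in this section, so this form is an assumption.

```python
# Hedged sketch of the N(H2) derivation. We assume the equation
# referenced in the text has the standard form used by Lee et al. (2012):
#   N(H2) = 0.5 * (A_V / DGR - N(HI))
DGR = 1.0e-21  # mag cm^2; the A_V/N(HI) slope adopted as the local D/G here

def n_h2(a_v_mag, n_hi_cm2):
    return 0.5 * (a_v_mag / DGR - n_hi_cm2)

# illustrative pixel: A_V = 2 mag with the corrected median N(HI)
value = n_h2(2.0, 8.7e20)  # ~5.65e20 cm^-2
```

Pixels where $A_{V}$/(D/G) falls below $N$(HI) yield the small negative values discussed below.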
We show the final $N$(H$_{2}$) image in Figure \ref{f:new-H2},
as well as the normalized histograms of the new (this study)
and old \citep{Lee12} $N$(H$_{2}$) images in Figure \ref{f:HI-H2-comp} (right).
Conclusion: In our rederivation of $N$(H$_{2}$),
all parameters are identical with or only slightly different from what \citet{Lee12} used.
As a result, the new $N$(H$_{2}$) is comparable to the old $N$(H$_{2}$),
shown as a good agreement between the two histograms in Figure \ref{f:HI-H2-comp} (right).
The rederived $N$(H$_{2}$) ranges from
$-$1.5 $\times$ 10$^{20}$ cm$^{-2}$ to 5.1 $\times$ 10$^{21}$ cm$^{-2}$
with a median of 4.6 $\times$ 10$^{19}$ cm$^{-2}$,
and $\sim$83\% of the pixels whose S/N is higher than 1
have the new $N$(H$_{2}$) differing from the old $N$(H$_{2}$) by only 10\%.
Finally, we note that $\sim$30\% of all finite pixels
have negative $N$(H$_{2}$) values of mostly around $-$(1--5) $\times$ 10$^{19}$ cm$^{-2}$,
which are very close to zero considering
the median uncertainty in $N$(H$_{2}$) of $\sim$5 $\times$ 10$^{19}$ cm$^{-2}$ (Section \ref{s:errors}).
\subsection{Uncertainty in $N$\textup{(H$_{2}$)}}
\label{s:errors}
As \citet{Lee12} did, we perform a series of Monte Carlo simulations
to estimate the uncertainty in $N$(H$_{2}$).
In these simulations, we assess the errors in $N$(HI) and $A_{V}$, and
propagate them together.
\begin{figure*}
\centering
\includegraphics[scale=0.33]{R_H2_plot_IC348_all.eps}
\includegraphics[scale=0.33]{R_H2_plot_IC348_all_Lee12.eps}
\includegraphics[scale=0.33]{R_H2_plot_B1E_all.eps}
\includegraphics[scale=0.33]{R_H2_plot_B1E_all_Lee12.eps}
\caption{\label{f:R-H2} (left) $R_{\rm H2}$ versus $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ for IC348 and B1E (this study).
All finite pixels in the rectangular boxes shown in Figure \ref{f:new-H2} are used for plotting.
The median 3$\sigma$ values of $R_{\rm H2}$ and $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ are indicated as the black dashed lines,
while the best-fit curves determined in Section \ref{s:KMT09} are overlaid in red.
The best-fit parameters are shown in the bottom right corner of each plot.
(right) Plots from \citet{Lee12}.}
\end{figure*}
For the uncertainty in $N$(HI), we combine two terms in quadrature:
the error from using a fixed velocity width ($\sigma_{\rm HI,1}$),
and the error from the correction for high optical depth ($\sigma_{\rm HI,2}$).
To estimate $\sigma_{\rm HI,1}$, we produce 1000 $N$(HI) images using 1000 velocity widths
randomly drawn from a Gaussian distribution that peaks at 20 km s$^{-1}$ with 1$\sigma$ of 4 km s$^{-1}$.
The standard deviation of the simulated $N$(HI) is then computed for $\sigma_{\rm HI,1}$.
This $\sigma_{\rm HI,1}$ is what \citet{Lee12} used as their uncertainty in $N$(HI).
Similarly, to derive $\sigma_{\rm HI,2}$, we generate 1000 $N$(HI) images
by applying the correction to the $N$(HI) image from \citet{Lee12}
using 1000 combinations of $a$ and $b$ in Equation (\ref{eq:f-carl}).
These $a$ and $b$ values are again drawn from Gaussian distributions
whose peaks and widths correspond to the fitted $a$ and $b$ values and their uncertainties.
We find that the median 1$\sigma$ of $N$(HI) in our study is $\sim$8.1 $\times$ 10$^{19}$ cm$^{-2}$.
For the uncertainty in $A_{V}$, we repeat the exercise done by \citet{Lee12}:
deriving a number of $A_{V}$ images by changing input conditions
(1$\sigma$ noises of the IRIS 60/100 $\mu$m images, $\beta$, zero point calibration for $\tau_{100}$),
and estimating the minimum and maximum $A_{V}$ values for each pixel.
In this exercise, we find that the contribution from VSGs to $I_{60}$ varies from 76\% to 88\%,
while $X$ varies from 655 to 855.
Finally, we propagate the uncertainty in $N$(HI) and the minimum/maximum $A_{V}$ values through a Monte Carlo simulation
in order to produce 1000 $N$(H$_{2}$) images.
The distribution of the simulated $N$(H$_{2}$) is then used to estimate the uncertainty in $N$(H$_{2}$) on a pixel-by-pixel basis.
We find that the median 1$\sigma$ of $N$(H$_{2}$) in our study is $\sim$4.8 $\times$ 10$^{19}$ cm$^{-2}$.
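The error budget above can be sketched for a single mock pixel. The distributions (velocity width 20 $\pm$ 4 km s$^{-1}$; $a$ = 0.32 $\pm$ 0.06, $b$ = 0.81 $\pm$ 0.05) are from this paper, but the width-scaling model below is a toy simplification rather than the full per-channel calculation.

```python
import numpy as np

# Toy sketch of the Monte Carlo error budget described above, for one
# mock pixel rather than the full images. The Gaussian parameters are
# from this paper; the width-scaling model is a simplification.
rng = np.random.default_rng(0)
n_mc = 1000
n_exp = 7.9e20  # optically thin estimate for the mock pixel (cm^-2)

dv = rng.normal(20.0, 4.0, n_mc)         # integration velocity widths
sigma_hi1 = np.std(n_exp * dv / 20.0)    # width term (toy scaling)

a = rng.normal(0.32, 0.06, n_mc)         # correction-law slope draws
b = rng.normal(0.81, 0.05, n_mc)         # correction-law intercept draws
f = a * np.log10(n_exp / 1e20) + b
sigma_hi2 = np.std(n_exp * f)            # correction term

sigma_hi = np.hypot(sigma_hi1, sigma_hi2)  # combined in quadrature
```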
\subsection{$R_{\rm H2}$ versus $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ and
$\Sigma_{\rm HI}$ versus $\Sigma_{\rm HI}+\Sigma_{\rm H2}$}
\label{s:R-H2-HI}
From the rederived $N$(HI) and $N$(H$_{2}$) images,
we estimate $\Sigma_{\rm HI}$ and $\Sigma_{\rm H2}$ by
\begin{equation*}
\Sigma_{\rm HI}~(\rm M_{\odot}~pc^{-2}) =
\frac{\textit{N}(HI)~(cm^{-2})}{1.25 \times 10^{20}}
\end{equation*}
\begin{equation}
\Sigma_{\rm H2}~(\rm M_{\odot}~pc^{-2}) =
\frac{\textit{N}(H_{2})~(cm^{-2})}{6.25 \times 10^{19}} .
\end{equation}
\noindent We find that $\Sigma_{\rm HI}$ varies by only a factor of $\sim$2.6
from $\sim$4.8 M$_{\odot}$ pc$^{-2}$ to $\sim$12.7 M$_{\odot}$ pc$^{-2}$.
On the other hand, $\Sigma_{\rm H2}$ ranges from $-$2.4 M$_{\odot}$ pc$^{-2}$ to 81.8 M$_{\odot}$ pc$^{-2}$,
although $\sim$98\% of the pixels have $\Sigma_{\rm H2}$ $<$ 15 M$_{\odot}$ pc$^{-2}$.
As a result, $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ has a small dynamic range of $\sim$8--30 M$_{\odot}$ pc$^{-2}$
across most of the cloud.
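The conversions above are simple scalings; the factor of two between the denominators reflects the two hydrogen nuclei carried by each H$_{2}$ molecule.

```python
# The surface-density conversions of the equations above; the factor of
# two between the denominators reflects the two H nuclei per H2 molecule.
def sigma_hi(n_hi_cm2):
    """Sigma_HI in Msun pc^-2 from N(HI) in cm^-2."""
    return n_hi_cm2 / 1.25e20

def sigma_h2(n_h2_cm2):
    """Sigma_H2 in Msun pc^-2 from N(H2) in cm^-2."""
    return n_h2_cm2 / 6.25e19
```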
To compare with the KMT09 predictions aiming at revisiting the HI saturation in Perseus,
we focus on the individual dark (B5, B1E, and B1) and star-forming (IC348 and NGC1333) regions.
The boundaries of each region were determined based on the $^{13}$CO emission (see Section 5 of \citeauthor{Lee12} 2012 for details),
and are shown as the red rectangular boxes in Figure \ref{f:new-H2}.
Using all finite pixels in the rectangular boxes,
we plot $\Sigma_{\rm HI}$ and $R_{\rm H2}$ as a function of $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ for each region,
and present the results for IC348 and B1E in Figures \ref{f:HI} and \ref{f:R-H2}.
Additionally, we show the same plots from \citet{Lee12} for comparison.
Note that both this study and \cite{Lee12} include negative $\Sigma_{\rm H2}$ values
by using all finite pixels in the rectangular boxes.
Almost all ($\sim$90\%) of these negative $\Sigma_{\rm H2}$ values fluctuate around zero within uncertainties.
\subsection{Comparison to the KMT09 Predictions}
\label{s:KMT09}
As in \citet{Lee12}, the following KMT09 predictions are used to fit
the observed $R_{\rm H2}$ vs $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ profiles:
\begin{equation}
R_{\rm H2} = \frac{4 \tau_{\rm c}}{3 \psi} \left[1 + \frac{0.8 \psi \phi_{\rm mol}}{4 \tau_{\rm c} +
3(\phi_{\rm mol} - 1) \psi}\right] - 1
\end{equation}
\noindent where
\begin{equation}
\tau_{\rm c} = \frac{3}{4} \left(\frac{\Sigma_{\rm comp} \sigma_{\rm d}}{\mu_{\rm H}}\right),
\end{equation}
\begin{equation}
\psi = \chi \frac{2.5 + \chi}{2.5 + \chi e},
\end{equation}
\noindent and
\begin{equation}
\chi = 2.3 ~ \frac{1 + 3.1 Z'^{0.365}}{\phi_{\rm CNM}}.
\end{equation}
\noindent Here $\tau_{\rm c}$ is the dust optical depth a spherical cloud would have
if its HI and H$_{2}$ are uniformly mixed,
and $\chi$ is the ratio of the rate at which LW photons are absorbed by dust grains
to the rate at which they are absorbed by H$_{2}$.
In addition, $\Sigma_{\rm comp}$ is the total gas column density,
$\sigma_{\rm d}$ is the dust absorption cross section per hydrogen nucleus in the LW band,
$\mu_{\rm H}$ is the mean mass per hydrogen nucleus,
$Z'$ is the metallicity normalized to the value in the solar neighborhood,
$\phi_{\rm CNM}$ is the ratio of the CNM density to the minimum CNM density
at which the CNM can be in pressure balance with the WNM,
and finally $\phi_{\rm mol}$ is the ratio of the H$_{2}$ density to the CNM density.
We refer to Section 6 of \citet{Lee12} for a detailed summary of the KMT09 model.
As Equations (17)--(20) suggest, $R_{\rm H2}$ in the KMT09 model is simply
a function of total gas column density, metallicity, $\phi_{\rm CNM}$, and $\phi_{\rm mol}$,
and is independent of the strength of the radiation field in which the cloud is embedded.
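Equations (17)--(20) combine into a single prediction for $R_{\rm H2}$ as a function of $\Sigma_{\rm comp}$. The sketch below folds $\sigma_{\rm d}$ and $\mu_{\rm H}$ into $\tau_{\rm c} \approx 0.067\,Z'\,\Sigma_{\rm comp}$ (with $\Sigma$ in M$_{\odot}$ pc$^{-2}$); this prefactor is the fiducial KMT09 scaling and is an assumption not stated in this excerpt.

```python
import numpy as np

# Sketch of the KMT09 R_H2 prediction (Equations 17-20 above).
# Assumption: sigma_d and mu_H are folded into tau_c ~ 0.067 Z' Sigma
# (Sigma in Msun/pc^2), the fiducial KMT09 scaling.
def r_h2(sigma_comp, zprime=1.0, phi_cnm=7.0, phi_mol=10.0):
    tau_c = 0.067 * zprime * sigma_comp
    chi = 2.3 * (1.0 + 3.1 * zprime**0.365) / phi_cnm
    psi = chi * (2.5 + chi) / (2.5 + chi * np.e)
    return (4.0 * tau_c / (3.0 * psi)
            * (1.0 + 0.8 * psi * phi_mol
               / (4.0 * tau_c + 3.0 * (phi_mol - 1.0) * psi)) - 1.0)
```

With the median $\phi_{\rm CNM} \sim 7$ found below, this gives $R_{\rm H2} \approx 1$ near $\Sigma_{\rm HI}+\Sigma_{\rm H2} \approx 15$ M$_{\odot}$ pc$^{-2}$, i.e., $\Sigma_{\rm HI} \approx 7.5$ M$_{\odot}$ pc$^{-2}$ at the transition.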
Following \citet{Lee12},
we adopt $Z'$ = 1 and $\phi_{\rm mol}$ = 10 (fiducial value by KMT09)\footnote{We note that
$R_{\rm H2}$ is not sensitive to $\phi_{\rm mol}$.
For example, with our median $\phi_{\rm CNM}$ value of $\sim$7 (Table 1),
$R_{\rm H2}$ at $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ = 100 M$_{\odot}$ pc$^{-2}$ varies by only a factor of $\sim$1.1
for $\phi_{\rm mol}$ = 10--50.}, and constrain $\phi_{\rm CNM}$
by finding the best-fit model for the observed $R_{\rm H2}$ vs $\Sigma_{\rm HI}+\Sigma_{\rm H2}$.
For this purpose, we perform Monte Carlo simulations
where the uncertainties in $R_{\rm H2}$ and $\Sigma_{\rm HI}+\Sigma_{\rm H2}$
are taken into account for model fitting.
In these simulations, we add random offsets to $R_{\rm H2}$ and $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ based on their uncertainties,
and determine the best-fit curve by setting $Z'$ = 1 and $\phi_{\rm mol}$ = 10
and finding $\phi_{\rm CNM}$ that results in the minimum sum of squared residuals.
We repeat this process 1000 times, and estimate the best-fit $\phi_{\rm CNM}$
by calculating the median $\phi_{\rm CNM}$ among the simulated $\phi_{\rm CNM}$.
The derived $\phi_{\rm CNM}$ for each region is summarized in Table \ref{t:fitting},
and the best-fit curves are shown in red in Figures \ref{f:HI} and \ref{f:R-H2}.
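The fitting procedure reduces to a perturb-and-refit loop. The sketch below uses a stand-in for the KMT09 prediction (with the same assumed $\tau_{\rm c} \approx 0.067\,\Sigma$ prefactor, $Z' = 1$, $\phi_{\rm mol} = 10$) and a simple grid search for the minimum sum of squared residuals, repeated over noise realizations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the KMT09 prediction (Z' = 1, phi_mol = 10; the 0.067
# tau_c prefactor is an assumed fiducial scaling, not from this excerpt).
def r_h2_model(sigma, phi_cnm):
    chi = 2.3 * 4.1 / phi_cnm
    psi = chi * (2.5 + chi) / (2.5 + chi * np.e)
    tau = 0.067 * sigma
    return 4*tau/(3*psi) * (1 + 0.8*psi*10/(4*tau + 27*psi)) - 1

# Sketch of the Monte Carlo fit described above: perturb the data by
# their errors, grid-search phi_CNM for the minimum sum of squared
# residuals, repeat, and take the median best-fit value.
def fit_phi(sigma, r_obs, dsig, dr, n_mc=1000):
    grid = np.linspace(1.0, 20.0, 400)
    best = []
    for _ in range(n_mc):
        s = sigma + rng.normal(0.0, dsig, sigma.size)
        r = r_obs + rng.normal(0.0, dr, r_obs.size)
        sse = [np.sum((r - r_h2_model(s, p))**2) for p in grid]
        best.append(grid[int(np.argmin(sse))])
    return np.median(best)
```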
\begin{table}
\begin{center}
\caption{\label{t:fitting} Fitting Results for $R_{\rm H2}$ versus $\Sigma_{\rm H\textsc{i}}+\Sigma_{\rm H2}$}
\begin{tabular}{l c}\hline \hline
Region & Best-fit $\phi_{\rm CNM}$ \\ \hline
B5 & $8.75 \pm 1.35$ \\
IC348 & $7.40 \pm 0.94$ \\
B1E & $7.28 \pm 1.13$ \\
B1 & $6.93 \pm 1.00$ \\
NGC1333 & $5.28 \pm 0.85$ \\
\hline
\end{tabular}
\end{center}
{Note. The uncertainty in $\phi_{\rm CNM}$ is estimated from
the distribution of the simulated 1000 $\phi_{\rm CNM}$ values.}
\end{table}
In the KMT09 model, $\chi$ measures the relative importance of dust shielding and H$_{2}$ self-shielding,
and is predicted to be $\sim$1 for a wide range of galactic environments.
In this case, a certain amount of $\Sigma_{\rm HI}$ is required to shield H$_{2}$ against photodissociation,
and H$_{2}$ forms out of HI once this minimum shielding column density is obtained.
The KMT09 model predicts the minimum shielding column density of $\sim$10 M$_{\odot}$ pc$^{-2}$ for solar metallicity,
and this is indeed consistent with what we observe in Perseus:
$\Sigma_{\rm HI}$ saturates at $\sim$7--9 M$_{\odot}$ pc$^{-2}$ for all five regions.
The level of the HI saturation changes between the regions though,
from $\sim$7 M$_{\odot}$ pc$^{-2}$ for B5 to $\sim$9 M$_{\odot}$ pc$^{-2}$ for NGC1333.
In comparison with \citet{Lee12}, our $\Sigma_{\rm HI}$ values are slightly higher
due to the correction for high optical depth.
This correction brings the data into closer agreement with the KMT09 model.
The excellent agreement is also evident from
$R_{\rm H2}$ vs $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ in Figure \ref{f:R-H2}.
For all five regions, we find that $R_{\rm H2}$ steeply rises at small $\Sigma_{\rm HI}+\Sigma_{\rm H2}$,
turns over at $R_{\rm H2}$ $\sim$ 1, and then slowly increases at large $\Sigma_{\rm HI}+\Sigma_{\rm H2}$.
In fact, this common trend on a log-linear scale
is entirely driven by the almost constant $\Sigma_{\rm HI}$,
and therefore it is not surprising to find such a good agreement with the KMT09 model
where the $\Sigma_{\rm HI}$ saturation is predicted.
By fitting the KMT09 predictions to the observed $R_{\rm H2}$ vs $\Sigma_{\rm HI}+\Sigma_{\rm H2}$ profiles,
we constrain $\phi_{\rm CNM}$ $\sim$ 5--9,
which is consistent with what \cite{Lee12} estimated within uncertainties.
In the KMT09 model, $\phi_{\rm CNM}$ determines the CNM density by
$n_{\rm CNM} = \phi_{\rm CNM}n_{\rm min}$
where $n_{\rm min}$ is the minimum CNM density at which the CNM can be in pressure balance with the WNM,
and $\phi_{\rm CNM}$ = 5--9 translates into $T_{\rm CNM}$ = 60--80 K (Equation 19 of KMT09).
This $T_{\rm CNM}$ range is consistent with
what we observationally constrained for the HI environment of Perseus via the HI absorption measurements (Paper I):
$T_{\rm CNM}$ mostly ranges from $\sim$10 K to $\sim$200 K, and its distribution peaks at $\sim$50 K.
Finally, we note that $\phi_{\rm CNM}$ systematically decreases toward the southwest of Perseus,
reflecting the observed region-to-region variations in $\Sigma_{\rm HI}$.
The difference in $\phi_{\rm CNM}$, however, is not significant,
and this suggests similar $\chi$ values for all dark and star-forming regions (Equation 20).
We indeed find $\chi$ = $\sim$1.1--1.8, in agreement with the KMT09 prediction of
comparable dust-shielding and H$_{2}$ self-shielding for H$_{2}$ formation.
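The quoted $\chi$ range follows directly from Equation (20) with $Z' = 1$ and the best-fit $\phi_{\rm CNM}$ values of Table \ref{t:fitting}:

```python
# Check of the quoted chi range (Equation 20 with Z' = 1), using the
# best-fit phi_CNM values from Table 1.
phi_cnm = {"B5": 8.75, "IC348": 7.40, "B1E": 7.28, "B1": 6.93, "NGC1333": 5.28}

def chi(phi, zprime=1.0):
    return 2.3 * (1.0 + 3.1 * zprime**0.365) / phi

chis = {k: chi(v) for k, v in phi_cnm.items()}
# -> roughly 1.1 (B5) to 1.8 (NGC1333), matching the quoted chi ~ 1.1-1.8
```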
\section{Optically Thick HI: Alternative to the ``CO-dark'' Gas?}
\label{s:CO-dark}
Recently, Fukui et al. (2014,2015) suggested that the ``CO-dark'' gas in the Galaxy,
referring to the interstellar gas undetectable in either the 21-cm HI or the 2.6-mm CO emission (e.g., \citeauthor{Bolatto13} 2013),
could be dominated by the optically thick HI.
As our HI absorption measurements provide an independent estimate of the optical depth,
we can test the validity of their claim against Perseus on sub-pc scales.
To do so, we utilize our old and new $N$(H$_{2}$) images
(before and after the correction for high optical depth),
as well as the CfA $I_{\rm CO}$ data.
First, we examine how the ``CO-dark'' gas and the optically thick HI gas are spatially distributed.
In order to identify the ``CO-dark'' gas,
we use the 3$\sigma$ contours of the old $N$(H$_{2}$) and CfA $I_{\rm CO}$ images,
following the definition by \citet{Lee12}.
These contours are overlaid on our correction factor image (Figure \ref{f:corr-fact}),
and show that the relative distribution of H$_{2}$ (or simply the gas not probed by the HI emission)
and CO changes across the cloud.
For example, H$_{2}$ and CO trace each other in the southwest,
while H$_{2}$ is more extended than CO elsewhere.
\cite{Lee12} compared H$_{2}$ and CO radial profiles for Perseus,
and estimated that H$_{2}$ is on average $\sim$1.4 times more extended than CO,
suggesting a substantial amount of the ``CO-dark'' gas.
We then find that the distributions of the ``CO-dark'' gas (traced by the difference between the H$_{2}$ and CO 3$\sigma$ contours)
and the optically thick HI gas (traced by the correction factor) generally disagree with each other.
For example, the region around B5 where the discrepancy between the H$_{2}$ and CO distributions is greatest
shows relatively low correction factors.
On the other hand, the regions with high correction factors at (R.A.,decl.) $\sim$ (3$^{\rm h}$33$^{\rm m}$,$+$34$^{\circ}$30$'$)
and $\sim$ (3$^{\rm h}$23$^{\rm m}$,$+$29$^{\circ}$) do not have a large amount of the ``CO-dark'' gas.
While our previous evaluation was based on the visual examination of the 3$\sigma$ contours,
here we more rigorously investigate whether or not the optically thick HI gas can explain the ``CO-dark'' gas
by comparing the old and new $N$(H$_{2}$) images.
We first smooth the old and new $N$(H$_{2}$) images, as well as their uncertainties,
to 8.4$'$ resolution
to match the resolution of the CfA $I_{\rm CO}$ image.
We then find the ``CO-dark'' gas from the smoothed $N$(H$_{2}$) images
by utilizing the 3$\sigma$ contours of $N$(H$_{2}$) and $I_{\rm CO}$.
Essentially, a pixel is classified as the ``CO-dark'' gas
when $N$(H$_{2}$) is above the 3$\sigma$ level,
but $I_{\rm CO}$ is less than the 3$\sigma$ noise.
For the selected pixels, we calculate the ``CO-dark'' gas column density, 2$N$(H$_{2}$),
and present two histograms in Figure \ref{f:CO-dark}.
The ``CO-dark'' gas from the old $N$(H$_{2}$) is in black,
while that from the new $N$(H$_{2}$) is in gray.
Figure \ref{f:CO-dark} shows that the two histograms are comparable
regarding their minimum, maximum, and median values (different by less than a factor of 2),
although the gray histogram has a smaller number of pixels due to the larger uncertainty of the new $N$(H$_{2}$) image.
Given that the optically thick HI gas was already taken into consideration in the derivation of the new $N$(H$_{2}$) image,
the comparable black and gray histograms suggest that the increased column density
due to the optically thick HI gas is small (up to $\sim$2 $\times$ 10$^{20}$ cm$^{-2}$),
and the ``CO-dark'' gas still exists in Perseus.
In terms of mass, the additional contribution from the optically thick HI only accounts for $\sim$20\% of the observed ``CO-dark'' gas.
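The pixel classification used here reduces to a simple joint threshold; the arrays below are mock values for illustration only.

```python
import numpy as np

# Sketch of the "CO-dark" pixel selection described above: H2 detected
# above 3 sigma while CO stays below its 3 sigma noise.
# All array values below are mock data for illustration.
def co_dark_mask(n_h2, sig_h2, i_co, sig_co):
    return (n_h2 > 3.0 * sig_h2) & (i_co < 3.0 * sig_co)

n_h2 = np.array([2e20, 4e19, 3e20, 1e20])   # cm^-2, mock
i_co = np.array([0.5, 0.1, 2.0, 0.2])       # K km/s, mock
mask = co_dark_mask(n_h2, 5e19, i_co, 0.3)
dark_columns = 2.0 * n_h2[mask]             # "CO-dark" column, 2 N(H2)
```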
\begin{figure}
\centering
\includegraphics[scale=0.48]{CO_dark_hist.eps}
\caption{\label{f:CO-dark} Histograms of the ``CO-dark'' gas column density.
The ``CO-dark'' gas from the old $N$(H$_{2}$) image is in black,
while that from the new $N$(H$_{2}$) image is in gray.
Both histograms are constructed using the data smoothed to 8.4$'$ resolution.}
\end{figure}
While our results are in contrast with Fukui et al. (2014,2015)
who found that the optically thick HI adds the column density of $\sim$10$^{20}$--10$^{22}$ cm$^{-2}$
and possibly explains the ``CO-dark'' gas in the Galaxy,
there are multiple factors that could affect the comparison, e.g.,
spatial coverage ($\sim$500 deg$^{2}$ for Perseus vs whole Galactic sky at $|b|$ $>$ 15$^{\circ}$)
and method for deriving $N$(H$_{2}$) (IRIS/2MASS vs \textit{Planck}).
In particular, we note that this study and Fukui et al. (2014,2015) probe very different scales:
our results are based on pencil-beam HI absorption measurements on 3.5$'$ scales,
while Fukui et al. (2014,2015) estimated the correction factor on 33$'$ scales.
If the CNM is highly structured with a low filling factor,
this could affect the estimate of the correction factor in both studies.
In the future, it will be important to compare the results from Fukui et al. (2014,2015)
with a large sample of HI absorption measurements as well as numerical simulations
(e.g., \citeauthor{Audit05} 2005; \citeauthor{Kim14} 2014)
to investigate how the derivation of the correction factor depends on different methodologies
and CNM properties.
\section{Summary}
\label{s:summary}
In this paper, we investigate the impact of high optical depth
on the HI column density distribution across the Perseus molecular cloud.
We use Arecibo HI emission and absorption measurements obtained toward 26 background sources (Paper I)
in order to derive the properties of CNM and WNM components
along each line of sight via the Gaussian decomposition approach (HT03a).
The derived properties are then used to estimate the correction factor for high optical depth,
and the correction is applied to the HI column density image computed in the optically thin approximation.
To revisit the HI saturation in Perseus observed by \cite{Lee12},
we rederive the H$_{2}$ column density image by adopting the same methodology as \cite{Lee12},
but using the corrected HI column density image.
The final HI and H$_{2}$ column density images at $\sim$0.4 pc resolution are then compared with the KMT09 predictions.
Finally, we investigate if the observed ``CO-dark'' gas in Perseus is dominated by the optically thick HI gas.
We summarize our main results as follows.
\begin{enumerate}[topsep=0pt,itemsep=-1ex]
\item We estimate the correction factor for high optical depth ($f$),
which is defined as the ratio of the true total HI column density ($N_{\rm tot}$)
to the HI column density derived in the optically thin approximation ($N_{\rm exp}$),
and express it as a function of $N_{\rm exp}$:
$f$ = log$_{10}$($N_{\rm exp}$/10$^{20}$)(0.32 $\pm$ 0.06) + (0.81 $\pm$ 0.05).
We use two different methods, the Gaussian decomposition and isothermal approximation methods,
and find that they are consistent within uncertainties.
This is likely due to the relatively low optical depth and
insignificant contribution from the diffuse radio continuum emission for Perseus.
\item We estimate that the correction factor in/around Perseus is small (up to $\sim$1.2),
and the total HI mass increases by only $\sim$10\% from $\sim$2.3 $\times$ 10$^{4}$ M$_{\odot}$
to $\sim$2.5 $\times$ 10$^{4}$ M$_{\odot}$ due to the inclusion of the optically thick HI gas.
\item The H$_{2}$ column density image rederived using the corrected HI column density image
is comparable to the original one by \cite{Lee12},
confirming the minor correction for high optical depth.
\item For individual dark and star-forming regions in Perseus (B5, B1E, B1, IC348, and NGC1333),
the HI surface density is relatively uniform with $\sim$7--9 M$_{\odot}$ pc$^{-2}$.
This is slightly higher than what \cite{Lee12} found
due to the correction for high optical depth.
The correction brings the observations into closer agreement with the KMT09 model
where the minimum HI surface density of $\sim$10 M$_{\odot}$ pc$^{-2}$ is predicted
for shielding H$_{2}$ against photodissociation in the solar metallicity environment.
\item Driven by the uniform $\Sigma_{\rm HI}$ $\sim$ 7--9 M$_{\odot}$ pc$^{-2}$,
$R_{\rm H2}$ vs $\Sigma_{\rm HI} + \Sigma_{\rm H2}$ on a log-linear scale
shows remarkably consistent results for all dark and star-forming regions:
$R_{\rm H2}$ sharply rises at small $\Sigma_{\rm HI} + \Sigma_{\rm H2}$,
and then gradually increases toward large $\Sigma_{\rm HI} + \Sigma_{\rm H2}$
with the transition at $R_{\rm H2}$ $\sim$ 1.
\item The mass increase due to the optically thick HI only accounts for $\sim$20\% of the observed ``CO-dark'' gas in Perseus.
The spatial distributions of the ``CO-dark'' gas and the optically thick HI gas do not generally coincide with each other,
and the ``CO-dark'' gas still exists even after the optically thick HI is considered in the derivation of H$_{2}$.
\end{enumerate}
Our study is one of the first attempts to examine the properties of the cold and warm HI in molecular cloud environments
and their relation with the HI-to-H$_{2}$ transition.
While HI envelopes surrounding molecular clouds have been frequently found
(e.g., \citeauthor{Knapp74} 1974; \citeauthor{Wannier83} 1983; \citeauthor{Elmegreen87} 1987;
\citeauthor{Andersson91} 1991; \citeauthor{Rogers95} 1995; \citeauthor{Williams96} 1996;
\citeauthor{Imara11} 2011; \citeauthor{Lee12} 2012; \citeauthor{Lee14} 2014; \citeauthor{Motte14} 2014),
HI has traditionally been considered less important for the formation and evolution of molecular clouds.
The excellent agreement between our observations of Perseus on sub-pc scales and the KMT09 model, on the other hand,
suggests the significance of HI as one of the key ingredients for the HI-to-H$_{2}$ transition and consequently for star formation.
While our data agree with the KMT09 model,
further theoretical developments are required.
For example, the KMT09 model considers only the CNM as a source of shielding against H$_{2}$ photodissociation.
Our HI emission and absorption measurements, however,
show that the CNM and the WNM have comparable column density contributions for Perseus (Paper I).
Clearly, the WNM needs to be taken into consideration in the models of H$_{2}$ formation (e.g., Bialy et al. in prep).
In addition, the KMT09 model ignores the effect of internal radiation field.
This is valid for Perseus, as there are no early-type stars producing a significant amount of dissociating radiation (\citeauthor{Lee12} 2012).
However, the role of internal radiation field would be important for massive molecular clouds containing a large number of OB stars.
The discrepancy with the KMT09 model recently found for the W43 molecular cloud complex in the Galactic plane (\citeauthor{Motte14} 2014)
and the low-metallicity SMC (\citeauthor{Welty12} 2012) suggests that
additional physical ingredients (e.g., better description of H$_{2}$ formation and photodissociation in the low-metallicity ISM
and strong shear/turbulence at galactic bars) would be necessary for extreme environments.
Finally, vertical thermal and dynamical equilibrium in a galactic disk is another important aspect,
as recently explored by \cite{Kim14}.
Future comparisons between theoretical models and HI emission/absorption observations of molecular clouds in a wide range of ISM environments
will be important for a deep understanding of HI properties and their role in the formation and evolution of molecular clouds.
\acknowledgements
We thank the anonymous referee for suggestions that improved this work.
We also thank Shmuel Bialy, John Dickey, Yasuo Fukui, Mark Krumholz, Vianney Lebouteiller, Franck Le Petit, Harvey Liszt,
Suzanne Madden, Evelyne Roueff, and Amiel Sternberg for stimulating discussion,
and telescope operators at the Arecibo Observatory for their help in conducting our HI observations.
M.-Y.L. acknowledges support from the DIM ACAV of the Region Ile de France,
and S.S. acknowledges support from the NSF Early Career Development (CAREER) Award AST-1056780.
We also acknowledge the NSF REU grant AST-1004881, which funded summer research of Jesse Miller.
For this work, we have made use of the KARMA visualization software (Gooch 1996) and NASA's Astrophysics Data System (ADS).
The conflict between quantum theory and general relativity exposed by the black hole information paradox has swung back and forth for nearly four decades, recently inflamed by the firewall paradox.
There have been a variety of previous proposals that the black hole horizon is not as general relativity describes. In particular, the fuzzball program argues that the structure of the horizon is necessarily modified by the extended objects of string theory. Indeed, key features of the firewall argument were first put forward as evidence for fuzzballs~\cite{Mathur:2009hf}.
In this paper we focus primarily on the simplest version of fuzzballs, the two-charge system of D1-D5 branes compactified on a circle. In \S~\ref{sec:Jeq0} we reexamine the argument that the naive two-charge geometry is unphysical, and that fuzzball solutions are the correct description.
We begin by noting that as one approaches the singularity of the naive geometry, the first sign of a breakdown is that the radius of a circle drops below the string scale. This suggests a $T$-duality from the original IIB picture to IIA, and indeed this provides a description valid down to smaller radii. Eventually the coupling grows large, and a lift to M theory takes over. In this regime the four-torus shrinks toward zero size, and a further $STS$ duality brings us to a new Type II description, in which the charges are carried by fundamental strings and momentum. Finally this breaks down due to the spacetime curvature becoming large, and no further stringy duality can save us. Rather, the final picture is a weakly coupled CFT.
This onion-like layered structure has already been described in detail by Martinec and Sahakian~\cite{Martinec:1999sa}, building on the classic analysis of non-conformal branes in Ref.~\cite{Itzhaki:1998dd}. However, its significance for the fuzzball program does not seem to have been discussed.
Fuzzball solutions
approximate the naive geometry outside some crossover radius, which depends inversely on the average harmonic excited, $\overline{m}$. For different values of $\overline{m}$, the crossover radius may lie within any of the IIB/IIA/M/II$'$/CFT regimes, and the parametrically valid description is a fuzzball solution in the given duality frame. For {\it typical} states, the crossover occurs right at the transition between the final geometric picture, II$'$, and the free \mbox{CFT}. In particular, this changes the standard picture of two-charge D1-D5 fuzzballs. The smooth geometries~\cite{Lunin:2001jy} are not an accurate description for typical states. Rather, the best (though still marginal) supergravity description is one with explicit stringy sources.
Indeed, it is well-known that typical two-charge fuzzballs lie right at the breakdown of supergravity. In fact, there are three important radii that are known to coincide: the typical {\it fuzzball radius} $r_{\rm f}$; the {\it entropy radius} $r_{\rm S}$, where the area in Planck units just matches the density of states of the system; and the {\it breakdown radius} $r_{\rm b}$, beyond which supergravity cannot be continued. Historically the D1-D5 fuzzballs were derived by a duality chain from F1-$p$ solutions, whose charges
are the same as those of our II$'$ description. We trace the relation between these descriptions, and we emphasize the distinction between two free orbifold CFTs that arise in the D1-D5 system.
Much of the discussion of two-charge fuzzballs focuses on this final transition radius, and compares fuzzballs with a black hole solution including $\alpha'$ corrections. Our focus is rather on descriptions that are parametrically valid. In search of a more interesting situation, we consider in \S~\ref{sec:Jneq0} states with large angular momentum $J$, for which the naive geometry is a black ring. This geometry breaks down due to large curvature as we approach the ring. We find that, as measured from the ring, the fuzzball and entropy radii again coincide, but the breakdown radius can be larger or smaller, depending on parameters. Thus we identify a regime where the fuzzball description is parametrically valid and physically correct, even though the naive geometry still has small curvature. We suggest that the breakdown of the naive geometry is instead signaled by the entropy radius, beyond which the naive geometry would describe more states than holography allows.
In \S~\ref{sec:discussion} we discuss further directions.
\section{The $J=0$ system}\label{sec:Jeq0}
\subsection{Naive geometry: small black hole}\label{subsec:naivegeo}
Consider the background
\begin{eqnarray}
ds^2_{\rm IIB} &=& \frac{1}{\sqrt{H_1 H_5}}(-dt^2+R^2dy^2) + \sqrt{H_1 H_5} \, dx_4^2 + \sqrt{\frac{H_1}{H_5}}\sqrt{V} dz_4^2 \,,\nonumber\\
e^{\Phi_{\rm IIB}} &=& g \sqrt{\frac{H_1}{H_5}} \,, \nonumber\\
C_2 &=& g^{-1}\left[H_1^{-1} dt\wedge R dy + Q_5 R \cos^2\tilde \theta d\psi \wedge d\phi\right] \,,
\end{eqnarray}
where
\begin{eqnarray}
H_{1} &=& 1+\frac{g N_1}{V r^2} \equiv 1+\frac{Q_1}{r^2} \,,\nonumber\\
H_5&=& 1+\frac{g N_5}{r^2} \equiv 1+ \frac{Q_5}{r^2}\,.
\label{naivemet}
\end{eqnarray}
We work in units such that $\alpha'=1$. The four flat transverse directions $x$ are non-compact, and can be coordinatized as $dx_4^2=d r^2+r^2(d\tilde \theta^2+\sin^2\tilde \theta d\phi^2+\cos^2\tilde \theta d\psi^2)$, where the tildes are included to conform with standard notation~\cite{Mathur:2005zp}. The $T^4$ coordinates
$z$ have period $2\pi$. We consider the case where the $T^4$ is replaced by K3 in Appendix~\ref{app:K3}.
For non-compact $y$, the infrared geometry is $AdS_3 \times S^3 \times T^4$ in Poincar\'e coordinates. If we then identify $y \cong y + 2\pi$, the horizon $r=0$ is a fixed point and becomes a cusp singularity. For the compact theory there are three moduli: the coupling $g$, the circle radius $R$, and the torus volume $V$. In the attractor limit where we ignore the 1's in the harmonic functions, only the modulus $g$ remains. The torus volume flows to the attractor value $V = N_1/N_5$, while $R$ appears only in the combinations $Rr$ and $y/R$. For simplicity we fix $V$ to its attractor value, so that $Q_1 = Q_5 \equiv Q$ and $H_1 = H_5 \equiv H$. We are most interested in the attractor region, but it is useful to keep the harmonic function $H$ general. The background is then given by \eqref{naivemet} with $e^{\Phi_{\rm IIB}}=g$ and $H=1+Q/r^2$, where the 1 drops out in the attractor.
In order for this D1-D5 description to be the correct duality frame asymptotically, we need the coupling and curvature to be small, and the circle and torus to be larger than the string scale. Thus,
\begin{equation}
g < 1\,,\quad Q>1\,,\quad R > 1\,,\quad N_1 > N_5 \,. \label{regime}
\end{equation}
Discussions of this system often begin with a dual F1-$p$ description. In \S~\ref{subsec:fuzzgeos} we will discuss connections with this frame.
\subsection{Into the black onion}\label{subsec:intoonion}
In the fuzzball program, it is argued that for $y$ compact the geometry~(\ref{naivemet}) breaks down even before the singularity, and must be replaced by fuzzball solutions. We wish to ask, is there some signal of this breakdown as we approach the singularity?
While this work was in progress, we learned that this question had already been addressed by Martinec and Sahakian~\cite{Martinec:1999sa}. Since this result does not seem to be widely known, we review their analysis.
Note that the $y$ circle is shrinking, and at a radius $r \sim r_{\rm IIA}= {Q}^{1/2}/R$ it reaches the string scale. In the D1-D5 regime~(\ref{regime}) this is always inside the crossover to the near-horizon region, $r \sim r_{\rm nh} = {Q}^{1/2}$.\footnote{We will encounter a long list of significant radii as we move along. Figure~\ref{fig:radii} gives an overview. Because of the scaling in the attractor region noted above, most radii are proportional to $1/R$.}
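As a one-line check of this estimate (ours; we drop factors of order one and use the attractor form $H \sim Q/r^2$), the proper radius of the $y$ circle in the string frame is
```latex
R_y(r) = \frac{R}{(H_1 H_5)^{1/4}} = \frac{R}{\sqrt{H}} \sim \frac{R\, r}{\sqrt{Q}}\,,
\qquad R_y(r_{\rm IIA}) \sim 1 \;\Longrightarrow\; r_{\rm IIA} \sim \frac{\sqrt{Q}}{R}\,.
```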
This breakdown suggests a $T$-duality along the $y$ circle to a IIA solution, and indeed this extends the range of validity to smaller $r$.
The solution is
\begin{eqnarray}
ds^2_{\rm IIA} &=& -H^{-1} dt^2 + H (d\tilde y^2/R^2+ dx_4^2)+\sqrt{V} dz_4^2\,, \nonumber\\
e^{\Phi_{\rm IIA}} &=& \frac{g\sqrt H}{R} \,,\nonumber\\
C_1 &=& \frac{R}{g H} dt \,,\nonumber\\
C_3 &=& g^{-1} Q R \cos^2\tilde\theta d\psi \wedge d\phi\wedge d\tilde
y \,.
\end{eqnarray}
In the IIA frame, the charges are carried by D0- and D4-branes localized in the $\tilde y$ direction. We are interested in single-particle states, so the branes should be coincident in the $\tilde y$ direction. Unsmearing the sources gives
\begin{equation}
H = \frac{Q}{r^2}\rightarrow \frac{\pi Q}{R}\sum_n \frac{1}{[r^2+(\tilde y-2\pi n)^2 /R^2]^{3/2}} \sim \frac{\pi Q}{R \rho^3 } \,,
\end{equation}
where the normalization is fixed by the large-$r$ behavior. The crossover to the unsmeared solution is at $r \sim r_{\rm u} = 1/R$. In the last line we have given the form as we approach the $\tilde y = 0$ image, where $\rho^2 = r^2 + \tilde y^2/R^2$.
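Both limits of this image sum can be verified numerically; the following is an illustrative script of ours (not from the paper), with $Q=R=1$:

```python
import math

# Illustrative numerical check (ours, with Q = R = 1) of the two limits of
# the image sum: smeared Q/r^2 far away, single-image pi*Q/(R*rho^3) nearby.
def H_unsmeared(r, y, Q=1.0, R=1.0, nmax=4000):
    return (math.pi * Q / R) * sum(
        (r**2 + (y - 2 * math.pi * n)**2 / R**2) ** -1.5
        for n in range(-nmax, nmax + 1))

assert abs(H_unsmeared(30.0, 0.0) * 30.0**2 - 1.0) < 1e-3        # smeared limit
assert abs(H_unsmeared(1e-3, 0.0) * 1e-9 / math.pi - 1.0) < 1e-2  # one image
```

The far-field value reproduces the smeared $Q/r^2$, while near the $\tilde y = 0$ image the single-pole form $\pi Q/R\rho^3$ dominates, with corrections from the other images suppressed by powers of $\rho R$.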
As we continue toward the singularity, the IIA coupling becomes large, suggesting a lift to M theory. If we work with the smeared metric, this occurs at $r \sim r_{\rm M} = g Q^{1/2}/R$. Thus $r_{\rm M}/r_{\rm u} = g Q^{1/2}$. In the D1-D5 regime~(\ref{regime}), $g$ is small and $Q$ is large, but the product $g Q^{1/2}$ is not restricted. If $g Q^{1/2}>1$, the transition to the M theory picture occurs in the smeared regime, at $r \sim r_{\rm M}$. If $g Q^{1/2}<1$, it occurs in the unsmeared regime, at $\rho = \rho_{\rm M} = g^{2/3} Q^{1/3}/R$.
Either way, we end up with the M theory solution
\begin{eqnarray}
ds^2_{\rm M} &=& e^{-2\Phi_{\rm IIA}/3} ds^2_{\rm IIA} + e^{4\Phi_{\rm IIA}/3}(dx_{11}+ C_1)^2 \nonumber\\
&=& \left(\frac{R^2}{g^2 H}\right)^{1/3}\left[-H^{-1} dt^2 + H(d\tilde y^2/R^2+dx_4^2)+\sqrt{V}dz_4^2\right] + \left(\frac{g^2 H}{R^2}\right)^{2/3}\left( dx_{11} + \frac{R}{gH}dt\right)^2 \,,
\nonumber\\[3pt]
A_3 &=& C_3 \,,
\label{mmet}
\end{eqnarray}
(here $x_{11}$ denotes the M direction, and the units are such that the M theory Planck scale is 1) which has $p_{11}$ and wrapped M5 charges.
As we proceed to smaller $r$, both the transverse $S^3$ and the $T^4$ may shrink. The $S^3$ metric is proportional to $r^{2/3}$ in the smeared regime but constant $\rho^0$ in the unsmeared regime, the latter property following from the conformal behavior of the M5 solution. One can check that the $S^3$ radius never falls below the coincident M5-brane value $N_5^{1/3}$, so this never leads to a breakdown of the solution. For the $T^4$, the radii become Planckian when $H = R^2 V^{3/2} /g^2$. In the smeared solution this is at $r_{\rm II'} = gQ^{1/2} /V^{3/4}R = r_{\rm M}/V^{3/4}$. In the unsmeared solution it is at $\rho_{\rm II'} = g^{2/3} Q^{1/3}/V^{1/2} R$. If $r_{\rm II'} > r_{\rm u}$ the M theory solution breaks down in the unsmeared regime at $r_{\rm II'}$, otherwise it breaks down at $\rho_{\rm II'}$.
In order to extend the solution further, we must first reduce to IIA along one of the $T^4$ directions. The other three torus radii remain small, so a $T$-duality along these is needed next. This leaves the IIB coupling large, so a further $S$ duality is needed. The net result of this $STS$ transformation is a parametrically valid type II description
\begin{eqnarray}
ds^2_{\rm II'} &=& V\left[d{x_{11}}^2+\frac{2R}{gH} dt dx_{11} +\frac{R^2}{g}\left(d\tilde y^2/R^2 +dx_4^2\right)\right]+ d \tilde z_3^2\,, \nonumber\\
e^{\Phi_{\rm II'}} &=& \frac{R V^{3/4}}{g H^{1/2}} \,, \nonumber\\
B_2^{\rm II'} &=&
\frac{R^2 V}{gH} dt\wedge dx_{11} \,. \label{ii'}
\end{eqnarray}
In this solution one of the original torus directions has become the M direction, while $x_{11}$ has emerged as a new periodic direction. The three $\tilde z$-circles remaining from the original $T^4$ are now string-sized. We therefore label this solution simply as II$'$, since it is midway between the IIA and IIB descriptions. The charges are F-string winding in the 11-direction and $p_{11}$.
In this final form, the curvature becomes large at $r_{\rm b} = \rho_{\rm b} = g /V^{1/2} R$. This is inside the unsmearing radius $r_{\rm u}$, so it is $\rho_{\rm b}$ that matters. When curvature becomes large, no further string duality can save us. However, we note that the II$'$ description and its breakdown are very similar to those of the supergravity description of the D1-brane in Ref.~\cite{Itzhaki:1998dd}. There, the final supergravity description is in terms of F-strings, and it is argued that dynamics at smaller $r$ (lower energy) is given by the long string CFT identified in Refs.~\cite{Motl:1997th,Banks:1996my,Dijkgraaf:1997vv}.
We expect the same to hold here as well, although the additional momentum charge means that we are looking at excited states in this theory.
This conjecture is in keeping with the general expectation that when the curvature becomes large while the string coupling goes to zero, as it does in the solution~(\ref{ii'}), one should look for a weakly coupled CFT description. The leading twist interaction in the CFT is irrelevant~\cite{Dijkgraaf:1997vv}, so that the coupling continues to go to zero in this regime.
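The ordering of the crossover radii encountered above can be verified with a throwaway numerical check (our illustrative moduli, chosen to satisfy the regime~(\ref{regime}); all radii are the smeared ones):

```python
import math

# Numerical sanity check (our illustrative moduli, obeying g<1, Q>1, R>1)
# of the ordering of the smeared crossover radii in the onion.
def radii(g, R, Q, V):
    return {
        "r_nh":  math.sqrt(Q),                       # near-horizon crossover
        "r_IIA": math.sqrt(Q) / R,                   # y-circle hits string scale
        "r_u":   1.0 / R,                            # unsmearing of the sources
        "r_M":   g * math.sqrt(Q) / R,               # IIA coupling grows large
        "r_IIp": g * math.sqrt(Q) / (V**0.75 * R),   # torus becomes Planckian
        "r_b":   g / (math.sqrt(V) * R),             # II' curvature breakdown
    }

r = radii(g=0.5, R=10.0, Q=100.0, V=4.0)
assert r["r_nh"] > r["r_IIA"] > r["r_M"] > r["r_IIp"] > r["r_b"]
# r_M / r_u = g*sqrt(Q) is unrestricted: the M transition can occur in
# either the smeared or the unsmeared regime.
assert radii(0.9, 10.0, 100.0, 4.0)["r_M"] > radii(0.9, 10.0, 100.0, 4.0)["r_u"]
assert radii(0.01, 10.0, 100.0, 4.0)["r_M"] < radii(0.01, 10.0, 100.0, 4.0)["r_u"]
```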
The full picture is summarized in Figure~\ref{fig:radii}. Martinec and Sahakian do not restrict to the asymptotic D1-D5 regime~(\ref{regime}) and so cover a wider range of phases (Ref.~\cite{Martinec:1999sa}, Fig.~4). Note also that they use different variables for the axes.
\begin{figure}[!ht]
\begin{center}
\vspace {-5pt}
\includegraphics[width=\textwidth]{Onion.pdf}
\end{center}
\vspace {-10pt}
\caption{Domains of duality frames, on a log-log plot of radius and coupling. The dashed line divides smeared geometries (above) from unsmeared (below).
}
\label{fig:radii}
\end{figure}
For such non-conformal branes~\cite{Itzhaki:1998dd}, the physics at a given scale or temperature is governed by the weakly coupled description at the corresponding holographic radius. For example, at the lowest energies the weakly coupled field theory is the appropriate description, as it is for D$p$-branes with $p>3$.
\subsection{Fuzzball geometries}\label{subsec:fuzzgeos}
\subsubsection{Fuzz and the onion}
A more general class of two-charge geometries is characterized by a curve $\vec{F}(v)$ in the non-compact $\R^4$~\cite{Lunin:2001jy}:
\begin{eqnarray}
ds^2 &=& \frac{1}{\sqrt{H_1 H_5}}\left[-(dt+A)^2+(Rdy+B)^2\right]+\sqrt{H_1H_5}dx_4^2+\sqrt{\frac{H_1}{H_5}}\sqrt{V}dz_4^2
\,,\nonumber\\
e^{\Phi} &=&g\sqrt{\frac{H_1}{H_5}}
\,,\nonumber\\
C_2 &=& g^{-1} \left[ H_1^{-1} (dt+A)\wedge (R dy+B) +\zeta\right]
\,, \label{fuzzmet}
\end{eqnarray}
where the harmonic functions are
\begin{eqnarray}
H_5 &=& 1 + \frac{Q_5}{L}\int_0^L \frac{dv}{|\vec{x}-\vec{F}(v)|^2}
\,,\nonumber\\
H_1 &=& 1 + \frac{Q_5}{L}\int_0^L \frac{|\dot{\vec{F}}|^2dv}{|\vec{x}-\vec{F}(v)|^2}
\,,\nonumber\\
A^i &=& \frac{Q_5}{L}\int_0^L\frac{\dot F^i dv}{|\vec{x}-\vec{F}(v)|^2} \,, \label{fuzzharm}
\end{eqnarray}
with $L = \frac{2\pi Q_5}{R}$.\footnote{The range $L$ is a vestige of the original derivation of these solutions and does not have particular significance.} The remaining quantities are defined via $dB = \star_4 dA$, $d\zeta = -\star_4 dH_5$.
To be precise, this solution describes only oscillations in the transverse directions. The complete solution with oscillations in the torus directions is given in Refs.~\cite{Lunin:2002iz,Kanitscheider:2007wq}. It is slightly more complicated in form, but qualitatively similar, and the same estimates of radii apply.
At $r > |\vec{F}|$ these solutions go over to the naive geometry~(\ref{naivemet}), with
\begin{equation}
Q_1 = \frac{Q_5}{L}\int_0^L {|\dot{\vec{F}}|^2dv} \,.
\end{equation}
Expanding $\vec{F}$ in harmonics,
\begin{equation}
\vec{F} = \sum_{m=1}^\infty \vec{F}_m e^{2\pi i m v/L} + {\rm c.c.} \,,
\end{equation}
this becomes
\begin{equation}
2\sum_{m=1}^\infty m^2 |\vec{F}_m|^2 = \frac{Q_1 Q_5}{R^2} \label{fsum}
\end{equation}
or
\begin{equation}
\frac{2\ V R^2}{g^2} \sum_{m=1}^\infty m^2 |\vec{F}_m|^2 = N_1 N_5 . \label{sumrule}
\end{equation}
This last form is compatible with the quantization condition
\begin{equation}
|\vec{F}_m|^2 = \frac{g^2 n_m}{2mV R^2} = \frac{r^2_{\rm b} n_m}{2m} \ \ \Rightarrow\ \ \sum_{m=1}^\infty m n_m = N_1 N_5\,,
\label{quant}
\end{equation}
which can be derived either by duality from the F1-$p$ system~\cite{Lunin:2001jy} or by quantization of the D1-D5 solution~\cite{Rychkov:2005ji}. Note that the breakdown radius $r_{\rm b}$ is the same as the parameter $\mu$ in the literature, meaning that $r_{\rm b}$ maps to the string length in the F1-$p$ frame.
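That the quantization condition~(\ref{quant}) solves the sum rule~(\ref{sumrule}) is an algebraic identity, but it is cheap to verify numerically (hypothetical occupation numbers $n_m$ and moduli, for illustration only):

```python
# Throwaway check that the quantization condition solves the sum rule
# (hypothetical occupation numbers n_m and moduli, for illustration only).
g, V, R = 0.3, 2.0, 5.0
n = {1: 7, 3: 4, 10: 2}          # arbitrary occupations of harmonics m
F2 = {m: g**2 * nm / (2 * m * V * R**2) for m, nm in n.items()}  # |F_m|^2
lhs = (2 * V * R**2 / g**2) * sum(m**2 * F2m for m, F2m in F2.items())
N1N5 = sum(m * nm for m, nm in n.items())   # = 39 here
assert abs(lhs - N1N5) < 1e-9
```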
For a solution with average harmonic $\overline{m}$, the sum~(\ref{fsum}) implies that
\begin{equation}
|\vec{F}| \sim \frac{\sqrt{Q_1 Q_5}}{\overline{m}R} = \frac{\sqrt{N_1N_5}}{\overline{m}} r_{\rm b} \equiv r_{\overline{m}} \,.
\end{equation}
As long as $r_{\overline{m}} > r_{\rm b}$ this should be a valid supergravity solution. This translates to $\overline{m} < \sqrt{N_1N_5}$. Note that $r_1 > r_{\rm IIA}$, so the largest solutions are described in the original IIB D1-D5 frame. The ratio $r_1/r_{\rm nh}$ is of order $Q^{1/2}/R$. In the asymptotic regime~(\ref{regime}) this can be either large or small, but we are usually interested in scaling up the charges with other parameters held fixed. In this case the $\overline{m} \sim 1$ solutions extend into the flat Minkowski region.
As $\overline{m}$ decreases, the parametrically valid description of the state moves among the IIA, M, and II$'$ frames. Since the $y$ and $z$ directions remain flat in the fuzzball solutions, it is straightforward to dualize them in the same way as for the naive solution, including the unsmearing; we do so in Appendix~\ref{app:fuzzyonion}. For $\overline{m} > \sqrt{N_1N_5}$ the states are described by the low energy CFT rather than supergravity. As $\overline{m} \to \infty$, the fuzzball solution approaches the naive solution, although the quantization condition puts the limit $m \leq N_1 N_5$ on the highest Fourier mode of $\vec{F}$.
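The migration of the valid description with $\overline{m}$ can be packaged as a rough classifier (a sketch of ours: illustrative moduli, smeared boundary radii, parametric boundaries only; unsmearing corrections are ignored):

```python
import math

# Rough classifier (our sketch): which duality frame gives the parametric
# description of a fuzzball with average harmonic mbar?  Illustrative
# moduli; smeared crossover radii only, unsmearing corrections ignored.
def frame(mbar, g=0.5, R=10.0, N1=400, N5=100):
    V = N1 / N5                  # attractor value of the torus volume
    Q = g * N5                   # Q1 = Q5 = Q at the attractor point
    r_b = g / (math.sqrt(V) * R)              # II' curvature breakdown
    r_mbar = math.sqrt(N1 * N5) * r_b / mbar  # fuzzball size r_{mbar}
    inner_boundary = [("IIB", math.sqrt(Q) / R),                  # r_IIA
                      ("IIA", g * math.sqrt(Q) / R),              # r_M
                      ("M",   g * math.sqrt(Q) / (V**0.75 * R)),  # r_II'
                      ("II'", r_b)]
    for name, r_inner in inner_boundary:
        # tiny tolerance guards against round-off at the marginal
        # typical-state point r_mbar = r_b
        if r_mbar > r_inner * (1 + 1e-12):
            return name
    return "CFT"

assert frame(1) == "IIB"    # the largest, mbar ~ 1 solutions
assert frame(10) == "IIA"
assert frame(200) == "CFT"  # typical states: mbar ~ sqrt(N1*N5) = 200
```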
For {\it typical} states, $\overline{m} \sim \sqrt{N_1N_5}$, which defines the fuzzball radius $r_{\rm f} = r_{\sqrt{N_1N_5}} = r_{\rm b}$.
That is, these states live at the boundary of validity between the last supergravity solution and the free CFT. The fact that these fuzzballs live at the boundary of validity of supergravity is well-known in the F1-$p$ frame~\cite{Lunin:2002qf}, and remains true here. The duality cascade that we have found means that the D1-D5 geometries are never good descriptions of these typical fuzzball states. The best supergravity description would be the F1-$p$ solutions~\cite{Callan:1995hn,Dabholkar:1995nc}.
Note that for both the fuzzball and naive D1-D5 geometries, the IIB curvature always remains small compared to the scale set by the tension of a probe F-string, seemingly in contradiction with what we have found.
The point of the duality cascade is that there is a lighter string-like object: a probe KK monopole (charged on the $y$-circle, wrapped on the torus and extended in one transverse direction) which maps to a probe F-string in the II' picture. It has a tension $\tau_{\rm KK} \sim R_y^2(r) V(r)/g^2(r) = R^2V/g^2 H(r)$, which goes to zero as it approaches the singularity and matches the IIB curvature $Q^{-1}$ at $\rho_{\rm b}$, signaling a breakdown.
Before we go on, there is one additional radius of interest. The two-charge system has a known microscopic entropy of order
\begin{equation}
S \sim \sqrt{N_1N_5} \,. \label{entropy}
\end{equation}
Let us compare this to the Bekenstein-Hawking entropy that we would ascribe to a spherical shell surrounding the singularity in the naive geometry. In the smeared regime $r R > 1$ this would be
\begin{equation}
\frac{\mbox{8d area}}{l_p^8}\sim R_{y}\times V_{S^3}\times V_{T^4}\times e^{-2\Phi} = \frac{r}{r_{\rm b} }\sqrt{N_1N_5} \,. \label{area1}
\end{equation}
The area in Planck units is the same in any duality frame; the decomposition~(\ref{area1}) corresponds to the IIB picture.
In the unsmeared regime $r R < 1$ it is
\begin{equation}
\frac{\mbox{8d area}}{l_p^8}\sim V_{S^4}\times V_{T^3}\times L_{11} \times e^{-2\Phi} = \frac{\rho}{\rho_{\rm b}} \sqrt{N_1N_5} \,, \label{area2}
\end{equation}
where we have used the II$'$ description. It is now interesting to ask, at what radius is the holographic value equal to the actual entropy? We see that this is true at $\rho = \rho_{\rm b}\equiv \rho_S$. Again this reproduces a result known from the F1-$p$ frame~\cite{Sen:1994eb,Lunin:2002qf}, that the horizon radius corresponding to the microscopic entropy is comparable to the breakdown radius and the typical fuzzball radius.
It is not clear then whether the fuzzball solutions are any better as a description than the naive geometry.
\subsubsection{From F1-$p$ to D1-D5 and back again}
\label{subsec:f1pduals}
The D1-D5 fuzzball geometries were originally obtained~\cite{Lunin:2001jy} via U-duality from F1-$p$ geometries describing a string with left-moving excitations:
\begin{equation}
\left(\begin{array}{c} F1\\ p\end{array}\right)\underrightarrow{S}
\left(\begin{array}{c} D1\\ p\end{array}\right)\underrightarrow{T_{T^4}}
\left(\begin{array}{c} D5\\ p\end{array}\right)\underrightarrow{S}
\left(\begin{array}{c} NS5\\ p\end{array}\right)\underrightarrow{T_y T_6}
\left(\begin{array}{c} NS5\\ F1\end{array}\right)\underrightarrow{S}
\left(\begin{array}{c} D5\\ D1\end{array}\right) \,. \label{horiz}
\end{equation}
This relates the F1-$p$ and D1-D5 moduli as
\begin{equation}
g_{\mbox{\scriptsize F1-$p$}} = \left(\frac{ V^{3/4} R}{g }\right)_{\mbox{\scriptsize D1-D5}}\,,\quad R_{\mbox{\scriptsize F1-$p$}}=\left(\sqrt{V}\right)_{\mbox{\scriptsize D1-D5}}\,,\quad V_{\mbox{\scriptsize F1-$p$}}^{1/4} = \left(\frac{\sqrt{V}}{g}\right)_{\mbox{\scriptsize D1-D5}} \,.
\end{equation}
The F1-$p$ solutions describe the physics in a corner of the moduli space where, in terms of the asymptotic D1-D5 moduli, ${ V^{3/4} R}/{g} <1$, ${V} > 1$, and $\sqrt{V}/g > 1$. In this regime the D1-D5 description at infinity breaks down.
It is amusing that the descent into the fuzzball core leads us back to the F1-$p$ duality frame in which the solutions were originally obtained, a sort of ``ontogeny recapitulates phylogeny." Unlike the horizontal duality chain~(\ref{horiz}), the asymptotics are held fixed as we descend. The II$'$ frame in the deep IR is related to the asymptotic IIB frame by
\begin{equation}
\left(\begin{array}{c} D5\\ D1\end{array}\right)\underrightarrow{T_y}
\left(\begin{array}{c} D4\\ D0\end{array}\right)\underrightarrow{S_{11}}
\left(\begin{array}{c} M5 \\ p\end{array}\right)\underrightarrow{S_{6}}
\left(\begin{array}{c} D4\\ p\end{array}\right)\underrightarrow{T_{{789}}}
\left(\begin{array}{c} D1\\ p\end{array}\right)\underrightarrow{S}
\left(\begin{array}{c} F1\\ p\end{array}\right) \,,
\end{equation}
which inverts the horizontal chain:
$
S T_{T^4} S T_{y6} S T_{y} S_{11} S_{6} T_{789} S =1 .$
Examining the II$'$ metric, one finds $R^{11}_{\rm II'} = \sqrt{N_1/N_5} = R^y_{\mbox{\scriptsize F1-$p$}}$, while $R^y_{\rm II'} = g^{-1} \sqrt{N_1/N_5} = V_{\mbox{\scriptsize F1-$p$}}^{1/4}$. The long chain from F1-$p$ to D1-D5 and back again just switches the $(y,x^6)$ circles of the original F1-$p$ picture with the $(x_{11}, y)$ circles of II$'$: the emergent II$'$ description of D1-D5 at low energies matches the F1-$p$ description obtained by moving on the asymptotic moduli space.
\subsubsection{Orbifolds and orbifolds}
The target space of the free CFT is the orbifold $(R^4 \times T^4)^{N_5}/S_{N_5}$. This should not be confused with the orbifold $(T^4)^{N_1 N_5}/S_{N_1 N_5}$ which also appears in the D1-D5 system. The latter is relevant in an entirely different duality frame where $N'_5 = 1$, reached by turning on form fields on the $T^4$. We also note some other differences between these:
\begin{itemize}
\item
For $(R^4 \times T^4)^{N_5}/S_{N_5}$ we are interested in states with $N_1$ left-moving excitations. For $(T^4)^{N_1 N_5}/S_{N_1 N_5}$ we are interested in ground states.
\item
For $(R^4 \times T^4)^{N_5}/S_{N_5}$ we are only interested in the sector with a single long string, because only this corresponds to a single-particle state. For $(T^4)^{N_1 N_5}/S_{N_1 N_5}$ the fractionalized strings are all bound to the D5-branes, so all winding sectors correspond to single-particle states.
\item
For $(R^4 \times T^4)^{N_5}/S_{N_5}$ the twist interaction is irrelevant as noted above. For $(T^4)^{N_1 N_5}/S_{N_1 N_5}$ it is marginal.
\end{itemize}
\subsubsection{Lessons}\label{subsubsec:lessons}
Our conclusion is that the typical fuzzball is at the transition between two descriptions, a supergravity description with stringy sources and a weakly coupled CFT description. There is yet a third description that has been given for this system: the black hole solution with a horizon, which exists when higher derivative terms are included~\cite{Dabholkar:2004yr,Dabholkar:2004dq}. This is usually discussed in systems with half as much supersymmetry, where the $T^4$ is replaced by K3, but as shown in Appendix~\ref{app:K3} the onion structure is the same in this case.\footnote{We thank Nori Iizuka for discussions of the K3 case and the relation between different pictures.} This solution allows a precise counting of supersymmetric states, but like the naive and fuzzball geometries it is on the boundary of its range of validity.
We are primarily interested in regimes where the fuzzball geometries are parametrically valid, and we will find one in \S~\ref{sec:Jneq0}, but here we make a few remarks about the marginal case found above. Ref.~\cite{Sen:2009bm} argues that two-charge systems fall into two classes, those whose description is given by smooth horizonless solutions, and those where it is a black hole from a higher derivative action. The D1-D5 system was argued to be of the first type, but the onion structure shows that, if this classification is correct, then it is of the second type.
The fuzzball description might seem to retain more information by distinguishing individual microstates, but this information may not be meaningful. As argued in~\cite{Sen:2009bm}, interactions mix the BPS states of interest into a larger space of non-BPS states, so that the resulting BPS states may bear little resemblance to their naive form. This phenomenon can be seen for example in the low-energy CFT frame. There is a twist interaction, which mixes the BPS single-long-string sector with non-BPS multi-string states (these are somewhat localized in the transverse directions and so have supersymmetry-breaking $p_\bot$).
However, there is an interesting counterargument. The one-point functions of chiral operators distinguish microstates~\cite{Skenderis:2006ah,Kanitscheider:2006zf}, and these one-point functions are not renormalized~\cite{Baggio:2012rr}.\footnote{In Ref.~\cite{Giusto:2014aba}, it has been shown that these same one-point functions imply that the entanglement entropy distinguishes microstates.}
It is puzzling to reconcile this with the point of view above. Note that in a Haar-random state the one-point functions will be of order $e^{-S/2}$~\cite{Balasubramanian:2007qv}. Curiously, the same is true for Schwarzschild black holes. In thermal systems, variations of the one-point functions from their thermal averages are of order $e^{-S/2}$~\cite{shredder}. However, this implies that the eigenvalues are $O(1)$, and one can find a basis in which the one-point functions are of this size in {\it any} thermal system.
Indeed, a similar basis has been used to argue for the genericity of firewalls, namely the basis in which the Hawking occupation numbers are diagonal~\cite{Bousso:2013wia,Marolf:2013dba}. These would be analogous to number eigenstates for the $\vec F_m$. So the `firewall' basis in these papers seems to be the Schwarzschild equivalent of the two-charge fuzzball states. This parallel is somewhat unexpected, since extremal and non-extremal horizons are in many respects quite different. Clearly it is interesting to contemplate this further.
\section{The $J > 0$ system}\label{sec:Jneq0}
\subsection{Naive geometry: small black ring}
We now focus on fuzzball states having angular momentum $J$ in the 1-2 plane of the transverse space. The maximum value $J_{\rm max} = N_1 N_5$ corresponds to the classical solution~\cite{Maldacena:2000dr,Lunin:2001fv}
\begin{equation}
\vec{F}_{\rm max}=(a \cos \omega v,a \sin \omega v)\,,
\end{equation}
where only the $m=1$ harmonic is excited. Here
\begin{equation}
a = r_1 = \sqrt{Q_1 Q_5}/{R}\,,\quad \omega = 2\pi/L = R/Q_5 \,.
\end{equation}
For near maximal $J$, i.e.\
\begin{equation}
\epsilon \equiv \frac{J_{\rm max} - J}{J_{\rm max}} \ll 1\,,
\end{equation}
most of the excitation goes into the first harmonic. Such a solution can be described by the profile
\begin{eqnarray}
\vec{F} &=& \vec{F}^{(0)} + \delta \vec{F} \,, \nonumber\\
\vec{F}^{(0)} &=& (a_0\cos \omega v,a_0 \sin \omega v)\,,\label{ringprofile}
\end{eqnarray}
with $a_0= a \sqrt{J/J_{\rm max}}$. The sum rule~(\ref{sumrule}) gives
\begin{equation}
\frac{2\ V R^2}{g^2} \sum_{m=1}^\infty m^2 |\delta \vec{F}_m|^2 = \epsilon N_1 N_5 \,. \label{ringsum}
\end{equation}
For typical states, the dominant harmonic is then $\overline{m} \sim \sqrt{\epsilon N_1 N_5 }$. We have $|\delta \vec{F}|/|\vec{F}^{(0)}| \sim \sqrt{\epsilon}/\overline{m} \sim 1/\sqrt{N_1 N_5}$, so the geometry is a fuzzy ring, with thickness much less than its radius.
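A quick numerical restatement of this scaling (our illustrative values of $\epsilon$ and $N_1 N_5$): the relative thickness $|\delta \vec{F}|/|\vec{F}^{(0)}| \sim \sqrt{\epsilon}/\overline{m}$ collapses to $1/\sqrt{N_1 N_5}$ once $\overline{m} \sim \sqrt{\epsilon N_1 N_5}$ is inserted, independently of $\epsilon$:

```python
import math

# Scaling check (illustrative values, not from the paper): for typical
# near-maximal-J states the dominant harmonic is mbar ~ sqrt(eps*N1*N5),
# so |dF|/|F0| ~ sqrt(eps)/mbar = 1/sqrt(N1*N5), independent of eps.
for eps in (1e-6, 1e-2):
    for N1N5 in (10**4, 10**10):
        mbar = math.sqrt(eps * N1N5)   # dominant harmonic of the fuzz
        ratio = math.sqrt(eps) / mbar  # relative thickness of the ring
        assert abs(ratio * math.sqrt(N1N5) - 1.0) < 1e-9
```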
As in the $J=0$ case, we can think of the naive geometry as obtained by taking the $\overline{m} \to \infty$ limit, or equivalently by interpolating the geometry outside the fuzz down to the core of the ring. This gives~\cite{Lunin:2002qf,Balasubramanian:2005qu}
\begin{eqnarray}
H_5 &\approx& 1 + \frac{Q_5}{L}\int_0^L \frac{dv}{|\vec{x}-\vec{F}^{(0)}(v)|^2}
\,,\nonumber\\
H_1 &\approx& 1 + \frac{J_{\rm max}}{J}\frac{Q_5}{L}\int_0^L \frac{|\dot {\vec{F}}^{(0)}|^2dv}{|\vec{x}-\vec{F}^{(0)}(v)|^2}
\,,\nonumber\\
A^i &\approx& \frac{Q_5}{L}\int_0^L\frac{\dot F^{(0)i} dv}{|\vec{x}-\vec{F}^{(0)}(v)|^2} \,,
\end{eqnarray}
which is shown in~\cite{Iizuka:2005uv,Balasubramanian:2005qu} to be a special case of the black ring~\cite{Elvang:2004rt,Bena:2004de,Gauntlett:2004qy}.
Because of the factor of ${J_{\rm max}}/{J}$, the cancellation of singular behaviors that gives rise to a smooth geometry~\cite{Maldacena:2000dr,Lunin:2002iz} no longer occurs, and there is a singularity in the core of the ring. Using ``ring coordinates'' as in~\cite{Elvang:2004rt,Bena:2004de} the flat metric $dx_4^2$ on $\mathbb{R}^4$ is
\begin{equation}
dx_4^2=\frac{a_0^2}{(X-Y)^2} \left[\frac{dY^2}{Y^2-1} +(Y^2-1) d\psi^2 + \frac{dX^2}{1-X^2} +(1-X^2) d\phi^2\right]\,, \label{fullsol}
\end{equation}
and $\mathbb{R}^4$ is foliated by surfaces of constant $Y$ with topology $S^1 \times S^2$. The coordinates $X,Y$ take values in the range $-1 \leq X \leq 1$ and $-\infty < Y \leq -1$ and $\psi,\phi$ are polar angles in two orthogonal planes in $\mathbb{R}^4$ with period $2\pi$. The angle $\psi$ is along the ring and the ring singularity is located at $Y=-\infty$.
In terms of the ring coordinates we have
\begin{equation}
H_{1}=1+\frac{Q_{1}}{\Sigma}\,, \quad H_{5}=1+\frac{Q_{5}}{\Sigma}\,, \quad \text{where} \quad \Sigma=\frac{2a_0^2}{X-Y}\,,
\end{equation}
and
\begin{equation}
A_\psi = \frac{R}{2} (1+Y)\,, \quad B_\phi= \frac{R}{2} (1+X)\,,\quad \zeta_{\psi \phi} = \frac{Q_5}{2}\left[Y-\frac{1-Y^2}{X-Y}\right]\,.
\end{equation}
In the near-ring limit it is useful to switch from the ring coordinates $X,Y$ to $\theta,x_\bot$ :
\begin{equation}
X\approx -\cos\theta\,, \qquad 1+Y \approx -\frac{a_0}{x_\bot}\,,
\end{equation}
where the angle coordinate $\theta$ combines with $\phi$ to form an $S^2$ and $x_\bot$ is the radial coordinate transverse to the ring. The ring singularity is now located at $x_\bot=0$. The leading behaviors (simplified again to $Q_1 = Q_5 = Q$) are
\begin{equation}
H_5 = H_1 \approx \frac{R}{2cx_\bot} \,,\quad
A_\psi \approx -\frac{Q c}{2x_\bot }\,,\quad
B_\phi \approx \frac{R}{2}(1-\cos\theta) \,,\quad
\zeta_{\psi \phi} \approx - \frac{Q}{2}(1-\cos\theta)\,,
\end{equation}
where we have introduced $c = \sqrt{J/J_{\rm max}}= \sqrt{1-\epsilon}$.
The naive near-ring metric becomes
\begin{eqnarray}
ds_{\rm near}^2 &\approx & \frac{2 c x_\bot}{R} \Big[-\Big(dt-\frac{Qc}{2 x_\bot} d\psi \Big)^2 + R^2 \left(d{y}+\frac{1-\cos\theta}{2} d\phi\right)^2\Big] \nonumber\\
&&+ \frac{Rc}{2x_\bot} \Big[ dx_\bot^2 + x_\bot^2 (d\theta^2 +\sin^2\theta d\phi^2) \Big]+
\frac{cQ^2}{2Rx_\bot} d\psi^2 + \sqrt{V} dz_4^2\,\,.\label{nearring}
\end{eqnarray}
For $c=1$ this is smooth at $x_\bot = 0$, but for $c < 1$ it becomes singular there. The near-ring dilaton is simply $e^{\Phi} = g$ and the RR potential is given by
\begin{equation}
C_2 \approx 2c x_\bot dt \wedge \left[dy + \frac{1-\cos\theta}{2} d\phi\right] + Q c^2 \left[ dy + \left(1+\frac{1}{c^2}\right) \frac{1-\cos\theta}{2} d\phi\right]\wedge d\psi\,.
\end{equation}
In the near-ring limit there are four local charges corresponding to D1 and D5 branes wrapped on the $y$ circle and the torus, KK monopoles wrapping the $y\psi$ directions and the torus and momentum charge along the $\psi$ direction.\footnote{Note that in the near-ring geometry~(\ref{nearring}) the circumference of the $\psi$-circle seems to go to zero at large~$x_\bot$. However, this occurs outside of the range of validity of~(\ref{nearring}). In the full solution~(\ref{fullsol}) the 1's in the harmonic functions prevent the $\psi$-circle from shrinking.}
\subsection{No black onion rings}\label{subsec:noonionring}
As we proceed toward smaller $x_\bot$, the $y$-circle again shrinks. However, this is merely a coordinate effect: the metric in the $x_\perp$-$y$ plane is just $\mathbb{R}^2$, with $y$ an angular coordinate. A $T$-duality provides a useful description only if the shrinking circle does not cap off smoothly,
as in the $J=0$ metric~(\ref{naivemet}). Hence there is no repetition of the layered structure found before: there is no black onion ring.
The first breakdown of the naive geometry~(\ref{nearring}) is due to the divergence of the curvature, because of the uncanceled $1/x_\bot$ in $g_{\psi\psi}$ and the squashing of the Hopf fibration. The curvature invariant is calculated to be
\begin{equation}
R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} = \frac{22}{R^2 x_\bot^2} \epsilon^2\,.
\end{equation}
This defines the breakdown radius~$x_{\bot\rm b} = \epsilon/R$.
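Parametrically, the breakdown radius follows from demanding that the curvature invariant above reach order one in string units ($\alpha'=1$). The following sketch (plain Python; the values of $R$ and $\epsilon$ are purely illustrative, and the ${\cal O}(1)$ factor $\sqrt{22}$ is kept explicitly although it is dropped in the parametric estimate $x_{\bot\rm b}=\epsilon/R$):

```python
import math

# Illustrative values in string units alpha' = 1 (not from the paper's data)
R, eps = 10.0, 0.01

# Curvature invariant R_{mnrs} R^{mnrs} = 22 eps^2 / (R^2 x_perp^2)
def curv_inv(x_perp):
    return 22.0*eps**2/(R**2*x_perp**2)

# Breakdown radius: the invariant reaches order one (string-scale curvature)
x_b = math.sqrt(22.0)*eps/R
print(x_b, curv_inv(x_b))
```

At radii larger than $x_{\bot\rm b}$ the invariant is parametrically small, consistent with the supergravity description being valid there.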
As for the $J=0$ case there are two other radii to compare. From the discussion below Eq.~(\ref{ringsum}) it follows that the fuzzball radius is
\begin{equation}
x_{\bot\rm f} \sim r_1/\sqrt{N_1 N_5} = g/R\sqrt V \,.
\end{equation}
To obtain the entropy radius, we note that the area in Planck units of a torus surrounding the ring is
\begin{equation}
\frac{\mbox{8d area}}{l_p^8}\sim L_{\psi}\times L_{y} \times L_{S^2}\times L_{T^4} \times e^{-2\Phi} \sim Q \sqrt V x_\bot R \sqrt{\epsilon}/g^2
\,. \label{arearing}
\end{equation}
Equating this to the entropy $\sqrt{\epsilon N_1 N_5}$, we obtain $x_{\bot S} = x_{\bot\rm f} = g/R\sqrt V$.
The matching of the fuzzball and entropy radii for the ring has been noted previously~\cite{Lunin:2002qf}. But unlike the $J=0$ case considered above, the breakdown radius differs from these:
\begin{equation}
\frac{x_{\bot \rm b}}{x_{\bot {\rm f}}} = \frac{x_{\bot \rm b}}{x_{\bot S}} = \frac{\epsilon \sqrt{V}}{g} \,.
\end{equation}
This ratio can be either large or small.
The interesting case is when ${x_{\bot {\rm f}, S} \gg x_{\bot \rm b}}$: the fuzzballs appear at a radius where the curvature is still small.\footnote{
The curvature is smaller than the $1/\mu^2$ that might have been expected from the curvature in the original F1-$p$ frame ($\mu$ is defined below Eq.~(\ref{quant})). This happens because terms arising originally from $B_{\mu\nu}$ combine with the metric to produce a smoother Hopf-fibered metric. In the parameter regime where the F1-$p$ duality frame applies, the curvature becomes stringy and there is a higher-derivative black hole solution~\cite{Dabholkar:2006za}. }
Thus they are good supergravity solutions, and give a parametrically valid description of the states in this regime. It is interesting to ask whether the naive geometry shows any signs of this premature breakdown.
For comparison, in the enhan\c{c}on~\cite{Johnson:1999qt} and the ${\cal N}=1^*$ geometries~\cite{Polchinski:2000uf}, singularities are resolved by branes expanding out to radii where the naive curvature is small. In these cases, brane probes give an indication of this: if one tries to add branes to the singularity, they feel a repulsive potential at radii where the curvature is still small. This does not seem to be the case for the black ring: one can consider atypical solutions with larger harmonics, and these can approach the ring much more closely. In the Klebanov-Tseytlin geometry~\cite{Klebanov:2000nc}, resolved in supergravity~\cite{Klebanov:2000hb}, a flux takes an unphysical negative value at finite radius; nothing analogous happens here.
The signal of the breakdown of the naive geometry for the black ring seems to be the entropy radius. If the naive geometry were valid, we could consider a torus thinner than $x_{\bot S}$, and the number of quantum states contained within would be larger than the exponential of the Bekenstein-Hawking entropy for the torus. It is natural to conjecture that this cannot happen: that if a system has a Hilbert space of dimension~${\cal D}$, then the states must be distinguishable at a radius where a surrounding surface has area $\log \cal D$, in Planck units.
For ${x_{\bot {\rm f}, S} \ll x_{\bot \rm b}}$, we have not yet found a good description.
\section{Discussion}\label{sec:discussion}
Our study of two-charge fuzzballs has led to some surprises.
For $J=0$, we find that the appropriate duality frame depends on the size of the fuzzball state, which is determined by the average harmonic $\overline{m}$. For typical states, the best supergravity description is not in terms of smooth D1-D5 solutions but rather has stringy sources. We emphasize the importance of three radii: the radius of the typical fuzzball, the radius where the transverse area is equal to the microscopic entropy, and the radius where the curvature approaches the string scale. For the two-charge system, these three radii agree, meaning in particular that the supergravity description is beginning to break down for typical states. This triple agreement is well-known in the original F1-$p$ duality frame; it is therefore unsurprising to find it here since the II$'$ frame with F1-$p$ charges is actually the correct duality frame for the typical fuzzball.
Fuzzballs with other values of $\overline{m}$ are parametrically valid in one of the supergravity pictures, or in the free \mbox{CFT}. These descriptions accurately capture dynamical behavior and excited states, not just BPS properties.
For three-charge black holes the entropy is $S_{\mbox{\scriptsize 3-charge}} \sim \sqrt{N_p N_1 N_5}$. When $N_p \ll N_1, N_5$, the geometry resembles the two-charge geometry at large radius. It begins to differ at the entropy radius~(\ref{area1}, \ref{area2}) that would correspond to $S_{\mbox{\scriptsize 3-charge}}$. This is
\begin{equation}
r_{\mbox{\scriptsize 3-charge}}(N_p) \sim \sqrt{N_p}\, r_{\rm b} \,.
\end{equation}
We see that the correct description of these solutions can be any of IIB, IIA, M, or II$'$, depending on $N_p$.
For $J \neq 0$, we have found a regime near $J_{\rm max}$ where the fuzzball solutions are of low curvature. It is interesting that the naive solution gives no direct indication of breakdown at the corresponding radius. The curvature is small, and probe branes see no breakdown. The key indicator seems to be the entropy radius: if the naive geometry were the correct description down to smaller radii, there would not be room for all the microstates. This leads us to conjecture that if some sets of microstates give rise to a common geometry, then this geometry must break down when the transverse area is of order the entropy in Planck units.
If we apply this to the Schwarzschild geometry in a naive way, the entropy radius $r_{\rm S}$ is the Schwarzschild radius $r_{\rm s}$. If we pass through this radius into the interior where $r < r_{\rm s}$, there are then too many microstates unless we begin to see deviations from the Schwarzschild geometry: this is the fuzzball proposal. Of course it is a speculation to extend such a principle from the two-charge geometry to Schwarzschild, but we have noted other parallels in \S~\ref{subsec:fuzzgeos}.\\%\S2.3.4.
{\bf Acknowledgments}\\
We would like to thank J.~Maldacena, E.~Martinec, and S.~Mathur for helpful discussions. We have also benefited from discussions with N.~Iizuka, K.~Skenderis, M.~Taylor and other participants at the Aspen Center for Physics, supported in part by National Science Foundation Grant No. PHYS-1066293.
F.C. is supported by National Science Foundation Grant No. PHY11-25915. B.M. is supported by the NSF Graduate Research Fellowship Grant No. DGE-1144085. J.P. is supported by National Science Foundation Grant Nos. PHY11-25915 and PHY13-16748. A.P. is supported by National Science Foundation Grant No. PHY12-05500.
\section{INTRODUCTION}
\label{sec:intro}
Various experimental observations over the last few decades have conclusively established the robustness of the Standard Model (SM). Nonetheless, a few issues demonstrate the presence of physics beyond the SM, for example, the nature and existence of dark matter (DM) \cite{Zwicky:1937zza, Rubin:1970zza, Clowe:2003tk,Bertone:2004pz,ArkaniHamed:2008qn,Dodelson:1993je}, small but non-vanishing neutrino masses \cite{RoyChoudhury:2019hls,Fukuda:1998mi,Aghanim:2018eyx}, the observed baryon asymmetry of the Universe \cite{Sakharov:1967dj,Kolb:1979qa, Davidson:2008bu,Buchmuller:2004nz,Strumia:2006qk}, the origin of the flavor structure, etc. Understanding the nature of physics beyond the Standard Model (BSM) therefore becomes inescapable, and in this context symmetry is expected to play a significant role, e.g., in providing an appropriate mechanism for the tiny neutrino masses,
the stability of DM, constraining the flavor structure, and so on. It is thus intriguing to build models beyond the SM adopting new symmetries.
\vspace{2mm}
The Scotogenic model proposed by Ma \cite{Ma:2006km,Ma:2009gu} is probably the simplest model that generates small neutrino masses at one-loop level and simultaneously accounts for dark matter (either inert scalar or fermionic); see for example the many works in the literature \cite{LopezHonorez:2006gr,Gustafsson:2007pc,Dolle:2009fn, Suematsu:2009ww, Schmidt:2012yg, Singirala:2016kam} and references therein. Various other works have realized neutrino mass at one loop \cite{Restrepo:2019ilz,Babu:2019mfe,Chen:2019okl,Ma:2019yfo,Nomura:2019lnr}. Further, the pioneering application of modular flavor symmetries to the quark and neutrino sectors \cite{Feruglio:2017ieh,Feruglio:2017spp,King:2020qaj} leads to highly predictive flavor structures. The basic idea behind using modular symmetry is to nullify, or at least minimize, the need for flavon fields other than the modulus $\tau$; several recent effective models based on modular symmetry \cite{King:2017guk,Altarelli:2010gt,Ishimori:2010au,King:2015aea} illustrate this point. The flavor symmetry is broken when the complex modulus $\tau$ acquires a VEV, so the perplexing issue of vacuum alignment is avoided; the only requirement is some mechanism that fixes the modulus $\tau$. This has revived the possibility that modular symmetries are symmetries of the extra-dimensional space-time, with the Yukawa couplings dictated by their modular weights \cite{Criado:2018thu}: the couplings are modular forms, i.e., holomorphic functions of $\tau$, and transform non-trivially under a non-Abelian discrete flavor symmetry \cite{Altarelli:2010gt}. This can compensate for, or entirely remove, the flavon fields otherwise needed to understand the flavor structure.
In this framework many groups have been explored, i.e., models based on the modular groups $A_4$ \cite{Abbas:2020qzc,King:2020qaj,Wang:2019xbo,Lu:2019vgm, Kobayashi:2019gtp,Nomura:2019xsb,Behera:2020sfe}, $S_4$ \cite{Penedo:2018nmg,Gui-JunDing:2019wap,Liu:2020akv,Kobayashi:2019xvz,Wang:2019ovr}, $A_5$ \citep{Ding:2019zxk,Novichkov:2018nkm}, larger groups \cite{Kobayashi:2018wkl}, various other modular symmetries and the double covering of $A_4$ \cite{Nomura:2019lnr,Ma:2015fpa,Mishra:2019oqq,Novichkov:2020eep}, with predictions for the masses, mixing \cite{King:2013xba,King:2009fk}, and CP phases of quarks and/or leptons. \vspace{1mm}
In this paper we construct a minimal scotogenic model~\cite{Avila:2019hhv,Dasgupta:2019rmf,Ma:2012ez,Bouchand:2012dx,Fraser:2015mhb,Rojas:2018wym,Hagedorn:2018spx,Kitabayashi:2018bye,Pramanick:2019qpg,Tang:2017rhv} based on modular $A_4$ symmetry, in which neutrino masses are generated at one-loop level and a stable DM candidate is provided. The minimal scotogenic structure is realized by using modular forms of higher weight, which can be expressed in terms of the weight-2 triplet Yukawa couplings. Thus, the field content and the structure of the model are much simpler than in previous models~\cite{Nomura:2019jxj,Okada:2019mjf}. Our model encompasses two different sets of SM-singlet heavy neutrinos, $N_{Ri}$ and $S_{Li}$ $(i=1,2,3)$, which transform as triplets under $A_4$ with modular weights $k_I=-1$ and $k_I=1$ respectively. Likewise, the inert Higgs doublet is assigned a non-zero modular weight, $k_I=-2$. Interestingly, the modular weights mimic an additional $Z_2$ symmetry; hence it is not necessary to impose a $Z_2$ symmetry by hand to construct the scotogenic model and ensure the stability of the DM.
The layout of this paper is as follows. In Sec. \ref{sec:realization} we outline our model framework with the discrete $A_4$ modular flavor symmetry, whose appealing feature is a simple mass structure for the charged and neutral leptons with two types of sterile neutrinos. We then provide a brief discussion of the generation of light neutrino masses and mixing in Sec. \ref{sec:radiative}. In Sec. \ref{sec:analysis} a numerical correlational study between the neutrino-sector observables and the input model parameters is established. Comments on the lepton flavour violating decay $\mu \to e \gamma$ and the muon $g-2$ anomaly are presented in Sec. \ref{sec:comment}. Further, Sec. \ref{sec:dark} comprises the discussion of fermionic dark matter, followed by our conclusions in Sec. \ref{sec:con}.
\section{MODEL FRAMEWORK}
\label{sec:realization}
Here, we introduce the model framework, investigating the impact of $A_4$ modular symmetry on neutrino and dark matter phenomenology. For this purpose, the SM particle spectrum is enriched with three right-handed ($N_R$) and three left-handed ($S_L$) heavy fermions. We impose a local $U(1)_X$ symmetry to avoid certain unwanted interactions, along with a scalar singlet $\rho$ to break it spontaneously. The scalar sector is extended with an inert scalar doublet $\eta$ to realize neutrino mass at one loop. The assigned modular weights mimic a $Z_2$ symmetry, playing a vital role in forbidding neutrino mass at tree level and in stabilizing the fermionic dark matter. The representations of the fields of the model under ${ SU(2)_L\times U(1)_Y \times U(1)_X} \times A_4$ and their modular weights are given in Table~\ref{tab:fields-linear}. In addition, the non-trivial transformations of the Yukawa and scalar couplings and their modular weights are furnished in Table~\ref{tab:couplings}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c||c|c|c|}\hline\hline
& \multicolumn{6}{c||}{Fermions} & \multicolumn{3}{c|}{Scalars}\\ \hline \hline
& ~$e_R$~& ~$\mu_R$~ & ~$\tau_R$~& ~$L_L$~& ~$N_R$~& ~$S_L$~& ~$H$~&~ $\eta$ ~&~$\rho$ \\ \hline
$SU(2)_L$ & $1$ & $1$ & $1$ & $2$ & $1$ & $1$ & $2$ &$2$ & $1$ \\\hline
$U(1)_Y$ & $-1$ & $-1$ & $-1$ & $-\frac12$ & $0$ & $0$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $0$ \\\hline
$U(1)_X$ & $1$ & $1$ & $1$ &$1$ & $1$ & $0$ & $0$ &$0$ &$1$ \\\hline
$A_4$ & $1$ & $1'$ & $1''$ & $1, 1^{\prime \prime}, 1^{\prime }$ & $3$ & $3$ & $1$ & $1$ &$1$ \\ \hline
$k_I$ & $-1$ & $-1$ & $-1$ & $-1$ & $-1$ & $1$ & $0$ & $-2$ & $0$\\ \hline
\hline
\end{tabular}
\caption{Particle content of the model and their charges under ${ SU(2)_L\times U(1)_Y\times U(1)_X}\times A_4 $, where $k_I$ is the modular weight.}
\label{tab:fields-linear}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c||c|c|}\hline
{Couplings} & ~{ $A_4$}~& ~$k_I$~ \\\hline
{ $\bm{Y}=(y_1,~y_2,~y_3)$} & ${\bf 3}$ & ${\bf 2}$ \\\hline
$\bm{\lambda_\eta}$ & $\bf{1}$ & $\bf{8}$ \\ \hline
$\bm{\lambda^\prime_{\eta }}$ & $\bf{1}$ & $\bf{4}$ \\ \hline
\end{tabular}
\caption{Transformation of the Yukawa and quartic couplings under $A_4$ symmetry and their corresponding modular weights.}
\label{tab:couplings}
\end{center}
\end{table}
The scalar potential of the model is given by
\begin{eqnarray}
\mathcal{L}_V &=& \mu^2_H (H^\dagger H)+\lambda_H (H^\dagger H)^2+\mu^2_{\rho}(\rho^\dagger\rho)+\lambda_{\rho}(\rho^\dagger\rho)^2+\lambda_{H\rho}(H^\dagger H)(\rho^\dagger\rho) +\lambda_\eta \zeta_2(\eta^\dagger \eta)^2 \nonumber \\
&& +\lambda^\prime_{\eta}\Big[\mu^2_{\eta}(\eta^\dagger \eta)+\zeta_3(H^\dagger H)(\eta^\dagger \eta) +\zeta_4(H^\dagger \eta)(\eta^\dagger H)+ \frac{\zeta_5}{2}((H^\dagger \eta)^2 +~{\rm H.c}) \nonumber\\
&& + \zeta_6(\eta^\dagger \eta)(\rho^\dagger\rho)\Big].
\end{eqnarray}
Here, $H = \left(0 ~~(v+h)/\sqrt{2}\right)^T$ is the SM Higgs doublet, $\eta = \left(\eta^+ ~~(\eta_R+i \eta_I)/\sqrt{2}\right)^T$ denotes the inert doublet, and the complex scalar $\rho = \frac{1}{\sqrt{2}}(v_{\rho}+h_{\rho}+iA_{\rho})$ breaks the local $U(1)_X$ gauge symmetry spontaneously. The would-be Goldstone mode $A_\rho$ is eaten by the gauge boson $Z^\prime$ associated with $U(1)_X$, which attains the mass $M_{Z^\prime} = g_X v_\rho$. In the above potential, the $\zeta_i$'s are free parameters and the scalar coupling $\lambda^\prime_{\eta}$ is the singlet representation of $A_4$ with modular weight 4, which can be expressed in terms of the components of the weight-2 triplet Yukawa couplings \cite{Feruglio:2017spp},
\begin{eqnarray}
\lambda^\prime_{\eta}=y_1^2 +2y_2y_3.
\end{eqnarray}
For simplicity, we avoid $H$--$\rho$ mixing, i.e., we set $\lambda_{H\rho}=0$. The mass spectrum of the scalar sector \cite{Lindner:2016kqk} can be written as follows:
\begin{eqnarray}
&&M_{h}^2 = 2\lambda_H v^2,\nonumber\\
&&M_{\rho}^2 = 2\lambda_\rho v_\rho^2,\nonumber\\
&&M_{\eta^\pm}^2 = \lambda^\prime_{\eta} \left[ \mu^2_{\eta} + \zeta_3 \frac{v^2}{2} + \zeta_6 \frac{v_\rho^2}{2} \right],\nonumber\\
&&M_{\eta_R,\eta_I}^2 = \lambda^\prime_{\eta} \left[\mu^2_{\eta} + (\zeta_3+\zeta_4\pm \zeta_5) \frac{v^2}{2} + \zeta_6 \frac{v_\rho^2}{2}\right].
\end{eqnarray}
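As a quick cross-check of the spectrum above, one may verify numerically that the $\eta_R$--$\eta_I$ splitting reduces to $M^2_{\eta_R}-M^2_{\eta_I}=\zeta_5\lambda^\prime_{\eta}v^2$, the combination that later drives the radiative neutrino mass. A minimal sketch (plain Python, with illustrative parameter values that are not fit to any data):

```python
# Illustrative parameter values (not fit to any data)
v, v_rho = 246.0, 3000.0                       # VEVs in GeV
lam_H, lam_rho, lam_eta_p = 0.13, 0.10, 1.2    # lam_eta_p = A4-singlet lambda'_eta
mu_eta2 = 500.0**2                             # mu_eta^2 in GeV^2
z3, z4, z5, z6 = 0.2, 0.1, 0.05, 0.15          # zeta_i couplings

M_h2    = 2*lam_H*v**2
M_rho2  = 2*lam_rho*v_rho**2
M_etap2 = lam_eta_p*(mu_eta2 + z3*v**2/2 + z6*v_rho**2/2)              # eta^pm
M_etaR2 = lam_eta_p*(mu_eta2 + (z3 + z4 + z5)*v**2/2 + z6*v_rho**2/2)  # eta_R
M_etaI2 = lam_eta_p*(mu_eta2 + (z3 + z4 - z5)*v**2/2 + z6*v_rho**2/2)  # eta_I

# R-I splitting, the combination entering the one-loop neutrino mass
print(M_etaR2 - M_etaI2, z5*lam_eta_p*v**2)
```

The splitting depends only on $\zeta_5$ and $\lambda^\prime_{\eta}$, independently of $\mu_\eta^2$, $\zeta_3$, $\zeta_4$ and $\zeta_6$, which cancel in the difference.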
In order to construct a simplified charged-lepton mass matrix, the three generations of left-handed doublets ($L_{e_L}, L_{\mu_L}, L_{\tau_L}$) are taken to transform as $\bm{1}, \bm{1}^{\prime\prime}, \bm{1}^{\prime}$ respectively under the $A_4$ symmetry, each with modular weight $k_I=-1$. Analogously, the right-handed charged leptons ($e_R,\mu_R,\tau_R$) transform under $A_4$ as $\bm{1}, \bm{1}^{\prime}, \bm{1}^{\prime\prime}$ and carry modular weight $k_I=-1$.
The SM Higgs is uncharged under the new symmetries, to keep the scenario simple.
The charged leptons interaction Lagrangian is given by
\begin{align}
\mathcal{L}_{M_\ell}
&= y_{\ell_{}}^{ee} \overline{L}_{e_L} H e_R + y_{\ell_{}}^{\mu \mu} \overline{L}_{\mu_L} H \mu_R + y_{\ell_{}}^{\tau \tau} \overline{L}_{\tau_L} H \tau_R
+ {\rm H.c.}. \label{Eq:yuk-Mell}
\end{align}
Following the spontaneous breaking of the electroweak gauge symmetry, the mass matrix for the charged leptons acquires a diagonal structure, and the observed charged-lepton masses are obtained by adjusting the Yukawa couplings. The resulting mass matrix reads
\begin{align}
M_\ell = \begin{pmatrix} y_{\ell_{}}^{ee} v/\sqrt{2} & 0 & 0 \\
0 & y_{\ell_{}}^{\mu \mu} v/\sqrt{2} & 0 \\
0 & 0 & y_{\ell_{}}^{\tau \tau} v/\sqrt{2} \end{pmatrix} =
\begin{pmatrix} m_e & 0 & 0 \\
0 & m_\mu & 0 \\
0 & 0 & m_\tau \end{pmatrix},
\label{Eq:Mell}
\end{align}
where $m_e$, $m_\mu$ and $m_\tau$ are the observed charged lepton masses.
\subsection{Dirac and pseudo-Dirac interaction terms for the neutrinos}
In contrast to the SM leptons, the right-handed (left-handed) heavy fermions are taken as triplets under the $A_4$ modular group with $U(1)_X$ charge $1\,(0)$ and modular weight $k_I=-1\,(+1)$. With these charges the usual Dirac interactions of the neutrinos with the SM Higgs cannot be written. Such interactions require the modular Yukawa couplings, with the transformations given in Table~\ref{tab:couplings}, together with the inert scalar doublet $\eta$. The Yukawa couplings {$\bm{Y}(\tau) = \left(y_{1}(\tau),y_{2}(\tau),y_{3}(\tau)\right)$} are expressed in terms of the Dedekind eta function $\eta(\tau)$ and its derivative, as discussed in the Appendix of \citep{Feruglio:2017spp}.
Hence, the invariant Dirac interaction Lagrangian, which involves the active neutrinos along with the right and left-handed heavy fermions, can be represented in the following forms:
\begin{align}
\mathcal{L}_{D}
&= \alpha_D \overline{L}_{e_L} \widetilde{\eta} (\bm{Y} N_R)_{1} + \beta_D \overline{L}_{\mu_L} \widetilde{\eta} (\bm{Y} N_R)_{1^{\prime}}
+ \gamma_D \overline{L}_{\tau_L} \widetilde{\eta} (\bm{Y} N_R)_{1^{\prime \prime}} + {\rm H.c.}, \label{Eq:yuk-MD}
\end{align}
\begin{align}
\mathcal{L}_{LS}
&= \left[\alpha'_D \overline{L}_{e_L} \widetilde{\eta} (\bm{Y} S_L^c)_{1} + \beta'_D \overline{L}_{\mu_L} \widetilde{\eta} (\bm{Y} S_L^c)_{1^{\prime}}
+ \gamma'_D \overline{L}_{\tau_L} \widetilde{\eta} (\bm{Y} S_L^c)_{1^{\prime \prime}}\right] \frac{\rho}{\Lambda} + {\rm H.c.}. \label{Eq:yuk-LS}
\end{align}
In addition, the $A_4$ and $U(1)_X$ charges of the heavy fermions are assigned in such a way that their usual Majorana mass terms are forbidden. However, mixing between the additional leptons is allowed, and can be written as follows \cite{Behera:2020sfe}
\begin{eqnarray}
\mathcal{L}_{M_{RS}}
&=& \left[\alpha_{NS} \bm{Y} (\overline{S_L} N_R)_{\rm symm} + \beta_{NS} \bm{Y} (\overline{S_L} N_R)_{\rm Anti-symm} \right]\rho^\dagger + {\rm H.c.} \nonumber \\
&=&\alpha_{NS}\big[ y_1(2 \bar S_{L_1} N_{R_1} - \bar S_{L_2} N_{R_3} - \bar S_{L_3} N_{R_2})+y_2(2 \bar S_{L_2} N_{R_2} - \bar S_{L_1} N_{R_3} - \bar S_{L_3} N_{R_1}) \nonumber \\
&&+ y_3(2 \bar S_{L_3} N_{R_3} - \bar S_{L_1} N_{R_2} - \bar S_{L_2} N_{R_1}) \big] \rho^\dagger +\beta_{NS}\big[ y_1( \bar S_{L_2} N_{R_3} - \bar S_{L_3} N_{R_2}) \nonumber\\ &&+y_2( \bar S_{L_3} N_{R_1} - \bar S_{L_1} N_{R_3})+ y_3( \bar S_{L_1} N_{R_2} - \bar S_{L_2} N_{R_1})\big] \rho^\dagger + {\rm H.c.} \label{Eq:yuk-M}
\end{eqnarray}
Here, $\alpha_{NS}$ and $\beta_{NS}$ are free parameters; the first term in (\ref{Eq:yuk-M}) involves the symmetric and the second the anti-symmetric product of $\bar S_L N_R$, corresponding to the $\bm{3_s}$ and $\bm{3_a}$ representations of $A_4$. Using $\langle \rho \rangle = v_\rho/\sqrt{2}$, the resulting mass matrix is found to be
\begin{align}
M_{RS}&=\frac{v_\rho}{\sqrt2}
\left(
\frac{\alpha_{NS}}{3}\left[\begin{array}{ccc}
2y_1 & -y_3 & -y_2 \\
-y_3 & 2y_2 & -y_1 \\
-y_2 & -y_1 & 2y_3 \\
\end{array}\right]
+
\beta_{NS}
\left[\begin{array}{ccc}
0 &y_3 & -y_2 \\
-y_3 & 0 & y_1 \\
y_2 & -y_1 &0 \\
\end{array}\right]
\right).
\label{MRS_matrix}
\end{align}
The mass matrix for the six heavy leptons, in the basis $( N_R, S_L)^T$, can be given as
\begin{eqnarray}
M_{Hf}= \begin{pmatrix}
0 & M_{RS}\\
M^T_{RS} & 0
\end{pmatrix},\label{mrs matrix}
\end{eqnarray}
which upon diagonalization provides three doubly degenerate mass pairs ($M_k$); the diagonalization of $M_{RS}$ in a simplified form is discussed in \cite{Behera:2020sfe}.
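The doubly degenerate spectrum can be seen numerically: for a matrix of the form $\begin{pmatrix}0&M\\ M^T&0\end{pmatrix}$ the eigenvalues are $\pm\sigma_i(M)$, the singular values of $M$. A short sketch (Python/NumPy, with arbitrary illustrative values of $y_i$, $\alpha_{NS}$, $\beta_{NS}$ and $v_\rho$ that are not the result of the fit):

```python
import numpy as np

# Illustrative inputs (not fit to data)
y1, y2, y3 = 1.0, 0.4, 0.2
aNS, bNS, v_rho = 0.3, 0.07, 3000.0

# Symmetric and anti-symmetric pieces of M_RS, as in Eq. (MRS_matrix)
S = (aNS/3)*np.array([[2*y1, -y3, -y2],
                      [-y3, 2*y2, -y1],
                      [-y2, -y1, 2*y3]])
A = bNS*np.array([[0.0,  y3, -y2],
                  [-y3, 0.0,  y1],
                  [ y2, -y1, 0.0]])
MRS = (v_rho/np.sqrt(2))*(S + A)

# 6x6 heavy-lepton mass matrix in the (N_R, S_L) basis
Z = np.zeros((3, 3))
MHf = np.block([[Z, MRS], [MRS.T, Z]])

eig = np.sort(np.abs(np.linalg.eigvals(MHf)))            # |eigenvalues|, ascending
sv  = np.sort(np.linalg.svd(MRS, compute_uv=False))      # singular values of M_RS
print(eig)  # three doubly degenerate pairs, equal to the singular values
```

The physical masses $M_k$ are thus the three singular values of $M_{RS}$, each appearing twice in the spectrum of $M_{Hf}$.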
\section{Radiative Neutrino mass}
\label{sec:radiative}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=45mm,width=70mm]{nmass_linear.pdf}
\caption{Radiatively generated neutrino mass.} \label{fig:radia}
\end{center}
\end{figure}
Since the usual Dirac mass terms of the neutrinos with the SM Higgs are forbidden by the assigned symmetries, the light neutrino masses are generated at one-loop level; the corresponding Feynman diagram is displayed in Fig.~\ref{fig:radia}.
The expression of the neutrino mass from one loop radiative corrections is written as
\begin{equation}
(\mathcal{M_\nu})_{ij} = \sum_k\frac{{(Y_D)}_{ik}{(Y_{LS})}_{jk}\, M_k}{16\pi^2} ~\left[\frac{M^2_{\eta_R}}{M^2_{\eta_R} - M_k^2}~ {\rm ln} \frac{M^2_{\eta_R}}{M_k^2}-\frac{M^2_{\eta_I}}{M^2_{\eta_I} - M_k^2}~ {\rm ln} \frac{M^2_{\eta_I}}{M_k^2}\right]. \label{Eq:nmasss}
\end{equation}
Here, $M_k$ is the mass of the heavy fermion inside the loop, $Y_D$ and $Y_{LS}$ are the Yukawa coupling matrices correspond to the interaction of neutrinos with $N_R$ and $S_L$ respectively and are given by
\begin{align}
Y_D&=
\left[\begin{array}{ccc}
\alpha_D & 0 & 0 \\
0 & \beta_D & 0 \\
0 & 0 & \gamma_D \\
\end{array}\right]
\left[\begin{array}{ccc}
y_1 &y_3 &y_2 \\
y_2 &y_1 &y_3 \\
y_3 &y_2 &y_1 \\
\end{array}\right]_{LR}.
\label{Eq:MD}
\end{align}
\begin{align}
Y_{LS}&= \frac{v_\rho}{\Lambda \sqrt{2}}
\left[\begin{array}{ccc}
\alpha^\prime_D & 0 & 0 \\
0 & \beta^\prime_D & 0 \\
0 & 0 & \gamma^\prime_D \\
\end{array}\right]
\left[\begin{array}{ccc}
y_1 &y_3 &y_2 \\
y_2 &y_1 &y_3 \\
y_3 &y_2 &y_1 \\
\end{array}\right]_{LS}.
\label{Eq:Mls}
\end{align}
With the assumption $M^2_k \ll m^2_0$, where $m^2_0 = (M^2_{\eta_R} + M^2_{\eta_I})/{2}$, and using $\ln(M^2_{\eta_R}/M^2_{\eta_I}) \approx (M^2_{\eta_R} - M^2_{\eta_I})/m^2_0$ for a small splitting, the mass matrix in Eq.~\eqref{Eq:nmasss} reduces to the simplified form
\begin{eqnarray}
({\cal M}_\nu)_{ij}= \frac{\zeta_5 \lambda^\prime_{\eta} v^2}{16 \pi^2 m^2_0} \sum_k (Y_D)_{ik} (Y_{LS})_{kj} M_k,\label{Eq:nmasss2}
\end{eqnarray}
where we have used $M^2_{\eta_R} - M^2_{\eta_I} = \zeta_5 \lambda_{\eta}^\prime v^2$. For specific mass ranges of $M_{\eta_R}$, $M_{\eta_I}$ and $M_k$, this formula can reproduce both the linear and the inverse seesaw \cite{Deppisch:2004fa,Dev:2012sg,Hirsch:2009mx}. The neutrino mass matrix (\ref{Eq:nmasss2}) is numerically diagonalized through the relation $U^\dagger \mathbb{M} U = {\rm diag}(m_1^2, m_2^2,m_3^2)$, where $\mathbb{M} = {\cal M}_\nu {\cal M}_\nu^\dagger$ and $U$ is a unitary matrix.
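As a quick numerical cross-check of this reduction, the following sketch (plain Python; the mass values are purely illustrative) compares the exact one-loop bracket of the radiative mass formula with its approximate form $(M^2_{\eta_R}-M^2_{\eta_I})/m^2_0$ in the regime $M_k^2 \ll m_0^2$ with a small $\eta_R$--$\eta_I$ splitting:

```python
import math

# Illustrative masses in GeV: M_k^2 << m0^2 and a small eta_R - eta_I splitting
M_R2, M_I2 = 600.0**2, 595.0**2
M_k2 = 20.0**2
m02 = 0.5*(M_R2 + M_I2)

def f(M2):
    # one term of the loop bracket in the radiative mass formula
    return M2/(M2 - M_k2)*math.log(M2/M_k2)

exact  = f(M_R2) - f(M_I2)      # exact bracket
approx = (M_R2 - M_I2)/m02      # ln(M_R^2/M_I^2) ~ (M_R^2 - M_I^2)/m0^2
print(exact, approx)
```

For these inputs the two expressions agree at the percent level; the agreement improves as $M_k^2/m_0^2$ and the relative splitting decrease.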
Thus, the neutrino mixing angles can be extracted from the matrix elements of the diagonalizing matrix $U$, through the generic expressions:
\begin{eqnarray}
\sin^2 \theta_{13}= |U_{13}|^2,~~~~\sin^2 \theta_{12}= \frac{|U_{12}|^2}{1-|U_{13}|^2},~~~~~\sin^2 \theta_{23}= \frac{|U_{23}|^2}{1-|U_{13}|^2}. \label{eq:UPMNS}
\end{eqnarray}
Next, we attempt to determine the Jarlskog invariant ($J_{CP}$) as well as the effective Majorana mass parameter ($\langle m_{ee}\rangle$) through the following relations:
\begin{eqnarray}
&&J_{CP} = \text{Im} [U_{e1} U_{\mu 2} U_{e 2}^* U_{\mu 1}^*] = s_{23} c_{23} s_{12} c_{12} s_{13} c^2_{13} \sin \delta_{CP}. \\
&& \langle m_{ee}\rangle=|m_{\nu_1} \cos^2\theta_{12} \cos^2\theta_{13}+ m_{\nu_2} \sin^2\theta_{12} \cos^2\theta_{13}e^{i\alpha_{21}}+ m_{\nu_3} \sin^2\theta_{13}e^{i(\alpha_{31}-2\delta_{CP})}|.
\end{eqnarray}
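The extraction of the mixing angles in Eq.~(\ref{eq:UPMNS}) and the two expressions for $J_{CP}$ can be checked against the standard parametrization of the mixing matrix: building $U$ from input angles reproduces them, and the rephasing-invariant quartet equals the closed form. A sketch (Python/NumPy; the angle values are illustrative, not our fit results):

```python
import numpy as np

def pmns(t12, t13, t23, delta):
    """Standard (PDG-type) parametrization of the lepton mixing matrix."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    ep, em = np.exp(1j*delta), np.exp(-1j*delta)
    return np.array([
        [ c12*c13,                   s12*c13,                  s13*em],
        [-s12*c23 - c12*s23*s13*ep,  c12*c23 - s12*s23*s13*ep, s23*c13],
        [ s12*s23 - c12*c23*s13*ep, -c12*s23 - s12*c23*s13*ep, c23*c13]])

t12, t13, t23, d = 0.59, 0.15, 0.84, 1.2   # illustrative angles in radians
U = pmns(t12, t13, t23, d)

# Mixing angles read back via Eq. (eq:UPMNS)
ss13 = abs(U[0, 2])**2
ss12 = abs(U[0, 1])**2/(1 - ss13)
ss23 = abs(U[1, 2])**2/(1 - ss13)

# Jarlskog invariant: rephasing-invariant quartet vs closed form
J_inv  = np.imag(U[0, 0]*U[1, 1]*np.conj(U[0, 1])*np.conj(U[1, 0]))
J_form = (np.sin(t23)*np.cos(t23)*np.sin(t12)*np.cos(t12)
          * np.sin(t13)*np.cos(t13)**2*np.sin(d))
print(ss12, ss13, ss23, J_inv, J_form)
```

In the numerical scan, the same extraction is applied to the diagonalizing matrix $U$ of ${\cal M}_\nu {\cal M}_\nu^\dagger$ rather than to a parametrized input.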
\section{Numerical Analysis}
\label{sec:analysis}
For constraining the model parameters, we use the current $3\sigma$ limit on neutrino mixing parameters for normal ordering (NO) from global-fit \cite{deSalas:2020pgw,Gariazzo:2018pei,Esteban:2020cvm}, which are given as
\begin{eqnarray}
&&\Delta m^2_{\rm atm}=[2.431, 2.622]\times 10^{-3}\ {\rm eV}^2,~~~~~
\Delta m^2_{\rm sol}=[6.79, 8.01]\times 10^{-5}\ {\rm eV}^2, \nonumber\\
&&\sin^2\theta_{13}=[0.02044, 0.02437],\ ~
\sin^2\theta_{23}=[0.428, 0.624],\ ~
\sin^2\theta_{12}=[0.275, 0.350]. \label{eq:mix}
\end{eqnarray}
The model parameters are chosen so as to fit the current neutrino oscillation data given in Eq.~(\ref{eq:mix}), and are scanned over the following ranges:
\begin{align}
&{\rm Re}[\tau] \in [1,2],~~{\rm Im}[\tau]\in [1,2],~~ \{ \alpha_{D},\beta_{D},\gamma_D \} \in [0.1,1.0],~~\{ \alpha^\prime_{D},\beta^\prime_{D},\gamma^\prime_D \} \in ~[0.1,1.0], \nonumber \\
& \quad \alpha_{NS} \in [0.1, 0.5],\quad \beta_{NS} \in[0.05, 0.1],\quad v_\rho \in \nonumber [10^3,10^4] \ {\rm GeV}, \quad \Lambda \in [10^4,10^5] \ {\rm GeV}.
\end{align}
The parameters are randomly scanned over the above-mentioned ranges; the allowed regions are first constrained by the observed $3\sigma$ ranges of the solar and atmospheric mass-squared differences and further restricted by the bound on the sum of active neutrino masses, $\sum_i m_i < 0.12$ eV \cite{Aghanim:2019ame,Aghanim:2018eyx}. The range of the modulus $\tau$ that validates the model against the experimental results for neutrino masses (NO) is found to be 1\ $\lesssim\ $Re$[\tau]\lesssim$\ 2 and 1\ $\lesssim\ $Im$[\tau]\lesssim$\ 2. Consequently, the modular Yukawa couplings, which are functions of $\tau$ (see the Appendix of \cite{Feruglio:2017spp}), are restricted to rather narrow ranges: 0.99\ $\lesssim\ y_1(\tau)\lesssim$\ 1, 0.1\ $\lesssim\ y_2(\tau)\lesssim$\ 0.75 and 0.1\ $\lesssim\ y_3(\tau)\lesssim$\ 0.25. The behaviour of the Yukawa couplings with respect to the real and imaginary parts of $\tau$ is illustrated in the left and right panels of Fig.~\ref{yuk_reim_tau} respectively. Proceeding further, Fig.~\ref{mix_angles} depicts the variation of the sum of neutrino masses with the mixing angles within their $3\sigma$ regions. The Jarlskog CP invariant is found to be of order ${\cal O}(10^{-2})$; its correlation with the reactor mixing angle is depicted in the left panel of Fig.~\ref{y_jcp}, while the right panel shows the parameter space of the Yukawa couplings consistent with the bound on the sum of active neutrino masses.
Advancing further, the effective neutrinoless double beta decay mass parameter $m_{ee}$ is found to be around $0.06$ eV, as seen in the left panel of Fig.~\ref{M23_mee}, whose right panel shows the interdependence of the Jarlskog invariant and the sum of active neutrino masses. Since we use an $A_4$ singlet coupling $\lambda^\prime_{\eta}$ with $k_I=4$, expressed in terms of the weight-2 triplet Yukawa couplings, we explicitly show its correlation with $y_1$ (left panel) and with $y_2, y_3$ (right panel) in Fig.~\ref{lambda_y123}.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{Rtau_y123.pdf}
\hspace*{0.2 true cm}
\includegraphics[height=50mm,width=75mm]{Imtau_y123.pdf}
\caption{Left panel indicates the interdependence of the modular Yukawa couplings ($y_1,y_2,y_3$) with the real part while right panel presents the imaginary part of modulus $\tau$. }
\label{yuk_reim_tau}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{ss13_nmass.pdf}
\includegraphics[height=50mm,width=75mm]{ss12_nmass.pdf}\\
\vspace*{0.3 true cm}
\includegraphics[height=50mm,width=75mm]{ss23_nmass.pdf}
\caption{Top panels represent the variation of $\Sigma m_i$ with $\sin^2 \theta_{13}$ (left) and $\sin^2\theta_{12}$ (right), while the bottom panel displays its dependence on $\sin^2 \theta_{23}$.}
\label{mix_angles}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{ss13_jcp.pdf}
\hspace*{0.2 true cm}
\includegraphics[height=50mm,width=75mm]{y123_nmass.pdf}
\caption{The left panel shows the correlation between the Jarlskog invariant and the reactor mixing angle, while the right panel reflects the variation of the sum of active neutrino masses with the modular Yukawa couplings.}
\label{y_jcp}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{nmass_mee.pdf}
\hspace*{0.2 true cm}
\includegraphics[height=48mm,width=72mm]{nmass_jcp.pdf}
\caption{The left panel depicts the dependence of the effective mass of neutrinoless double beta decay on the sum of active neutrino masses, while the right panel shows the relation of the Jarlskog invariant with the sum of active neutrino masses.}
\label{M23_mee}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{y1_lambdaHn.pdf}
\hspace*{0.2 true cm}
\includegraphics[height=50mm,width=75mm]{y23_lambdaHn.pdf}
\caption{Left (right) panel displays the correlation of $\lambda_\eta^\prime$, an $A_4$ singlet with modular weight $k=4$, with $y_1$ ($y_2$, $y_3$).}
\label{lambda_y123}
\end{center}
\end{figure}
\section{Comment on LFV Decay ($\mu \rightarrow e \gamma$) and muon {\lowercase{$g-2$}} anomaly}
\label{sec:comment}
\begin{figure}[h!]
\includegraphics[height=45mm,width=70mm]{lfv_linear.pdf}
\caption{Feynman diagram for the LFV rare decays $\ell_\alpha \to \ell_\beta \gamma$ and the muon $g-2$ ($\alpha = \beta =\mu$) in the context of the present model.}\label{lfvfeyn}
\end{figure}
The search for the lepton flavour violating decay mode $\mu \to e \gamma$ plays a pivotal role in the hunt for new physics beyond the SM.
Several experiments pursue this decay mode with improved sensitivity; the current limit on its branching ratio, Br$(\mu\rightarrow e\gamma)< 4.2\times 10^{-13}$, is from the MEG collaboration \cite{TheMEG:2016wtm}. In addition, the measured muon anomalous magnetic moment shows a discrepancy of about $3 \sigma$ with its SM prediction, given as \cite{Tanabashi:2018oca,Dev:2020drf,Blum:2013xva,Bennett:2006fi}
\begin{equation}
\Delta a_\mu = a^{\rm exp}_\mu - a^{\rm SM}_\mu = (26.1 \pm 7.9)\times 10^{-10}.
\end{equation}
In the present framework, the LFV process $\mu \rightarrow e \gamma$ and muon $g-2$ occur at one loop level through standard Yukawa interactions. The Feynman diagram for this is displayed in Fig. \ref{lfvfeyn}. The branching ratio for the rare decay $\ell_\alpha \to \ell_\beta \gamma$ is given as \cite{Chekkal:2017eka}
\begin{equation}
{\rm Br}(\ell_\alpha \rightarrow \ell_\beta \gamma)=\frac{3(4 \pi)^3 \alpha}{4 G_F^2}|A_D|^2\times {\rm Br}(\ell_\alpha \rightarrow \ell_\beta \nu_\alpha \bar{\nu}_{\beta}),
\end{equation}
where $G_F\approx 10^{-5}~{\rm GeV}^{-2}$ is the Fermi constant, $\alpha$ is the electromagnetic fine-structure constant, and $A_D$ is the dipole contribution, expressed as
\begin{equation}
A_D=\sum_i \frac{(Y_D)_{\alpha i}~(Y^*_{LS})_{\beta i}~g(x)}{2(4\pi)^2 M^2_{\eta^\pm}}. \label{yukawaN}
\end{equation}
Here, $Y_D$ and $Y_{LS}$ are the Yukawa coupling matrices of eqns.\,\eqref{Eq:MD} and \eqref{Eq:Mls}, and $g(x)$ is the loop function, with $x=\frac{M^2_{k}}{M^2_{\eta^\pm}}$, expressed as
\begin{equation}
g(x)=\frac{1}{6}\left[\frac{1-2x(3+1.5x+x^2-3x {\rm log}x)}{(1-x)^4}\right].
\end{equation}
For $\alpha=\beta$, the Feynman diagram of Fig. \ref{lfvfeyn} contributes to the muon anomalous magnetic moment, given as
\begin{equation}
\Delta a_\mu=\frac{1}{16 \pi^2}\left[\frac{m^2_\mu}{M^2_{\eta^\pm}}\sum_i (Y_D)_{\mu \mu} (Y^*_{LS})_{\mu \mu}~g(x)\right].
\end{equation}
The muon $g-2$ also receives a contribution from the $Z^\prime$- and $\mu$-mediated loop, which is suppressed due to the large mass difference.
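To make the size of these one-loop expressions concrete, they can be evaluated numerically. The following Python sketch implements $g(x)$ and the branching-ratio formula exactly as written above; the Yukawa combination and the masses are purely illustrative placeholders, not the fitted model parameters:

```python
import math

def g(x):
    # Loop function as quoted in the text (natural logarithm assumed)
    return (1.0 / 6.0) * (1.0 - 2.0 * x * (3.0 + 1.5 * x + x**2
            - 3.0 * x * math.log(x))) / (1.0 - x)**4

def br_mu_e_gamma(AD, br_mu_3body=1.0):
    # Br = 3 (4 pi)^3 alpha / (4 G_F^2) |A_D|^2 x Br(mu -> e nu nubar)
    alpha = 1.0 / 137.036      # fine-structure constant
    GF = 1.166e-5              # Fermi constant, GeV^-2
    return 3.0 * (4.0 * math.pi)**3 * alpha / (4.0 * GF**2) * abs(AD)**2 * br_mu_3body

# Illustrative placeholders: |(Y_D)(Y_LS)*| ~ 1e-4, M_eta = 1 TeV, M_k = 0.8 TeV
M_eta, M_k = 1000.0, 800.0     # GeV
x = (M_k / M_eta)**2
AD = 1e-4 * g(x) / (2.0 * (4.0 * math.pi)**2 * M_eta**2)   # dipole amplitude, GeV^-2
print(f"g(x) = {g(x):.2f}, Br(mu -> e gamma) ~ {br_mu_e_gamma(AD):.2e}")
```

Such a sketch only shows how the branching ratio scales with the couplings and the inert scalar mass; the actual predictions in Figs.~\ref{lfv1} and \ref{Yuk_lfv} use the Yukawa textures of the model.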
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{meta_mu2e.pdf}
\includegraphics[height=50mm,width=75mm]{meta_amu.pdf}
\caption{The left (right) panel represents the variation of the LFV branching ratio of $\mu \rightarrow e \gamma$ process (muon $g-2$) with the charged inert scalar mass. }
\label{lfv1}
\end{center}
\end{figure}
In the left and right panels of Fig. \ref{lfv1}, we have represented the dependence of the branching fraction of $\mu \rightarrow e \gamma$ and anomalous muon magnetic moment $\Delta a_\mu$, on the inert charged scalar mass, which are found to lie within the experimental limits.
The variation of $\mu \to e \gamma $ branching fraction and $\Delta a_\mu$ with the modular Yukawa couplings, consistent with neutrino mass constraints are displayed in Fig. \ref{Yuk_lfv}.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{y123_mu2e.pdf}
\includegraphics[height=50mm,width=75mm]{y123_amu.pdf}
\caption{Variation of the $\mu \to e \gamma$ branching fraction and muon $g-2$ with the Yukawa couplings exhibited in the left and right panels respectively. }\label{Yuk_lfv}
\end{center}
\end{figure}
\section{Fermionic Dark matter}
\label{sec:dark}
The model includes six heavy Majorana neutrinos that are doubly degenerate; the two lightest mass eigenstates can serve as dark matter candidates, provided the inert scalar particles are heavier.
Before moving on to the DM study, we first diagonalize the Majorana mass matrix of eqn. \ref{MRS_matrix}. For simplicity, we assume that the coupling of the symmetric part dominates ($\alpha_{NS} > \beta_{NS}$). We diagonalize the reduced mass matrix first with a TBM rotation and then with the normalized eigenvector matrix \cite{Behera:2020sfe}. We have implemented the model in the LanHEP package \cite{Semenov:1996es} and extracted the results with the micrOMEGAs package \cite{Pukhov:1999gg, Belanger:2006is, Belanger:2008sj}.
We compute the relic density for a particular benchmark, fixing $\alpha_{NS} = 0.5$, $v_{\rho} = 5$ TeV and the Yukawa couplings in the range $0.1 \lesssim y_{2,3} \lesssim 0.25$. As seen from Fig. \ref{yuk_reim_tau}, $y_1$ does not vary much; thus $y_{2,3}$ dictate the DM mass range, i.e., $\sim 650 - 950$ GeV. Choosing equal values ($\alpha_{\rm DM}$) for the couplings $\alpha_D,\beta_D,\gamma_D$ and $\alpha^\prime_D,\beta^\prime_D,\gamma^\prime_D$, we project the DM abundance as a function of its mass in Fig. \ref{relic_plot}. The annihilation channels (shown in Fig. \ref{relic_feyn}) with a lepton and anti-lepton pair in the final state, via the $\eta$-portal ($t$-channel) and the $Z^\prime$-portal ($s$-channel), contribute to the relic density. One can see that the $s$-channel contribution produces a resonance on either side of $M_{\rm DM} = M_{Z^\prime}/{2}$, with $M_{Z^\prime} = 1.6$ TeV.
Moving to detection prospects, $\eta$ and $Z^\prime$ have no direct interactions with quarks, hence tree-level DM-nucleon scattering is absent. The one-loop contribution to DM scattering off nuclei is well below the experimental upper limits (both spin-independent and spin-dependent) and does not constrain the model parameters \cite{Ibarra:2016dlb}.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=35mm,width=55mm]{DM_2.pdf}~~~~~~
\includegraphics[height=35mm,width=55mm]{DM_1.pdf}
\caption{Feynman diagrams for the $t$- and $s$-channel annihilation of DM contributing to the relic density.}
\label{relic_feyn}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=50mm,width=75mm]{relic.pdf}
\caption{Variation of abundance of fermionic DM as a function of its mass for various values of couplings. Black horizontal dashed lines stand for the $3\sigma$ bound of Planck satellite data \cite{Aghanim:2018eyx}.}
\label{relic_plot}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:con}
The main aim of this work has been to implement $A_4$ modular symmetry and explore its novelty in neutrino phenomenology within a scotogenic framework. We have successfully realized neutrino mass at one-loop level by introducing an inert scalar doublet $\eta$ ($A_4$ singlet with modular weight $-2$) and six heavy fermions $N_R$ and $S_L$ (triplets under $A_4$ with modular weights $-1$ and $+1$ respectively). Since we work with $A_4$ modular symmetry, the Yukawa couplings form an $A_4$ triplet ($\bm Y$) with modular weight $2$, and the scalar couplings of the terms involving $\eta$ are $A_4$ singlets ($\bm \lambda_\eta$, $\bm \lambda_\eta^\prime$) with weights $4$ and $8$ respectively. An additional $U(1)_X$ is imposed to forbid unwanted Majorana mass terms, and a complex scalar singlet $\rho$ is introduced to spontaneously break this local gauge symmetry.
Modular symmetry not only avoids adding new flavon fields for neutrino phenomenology but also plays a vital role in ensuring dark matter stability. A particular flavor structure for the neutrino mass matrix is achieved along with neutrino mixing. We have numerically diagonalised the neutrino mass matrix and fixed the model parameters such that they remain compatible with the present $3\sigma$ range of oscillation data. Proceeding further, we have established the model's contribution to the lepton flavor violating decay $\mu \to e\gamma$, compatible with the upper bound set by the MEG collaboration. The contribution to the muon $g-2$ anomaly ($\Delta a_\mu$) is found in the range $10^{-12} - 10^{-14}$, within the experimental limits. Finally, we have addressed the dark matter phenomenology of the lightest stable fermion. With the stringent bounds on the Yukawa couplings confining the dark matter mass, we have obtained a relic density compatible with Planck data for a particular benchmark of model parameters. The annihilations to lepton--anti-lepton pairs via the $\eta$ and $Z^\prime$ (associated with $U(1)_X$) portals contribute to the relic density. Tree-level direct detection is not feasible, as $\eta$ and $Z^\prime$ do not couple directly to quarks. To conclude, $A_4$ modular symmetry provides a rich neutrino phenomenology without the set of flavon fields used in conventional frameworks, while also stabilizing the dark matter candidate. The present paper serves as an example discussing these aspects in the light of modular symmetry.
\acknowledgments
MKB and SM acknowledge DST for financial support. RM acknowledges the support from SERB, Government of India, through grant No. EMR/2017/001448.
\bibliographystyle{my-JHEP}
\section{Introduction}
Laser cooling is an effective tool to reduce the temperature of confined ions, particularly from temperatures of up to several thousands of kelvin down to the Doppler limit, which is commonly in the mK range \cite{ita,buch,dem,mett}.
For magnesium ions, this has been demonstrated under various confinement conditions \cite{nag,rct,died,bir,dho}.
Such cooling is beneficial for the stable confinement in traps over extended periods of time \cite{ita,torr}, and essential for precision spectroscopy as it reduces spectral line broadening caused by the Doppler effect \cite{mett,dem}. For other systems, including highly charged ions, laser cooling is not a method of choice, owing to the lack of suitable (fast) optical transitions \cite{pr}. Resistive cooling \cite{ita} can be an effective method for such systems, especially if they carry high electric charge. However, the attainable energy is usually limited to values corresponding to the ambient temperature on the scale of several kelvin \cite{rcool}. Hence, sympathetic cooling with simultaneously confined laser-cooled ions is a good option for such ions to reach the mK regime \cite{ita,piet}.
Here, we discuss laser cooling of singly charged magnesium ions in a Penning trap \cite{gab89,werth,gho}, following their dynamic capture \cite{schn,spec1} from an external source. In such situations, they commonly have high initial energies unsuitable for efficient laser cooling. Under conditions similar to the present ones, laser cooling times have been reported to be of the order of many minutes \cite{gru}. We have found that a combination of laser cooling and buffer gas cooling is capable of reducing the ion kinetic energy by more than 8 orders of magnitude within seconds. The ions `crystallize' into structures given by their mutual Coulomb repulsion in the presence of the confining trap potential, similar to the results in \cite{died,bir,dre,dre2,horn,mit,rich}, for which we find agreement with non-neutral plasma theory. The mesoscopic size of several thousands to several tens of thousands of ions is advantageous for sympathetic cooling, as such crystals are large enough to provide a sufficiently large cold bath for other charged particles to be cooled.
\section{Experimental Setup}
\label{setsec}
The experiments have been performed with the SpecTrap experiment \cite{spec0,spec1} located at the HITRAP facility \cite{kluge} at GSI and FAIR, Germany.
The experimental setup (Fig. \ref{set})
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\columnwidth]{figure1}
\caption{\small Sectional view of the SpecTrap setup. (A) Penning trap, (B) Magnet, (C) LHe dewar, (D) LN$_2$ dewar, (E) pulsed drift tubes, (F) non-destructive ion detector, (G) CCD camera. For details see text.}
\label{set}
\end{center}
\end{figure}
has previously been described in detail in \cite{spec1}.
Briefly, a cylindrical Penning trap is located in the homogeneous field of a superconducting magnet and is cooled to liquid-helium temperature.
Fig. \ref{set} shows a sectional view of the setup with the Penning trap (A) installed in the cold bore and in the center of the magnetic field of the surrounding superconducting magnet (B) of Helmholtz geometry \cite{helm}. The cold bore with the trap and its cryo-electronics is cooled by liquid helium (C) which is shielded by liquid nitrogen (D).
The ions are transported into the trap from above, via a low-energy UHV beamline connecting the ion sources with the trap. Ions can be obtained either from a dedicated pulsed source of singly charged ions \cite{sou}, from other external ion sources such as electron beam ion sources \cite{sparc}, or from the HITRAP low-energy beamline \cite{bea}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.75\columnwidth]{figure2}
\caption{\small Schematic of the SpecTrap Penning trap.}
\label{trap}
\end{center}
\end{figure}
A set of pulsed drift tubes (E) \cite{book,keefe} located above the trap allows the deceleration of ion bunches from transport energies of the order of keV per charge to energies of the order of 100\,eV per charge, suitable for dynamic capture \cite{schn} and subsequent storage in the trap center.
The ion number can be estimated from a non-destructive measurement of the induced charge signal when the ion bunch enters the trap. To that end, a dedicated low-noise charge amplifier detector (F) has been built and operated \cite{nid}. Imaging of the stored ions is done via radial ports with an outside CCD camera (G).
Fig. \ref{trap} shows a sectional view of the Penning trap. It is a cylindrical open-endcap 5-pole Penning trap \cite{gab89} (one segmented ring and two compensation electrodes between endcap electrodes E1 and E2) with additional capture electrodes C1 and C2. The latter are used for dynamic capture \cite{schn,spec1} of externally produced ions by creating a potential well after incoming ions have entered the trap.
The ring electrode located in the optical plane is split into four segments for radial electronic excitation and detection, with one hole of 4.8\,mm diameter in each segment for optical access. Radial ports in the plane of the trap center guide the fluorescence light via a two-lens system to the outside photon counter and CCD camera (EM-CCD C9100-24B, Hamamatsu).
The light collection solid angle amounts to 0.09\,sr or 0.7\% of $4\pi$. A first lens with 25\,mm focal length and located at that
distance from the trap center collimates the light, and a
second lens with 150\,mm focal length focuses it onto the detector. The measured magnification factor of the system under the present conditions is 4.69(8). At a detector pixel size of 13\,$\mu$m, this corresponds to a resolution of about 3\,$\mu$m, which is increased by optical imperfections to about 10\,$\mu$m for the present observations.
The excitation laser for optical detection and laser cooling enters the trap from below, along the central axis ($z$-axis).
Laser cooling of the stored $^{24}$Mg$^+$ ions is achieved on the red-detuned side of the 279.55-nm $^{2}\text{S}_{1/2} \rightarrow {^{2}\text{P}_{3/2}}$ transition with a natural linewidth of $\Gamma=2\pi \times 41.8 \times 10^6$\,s$^{-1}$ \cite{nist}.
Frequency-quadrupling of light from a commercial infrared fiber laser produces the required light with a spectral width of less than 1\,MHz and a few tens of mW of maximum power. The laser can be tuned at a rate of up to 200\,MHz/s \cite{caz}.
In short, an experimental cycle consists of the following steps:
\begin{itemize}
\item ion bunch production in an external source
\item transport at energies of up to 5\,keV per charge
\item ion deceleration in pulsed drift tubes
\item dynamic ion capture in the trap
\item ion cooling and spectroscopy, CCD imaging.
\end{itemize}
Ions can be accumulated (`stacked') by capturing additional ion bunches while ions remain confined in the trap \cite{ros}. Hence, it is possible to subsequently load the trap from different ion sources. It is one advantage of Penning traps that a broad range of different mass-to-charge ratios can be stored simultaneously \cite{werth,gho}.
Singly charged magnesium ions from the pulsed external source have been captured and stored for studies of the temporal dynamics of ion cooling and crystal formation, and their geometric properties. Upon dynamic capture, the ions have kinetic energies of up to several hundreds of eV per charge (typically, 400\,eV per charge have been used), which is far outside the realm of efficient laser cooling. Therefore, a combination of buffer gas and laser cooling is applied. The following section is dedicated to a model of the expected cooling behaviour.
\section{cooling model}
\label{coolmod}
We have developed a simple yet realistic description of the effect of combined buffer gas and laser cooling. It extends the semi-classical model of Doppler cooling as presented in \cite{Wes07} in order to describe the evolution of the fluorescence signal during the formation process of ion Coulomb crystals in a Penning trap. Based on a rate-equation formalism, we find analytic solutions for the time dependence of the energy and thus for the fluorescence rate of a single particle. To this end, we add a recoil-heating term and an exponential cooling term to the rate equation. The latter accounts for the buffer gas cooling \cite{ita}. The laser frequency is tuned linearly with time.
According to the formalism presented in \cite{Wes07}, the scaled energy $\epsilon$ of a single particle with mass $m$ confined in a harmonic potential has a time derivative given by
\begin{eqnarray}
\frac{d\epsilon}{d\tau}=&-&\gamma_{1}(\epsilon-\epsilon_{1}) +\frac{4}{3}r\frac{1}{2\sqrt{\epsilon r}}\text{Im}(Z)\nonumber\\&+&\frac{1}{2\sqrt{\epsilon r}}\left(\text{Re}(Z)+\delta\,\text{Im}(Z)\right),
\label{Eq:1}
\end{eqnarray}
with $Z=i/\sqrt{1-(\delta+i)^2/4\epsilon r}$. In this equation, the energy $E$ of the particle, the laser detuning $\Delta$ and the recoil energy $E_{\text{R}}=(\hbar k_{z})^2/2m$ are scaled by the energy $E_0$ such that
$\{ \epsilon , \delta , r\} \equiv \{ E , \hbar\Delta , E_{\text{R}} \}/E_{0}$.
Here, $E_0 \equiv \hbar\Gamma\sqrt{(1+s_0)}/2$, where $\Gamma\sqrt{(1+s_0)}$ is the power-broadened linewidth. The on-resonance saturation parameter $s_0$ is determined according to $s_0=I/I_0$, where $I$ is the intensity of the laser at the position of the ions and $I_0$ is the saturation intensity.
The time $t$ is scaled by $t_0$ which is the inverse of the on-resonance fluorescence rate, such that $\tau \equiv t/t_{0}$, where $t_0$ is given by
\begin{equation}
t_{0}=\left(\frac{\Gamma}{2}\frac{s_0}{1+s_0}\right)^{-1}.
\end{equation}
$\Gamma$ is the decay rate of the excited state, $k_{z}$ is the $z$-component of the excitation laser wave vector, $\gamma_{1}$ is an exponential cooling rate factor and $\epsilon_{1}$ is the minimum energy that can be achieved by this exponential cooling mechanism. It is represented by the first term in Eq.\,(\ref{Eq:1}), which for buffer gas cooling is given by \cite{mk}
\begin{align}
\gamma_{\text{1}}=\frac{q}{m}\frac{1}{\mu_{0}}\frac{p/p_{0}}{T/T_{0}}.
\label{9}
\end{align}
The damping coefficient $\gamma_1$ depends on the ion mobility $\mu_{0}$ of the buffer gas, the residual gas pressure $p$ and the temperature $T$ of the buffer gas normalized by the standard pressure $p_{0}$ and the standard temperature $T_{0}$, respectively. The second and third terms of Eq.\,(\ref{Eq:1}) describe the laser-particle interaction, including stochastic heating of the particle due to photon recoil and laser Doppler cooling. The factor $4/3$ in the second term is true for isotropic emission characteristics. The scaled fluorescence rate can be found to be \cite{Wes07}
\begin{align}
\gamma_{\text{sc}} \equiv \frac{dN_{\text{ph}}}{d\tau}=\frac{1}{2\sqrt{\epsilon r}} \text{Im}(Z).
\label{Eq:3}
\end{align}
The laser detuning has the form $\Delta=\Delta_{i}+\Delta_{m}\times t$, where $\Delta \equiv \omega-\omega_0$ is the detuning of the actual laser frequency $\omega$ from the resonance frequency $\omega_0$. $\Delta_i$ is the initial laser detuning at time $t=0$ and $\Delta_{m}$ is the scan rate. The time $t=0$ denotes the start of the cooling process; this corresponds to the time when the ions are captured in the trap where buffer gas and cooling laser beam are present.
We perform the calculation for a helium buffer gas pressure of $p=4\times 10^{-9}$\,mbar and a buffer gas temperature of 4\,K.
The ion mobility for magnesium in a helium buffer gas is $\mu_{0}\approx 23\times 10^{-4}$\,m$^2$s$^{-1}$/V \cite{lorne}. The corresponding damping coefficient is $\gamma_{1}=0.52$/s.
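As a cross-check of the quoted damping coefficient, Eq.~(\ref{9}) can be evaluated directly. A short Python sketch, assuming standard reference conditions $p_0 = 1013.25$\,mbar and $T_0 = 273.15$\,K (the convention is not fixed above, so the result may differ from the quoted value at the ten-percent level):

```python
# gamma_1 = (q/m) (1/mu_0) (p/p_0) / (T/T_0), buffer-gas damping coefficient
q = 1.602e-19            # elementary charge, C
m = 24.0 * 1.6605e-27    # mass of 24Mg+, kg
mu0 = 23e-4              # reduced mobility of Mg+ in He, m^2 s^-1 V^-1
p, p0 = 4e-9, 1013.25    # He pressure and standard pressure, mbar
T, T0 = 4.0, 273.15      # buffer-gas and standard temperature, K

gamma1 = (q / m) / mu0 * (p / p0) / (T / T0)
print(f"gamma_1 ~ {gamma1:.2f} /s")
```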
The initial energy of the ion is 400\,eV and the laser parameters are $\Delta_{i}=-2\pi \times 200$\,MHz (initial detuning), $\Delta_{m}=2 \pi \times 5$\,MHz/s (scan rate), and $s_0=0.4$. The numerical results of the evaluation of Eq.\,(\ref{Eq:1}) and Eq.\,(\ref{Eq:3}) are depicted in Fig.\,\ref{Fig:1}, where the energy and the scaled fluorescence rates are shown as a function of time.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figure3_new}
\caption{Ion energy (black dashed curve, left axis) and scaled fluorescence rate (red curve, right axis) as a function of time during the cooling process.}
\label{Fig:1}
\end{figure}
Their behavior as a function of time can be divided into four different regimes:
\begin{itemize}
\item[i)] the initial cooling phase is dominated by buffer gas cooling, as laser cooling is very inefficient at such high ion energies
\item[ii)] at an energy of about 1\,eV, the laser detuning corresponds roughly to the half-width of the velocity distribution, leading to a rapid cooling and reduction of the width of the velocity distribution and to a characteristic fluorescence peak in the spectra
\item[iii)] ions at the detuning-dependent Doppler cooling temperature
\item[iv)] laser heating after crossing the resonance.
\end{itemize}
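The four regimes can be reproduced qualitatively by integrating Eq.~(\ref{Eq:1}) directly. The following Python sketch uses a plain explicit-Euler scheme with illustrative scaled parameters and a fixed red detuning (the actual experimental values span too many orders of magnitude for such a naive integrator):

```python
import cmath

def deps_dtau(eps, delta, r, gamma1, eps1):
    """Right-hand side of Eq. (1); Z = i / sqrt(1 - (delta + i)^2 / (4 eps r))."""
    Z = 1j / cmath.sqrt(1.0 - (delta + 1j)**2 / (4.0 * eps * r))
    pref = 1.0 / (2.0 * (eps * r)**0.5)
    recoil = (4.0 / 3.0) * r * pref * Z.imag        # stochastic recoil heating
    doppler = pref * (Z.real + delta * Z.imag)      # Doppler cooling term
    return -gamma1 * (eps - eps1) + recoil + doppler

# Illustrative scaled parameters only (not the experimental ones)
eps, eps1 = 1.0e4, 1.0     # initial and buffer-gas-limited scaled energies
delta, r = -5.0, 1.0e-4    # fixed red detuning and scaled recoil energy
gamma1, dtau = 0.01, 1.0   # scaled damping rate and Euler step

for _ in range(3000):
    eps += dtau * deps_dtau(eps, delta, r, gamma1, eps1)

print(f"final scaled energy: {eps:.2f}")
```

With these numbers the early decay is dominated by the buffer-gas term, after which the energy settles near the laser-limited quasi-equilibrium; a laser-frequency scan ($\delta$ varying in time) reproduces the fluorescence peak of regime ii).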
The appearance of a feature such as the fluorescence peak in ii) indicates that the laser line profile and the ion velocity distribution have maximum overlap, see also \cite{rct2}. Thus, phase ii) is characterized by strong laser cooling that leads to a rapid narrowing of the initially broad velocity distribution of the ions, until a quasi-equilibrium state with a rather narrow velocity distribution is reached at the beginning of phase iii). The quasi-equilibrium is characterized by the ion temperature depending only on the laser detuning. In phase ii), the ion energy is reduced by about 5 orders of magnitude within a second.
The temperature of an ion ensemble after the appearance of the fluorescence peak can be sufficiently low for the formation of ordered structures. Nevertheless, the observation of a fluorescence peak does not necessarily coincide with the ion cloud entering a liquid-like or crystalline state. It is neither a necessary nor a sufficient condition. Under specific experimental conditions, the ionic ensemble at that point has reached a temperature sufficient for entering a liquid-like \cite{horn} or a crystal-like state \cite{blue,new2}, as was shown in corresponding measurements in rf traps. However, we note that the characteristic `kink' in the fluorescence spectra observed in rf traps exhibits slightly more complicated dynamics, since the mechanism of rf-heating needs to be considered.
A useful quantity for the characterization of ion plasmas is the plasma parameter $\Gamma_{\text{p}}$ \cite{mal} which measures the Coulomb energy between ions relative to their thermal energy. It is defined by
\begin{equation}
\Gamma_{\text{p}} \equiv \frac{q^2}{4 \pi \epsilon_0 a_{\text{WS}} k_{\text{B}} T}
\end{equation}
where $q$ is the ion charge, $T$ is the ion temperature and $a_{\text{ws}}=(4\pi n/3)^{-1/3}$ is the Wigner-Seitz radius \cite{kit} measuring the effective ion-ion distance at a given ion number density $n$.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figure4_new2}
\caption{Phase diagram of the plasma parameter $\Gamma_{\text{p}}$ for the present conditions. Dotted line: Doppler limit of Mg$^+$ ions of 1\,mK. Gray area: possible ion number densities in thermal equilibrium. For details see text.}
\label{gamma}
\end{figure}
Ion cooling increases the value of $\Gamma_{\text{p}}$: commonly one speaks of a weakly correlated plasma (a gas-like state) for $\Gamma_{\text{p}} \ll 1$, and of a strongly correlated plasma for $\Gamma_{\text{p}} \gtrsim 1$. Theoretical studies predict a fluid-like behaviour for $ 174 \gtrsim \Gamma_{\text{p}} \gtrsim 2$ \cite{gil} and a crystal-like behaviour for $\Gamma_{\text{p}} \gtrsim 174$ \cite{dub2}, which has been corroborated experimentally \cite{jens}. For the magnesium ions at a density of $5 \times 10^7$/cm$^3$, this value is reached for $T \approx 5$\,mK.
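This threshold estimate is easily verified numerically; a short Python sketch evaluating $\Gamma_{\text{p}}$ at the quoted density:

```python
import math

EPS0, KB, Q = 8.854e-12, 1.381e-23, 1.602e-19   # SI constants

def plasma_parameter(n, T):
    """Gamma_p = q^2 / (4 pi eps0 a_ws kB T) with a_ws = (3/(4 pi n))^(1/3)."""
    a_ws = (3.0 / (4.0 * math.pi * n))**(1.0 / 3.0)
    return Q**2 / (4.0 * math.pi * EPS0 * a_ws * KB * T)

n = 5e7 * 1e6                                    # 5e7 cm^-3 in m^-3
T_174 = plasma_parameter(n, 1.0) / 174.0         # Gamma_p scales as 1/T
print(f"Gamma_p(5 mK) = {plasma_parameter(n, 5e-3):.0f}, "
      f"T(Gamma_p = 174) = {1e3 * T_174:.1f} mK")
```

The resulting crystallization temperature is a few mK, consistent with the $T \approx 5$\,mK quoted above.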
Fig. \ref{gamma} shows a phase diagram in temperature-density-space with the plasma parameter $\Gamma_{\text{p}}$ for the present trapping voltage of $U=50\,$V and the magnetic field of $B=4.1\,$T. The dotted line indicates the Doppler limit of Mg$^+$ ions of 1\,mK. The gray area shows the possible range of ion number densities $n$ in thermal equilibrium for the present trapping parameters. For sufficiently low temperature and in thermal equilibrium, the ion plasma performs a global rotation about the magnetic field axis and takes the shape of an ellipsoid of revolution with constant density \cite{dub2}. The global rotation is induced by the magnetic field used for confinement, and hence not observed in rf traps. As one important consequence, the ion number density $n$ is related to the global rotation frequency $\omega_r$ of the plasma by \cite{bre}
\begin{equation}
\label{enn}
n=\frac{2m\epsilon_0}{q^2} \omega_r (\omega_c-\omega_r),
\end{equation}
in which $\omega_r$ is bounded by the magnetron frequency $\omega_-$ and the reduced cyclotron frequency $\omega_+$ given by
\begin{equation}
\omega_{\pm} = \frac{\omega_c}{2} \pm \left( \frac{\omega_c^2}{4} - \frac{\omega_z^2}{2} \right)^{1/2}.
\end{equation}
Here, $\omega_c=qB/m$ and $\omega_z^2=qUC_2/(md^2)$ \cite{gab89}.
For a $^{24}$Mg$^+$ ion in the present trap with a characteristic size $d=7.062$\,mm, a well-depth efficiency parameter $C_2=0.578$ and at a trapping voltage of $U=50$\,V, these frequencies are $\omega_z=2\pi\times 241.4$\,kHz, $\omega_-=2\pi\times 11.4$\,kHz and $\omega_+=2\pi \times 2.55$\,MHz.
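These eigenfrequencies follow directly from the stated trap parameters. A Python sketch (using the average atomic weight of magnesium as an assumption, which appears to reproduce the quoted numbers; percent-level differences remain from the precise constants used):

```python
import math

q = 1.602e-19                  # C
m = 24.305 * 1.6605e-27        # kg; average Mg atomic weight (assumption)
B, U = 4.1, 50.0               # magnetic field (T) and trapping voltage (V)
C2, d = 0.578, 7.062e-3        # well-depth efficiency and trap size (m)

omega_c = q * B / m
omega_z = math.sqrt(q * U * C2 / (m * d**2))
root = math.sqrt(omega_c**2 / 4.0 - omega_z**2 / 2.0)
omega_p, omega_m = omega_c / 2.0 + root, omega_c / 2.0 - root

for name, w in (("f_z", omega_z), ("f_-", omega_m), ("f_+", omega_p)):
    print(f"{name} = {w / (2.0 * math.pi):.4g} Hz")
```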
Eq.\,(\ref{enn}) holds true as long as the Debye length
\begin{equation}
\lambda_{\text{D}}=\sqrt{\frac{\epsilon_0 k_{\text{B}} T}{n q^2}}
\end{equation}
is much smaller than the dimensions of the ion cloud \cite{bol}. For our laser-cooled Mg$^+$ ions, this is the case as $\lambda_{\text{D}}$ is of the order of $\mu$m, while the crystal size is of the order of mm.
For the possible range of $\omega_r$, Eq.\,(\ref{enn}) leads to densities between $n_{\text{min}}=n(\omega_r=\omega_-)=3.1\times 10^7$/cm$^{3}$ and $n_{\text{max}}=n(\omega_r=\omega_c/2)=1.8\times 10^9$/cm$^{3}$.
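The density limits of Eq.~(\ref{enn}) and the Debye-length estimate can be checked with the same parameters; a Python sketch (again with the average Mg atomic weight as an assumption, and $T = 5$\,mK as a representative ion temperature):

```python
import math

q, EPS0, KB = 1.602e-19, 8.854e-12, 1.381e-23
m = 24.305 * 1.6605e-27                    # kg (assumption, as above)

omega_c = q * 4.1 / m
omega_z2 = q * 50.0 * 0.578 / (m * (7.062e-3)**2)
omega_m = omega_c / 2.0 - math.sqrt(omega_c**2 / 4.0 - omega_z2 / 2.0)

def density(omega_r):
    # n = (2 m eps0 / q^2) omega_r (omega_c - omega_r)
    return 2.0 * m * EPS0 / q**2 * omega_r * (omega_c - omega_r)

n_min, n_max = density(omega_m), density(omega_c / 2.0)
lam_D = math.sqrt(EPS0 * KB * 5e-3 / (n_min * q**2))   # Debye length at 5 mK
print(f"n_min = {n_min * 1e-6:.2e} cm^-3, n_max = {n_max * 1e-6:.2e} cm^-3, "
      f"lambda_D = {lam_D * 1e6:.2f} um")
```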
For a given density $n$ (or, equivalently, global rotation frequency $\omega_r$), the aspect ratio $\alpha$ of the cloud (axial to radial extension) is determined by the trapping voltage $U$ according to the formalism given in \cite{bre}.
In Fig. \ref{gamma}, the initial position of the ions in ($T,n$)-space directly upon capture into the trap is indicated (red dot), as well as the position at the end point of cooling (blue dot). Upon capture, the ions are assumed to have a density given by the measured ion number (detector F in Fig. \ref{set}) distributed over the trapping volume. After cooling, the density is determined from the measured inter-particle distance in the crystal as discussed in section \ref{densi}. Note that the initial and final temperatures are estimated from the initial axial ion energy and the observed shell structure, respectively, the latter of which may also form below $\Gamma_{\text{p}} \approx 174$, depending on experimental detail. Note also that the cooling path indicated serves purposes of illustration only and is not the actual (unknown) cooling path of the ions in the ($T,n$)-plane. At the end point of cooling, the ions enter an ordered state, the structure of which is the topic of the following section.
\section{Shell structure of mesoscopic ion crystals}
The geometric properties of ion plasmas and the formation of ion Coulomb crystals have been described in detail for example in \cite{died,bir,dub1,dub2,dub3,mit,rich,horn,dre,dre2,bol,bol2}. For the present ion numbers
of the order of $10^3$ to $10^5$ (`mesoscopic'), and aspect ratios (axial extension to radial extension) of typically $\alpha \ll 1$, the so-called `planar-shell model' is a good approximation to describe the geometry of the confined plasmas.
It applies to the case of a spheroidal plasma with a radius sufficiently large such that the curvature of the shell planes can be neglected close to the trap axis. While in a real plasma the number of shells depends on the radial position in the crystal and decreases towards the edges, this model describes the plasma throughout as a series of $S$ parallel planes at axial positions $z_i$ with area ion number density $\sigma_i$. For the sake of simplicity, it does not explicitly account for correlations between shells, in which case one would also expect lateral offsets in the ion positions between neighbouring shells \cite{mit}.
Fig. \ref{ebenen} depicts the geometry and the involved quantities. This discussion closely follows the lines presented in \cite{dub2}; however, for the purpose of interpreting our measurements, it is instructive to restate some of the results given in \cite{dub2}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\columnwidth]{figure5}
\caption{\small Planar-shell model geometry and the involved quantities. This model does not account for correlations in ion position between different shells.}
\label{ebenen}
\end{center}
\end{figure}
For our situation, minimizing the energy per ion implies that the area charge density $q\sigma_i$ of each lattice plane is identical and that the lattice planes are spaced by a uniform distance $D$ \cite{dub2}. The total area number density $\sigma$ is the sum of all $\sigma_i$,
and the spacing $D$ is linked to $\sigma$ and $S$ via the relation
\begin{gather}
D = \frac{\sigma}{n S}.
\label{eq:D_layerSpacing}
\end{gather}
The contributions to the energy per particle are the self-energy of the set of $S$ planes, the energy due to the external potential, and the (negative) correlation energy associated with each 2D lattice plane. The total energy per particle reads \cite{dub2}
\begin{gather}
\frac{E}{N}=\pi e^2\left[L\sigma-\frac{1}{6}\frac{\sigma^2}{n}\right] + \frac{U_{\text{corr}}}{N},
\end{gather}
where $2L$ is the axial extension of the crystal and $U_{\text{corr}}$ is the ion-ion correlation energy given by \cite{dub2}
\begin{gather}
\frac{U_{\text{corr}}}{N}=\frac{e^2}{a_{\text{ws}}} \left[\frac{2\pi^2}{9}\left(\frac{\bar{\sigma}}{S}\right)^{2}-\frac{\eta}{2}\left(\frac{\bar{\sigma}}{S}\right)^{1/2}\right],
\label{eq:u_corr}
\end{gather}
where $\bar{\sigma}=\sigma a^2_{\text{ws}}$ as indicated in Fig. \ref{ebenen}. In this equation, $\eta$ accounts for the Madelung energy \cite{kit} of the 2D lattice. We use the value $\eta=3.921$ of the hexagonal lattice, which has the lowest Madelung energy in 2D \cite{kit}.
A structure of $S$ parallel ion planes has a higher energy than a uniformly spread charge, which is reflected in the first term of Eq.\,(\ref{eq:u_corr}) being positive. The second term accounts for the ion-ion correlations within each plane. It is negative, which promotes the formation of a finite set of ordered planes. Hence, the number of planes that form results from the competition between these two terms.
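This competition can be illustrated with a short numerical sketch (not part of the original analysis). Expressing the correlation energy per particle in units of $e^2/a_{\text{ws}}$ as a function of the per-plane density $\bar{\sigma}/S$ — where the $(\bar{\sigma}/S)^2$ scaling of the positive term is assumed here, as it is the form consistent with Eqs.\,(\ref{eq:S_min}) and (\ref{eqd}) — a brute-force search over integer $S$ locates the minimizing shell number:

```python
# Sketch (not part of the original analysis): find the integer number of
# shells S minimizing the correlation energy per particle, in units of
# e^2/a_ws.  The (sigma_bar/S)^2 scaling of the positive term is an
# assumption chosen for consistency with the S_min and D expressions.
import math

eta = 3.921  # Madelung constant of the 2D hexagonal lattice


def u_corr_per_particle(sigma_bar, S):
    """Correlation energy per particle (units of e^2/a_ws) for S planes."""
    x = sigma_bar / S  # normalized area density per plane
    return (2 * math.pi**2 / 9) * x**2 - (eta / 2) * math.sqrt(x)


def best_shell_number(sigma_bar, s_max=50):
    """Integer S minimizing the correlation energy."""
    return min(range(1, s_max + 1),
               key=lambda S: u_corr_per_particle(sigma_bar, S))


sigma_bar = 2.0
S_cont = (16 * math.pi**2 / (9 * eta))**(2 / 3) * sigma_bar  # continuous S_min
S_int = best_shell_number(sigma_bar)
print(S_cont, S_int)  # continuous optimum ~5.43, integer optimum 5
```

The integer minimizer coincides with rounding the continuous optimum, as stated below Eq.\,(\ref{eqd}).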
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{figure6}
\caption{\small Correlation energy per particle as a function of $\bar{\sigma}=\sigma a^2_{\text{ws}}$ given for various numbers of shells $S$ (red curve, left hand scale) and the distance $D$ for which the energy per particle has a minimum as a function of the normalized area (straight blue lines, right hand scale).}
\label{shell}
\end{center}
\end{figure}
The correlation energy per particle $U_{\text{corr}}/N$ takes a minimum value (with respect to the plane number $S$) for
\begin{equation}
S_{\text{min}}=\left[16\pi^2/(9\eta)\right]^{2/3} \bar{\sigma},
\label{eq:S_min}
\end{equation}
in which case the distance $D$ between two planes is given by
\begin{equation}
\frac{D}{a_{\text{ws}}}=\left(\frac{3\eta^2}{4\pi}\right)^{1/3}=1.54.
\label{eqd}
\end{equation}
Since the number $S$ must be an integer, $S=S_{\text{min}}$ can only be fulfilled for certain values of $\bar{\sigma}$, and the actual value of $S$ will be an integer close to $S_{\text{min}}$.
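The numerical constants can be cross-checked directly (a sketch, not part of the original analysis): the prefactor of Eq.\,(\ref{eq:S_min}), the value $1.54$ of Eq.\,(\ref{eqd}), and the consistency of the latter with Eq.\,(\ref{eq:D_layerSpacing}) using the Wigner-Seitz relation $n=3/(4\pi a_{\text{ws}}^3)$:

```python
# Sketch: numerical cross-check of the constants in the planar-shell model.
import math

eta = 3.921  # 2D hexagonal Madelung constant

# Prefactor of S_min = [16 pi^2 / (9 eta)]^(2/3) * sigma_bar
prefactor = (16 * math.pi**2 / (9 * eta))**(2 / 3)

# Direct evaluation of D / a_ws = (3 eta^2 / (4 pi))^(1/3)
D_direct = (3 * eta**2 / (4 * math.pi))**(1 / 3)

# Cross-check via D = sigma / (n S): with sigma = sigma_bar / a_ws^2,
# n = 3 / (4 pi a_ws^3) (Wigner-Seitz relation) and sigma_bar/S = 1/prefactor
# at S = S_min, one gets D / a_ws = (4 pi / 3) / prefactor.
D_from_spacing = (4 * math.pi / 3) / prefactor

print(round(D_direct, 2), round(D_from_spacing, 2))  # both 1.54
```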
Fig. \ref{shell} shows the correlation energy per particle $U_\text{corr}/N$ according to Eq.\,(\ref{eq:u_corr}) for shell numbers $S=1$ to $S=7$. For the given range of $\bar{\sigma}$, the shell number $S$ that gives the minimum correlation energy was chosen to calculate the inter-shell distance $D$ with Eq.\,(\ref{eq:D_layerSpacing}).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{figure7}
\caption{\small Planar-shell model prediction of the crystal shell structure as a function of $\bar{\sigma}=\sigma a^2_{\text{ws}}$.}
\label{layer}
\end{center}
\end{figure}
One can observe that for larger $\bar{\sigma}$, the variation of the minimum correlation energy gets smaller; hence, for large shell numbers $S$, the actual correlation energy is close to the minimum value. Likewise, the variation of the shell distance $D$ decreases for large shell numbers and approaches $D\approx 1.54\,a_{\text{ws}}$. The corresponding
axial plane positions as a function of the normalized area charge density are shown in Fig. \ref{layer}.
It illustrates the stepwise increase of the number of shells for increasing charge density, i.e. for increasing ion number.
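The stepwise structure of Fig. \ref{layer} can be sketched numerically: for a given $\bar{\sigma}$, choose the integer $S$ closest to $S_{\text{min}}$ from Eq.\,(\ref{eq:S_min}) and place $S$ planes with the spacing from Eq.\,(\ref{eq:D_layerSpacing}); the symmetric placement of the planes about the mid-plane is an assumption made here for illustration.

```python
# Sketch: construct the stepwise shell-position diagram.  The symmetric
# placement of the S planes about the mid-plane is an assumption for
# illustration purposes.
import math

eta = 3.921  # 2D hexagonal Madelung constant


def shell_positions(sigma_bar):
    """Axial plane positions z_i / a_ws for a normalized area density.

    S is the integer closest to S_min, and D follows from D = sigma/(n S)
    with n = 3 / (4 pi a_ws^3), i.e. D / a_ws = (4 pi / 3) sigma_bar / S.
    """
    S = max(1, round((16 * math.pi**2 / (9 * eta))**(2 / 3) * sigma_bar))
    D = (4 * math.pi / 3) * sigma_bar / S  # spacing in units of a_ws
    return [D * (i - (S + 1) / 2) for i in range(1, S + 1)]


for sb in (0.5, 1.0, 2.0):
    print(sb, shell_positions(sb))
```

Scanning $\bar{\sigma}$ this way reproduces the stepwise increase of the shell number with increasing charge density.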
\section{Experimental Results}
\subsection{Cooling behaviour and fluorescence}
Magnesium ions have been prepared in the trap according to the experimental cycle discussed in section \ref{setsec}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\columnwidth]{figure8_new}
\caption{\small Time evolution of the fluorescence signal for three different values of the laser cooling parameters. Blue curves: data. Red curves: theory. In all cases, $E=400$\,eV and $\epsilon_1E_0=k_{\text{B}} \times 4$\,K. }
\label{scan1}
\end{center}
\end{figure}
As indicated in section \ref{coolmod}, the cooling of the ion cloud can be monitored by observation of the fluorescence rate as a function of time. Fig. \ref{scan1} shows the time evolution of the fluorescence signal for three different sets of cooling laser parameters.
In case (a), the laser parameters were chosen such that a pronounced fluorescence peak is visible. In (b), with smaller initial detuning and higher scan rate, the influence of the laser becomes visible only close to a critical detuning of
\begin{equation}
\Delta \lesssim -\frac{1}{\sqrt{3}} \frac{\Gamma}{2} \sqrt{1+s_0},
\end{equation}
such that the fluorescence peak is not pronounced. In (c), the scan rate is so high that phase iii (laser cooling equilibrium) is never reached, and the ions remain at a temperature of around 100\,K before phase iv (laser heating) sets in. The red curves in Fig. \ref{scan1} show the predictions of the model presented in section \ref{coolmod} according to Eq.\,(\ref{Eq:3}), where the value of the parameter $\gamma_1$ has been adjusted in each case in order to apply the single-particle model to the experimental many-particle system. In all cases discussed in this work, the value of $\gamma_1$ varies within a factor of 2.5, which is attributed to variations of the helium gas pressure between different experimental runs.
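For illustration, the critical-detuning condition can be evaluated numerically; the linewidth and saturation parameter used below ($\Gamma=2\pi\times 41.8$\,MHz, roughly the natural linewidth of the Mg$^+$ cooling transition, and $s_0=1$) are illustrative assumptions and are not taken from the text.

```python
# Sketch with illustrative values only: the linewidth and saturation
# parameter below are assumptions, not values given in the text.
import math


def critical_detuning(gamma, s0):
    """Detuning (same units as gamma) given by the condition
    Delta <~ -(1/sqrt(3)) * (gamma/2) * sqrt(1 + s0)."""
    return -(1 / math.sqrt(3)) * (gamma / 2) * math.sqrt(1 + s0)


gamma = 41.8  # linewidth in units of 2*pi*MHz (assumed, ~Mg+ transition)
s0 = 1.0      # saturation parameter (assumed)
print(critical_detuning(gamma, s0))  # about -17.1, i.e. -2*pi x 17.1 MHz
```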
We have performed systematic measurements of the appearance time $t_{\text{peak}}$ of the fluorescence peak as a function of laser parameters. Fig. \ref{scan2} shows the appearance time as a function of the initial laser detuning and the laser scan rate, respectively.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{figure9}
\caption{\small Left: Appearance time of the fluorescence peak as a function of the initial laser detuning for a constant scan rate of $\Delta_m=2\pi \times 8$\,MHz/s. Right: same as a function of the laser scan rate for constant initial detuning $\Delta_i=-2\pi\times 208$\,MHz.}
\label{scan2}
\end{center}
\end{figure}
To calibrate the measurements, we have performed an independent measurement of the damping constant $\gamma_1$ as an input parameter for the model presented in section \ref{coolmod}, assuming again an initial ion energy of 400\,eV. The model prediction for the peak appearance time is plotted in Fig. \ref{scan2} as dashed lines. The left hand graph shows the appearance time as a function of the initial detuning for constant scan rate. The right hand graph shows the same as a function of the scan rate for constant initial detuning.
Obviously, the appearance time increases with decreasing initial detuning $|\Delta_i|$, since the energy taken away per cooling cycle decreases. Also, the appearance time increases with increasing scan rate $\Delta_m$, which can be understood in that the laser spends less time at large detuning where the dissipated energy per cooling cycle is highest. The cooling model from section \ref{coolmod} (dashed lines) agrees well with the data.
\subsection{Crystal formation}
When the laser cooling reduces the kinetic energy of the confined ions sufficiently, they `freeze' in the effective potential given by the trap and their mutual Coulomb interactions, as has been demonstrated in numerous experiments, see for example \cite{died,bir,dre,dre2,horn,mit,rich}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{figure10}
\caption{\small Top: Detected fluorescence during ion cooling and crystal formation extracted from fluorescence images. Bottom: Selected images of the ion cloud during cooling. All images shown are false-colour images of the UV fluorescence with light colour representing the highest intensity.}
\label{res1}
\end{center}
\end{figure}
Unlike crystals known from solid-state physics, these Coulomb crystals have no intrinsic binding force, but a mutual repulsion inside the common external potential well of the trap.
In a Penning trap, the equilibrium state of an ion crystal is an ordered structure as described in \cite{dub2} that performs a global rotation about the trap's central axis ($z$-axis) at a frequency $\omega_r$ set by the initial conditions, and bounded by the magnetron frequency $\omega_-$ and the reduced cyclotron frequency $\omega_+$ \cite{bol}, as discussed in section \ref{coolmod}.
The global rotation at $\omega_r$ leads to a smearing-out of the observed structure in the $x$- and $y$-directions, if the exposure time is not negligible with respect to the inverse of the rotation frequency.
The visibility of the shells, however, is to a large extent unaffected by this, such that the present images resolve the shell structures even for long exposure times.
For the conditions present in our experiment, the global rotation frequency has been determined to be close to the magnetron frequency, and hence the clouds have aspect ratios much smaller than unity, i.e. they are of oblate shape.
Fig. \ref{res1} shows the measured fluorescence rate
as a function of time during ion cooling, and shows ion images at selected times. This data is from the same measurement as the data in Fig. \ref{scan1} (a).
As expected, the initially diffuse ion cloud increases in density during cooling. In particular, when crossing the fluorescence peak at $t=14$\,s, there is a sudden increase in density from the diffuse situation at $t=13$\,s to the denser distributions at $t=14$\,s and $t=15$\,s. The shell structure becomes visible a few seconds after that fluorescence peak, from about $t=27$\,s on, and intensifies as the laser is further scanned towards resonance, see $t=36$\,s and $t=46$\,s. Note that there is no indication of any crystalline feature in the images when crossing the fluorescence peak between $t=13$\,s and $t=15$\,s.
\subsection{Geometric structure}
\label{densi}
The ion crystals under investigation consist of several thousands of Mg$^+$ ions. Hence, they fall into the category of `mesoscopic' ion crystals which are large
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{figure11}
\caption{\small a) CCD image of an ion crystal with scale given. The circle indicates the trapping region visible to the camera. b) Cross section of the fluorescence, showing 4 shells.}
\label{pic}
\end{center}
\end{figure}
enough to display a shell structure and are still subject to surface effects, not having reached the universal lattice structure of macroscopic crystals \cite{dub2}. Fig. \ref{pic} (a) shows a detailed CCD image of a crystal.
\begin{figure}[h!]
\centering
\includegraphics[width=1\columnwidth]{figure12}
\caption{Images of a Mg$^+$ ion crystal with the number of ions and hence shells decreasing with time. For details see text.}
\label{fourclouds}
\end{figure}
The figure also indicates the observable part of the trapping region as the interior of the dashed circle, and the vertical section in the middle from which the cross section (b) is taken. All comparable images in this work have been taken and evaluated in this way.
Fig. \ref{fourclouds} shows four images and corresponding cross sections of a mesoscopic crystal studied for 7 minutes, stored in a magnetic field of $B=4.1$\,T at a trapping voltage of $U=50$\,V.
The structure with parallel planar shells is visible both in the images and the cross sections. The presented images have a temporal separation of 100\,seconds and an exposure time of 5\,seconds each.
The buffer gas pressure is such that the number of ions and the number of lattice planes decreases on this timescale, allowing a convenient observation of structures with varying ion number. For each image, the contrast has been adjusted individually such that the shell structure is clearly visible. To show the real intensity relations, the intensity profile through the crystal center is also displayed. Here, each shell appears as a small deviation from the average crystal profile.
\begin{figure*}
\includegraphics{figure13}
\caption{Left: temporal evolution of the ion crystal cross section depicted in Fig. \ref{fourclouds}. Red dots are theory values from the planar-shell model, for details see text. Right: the corresponding residual cross sections. Up to eleven crystal shells can be seen.}
\label{eiffel}
\end{figure*}
The temporal evolution of this shell structure is shown in Fig. \ref{eiffel} (left). It displays the measured cross sections as indicated in Fig. \ref{pic} as a function of time. Each cross section is integrated over 5 seconds exposure time, such that for the 420 seconds, the 84 displayed cross sections are obtained.
A number of selected cross sections are shown in the right part of Fig. \ref{eiffel} as residual cross sections, in which the measured cross section is normalized to its running average. For large numbers of shells, the contrast in the crystal center is small and for the largest crystal (leftmost cross section), the individual shells are not clearly visible. The total number of shells can, however, be determined by comparison with smaller crystals because the shell positions and shell distances are preserved.
In Fig. \ref{eiffel}, the parameter $\bar{\sigma}$ was calculated from the integrated fluorescence of a cross section. With this method the area density was determined for each frame, and the vertical shell positions $z_i$ were calculated according to Eq.\,(\ref{eq:D_layerSpacing}).
The calculated positions $z_i$ of the crystal shells are shown by the red lines in Fig. \ref{eiffel}. In addition to the number of shells $S$ and the shell positions, also the times - and corresponding densities - where transitions from $S+1$ to $S$ occur are predicted correctly.
From the results shown in Figs. \ref{fourclouds} and \ref{eiffel}, the value of the Wigner-Seitz radius was determined as $a_{\text{ws}}=19.1\,\mu$m, corresponding to a density of $n=3.4\times 10^7\,$/cm$^{3}$, close to the minimum density of $3.1\times 10^7$/cm$^{3}$. The corresponding rotation frequency and aspect ratio are $\omega_r=2\pi\times 12.2$\,kHz and $\alpha \approx 1/24$.
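As a consistency sketch (not part of the original analysis), the quoted density can be recovered both from the Wigner-Seitz radius via $n=3/(4\pi a_{\text{ws}}^3)$ and from the standard rigid-rotor density relation of a magnetized single-species plasma, $n=2\epsilon_0 m\,\omega_r(\Omega_c-\omega_r)/q^2$; the latter relation and the physical constants used are assumptions not stated in the text.

```python
# Sketch (not part of the original analysis): the physical constants and
# the rigid-rotor density relation are assumptions not stated in the text.
import math

# Density from the measured Wigner-Seitz radius, n = 3 / (4 pi a_ws^3)
a_ws = 19.1e-6  # m
n_ws = 3 / (4 * math.pi * a_ws**3)  # m^-3

# Cross-check: rigid-rotor density of a magnetized single-species plasma,
# n = 2 eps0 m w_r (Omega_c - w_r) / q^2
eps0 = 8.854e-12     # F/m, vacuum permittivity
q = 1.602e-19        # C, elementary charge
m = 24 * 1.6605e-27  # kg, mass of 24Mg+
B = 4.1              # T, magnetic field
w_r = 2 * math.pi * 12.2e3  # rad/s, measured global rotation frequency
Omega_c = q * B / m         # free-space cyclotron frequency
n_rot = 2 * eps0 * m * w_r * (Omega_c - w_r) / q**2

print(n_ws * 1e-6, n_rot * 1e-6)  # both about 3.4e7 per cm^3
```

The two estimates agree to within about one percent.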
The total number of particles stored in the trap is $N= 4/3\, \pi z_0 r_0^2 \times n$, where $z_0$ and $r_0=z_0/\alpha$ denote the axial and the radial crystal radius, respectively. For the time $t=0$\,s one finds $z_0=150\,\mu$m and thus $r_0=3.6\,$mm, which gives a total particle number of $N\approx 3\times 10^5$. Assuming that the visible volume is determined by the laser beam with a waist of $w_0=1\,$mm, about $3\times 10^4$ Mg$^+$ ions are visible in the images.
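The particle-number estimate can be reproduced with a few lines (a sketch; the further estimate of about $3\times 10^4$ visible ions depends on the laser-beam geometry and is not reproduced here):

```python
# Sketch: reproduce the total particle number from the quoted geometry,
# treating the oblate crystal as a spheroid of volume (4/3) pi z0 r0^2.
import math

n = 3.4e7 * 1e6  # m^-3, density from the Wigner-Seitz radius
alpha = 1 / 24   # aspect ratio (axial to radial extension)
z0 = 150e-6      # m, axial crystal radius at t = 0 s
r0 = z0 / alpha  # m, radial crystal radius

N_total = 4 / 3 * math.pi * z0 * r0**2 * n
print(r0, N_total)  # r0 = 3.6 mm, N_total about 2.8e5, i.e. ~3e5
```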
\subsection{Two-Species Ion Crystals}
Two-species ion crystals \cite{new1,new2} were formed by sympathetic cooling of dark ions after their injection into a cloud of laser-cooled magnesium ions.
Two-species crystals composed of Mg$^+$ and ions with masses $m$=2\,u (H$^+_2$), $m$=12\,u (C$^+$), $m$=28\,u (N$^+_2$), and $m$=44\,u (CO$^+_2$) were studied, such that a mass-to-charge ratio range of 2 to 44 was covered.
The characteristics of the mixed-ion crystals can be classified with regard to the mass-to-charge ratio of the sympathetically cooled species. After injection of CO$^+_2$ into the trap, the axial extent of the ion cloud increased due to the larger total number of ions, since the density remained unchanged. Consequently, additional crystal shells were formed.
Fig. \ref{fig:m44_sympathetic} shows an example of the temporal fluorescence evolution, cloud images before and after loading of CO$_2^+$, and the evolution of the crystal structures. Dips in the fluorescence signal are produced by switching of the capture electrode during ion injection. Red crosses mark the injection processes, for which the parameters were intentionally chosen such that no CO$_2^+$ was captured into the trap.
After the dip, the fluorescence signal recovers to the value expected without switching of the capture electrodes.
The injection of CO$^+_2$ becomes apparent by an increase of the fluorescence signal and the number of crystal shells.
Since the CO$_2^+$ ions are not fluorescing, it is not possible to determine their distribution directly. However, one may assume that CO$_2^+$ and Mg$^+$ are radially separated, since upon loading of CO$_2^+$ ions, the overall fluorescence increases while the fluorescence per shell remains approximately constant. This indicates that the number of Mg$^+$ in the observed volume increased, because the CO$_2^+$ ions accumulate at larger, unobservable radii and push the Mg$^+$ ions to the trap center and thus into the laser beam. Also, the clearly observable shell structure suggests a temperature below 100\,mK, at which Mg$^+$ and CO$_2^+$ ions should undergo centrifugal separation, similar to the cases discussed in \cite{sep1,sep2}.
After injection of N$^+_2$ ions ($m$=28\,u), effects similar to the case of CO$^+_2$ were observed. For injected C$^+$ ions ($m$=12\,u), the crystal structure was conserved and the axial cloud extent increased due to the formation of additional crystal shells. However, unlike the cases of N$_2^+$ and CO$_2^+$, the injection of C$^+$ caused no increase of the total fluorescence. Since the cloud extent increased nonetheless, this means that the fluorescence per shell was reduced. The fluorescence reduction cannot be explained by significant heating since the crystal structure was preserved, which corroborates a radial separation of the ion species. In this case, the lighter C$^+$ ($m$=12\,u) ions accumulate in the trap center, whereas the Mg$^+$ ions are forced to larger radii from which they contribute less to the detected fluorescence.
\begin{figure*}
\includegraphics{figure14}
\caption{Fluorescence signal (bottom left) of a Mg$^+$ cloud during the consecutive injection of three bunches of CO$_2^+$ ions ($m$=44\,u) and four capture switching cycles without ion loading (red `X'). Three crystal images (right) show the crystal structure at different times (blue dots in fluorescence signal). The temporal evolution of the cross section for each video frame (top left) reveals the temporal evolution of the crystal structure.}
\label{fig:m44_sympathetic}
\end{figure*}
After injection of H$_2^+$ - the lightest ion species under investigation - a loss of fluorescence per volume was observed. Here, the fluorescence signal decreased to a small fraction of the initial value after the capture of H$_2^+$, whereas the axial cloud extent remained constant. In analogy to the discussion of the other ion species, this indicates a radial separation of H$_2^+$ and Mg$^+$ with the lighter hydrogen in the center of the trap. However, no crystalline shell structure could be observed in this two-species ion cloud, either because the signal-to-noise ratio was insufficient or because the ordered structure was actually lost. As a consequence, a fluorescence decrease due to an unidentified heating process cannot be entirely excluded in this case. Nevertheless, the assumption of centrifugal separation is justified since the mass difference between Mg$^+$ and H$^+_2$ is the largest of all ion species under investigation. Apparently, centrifugal separation is to be expected even for comparatively large temperatures.
Overall, these investigations prove that two-species ion crystals were formed over a large range of charge-to-mass ratios of the involved species with large numbers of dark ions. Although centrifugal separation is a possible limitation for spectroscopy of sympathetically cooled species, this shows that sympathetic cooling down to crystalline structures is possible also for externally produced ions at initially high energies. This concept can be extended to multi-species crystals with ions from different sources.
\section{Summary}
We have applied a combination of buffer gas cooling and laser cooling to externally produced Mg$^+$ ions captured and confined in a Penning trap. This technique has been found to reduce the ion kinetic energy by eight orders of magnitude within seconds, leading to the ions entering a crystalline state. We have observed the temporal evolution of the ion fluorescence that reflects the ion kinetic energy and find agreement with a model of combined buffer gas and laser cooling. We have studied the geometric properties of the resulting ion crystals and find agreement with the planar-shell model which applies to ion crystals of the present size, i.e. so-called `mesoscopic' ion crystals consisting of several thousands to several tens of thousands of ions.
When other ion species are captured and confined together with already stored and cooled Mg$^+$ ions, they are sympathetically cooled and together form two-species ion crystals, with properties depending on the combination of mass-to-charge ratios, in agreement with theory of centrifugal separation. The present findings demonstrate highly efficient cooling of ions in a Penning trap upon capture from external sources at medium to high transport energies, including sympathetic cooling of ion species for which no laser-cooling transition exists. This facilitates precision spectroscopy of confined ions from external sources as it allows efficient cooling and hence a suppression of the influence of the Doppler effect. When the initial buffer gas cooling is spatially or temporally separated from the laser cooling, this method is also suitable for sympathetic cooling of highly charged ions into a Doppler-free regime.
\section{Acknowledgement} We thank Hamamatsu for the loan of the CCD camera, and Zoran Andelkovic, Bernhard Maa\ss, Alexander Martin, Oliver Kaleja, Kristian K\"onig, J\"org Kr\"amer, Tim Ratajczyk and Rodolfo Sanchez for their support in the conduction of the experiment. We gratefully acknowledge the
support by the Federal Ministry of Education and Research
(BMBF, Contract No. 05P15RDFAA), the Helmholtz International Centre for FAIR
(HIC for FAIR) within the LOEWE program by the federal state
Hessen, the Deutsche Forschungsgemeinschaft (DFG contract BI 647/5-1) and
the Engineering and Physical Sciences Research Council
(EPSRC). S.S. and T.M. acknowledge support
from HGS–HIRe. The experiments have been performed
within the framework of the HITRAP facility at the Helmholtz
Center for Heavy Ion Research (GSI) at Darmstadt and the
Facility for Antiproton and Ion Research (FAIR) at Darmstadt.
\section{Introduction}\label{sec:introduction}
Energy management (or demand management) is a technique that changes the electricity usage patterns of end users in response to the changes in the price of electricity over time~\cite{Albadi-JEPSR:2008,Liu-STSP:2014}. With the advancement of distributed energy resources (DERs), the technique can also be used to assist the grid or other energy controllers such as a shared facility authority (SFA)~\cite{Tushar-ISGT:2014} to operate reliably and efficiently by supplying energy to them~\cite{Tushar-TSG:2014}. %
The majority of energy management literature focuses mainly on three different pricing schemes: time-of-use pricing; day-ahead pricing; and real-time pricing~\cite{Yi-TSG:2013}. Time-of-use pricing~\cite{Asano-TPS:1992} has three different pricing rates: peak, off-peak and shoulder rate based on the use of electricity at different times of the day. Day-ahead pricing~\cite{Torre-TPS:2002} is, in principle, determined by matching offers from generators to bids from energy users (EUs) so as to develop a classic supply and demand equilibrium price at an hourly interval. Finally, real-time pricing~\cite{Yi-TSG:2013} refers to tariffed retail charges for delivering electric power and energy that vary hour-to-hour, and are determined from wholesale market prices using an approved methodology. Other popular dynamic pricing schemes include critical peak pricing, extreme day pricing, and extreme day critical peak pricing~\cite{Yi-TSG:2013}. It is important to note that in all of the above mentioned pricing schemes all EUs are charged at the same rate at any particular time.
Due to government subsidies to encourage the use of renewables~\cite{Fischer-JEEM:2008}, more EUs with DERs are expected to be available in the smart grid. This will make it easier for an energy controller to meet its purchasing target, and thus save on its buying cost. In particular, for energy controllers such as an SFA that relies on the main grid as its primary source of energy~\cite{Tushar-ISGT:2014}, the opportunity for trading energy with EUs can greatly reduce their dependency, and consequently decrease their cost of energy purchase~\cite{Tham-JTSMCS:2013}. Nevertheless, not all EUs would be interested in trading energy with the energy controller if the benefit is not attractive~\cite{Naveed-Energies:2013}. This can happen, in particular, to EUs with only limited energy capacity, or to EUs that are highly sensitive to the inconvenience caused by the trading of energy whose expected return could be very small. In this case, the EUs would store the energy or change their consumption schedules rather than selling the energy to the energy controller~\cite{Tushar-TIE:2014}. However, one possible way to address this is to pay them a relatively higher price per unit of energy, compared to the EUs with very large DERs, without affecting their revenue significantly. In fact, allowing discriminate pricing not only considerably benefits EUs with lower energy capacity without significantly affecting others, as we will see shortly, but also benefits the SFA by reducing its total cost of energy purchase when adopting this flexible pricing.
\begin{table}
\caption {Numerical example of a discriminate pricing scheme where an SFA requires $40$ kWh of energy from two EUs and the SFA's total price per unit of energy to pay to the EUs is $40$ cents/kWh.}
\centering
\begin{tabular}{|c||c|c|}
\hline
& \textbf{Case 1} & \textbf{Case 2}\\
\hline
Payment to EU1 (cents/kWh) & $20$ & $18$\\
\hline
Payment to EU2 (cents/kWh)& $20$ & $22$\\
\hline
Energy supplied by EU1 (kWh) & $35$ & $32$\\
\hline
Energy supplied by EU2 (kWh) & $5$ & $8$\\
\hline
Revenue of EU 1 (cents) & $700$ & $576$ (-$17\%$)\\
\hline
Revenue of EU 2 (cents) & $100$ & $176$ (+$76\%$)\\
\hline\hline
\textbf{Cost to the SFA} (cents) & $800$ & $752$ (-$6\%$)\\
\hline
\end{tabular}
\label{table:motivation}
\end{table}
For instance, consider the numerical example given in Table \ref{table:motivation} where the SFA buys its required $40$ kWh energy from EU1 and EU2. EU1 has $50$ kWh and EU2 has $10$ kWh of energy to sell to the SFA. In case 1, the SFA pays the same price $20$ cents/kWh to each of them, and EU1 and EU2 sell $35$ and $5$ kWh respectively to the SFA. Hence, the revenues of EU1 and EU2 are $700$ and $100$ cents respectively, and the total cost to the SFA is $800$ cents. In case 2, the SFA uses discriminate pricing to motivate EU2 to sell more to the SFA. Therefore, it pays $22$ cents/kWh to EU2 and $18$ cents/kWh to EU1. Now, due to this increment of price EU2 increases its selling amount to $8$ kWh, and the SFA procures the remaining $32$ kWh from EU1. Therefore, the revenues change to $576$ and $176$ cents for EU1 and EU2 respectively, and the total cost to the SFA reduces to $752$ cents. Thus, from this particular example it can be argued that discriminate pricing can be considerably beneficial to EUs with small energy (revenue increment is $76\%$) at the expense of relatively lower revenue degradation (e.g., $17\%$ in the case of EU1) from EUs with larger DERs. It also reduces the cost to the SFA by $6\%$. Therefore, discriminate pricing is advantageous for reducing the SFA's cost and also for circumstances where the SFA motivates the participation of EUs with both large and small DERs in the energy trading. Hence, there is a need for investigation as to how this pricing scheme can be adopted in a smart grid environment.
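The arithmetic behind Table \ref{table:motivation} can be verified with a short script; note that the $-17\%$ quoted for EU1 corresponds to $-17.7\%$, and that the two unit prices in case 2 still sum to the SFA's total of $40$ cents/kWh.

```python
# Verify the revenues and the SFA cost for the two cases of the table.
cases = {
    "case1": {"price": (20, 20), "energy": (35, 5)},
    "case2": {"price": (18, 22), "energy": (32, 8)},
}


def settle(price, energy):
    """Per-EU revenues (cents) and total SFA cost for given unit prices."""
    revenues = tuple(p * e for p, e in zip(price, energy))
    return revenues, sum(revenues)


rev1, cost1 = settle(**cases["case1"])
rev2, cost2 = settle(**cases["case2"])

change_eu1 = 100 * (rev2[0] - rev1[0]) / rev1[0]
change_eu2 = 100 * (rev2[1] - rev1[1]) / rev1[1]
change_sfa = 100 * (cost2 - cost1) / cost1

print(rev1, cost1)  # (700, 100) 800
print(rev2, cost2)  # (576, 176) 752
print(change_eu1, change_eu2, change_sfa)  # about -17.7, 76.0, -6.0
```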
To this end, we take \emph{the first step} towards discussing the properties of a discriminate pricing scheme. The idea of discriminate pricing was first used to design a consumer-centric energy management scheme in \cite{Tushar-TSG:2014}. However, no insight was provided into the choice of different prices that are paid to different EUs. In this paper, we first propose a scheme by using a two-stage Stackelberg game. In the proposed scheme, the EUs with smaller energy generation can expect higher unit selling price, and the price is adaptive to their available energy for sale and their sensitivity to the inconvenience of energy exchange. At the same time, the scheme is designed to minimize the total purchasing cost to the energy controller whereas each EU also receives its best utility based on its available energy, its sensitivity to the inconvenience, and the offered price by the SFA. We prove the existence of a solution to the proposed game, and use a backward induction method to determine how the unit price set by the energy controller is affected by an EU's various parameters. We further derive a closed form expression for differing price generation considering some conditions on the energy controller's cost function. Finally, we present some numerical cases to show the properties of the proposed discriminate pricing scheme.
We stress that current grid systems do not allow such discriminate pricing among EUs. However, we envision it as a further addition to real-time pricing schemes in future smart grid. Examples of such differentiation can also be found in standard Feed-in-Tariff (FIT) schemes~\cite{2012-solarchoice}.
\section{System Description and Problem Formulation}\label{sec:system-model}
\begin{figure}[b!]
\centering
\includegraphics[width=\columnwidth]{inconvenience}
\caption{Example of how EUs are sensitive to the inconvenience caused by a change of price, which thus affects their amount of energy to trade with the SFA.} \label{fig:example}
\end{figure}
Consider a smart community consisting of a large number of EUs, an SFA and the main electric grid. Each EU can be a single user, or a group of users connected via an aggregator that acts as a single entity~\cite{Fang-J-CST:2012}. EUs are equipped with distributed energy resources (DERs) such as wind turbines and solar arrays. They can sell their energy, if there is any remaining after meeting their essential loads, to the SFA or to the main grid to make some extra revenue. Since the grid's buying price is in general quite low~\cite{McKenna-JIET:2013}, it is reasonable to assume that each EU would be more interested in selling its energy to the SFA instead of selling to the grid. Alternatively, an EU can store its energy or schedule its equipment instead of selling the energy to the SFA if the return benefit is not attractive, i.e., if the price is not convenient enough for the EU to trade its energy. We briefly explain this phenomenon by an example in Fig.~\ref{fig:example}.
In Fig.~\ref{fig:example}, we use the same example of Table~\ref{table:motivation} and show how sensitive an EU is to the inconvenience of trading energy caused by the change of price per unit of energy. As can be seen from the figure, EU1 has considerably lower essential load than EU2, and thus has a larger available energy to supply to the SFA. As a result, in case 1, EU1 supplies $35$ kWh of energy to the SFA whereas EU2 supplies $5$ kWh for the same per unit price of $20$ cents/kWh after using the energy for their other flexible loads. However, in case 2, the SFA adopts a discriminate pricing scheme and changes the per unit price to $18$ cents/kWh and $22$ cents/kWh to pay to EU1 and EU2 respectively. Due to the change of price, the expected return for EU2 becomes larger from trading its energy at the expense of the revenue degradation from EU1. Consequently, energy trading becomes more inconvenient for EU1 where as at the same time it becomes more appealing for EU2. As shown in Fig.~\ref{fig:example}, due to their sensitivities to the inconvenience caused by the change of price, EU1 reduces its amount of energy for selling to $32$ kWh (i.e., by increasing its use of the remaining available energy for other purposes such as storage) whereas EU2 increases its amount of energy for selling to $8$ kWh in case 2. In this paper, we quantify this sensitivity of each EU to the relative inconvenience through an \emph{inconvenience parameter\footnote{Where a higher and lower value of this parameter refers to the higher and lower sensitivity of an EU respectively to the inconvenience caused by energy trading.}}, as we will see shortly, and analyze its effects on the total cost to the SFA. 
The SFA refers to an energy controller\footnote{For the rest of this paper, we will use SFA to indicate an energy controller as discussed in Section~\ref{sec:introduction}.} that controls the electricity consumed by the equipment and machines that are shared and used by EUs on daily basis. The SFA does not have any energy generation capacity, and therefore depends on EUs and the main grid for its required energy. The SFA is connected to the main grid and all EUs via power and communication lines~\cite{Fang-J-CST:2012}.
To this end, let us assume that $N$ EUs in a set $\mathcal{N}$ are taking part in energy trading with the SFA. At a particular time of the day, the SFA's energy requirement is $E_r$, and each EU $i\in\mathcal{N}$ has an available energy of $E_i$ after meeting its essential load from which it can sell $e_i$ to the SFA. The main objective of each EU $i$ is to make some extra revenue by selling $e_i$ to the SFA at a price $c_i$ per unit of energy. However, the choice of $e_i$ is reasonably affected by the inconvenience parameter $\alpha_i$, which is a measure of sensitivity of EU $i$ to the inconvenience it faces to trade its energy. In this regard, we define a utility function $U_i$ for each EU $i$ that captures the effect of this inconvenience, and is assumed to possess the following properties:
\begin{enumerate}[i)]
\item The utility function is an increasing function of $e_i$ and $c_i$, and a decreasing function of inconvenience parameter $\alpha_i$. That is $\frac{\delta U_i}{\delta e_i}, \frac{\delta U_i}{\delta c_i}>0$, and $\frac{\delta U_i}{\delta \alpha_i}<0$. $\alpha_i$ captures the fact that the utility will decrease for an EU if its sensitivity to the inconvenience of trading energy increases.
\item The utility function is a concave function of $e_i$, i.e., $\frac{\delta^2U_i}{\delta {e_i}^2}<0$. Therefore, the utility can become saturated or even decrease with an excessive $e_i$. This can be interpreted by the fact that, since EUs with DERs are generally equipped with batteries of limited capacity, supplying energy beyond a certain limit risks depleting the battery and accelerating its aging, which consequently decreases the EU's utility.
\end{enumerate}
Formally, we define $U_i~\forall i$ as
\begin{eqnarray}
U_i = e_ic_i + (E_i-\alpha_i e_i)e_i.
\label{eqn:utility}
\end{eqnarray}
In \eqref{eqn:utility}, $e_ic_i$ is the direct income that EU $i$ receives from selling its energy to the SFA at a price $c_i$ per unit of energy. $(E_i-\alpha_i e_i)e_i$ captures the possible loss due to the EU's inconvenience sensitivity $\alpha_i>0$. Different values of $\alpha_i$ reflect different negative impacts of energy supply on an EU's utility, and an EU can set a higher $\alpha_i$ if it prefers to sell less. For example, the effects of $c_i$ and $\alpha_i$ on an EU's utility from its energy trading are shown in Fig.~\ref{fig:effect-utility}. Now, with the goal of maximizing utility, the objective of each EU can be expressed as
\begin{eqnarray}
\max_{e_i}\left[e_ic_i + (E_i - \alpha_ie_i)e_i\right].\label{eqn:obj-eu}
\end{eqnarray}
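As a quick numerical illustration of the two properties above, the following Python sketch evaluates the utility in \eqref{eqn:utility}; the parameter values used here are purely illustrative assumptions:

```python
def utility(e, c, E, alpha):
    """EU utility U_i = e*c + (E - alpha*e)*e from the model:
    direct income e*c plus the inconvenience-adjusted term."""
    return e * c + (E - alpha * e) * e

# Property i): for a fixed e > 0, the utility increases with the offered
# price c. Property ii): concavity in e; the finite second difference
#   U(e+h) - 2*U(e) + U(e-h) = -2*alpha*h**2
# is negative for every e and step h, so an excessive e can even drive
# the utility below zero, as described in the caption of the figure.
```

With, for instance, $E_i = 150$ and $\alpha_i = 2$, the utility at $e_i = 100$ is already negative, mirroring the behavior described in the caption of Fig.~\ref{fig:effect-utility}.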
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{utilityEU}
\caption{Effect of parameters such as $\alpha_i$ and $c_i$ on the achieved utility of an EU are shown in this figure. As can be seen in the figure, for the same $\alpha_i$ a higher $c_i$ encourages an EU to sell more to the SFA and thus the maximum utility shifts towards a higher value on the right. By contrast, a higher $\alpha_i$ causes more inconvenience to the EU which can lead the utility even to a negative value (i.e., cost) for greater energy trading.} \label{fig:effect-utility}
\end{figure}
However, as a buyer of energy, the SFA wants to minimize its total cost $J$ of energy purchase from EUs and the grid. In this paper, we consider the following cost function to capture the total cost to the SFA for buying its required energy from EUs and the grid:
\begin{eqnarray}
J = \sum_{i=1}^N \left(e_ic_i^k + a_ic_i + b_i\right)+c_g\left(E_r - \sum_i e_i\right),\label{eqn:cost}
\end{eqnarray}
such that
\begin{eqnarray}
\sum_i c_i \leq C,~c_\text{min}\leq c_i\leq c_\text{max},\label{eqn:const-sfa}
\end{eqnarray}
where $C$ is the total unit energy price~\cite{Tushar-TSG:2014}, and $c_\text{min}$ and $c_\text{max}$ are the lower and upper limits of the unit price that the SFA can pay to any EU~\cite{Tushar-TSG:2014}. In \eqref{eqn:cost}, $e_ic_i^k$ corresponds to the direct cost $e_ic_i$ that is weighted by $c_i^{k-1}$ to generate discriminate prices for EUs with different $\alpha_i$, and the term $(a_ic_i + b_i),~a_i,b_i>0$ accounts for other costs such as the transmission cost and the cost of storing purchased energy~\cite{Tushar-TSG:2014}. $c_g\left(E_r - \sum_i e_i\right)$ is the cost of purchasing energy from the grid. $C$ scales a set of normalized prices to generate the unit price $c_i$. It is fixed for a particular time and can be determined by the SFA using any real-time price estimator, e.g., the estimator proposed in~\cite{2008IEEE-JTPS_Yun}.
Now, the SFA's objective is to set a price $c_i$ per unit of energy for each EU $i$ that not only minimizes its total cost in \eqref{eqn:cost} but also pays a price to each EU according to their inconvenience parameters, and thus encourages them to take part in energy trading with the SFA. Therefore, the objective of the SFA can be defined as
\begin{eqnarray}
\min_{c_i}\left[\sum_{i=1}^N \left(e_ic_i^k + a_ic_i + b_i\right)+c_g\left(E_r - \sum_i e_i\right)\right],\label{eqn:obj-sfa}
\end{eqnarray}
such that \eqref{eqn:const-sfa} is satisfied.
We stress that \eqref{eqn:obj-eu} and \eqref{eqn:obj-sfa} are related via $e_i$ and $c_i$, and can be solved in a centralized fashion. However, considering that the nodes in the system are distributed, it is more advantageous to devise a solution approach that can be implemented in a distributed fashion according to the parameter setting within the system~\cite{Rad-JTSG:2010}. In this regard, we propose to use a game theoretic formulation. In \cite{Tushar-TSG:2014}, the effect of changing $C$ on the cost to a seller was investigated, and a distributed algorithm was proposed to design a consumer-centric smart grid by capturing this effect. In this paper, we focus on exploring the influence of different EUs' behavior on the choice of price by the SFA, and the resultant cost incurred to it. To that end, we propose a two-stage Stackelberg game in the next section.
\section{Two-Stage Stackelberg Game}\label{sec:stackelberg-game}
To determine energy trading parameters $e_i$ and $c_i$, on the one hand, each EU $i$ needs to decide on the amount of energy $e_i$ that it wants to sell to the SFA according to its inconvenience sensitivity and the offered price. On the other hand, based on the amount of energy offered by each EU and its inconvenience parameter, the SFA agrees on the price vector $\mathbf{c} = \left[c_1, c_2, \hdots, c_N\right]$ that it wants to pay to each EU such that the cost $J$ to the SFA is minimized. Thereupon, this sequential interaction can be modeled as a two-stage Stackelberg game~\cite{Bu-TEPC:2013}, which is formally defined as
\begin{eqnarray}
\Omega = \{(\mathcal{N}\cup\{\text{SFA}\}), \{\mathbf{E}_i\}_{i\in\mathcal{N}}, \{U_i\}_{i\in\mathcal{N}}, J, \mathbf{c}\}.\label{eqn:game-formal}
\end{eqnarray}
In \eqref{eqn:game-formal}, $(\mathcal{N}\cup\{\text{SFA}\})$ is the set of total players in the game where each EU $i\in\mathcal{N}$ is a follower, and $\{\text{SFA}\}$ is the leader. $\mathbf{E}_i$ is the strategy vector of each follower $i$ and $U_i$ is the utility that the follower $i$ receives from choosing its strategy $e_i\in\mathbf{E}_i$. $J$ is the cost incurred to the SFA for choosing the strategy vector $\mathbf{c}$.
As the leader of $\Omega$, the SFA chooses its strategy vector $\mathbf{c}$ in the first stage of the game such that its cost function in \eqref{eqn:cost} is minimized, and the constraints in \eqref{eqn:const-sfa} are satisfied. In the second stage of the game, each EU $i\in\mathcal{N}$ independently chooses $e_i$ in order to maximize its utility in \eqref{eqn:utility} in response to $c_i$ chosen by the SFA. Consequently, $\Omega$ reaches the equilibrium solution of the game.
\subsection{Solution Concept}
A general solution of a multi-stage Stackelberg game such as the proposed $\Omega$ is the sub-game perfect equilibrium (SPE)~\cite{Bu-TEPC:2013}. A common method to determine the SPE of a Stackelberg game is to adopt a backward induction technique that captures the sequential dependencies of decisions between stages of the game~\cite{Bu-TEPC:2013}. To that end, we first analyze how each EU would maximize its benefit by playing its best response to the price offered by the SFA in stage two. Then, we explore how the SFA decides on the different prices to pay to different EUs according to their offered energy and inconveniences. We note that, by construction of the game, $\Omega$ possesses an SPE if there exists a solution in both stages of the decision making process by the SFA and the EUs. In fact, the existence of a solution in pure strategies is not always guaranteed in a game~\cite{ChaiBo-TSG:2014}, and hence there is a need to investigate the existence of a solution in the proposed $\Omega$.
\begin{theorem}\label{thm:theorem-1}
A unique SPE exists for the proposed two-stage Stackelberg game $\Omega$ if $k=2$ in \eqref{eqn:cost}.
\end{theorem}
\begin{proof}
According to the backward induction technique, each EU $i\in\mathcal{N}$ decides on its energy trading parameter $e_i$ at the second stage of the game to maximize \eqref{eqn:obj-eu}. It is a strictly concave function of $e_i$ as $\frac{\delta^2U_i}{\delta e_i^2} = -2\alpha_i$ and $\alpha_i>0$. Hence, each EU's decision making problem has a unique solution. Furthermore, in the first stage of the game, the SFA optimizes the price $c_i$ to pay to each EU $i$. Now, we note that if $k=2$ in \eqref{eqn:cost}, which is a common choice of quadratic cost function for electricity utility companies and controllers~\cite{Tushar-TSG:2014,Rad-JTSG:2010}, the cost function \eqref{eqn:cost} is strictly convex with respect to $c_i$. Thus, for the amount of energy offered by each EU, the choice of a different price to pay to each $i$ also possesses a unique solution, which minimizes \eqref{eqn:obj-sfa}. Hence, the game $\Omega$ possesses a unique SPE, and thus Theorem~\ref{thm:theorem-1} is proved.
\end{proof}
\subsection{Analysis of Energy Trading Behavior}
In this section, we show how the energy trading behaviors of the SFA and EUs are affected by different decision making parameters such as the price set by the SFA, and the inconvenience that is caused to each EU for trading its energy. First, we consider the second stage of the game where each EU $i$ plays its best response to the price $c_i$ offered by the SFA. Since the utility function in \eqref{eqn:utility} is differentiable, we obtain the first order derivative $\frac{\delta U_i}{\delta e_i}$, and $U_i$ attains its maximum when $\frac{\delta U_i}{\delta e_i}=0$. Therefore, from \eqref{eqn:utility}, the best response function of EU $i$ to a given $c_i$ can be expressed as
\begin{eqnarray}
e_i^*(c_i) = \frac{c_i + E_i}{2\alpha_i},\label{eqn:relation-energy-price}
\end{eqnarray}
which leads to the following proposition:
\begin{proposition}
For an offered price $c_i$, the amount of energy $e_i$ that an EU $i$ is willing to sell, from its available energy $E_i$, to the SFA decreases with the increase of its sensitivity to inconvenience $\alpha_i$. In other words, to sell the same amount of energy, an EU with a higher inconvenience parameter $\alpha_i$ needs to be offered a higher price per unit of energy.
\end{proposition}
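The closed form \eqref{eqn:relation-energy-price} is easy to verify numerically. The sketch below compares it against a brute-force grid maximization of the utility; the parameter values are illustrative assumptions:

```python
def utility(e, c, E, alpha):
    # EU utility U_i = e*c + (E - alpha*e)*e
    return e * c + (E - alpha * e) * e

def best_response(c, E, alpha):
    # Closed form from dU_i/de_i = c + E - 2*alpha*e = 0
    return (c + E) / (2.0 * alpha)
```

A grid search over $e$ recovers the same maximizer, and increasing $\alpha$ shrinks the best response, consistent with the proposition.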
The SFA's cost, on the other hand, is determined by the price $c_i^*$ that it wants to pay to each EU $i$ for its offered energy $e_i^*$. Therefore, in the first stage of the game the SFA determines the price $c_i^*~\forall i$ having the knowledge of the energy vector $\mathbf{e} = [e_1^*, e_2^*, \hdots, e_N^*]$ offered by all EUs via \eqref{eqn:relation-energy-price}. Now, the Lagrangian for the SFA's optimization problem in \eqref{eqn:obj-sfa} is given by
\begin{eqnarray}
\Gamma = \sum_i \left(e_i^*c_i^{k}+a_ic_i+b_i\right)&+&c_g(E_r-\sum_i e_i^*)\nonumber\\ &+&\lambda(C-\sum_i c_i),
\label{eqn:lagrange-approach}
\end{eqnarray}
where $\lambda$ is the Lagrange multiplier and
\begin{eqnarray}
\frac{\delta\Gamma}{\delta c_i} = 0.\label{eqn:lagrange-diff}
\end{eqnarray}
In \eqref{eqn:lagrange-approach}, we only consider the case when $c_\text{min}\leq c_i\leq c_\text{max}$, i.e., the Lagrange multipliers associated with $c_\text{min}$ and $c_\text{max}$ are assumed to be zero\footnote{The conditions for $c_i^* = c_\text{min}$ and $c_i^* = c_\text{max}$ are considered at the solution of $c_i$ in \eqref{eqn:price-final}.}. Now, substituting $e_i^*$ from \eqref{eqn:relation-energy-price} into \eqref{eqn:lagrange-approach}, \eqref{eqn:lagrange-diff} can be expressed as
\begin{eqnarray}
\frac{k+1}{2\alpha_i}c_i^k + \frac{kE_i}{2\alpha_i}c_i^{k-1} + a_i - \frac{c_g}{2\alpha_i}-\lambda = 0.
\label{eqn:lagrange-diff-2}
\end{eqnarray}
Now, for the general case\footnote{We will consider $k=2$ for the rest of the paper.} $k = 2$,
\begin{eqnarray}
3c_i^2 + 2E_ic_i + 2\alpha_i(a_i-\lambda) - c_g = 0,
\label{eqn:lagrange-diff-3}
\end{eqnarray}
and consequently,
\begin{eqnarray}
c_i = \frac{-E_i + \left[E_i^2 - 3\left(2\alpha_i(a_i-\lambda)-c_g\right)\right]^{\frac{1}{2}}}{3}.\label{eqn:price-closed-form}
\end{eqnarray}
In \eqref{eqn:price-closed-form}, $\lambda$ and $a_i~\forall i$ are design parameters, and thus constant for a particular system. $\lambda$ needs to be chosen significantly higher than $a_i~\forall i$ such that $c_i$ always possesses a positive value. Note that we discard the other root of the quadratic in \eqref{eqn:price-closed-form} for the same reason.
From \eqref{eqn:price-closed-form}, we note that for the same generation and grid price, \emph{a higher price needs to be paid to an EU $i$ with higher inconvenience parameter $\alpha_i$ compared to an EU $j,~j\not= i$ with $\alpha_j<\alpha_i$ to encourage it to sell energy}. Nevertheless, in cases when $c_i>c_\text{max}$ and $c_i<c_\text{min}$, the SFA sets $c_i$ to the respective limits. Hence, the choice of price by the SFA to pay to each EU $i$ at the SPE can be expressed as
\begin{eqnarray}
c_i^* = \begin{cases}
c_\text{min}, & \text{$c_i<c_\text{min}$}\\
\frac{-E_i + \left[E_i^2 - 3\left(2\alpha_i(a_i-\lambda)-c_g\right)\right]^{\frac{1}{2}}}{3}, & \text{$c_\text{min}\leq c_i\leq c_\text{max}$}\\
c_\text{max}, & \text{$c_i>c_\text{max}$}
\end{cases}.
\label{eqn:price-final}
\end{eqnarray}
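A minimal implementation of the pricing rule \eqref{eqn:price-final} is sketched below. The value of $a_i$ is an assumption (the paper does not fix it), while $\lambda$, $c_g$ and the price limits are taken from the case study that follows:

```python
import math

def price(E_i, alpha_i, a_i, lam, c_g, c_min, c_max):
    """Interior root of 3c^2 + 2*E_i*c + 2*alpha_i*(a_i - lam) - c_g = 0,
    clipped to [c_min, c_max] as in the piecewise pricing rule."""
    disc = E_i ** 2 - 3.0 * (2.0 * alpha_i * (a_i - lam) - c_g)
    c = (-E_i + math.sqrt(disc)) / 3.0
    return min(max(c, c_min), c_max)
```

For $E_i=150$ and $\alpha_i\in\{1,2,3\}$ (with the assumed $a_i=1$, $\lambda=1000$, $c_g=50$) this yields prices of about $10$ (clipped at $c_\text{min}$), $12$ and $17.2$ cents/kWh respectively, so a higher inconvenience parameter indeed attracts a higher price.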
\section{Case Study}\label{sec:numerical-experiment}
\begin{figure}[b!]
\centering
\includegraphics[width=\columnwidth]{figure5}
\caption{Effect of available energy and inconvenience parameters on the price per unit of energy that the SFA selects to pay to each EU. The first term of each tuple on the horizontal axis refers to the inconvenience parameter $\alpha_i$ of an EU, and the second term indicates the available energy $E_i$.} \label{fig:figure-5}
\end{figure}
To show the properties of the proposed discriminate pricing scheme, we consider an example in which a number of EUs are interested in trading their energy with the SFA in the time slot of interest. We assume that the available energy of each EU, after meeting its essential load, is uniformly distributed within $[50,~250]$ kWh, and the energy required by the SFA for the considered time slot is $650$ kWh. The value of $\lambda$ is chosen to be $1000$. The grid's selling price is set to be $50$ cents/kWh, and $c_\text{max}$ and $c_\text{min}$ are assumed to be $38$ and $10$ cents/kWh\footnote{Price $c_\text{min}$ is marginally greater than the price of $8.45$ cents/kWh that a grid typically pays to buy energy from DERs~\cite{2012-solarchoice}.} respectively. These two values are chosen such that the SFA can pay each EU a price that is lower than the grid's selling price, and at the same time higher than the grid's buying price. This condition is necessary to motivate all EUs to trade their energy only with the SFA instead of the grid. Nonetheless, we highlight that all parameter values are chosen particularly for this case study only and that these values may vary between different case studies.
In Fig.~\ref{fig:figure-5}, we show how the price per unit of energy is decided by the SFA for each EU. According to \eqref{eqn:price-final}, for a particular grid price $c_g$, the unit price $c_i$ that the SFA pays to each EU $i$ depends on 1) EU's inconvenience parameter $\alpha_i$, and 2) the available energy $E_i$ to each EU. First, we consider five EUs with the same $E_i = 150$ kWh, but with different inconvenience in selling their energy to the SFA. We note that the SFA tends to pay more, within the constraint in \eqref{eqn:const-sfa}, to the EU with higher sensitivity to inconvenience. In fact, a higher inconvenience parameter refers to the state at which trading energy with the SFA is not a convenient option for an EU. Therefore, to encourage the EU to sell the energy the SFA needs to increase its unit price to pay. However, if $c_i$ becomes more than $c_\text{max}$, the SFA pays $c_\text{max}$ to the EU as shown in the case of the last EU with inconvenience parameter $\alpha_i=3$ in Fig.~\ref{fig:figure-5}.
By contrast, for the same sensitivity to inconvenience, the SFA pays a higher price to an EU with lower available energy and vice versa. In fact, a lower available energy could stop an EU from selling its energy to the SFA as doing so might not bring significant benefit to the EU at a lower price. Hence, to provide more incentive to the EU, the SFA needs to pay a relatively higher price per unit of energy. However, EUs with larger amounts of energy can still obtain higher utilities from trading a considerable amount of energy with the SFA even at a relatively lower price, as explained by the example in Table \ref{table:motivation}. Thus, the SFA pays a comparatively lower price to such EUs to minimize the cost of energy trading, such that the energy trading does not affect their utilities significantly\footnote{We note that the lowest price per unit of energy $c_\text{min}$ is assumed to be higher than the buying price of the grid. Therefore, any EU with a higher available energy would benefit more from trading with the SFA instead of trading with the grid.}.
After showing how the SFA sets the prices it pays to different EUs, we now show the effect of different behaviors of EUs in a group on the total cost to the SFA. First, we note that EUs' behaviors are dominated by their inconvenience parameters $\alpha$. For example, if an EU with a very large available energy does not want to sell its energy at the offered price, it can set its $\alpha$ high and thus take an insignificant part (or none at all) in energy trading. Hence, we can model different EU behaviors by simply changing their $\alpha_i~\forall i$. To this end, we assume a network with $10$ EUs that have the same available energy of $150$ kWh but different inconvenience parameters for selling their energy to the SFA. For this particular case, we consider a total unit energy price $C = 380$ cents/kWh so that even when all EUs are paid at $c_\text{max}$, the constraints in \eqref{eqn:const-sfa} are still satisfied, and the unit price for each EU remains lower than the grid's selling price. We compare the performance with an equal distribution scheme (EDS) such as in \cite{Wayes-J-TSG:2012}, where $C$ is equally divided to pay to each EU for buying its energy. That is, in EDS each EU is paid a price\footnote{For $N=10$, each EU is paid a price of $38$ cents/kWh.} $\frac{C}{N}$ per unit of energy, where $N$ is the total number of EUs in the network.
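To illustrate how such a comparison can be computed, the sketch below runs the whole pipeline (closed-form prices, best responses and the cost function with $k=2$) for a case-1-like population and compares it with EDS. The values $a_i=1$ and $b_i=0$ are assumptions, as the paper does not report them, so the resulting numbers do not reproduce the tabulated costs; only the qualitative comparison is meaningful:

```python
import math

def best_response(c, E, alpha):
    # EU best response e* = (c + E) / (2*alpha)
    return (c + E) / (2.0 * alpha)

def price(E, alpha, a, lam, c_g, c_min, c_max):
    # Interior root of 3c^2 + 2*E*c + 2*alpha*(a - lam) - c_g = 0, clipped
    disc = E ** 2 - 3.0 * (2.0 * alpha * (a - lam) - c_g)
    return min(max((-E + math.sqrt(disc)) / 3.0, c_min), c_max)

def total_cost(prices, E, alphas, a, b, c_g, E_r):
    # J = sum(e*c^2 + a*c + b) + c_g*(E_r - sum(e)), with k = 2
    es = [best_response(c, E, al) for c, al in zip(prices, alphas)]
    eu_cost = sum(e * c ** 2 + a * c + b for e, c in zip(es, prices))
    return eu_cost + c_g * (E_r - sum(es))

# Case-1-like population: 6 EUs with alpha=1, 2 with alpha=2, 2 with alpha=3
alphas = [1] * 6 + [2] * 2 + [3] * 2
E, E_r, lam, c_g = 150.0, 650.0, 1000.0, 50.0
c_min, c_max, a, b = 10.0, 38.0, 1.0, 0.0   # a, b are assumed values
proposed = [price(E, al, a, lam, c_g, c_min, c_max) for al in alphas]
eds = [c_max] * len(alphas)                  # EDS: everyone paid C/N = 38
J_prop = total_cost(proposed, E, alphas, a, b, c_g, E_r)
J_eds = total_cost(eds, E, alphas, a, b, c_g, E_r)
```

Under these assumed parameters the discriminate prices stay within $[c_\text{min}, c_\text{max}]$, rise with $\alpha_i$, and yield a lower total cost than EDS, in line with the trend reported in the case study.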
\begin{table}[h!]
\centering
\caption{Different behavioral cases of EUs in the network (a total of 10 EUs) -- where the number of EUs with a particular inconvenience parameter $\alpha_i \in \{1,2,3\}$ is specified.}
\begin{tabular}{|c|c|c|c|}
\hline
Cases & $\alpha_i = 1$ & $\alpha_i = 2$ & $\alpha_i = 3$\\
\hline
1 & 6 EUs & 2 EUs & 2 EUs\\
\hline
2 & 4 EUs & 3 EUs & 3 EUs\\
\hline
3 & 2 EUs & 4 EUs & 4 EUs\\
\hline
4 & 2 EUs & 2 EUs & 6 EUs\\
\hline
5 & 1 EU & 1 EU & 8 EUs\\
\hline
6 & 0 & 0 & 10 EUs \\
\hline
\end{tabular}
\label{table:1}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{Figurecomp1}
\caption{Effect of behavior of EUs on the total amount of energy that the SFA buys from each class of EUs.} \label{fig:figure-6}
\end{figure}
To that end, we categorize the behavior of EUs into six different cases based on the number of EUs with particular inconvenience parameters $\alpha_i$ in the group, as shown in Table~\ref{table:1}. Although we have chosen only three integer values $\alpha_i\in\{1, 2, 3\}$, other fractional values within this range are equally applicable to define different levels of sensitivity to inconvenience. Now, first we see from Fig.~\ref{fig:figure-6} that as the number of EUs with $\alpha_i = 1$ dominates the group, the SFA buys most of its energy from them. For example, in cases 1 and 2, the number of EUs with $\alpha_i=1$ is higher in the system and consequently, the SFA buys a significantly larger amount of energy from them in these two cases compared to the other cases, as shown in Fig.~\ref{fig:figure-6}. However, as their number reduces, the SFA needs to buy more energy from the other two types of EUs, based on their percentage of presence in the group, at relatively higher payments. In the extreme case, i.e., case 6, the SFA needs to buy all its energy from EUs with $\alpha_i = 3$ as there are no other types of EUs in the system. Consequently, this trend of energy trading affects the total cost to the SFA from buying its energy from EUs and the grid. We show these effects separately in Table \ref{table:2}.
\begin{table*}[t!]
\centering
\caption{Cost to the SFA in dollars for different EUs' behaviors (cases stated in Table \ref{table:1}).}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Different Costs & Case 1 & Case 2 & Case 3 & Case 4 & Case 5 & Case 6\\
\hline
Cost for buying from EUs with $\alpha_i = 1$ & 68.92 & 45.94 & 22.97 & 22.97 & 11.48 & 0\\
\hline
Cost for buying from EUs with $\alpha_i = 2$ & 23.6 & 35.4 & 47.2 & 23.6 & 11.8 & 0 \\
\hline
Cost for buying from EUs with $\alpha_i = 3$ & 24.36 & 38.4 & 48.72 & 73.08 & 97.44 & 121.8\\
\hline
Cost for buying from the grid & 53.91 & 98.16 & 142.42 & 155.18 & 186.88 & 218.58\\
\hline
Total cost for proposed scheme & 170.79 & 216.06 & 261.32 & 274.89 & 307.62 & 341\\
\hline
Total cost for EDS & 341 & 341 & 341 & 341 & 341 & 341\\
\hline
$\%$ reduction in total cost & $49.91\%$ & $36.63\%$ & $23.36\%$ & $19.38\%$ & $9.78\%$ & $0\%$\\
\hline
\end{tabular}
\label{table:2}
\end{table*}
From Table \ref{table:2}, first we note that the amount of energy that the SFA buys from the grid increases as the categories of EUs change from case 1 to case 6 in the system. This is due to the fact that as the number of EUs with higher sensitivity to inconvenience increases in the group, the total amount of power that the SFA can trade with EUs becomes lower. Hence, the SFA needs to procure the remainder of its required energy from the grid at a higher price. Secondly, buying energy from EUs with higher inconvenience parameters also increases the SFA's cost significantly, as the SFA needs to pay a higher price to them. For example, consider the different costs that are incurred to the SFA for buying energy from different types of EUs in case $1$. From Fig.~\ref{fig:figure-6}, we can see that the amount of energy that the SFA buys from EUs with $\alpha_i = 1$ is almost five times the amount it buys from EUs with $\alpha_i = 2$ and $3$. However, the resultant cost is only three times more than the cost of buying from EUs with higher sensitivity. Therefore, more EUs with lower sensitivity to inconvenience allow the SFA to procure more energy at a comparatively lower cost. As a result, the total cost incurred by the SFA increases significantly with an increase in the number of EUs with higher inconvenience parameters, as can be seen from Table \ref{table:2}.
We also compare the total cost that is incurred to the SFA with the case when the SFA adopts an EDS scheme for energy trading in Table \ref{table:2}. In an EDS scheme, the cost to the SFA remains the same for all types of EU groups as the cost does not depend on their categories. From Table \ref{table:2}, the proposed scheme shows considerable benefit for the SFA in terms of reduction in total cost when there is a relatively higher number of EUs with lower inconvenience parameters in the group. For example, as shown in Table~\ref{table:2}, the cost reduction for the SFA is $49.91\%$ and $36.63\%$ respectively for cases 1 and 2. According to the current case study, the average total cost reduction for the SFA is $23.18\%$ compared to the EDS. However, the cost increases with the increase in the number of EUs with high inconvenience parameters, and becomes the same as in the EDS scheme when all the EUs in the group become highly sensitive to the inconvenience of energy trading, i.e., $\alpha_i = 3,~\forall i$, as can be seen from Table \ref{table:2}.
\section{Conclusion}\label{sec:conclusion}
In this paper, a discriminate pricing scheme has been studied to counterbalance the inconvenience experienced by energy users (EUs) with distributed energy resources (DERs) in trading their energy with other entities in smart grids. A suitable cost function has been designed for a shared facility authority (SFA) that can effectively generate different prices per unit of energy to pay to each participating EU according to an inconvenience parameter for the EU. A two-stage Stackelberg game, which has been shown to have a unique sub-game perfect equilibrium, has been proposed to capture the energy trading between the SFA and different EUs. The properties of the scheme have been studied at the equilibrium by using a backward induction technique. A theoretical price function has been derived for the SFA to decide on the price that it wants to pay to each EU, and the properties of the scheme have been explained via numerical case studies. By comparing with an equal distribution scheme (EDS), it has been shown that discriminate pricing gives considerable benefit to the SFA in terms of reduction in total cost. One interesting future extension of the proposed scheme would be to design an algorithm that can capture the decision making process of the SFA and EUs in a distributed fashion. Also, finding a mathematical theorem that would explain the benefits to the SFA due to the discriminate pricing scheme is another possible extension of this work. Finally, the design of a scheme (i.e., game) with imperfect information about the inconvenience parameters also warrants future investigation.
\newcommand{\block}[1]{\subsubsection{} #1
\bigskip}
\newcommand{\blockn}[1]{\par #1 \bigskip}
\newcommand{\blockp}[1]{\par #1}
\newcommand{\Th}[1]
{
\bigskip
\textbf{Theorem : }{\itshape #1}
\bigskip
}
\newcommand{\Prop}[1]
{
\bigskip
\textbf{Proposition : }{\itshape #1}
\bigskip
}
\newcommand{\Cor}[1]
{
\bigskip
\textbf{Corollary : }{\itshape #1}
\bigskip
}
\newcommand{\Lem}[1]
{
\bigskip
\textbf{Lemma : }{\itshape #1}
\bigskip
}
\newcommand{\Def}[1]
{
\bigskip
\textbf{Definition : }{\itshape #1}
\bigskip
}
\newcommand{\Dem}[1]{
\smallskip
\textbf{Proof : } \par
{#1} $\square$
\bigskip
}
\hyphenation{Gro-then-dieck}
\begin{document}
\pagestyle{plain}
\title{The convolution algebra of an absolutely locally compact topos}
\author{Simon Henry}
\maketitle
\begin{abstract}
We introduce a class of toposes called ``absolutely locally compact'' toposes and of ``admissible'' sheaves of rings over such toposes. To any such ringed topos $(\mathcal{T},A)$ we attach an involutive convolution algebra $\mathcal{C}_c(\mathcal{T},A)$ which is well defined up to Morita equivalence and characterized by the fact that the category of non-degenerate modules over $\mathcal{C}_c(\mathcal{T},A)$ is equivalent to the category of sheaves of $A$-modules over $\mathcal{T}$. In the case where $A$ is the sheaf of real or complex Dedekind numbers, we construct several norms on this involutive algebra that allow us to complete it into various Banach and $C^*$-algebras: $L^1(\Tcal,A)$, $C^*_{red}(\Tcal,A)$ and $C^*_{max}(\Tcal,A)$. We also give some examples where this construction corresponds to well known constructions of involutive algebras, like groupoid convolution algebras and Leavitt path algebras.
\end{abstract}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext{\emph{Keywords.} topos, convolution algebra, $C^*$-algebra}
\footnotetext{\emph{2010 Mathematics Subject Classification.} 18B25, 03G30, 46L05}
\footnotetext{\emph{email:} [email protected]}
\renewcommand{\thefootnote}{\arabic{footnote}}
\tableofcontents
\section{Introduction}
\blockn{Both topos theory and non-commutative geometry are concerned with the study of certain ``generalized topological spaces'' that are too singular to be treated by ordinary topology, like the space of leaves of a foliation or the space of orbits of a group action or of a groupoid. Indeed, $C^*$-algebras are supposed to describe some sort of generalized locally compact topological spaces, while toposes are also generalized topological spaces. Moreover, many of the tools available for ordinary topological spaces can be extended to one of these generalized contexts; for example, cohomology extends very naturally to toposes, while measure theory and integration extend very naturally to $C^*$-algebras. There are also many examples of objects to which one can attach both a $C^*$-algebra and a topos, for example foliations, dynamical systems and topological groupoids as mentioned above, but also graphs (and various generalizations of graphs).}
\blockn{In this paper we will present a construction that attaches to a topos satisfying certain conditions of local compactness and local separation an involutive convolution algebra (similar to the convolution algebra of compactly supported functions on a groupoid) that can then be completed into a reduced or a maximal $C^*$-algebra. In a large number of classical examples of geometric objects to which one can attach both a topos and a (reduced or maximal) $C^*$-algebra, the $C^*$-algebra can be recovered as the one attached to the topos by this construction.}
\blockn{More precisely, we start with a topos $\Tcal$ which is ``absolutely locally compact'' (see definition \ref{def_absloccpt}) endowed with an ``admissible'' (see definition \ref{DefAdmissibleSR}) sheaf of rings $A$. Then to any object of $\Tcal$ which is a bound and ``$A$-separating'' (see \ref{DefAdmissibleSR}) one can associate a (non-unital) involutive algebra $\Ccal_c(\Tcal,A)$ which satisfies the following properties: the category of sheaves of $A$-modules over $\Tcal$ is equivalent to the category of non-degenerate $\Ccal_c(\Tcal,A)$-modules, and through that equivalence, the forgetful functor from $\Ccal_c(\Tcal,A)$-modules to abelian groups corresponds to the functor of compactly supported sections on $X$, introduced in \ref{Prop_gammac_colimit}, from sheaves of $A$-modules to abelian groups, and the free $A$-module generated by $X$ corresponds to $\Ccal_c(\Tcal,A)$ seen as a module over itself.
}
\blockn{The usual case of groupoid algebras corresponds to the case where $A$ is the ring of Dedekind real or complex numbers of the topos (in the sense of \cite[D4.7]{sketches}). Assuming the axiom of dependent choice, this sheaf of rings is always admissible as soon as the topos is absolutely locally compact. Without the axiom of choice one needs an additional assumption of complete regularity. But other interesting cases can be obtained for other rings; for example, Leavitt path algebras correspond to the case of a graph topos with a constant sheaf of rings (see \ref{example_Graph}).}
\blockn{Finally, as we work in a constructive context (in order to obtain a relative version of all the results), it may happen that a ringed topos satisfying all the assumptions does not have a single $A$-separating bound but only a family of $A$-separating objects whose co-product is a bound. Classically a co-product of $A$-separating objects is $A$-separating, so one can always take the co-product of the family as an $A$-separating bound, but constructively this is only the case when the indexing set of the co-product is decidable. To accommodate this situation, the construction of $\Ccal_c(\Tcal,A)$ is performed in the more general case of a family of $A$-separating objects instead of a single $A$-separating bound, and produces a $\mathbb{Z}$-enriched pseudo-category instead of an algebra. }
\blockn{In order to obtain these results we develop, under the assumptions above, a notion of ``compactly supported section'' of a sheaf of $A$-modules $M$ over an object $X$ of the topos. When the object is ``$A$-separating'' it corresponds to the subset of sections of $M$ over $X$ which are compactly supported on $X$, but for a general $X$ compactly supported sections on $X$ are not necessarily a special case of sections: they are defined by the fact that, for a fixed $M$, the compactly supported sections of $M$ on objects of $\Tcal$ form a cosheaf, with summation along the fibers as the functoriality.}
\blockn{Section \ref{Sec_localcompactness} contains some preliminaries on locally compact Hausdorff (regular) locales and compactly supported sections (in the usual sense) over these. Section \ref{Sec_topoProp} contains preliminaries on the ``topological'' assumptions (separation, quasi-decidability, absolute local compactness, etc.) that we will need on toposes, and some of their consequences. Section \ref{Sec_Main} is the heart of the article: it contains the more general definition of compactly supported sections and all the main results of the paper. Finally, section \ref{Sec_examples} contains various examples of toposes to which this construction applies and which give back certain classical algebras.}
\blockn{We finish this introduction with some general mathematical preliminaries. This paper is written in the framework of constructive mathematics; we allow unbounded quantification but we do not use the unbounded replacement axiom, so it corresponds for example to the internal logic of an elementary topos as extended in \cite{shulman2010stack}. Objects of this base topos $\Scal$ are called sets, and in the rest of the article by ``topos'' we mean a bounded topos over $\Scal$, i.e. a ``Grothendieck topos'' internally in $\Scal$, in the sense of a category of $\Scal$-valued sheaves over an internal site. If $\Tcal$ is a topos, a $\Tcal$-topos is a Grothendieck topos in the internal logic of $\Tcal$, in the sense that it is described by an internal site. The $2$-category of $\Tcal$-toposes is naturally equivalent to the $2$-category of toposes bounded over $\Tcal$.}
\blockn{If $\Ccal$ is a category, $|\Ccal|$ is the class of objects of $\Ccal$. By pseudo-category we mean a category possibly without identity elements; all the pseudo-categories we will consider are enriched in abelian groups, hence they are the several-object generalization of non-unital non-commutative rings. }
\blockn{If $\Ccal$ is a site, the category of cosheaves of sets (or of abelian groups) is the opposite of the category of sheaves with values in the opposite of the category of sets (or of abelian groups). If two sites $\Ccal$ and $\Ccal'$ define the same topos $\Tcal$ then the categories of cosheaves over $\Ccal$ and $\Ccal'$ are equivalent; this is what we call the category of cosheaves (of sets or of abelian groups) over $\Tcal$. }
\blockn{A generating family of a topos $\Tcal$ is a family of objects $X_i$ such that, equivalently, every object can be covered by the $X_i$, or the $X_i$ form a site of definition of the topos for the induced topology. A bound of a topos is an object whose sub-objects form a generating family.}
\section{Locally compact locales and compactly supported functions}
\label{Sec_localcompactness}
\blockn{Locales are the objects of study of point-free topology and are very close to topological spaces. A locale $X$ is defined formally by the data of a poset $\Ocal(X)$ thought of as the poset of open subspaces of $X$. More precisely, $\Ocal(X)$ must be a frame, or equivalently a complete Heyting algebra, i.e. it has all suprema (called unions and denoted $\cup$) and all infima (called intersections and denoted $\cap$), and binary infima distribute over arbitrary suprema:
\[ a \cap \bigcup_{i \in I} b_i = \bigcup_{i \in I} \left( a \cap b_i \right) \]
In particular $\Ocal(X)$ has a minimal element denoted $\emptyset$ and a maximal element denoted $X$.
A morphism of locales (also called a continuous map) $f :X \rightarrow Y$ is given by the data of an order-preserving map $f^{-1}: \Ocal(Y) \rightarrow \Ocal(X)$ preserving finite infima and arbitrary suprema. As the notation suggests, $f^{-1}$ should be thought of as the pre-image function acting on open subspaces. Every topological space $X$ defines a locale whose corresponding frame is $\Ocal(X)$, any continuous map between topological spaces defines a map between the corresponding locales, and this identifies the full subcategory of sober\footnote{A topological space in which every irreducible closed subset has a unique generic point. Every Hausdorff or locally Hausdorff topological space is sober, and the underlying space of a scheme is always sober.} topological spaces with a full subcategory of the category of locales, called spatial locales. For a detailed introduction to locale theory with the precise connection to topological spaces we refer the reader to \cite{borceux3} or \cite{picado2012frames}; we will use \cite{sketches} for specific results.
Any locale defines a topos of sheaves (indeed one only needs the poset of open subspaces and the notion of open coverings to define the topos of sheaves), and this construction actually identifies the category of locales with a full subcategory of the category of toposes, called the category of localic toposes (see \cite[A4.6]{sketches}). We will not distinguish between locales and localic toposes in the present paper.
}
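\blockn{As a standard illustrative example (recalled here only for concreteness, and not needed in the sequel), the Sierpi\'nski locale $\mathbb{S}$ is the locale whose frame is the three-element chain $\Ocal(\mathbb{S}) = \{ \emptyset \leqslant \{1\} \leqslant \mathbb{S} \}$, i.e. the frame of open subsets of the two-point space in which $\{1\}$ is open but $\{0\}$ is not. A frame homomorphism $\Ocal(\mathbb{S}) \rightarrow \Ocal(X)$ must send $\emptyset$ to $\emptyset$ and $\mathbb{S}$ to $X$, and is freely determined by the image of the middle element, so that:
\[ Hom_{\text{Loc}}(X, \mathbb{S}) \simeq \Ocal(X) \]
i.e. the Sierpi\'nski locale classifies open subspaces, exactly as the Sierpi\'nski space does for topological spaces.}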
\block{Locales are extremely important in topos theory, first because of the above observation, but also because for any object $X$ of a topos $\Tcal$ the poset of sub-objects of $X$ forms a complete Heyting algebra, hence defines a locale which we will call the underlying locale of $X$, denoted\footnote{It can also be seen as the localic reflection of the slice topos $\Tcal_{/X}$, hence the notation $\text{Loc}(\Tcal_{/X})$.} by $\text{Loc}(X)$ or $\text{Loc}(\Tcal_{/X})$. Any morphism $f: X \rightarrow Y $ defines naturally a continuous map $\text{Loc}(f) : \text{Loc}(X) \rightarrow \text{Loc}(Y)$ (with $f^{-1}$ being simply the pullback of sub-objects along $f$).
}
\blockn{An example where the underlying locale of an object $X$ of $\Tcal$ appears naturally is in the description of the object $\mathbb{R}_{\Tcal}$ of Dedekind real numbers of $\Tcal$. This object can be described using internal logic (see \cite[D4.7]{sketches}) but it can alternatively be described by the following universal property:
There is an isomorphism functorial in $X$:
\[ Hom_{\Tcal}(X,\mathbb{R}_{\Tcal}) \simeq Hom_{\text{Loc}}(\text{Loc}(X), \mathbb{R}) \]
where $\mathbb{R}$ is the locale of real numbers\footnote{Without the law of excluded middle this can be different from the topological space of Dedekind real numbers, see \cite[D4.7]{sketches}, especially lemma $4.7.4$ and the observations after its proof.}.
Similarly, one has:
\[ Hom_{\Tcal}(X,\mathbb{C}_{\Tcal}) \simeq Hom_{\text{Loc}}(\text{Loc}(X), \mathbb{C}) \]
with $\mathbb{C}_{\Tcal}$ the object of Dedekind complex numbers and $\mathbb{C}$ the locale of complex numbers.
}
\block{Let $U,V \in \Ocal(X)$ be two open subspaces of a locale $X$. One says that $U$ is well below $V$, and writes $U \ll V$, if for every directed set of open subspaces $(U_i)_{i \in I}$ such that $\bigcup_{i \in I} U_i =V$ there exists $i \in I$ such that $U \leqslant U_i$. Equivalently, for every family $(U_i)_{i \in I}$ of open subspaces of $X$ such that $\bigcup U_i = V$ one has:
\[ U \leqslant \bigcup_{j=1}^n U_{i_j} \]
for some finite family of indices $i_1,\dots, i_n$.
A locale $X$ is said to be locally compact if
\[ \forall V \in \Ocal(X), V = \bigcup_{U \ll V} U \]
A locale $X$ is said to be compact if $X \ll X$. This corresponds exactly to the usual finite sub-covering property.
}
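\blockn{For illustration, in the locale attached to the topological space $\mathbb{R}$ (a standard example, not used later), $U \ll V$ holds exactly when the closure of $U$ is compact and included in $V$; for instance:
\[ (0,1) \ll (-1,2), \qquad (0,1) \not\ll (0,1) \]
the second because $(0,1)$ is the union of the directed family of intervals $(1/n, 1-1/n)$ and no member of this family contains $(0,1)$. In particular $\mathbb{R}$ is locally compact but not compact.}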
\block{In a locally compact locale the relation $\ll$ interpolates, i.e. if $U \ll V$ then there exists $W$ such that $U \ll W \ll V$. Indeed, $V$ is the union of the $W \ll V$, and each such $W$ is the union of the $W' \ll W$, hence $V$ is the union of the $W'$ such that there exists $W$ with $W' \ll W \ll V$. Such $W'$ are stable under finite unions, hence if $U \ll V$ then there is one of these $W'$ such that $U \leqslant W' \ll W \ll V$, which proves our claim.}
\block{One says that $U$ is rather below $V$, and writes $U \triangleleft V$, if there exists $W \in \Ocal(X)$ such that $W \cup V= X$ and $W \cap U = \emptyset$. This is equivalent to the fact that the closure\footnote{The smallest closed sub-locale of $X$ containing $U$.} of $U$ is included in $V$: indeed the closed complement of $W$ is a closed sublocale of $X$ containing $U$ and included in $V$. One says that $X$ is regular if:
\[ \forall V \in \Ocal(X), V = \bigcup_{U \triangleleft V} U \]
It is a separation property essentially corresponding to the notion of regular Hausdorff (or $T_3$) space.
When a locale $X$ is both locally compact and regular it is not very hard to see that:
\[ U \ll V \Leftrightarrow \left( U \triangleleft V \text{ and } U \ll X \right) \]
in more classical terms, $U \ll V$ means that the closure of $U$ is compact and included in $V$, $U \triangleleft V$ means that the closure of $U$ is included in $V$, and $U \ll X$ means that the closure of $U$ in $X$ is compact. These characterizations in terms of closures only work because $X$ is regular.
Moreover, for locally compact locales, being regular is equivalent to being Hausdorff (in the sense of having a closed diagonal); this is proved in \cite[II.4.8]{moerdijk2000proper}.
}
\block{\Def{A sheaf $\Fcal$ over a locally compact regular locale $X$ will be called \emph{c-soft} if for every $U \ll V$ and for every $s \in \Fcal(V)$ there exists $\widetilde{s} \in \Fcal(X)$ such that:
\[ s|_U = \widetilde{s}|_U \]
}
i.e. when $U \ll V$, any section on $U$ which can be extended to $V$ can be extended to $X$.
This corresponds classically to the property that any section defined on a compact subspace can be extended into a global section (because any section defined on a compact subspace is automatically defined on a neighborhood of it).
}
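\blockn{A standard example (which we only sketch for illustration): on a locally compact Hausdorff topological space $X$, the sheaf $\Fcal$ of continuous real-valued functions is c-soft. Indeed, if $U \ll V$ and $s \in \Fcal(V)$, choose by Urysohn's lemma a continuous $\chi : X \rightarrow [0,1]$ equal to $1$ on the compact closure of $U$ and with support contained in $V$; then
\[ \widetilde{s} = \chi \cdot s \quad (\text{extended by } 0 \text{ outside } V) \]
is a global continuous function with $\widetilde{s}|_U = s|_U$, as required by the definition.}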
\block{\label{LemmeSoftExtensionInf}One says that $U \subset X$ is a neighborhood of infinity if there exists $V$ such that $V \ll X$ and $V \cup U =X$. These correspond to the actual neighborhoods of infinity in the one-point compactification of $X$ (a presentation of one-point compactification for locales is given in \cite{henry2014nonunital}).
\Lem{Let $X$ be a locally compact regular locale, $\Fcal$ a c-soft sheaf over $X$.
Let $V$ be a neighborhood of infinity and $U$ such that $U \triangleleft V$; then if $s \in \Fcal(V)$ there exists $\widetilde{s} \in \Fcal(X)$ such that:
\[ s|_U = \widetilde{s}|_U \]
}
\Dem{Assume first that $U$ is a neighborhood of infinity as well, and let $W \ll X$ be such that $U \cup W =X$.
As $U \triangleleft V$ one has $U \cap W \triangleleft V$, and as $W \ll X$ one has $U \cap W \ll X$, hence $U \cap W \ll V$. Hence, as $\Fcal$ is c-soft, there exists a section $s' \in \Fcal(X)$ such that $s'|_{W \cap U} = s|_{W \cap U}$. Using the sheaf condition one can then define $\widetilde{s} \in \Fcal(X)=\Fcal(U \cup W)$ to be $s'$ on $W$ and $s$ on $U$, because the two definitions agree on $W \cap U$; this proves the lemma in the special case.
Now in the general case, as $V$ is a neighborhood of infinity, there exists $W$ such that $W \ll X$ and $W \cup V =X$. By interpolation one can find $W \ll W' \ll X$, and as $W \ll W'$ there is a $U'$ such that $W \cap U' = \emptyset$ and $U' \cup W' = X$. Hence $U'$ is a neighborhood of infinity because $U' \cup W' =X$, and $U' \triangleleft V$ because $U' \cap W = \emptyset$ and $V \cup W = X$. One can then apply the previous special case to $U' \cup U \triangleleft V$ to obtain a section $\widetilde{s}$ which agrees with $s$ on $U' \cup U$, hence in particular on $U$.
}
}
\block{We will now discuss ``compactly supported sections''. This makes sense as soon as we are considering a sheaf $\Fcal$ which has a specific ``zero'' section marked, typically when $\Fcal$ is a sheaf of groups (that is the only case that we will consider in the present paper).
Hence let $X$ be a locally compact regular locale and $\Fcal$ be a sheaf on $X$ with a special section $0 \in \Fcal(X)$. We will say that:
\begin{itemize}
\item A section $s \in \Fcal(X)$ has support in $V \in \Ocal(X)$ if there exists $W$ such that $V \cup W =X$ and $s|_W = 0$.
\item That a section $s \in \Fcal(X)$ has compact support if it has support in $V$ for $V \ll X$.
\item That the $(U_i)_{i \in I}$ form a covering of the support of $s$ if $s$ has support in $\bigcup U_i$.
\end{itemize}
In fact one can define the support of a section $s$ as the closed complement of the open sublocale defined by ``$s=0$'', which gives the same notions as above.
One can immediately note that saying that $s$ has compact support is exactly the same as saying that $s$ is zero on some neighborhood of infinity. Also:
\Lem{ If $s$ is a compactly supported section with support in $V \in \Ocal(X)$ then there exists $V' \ll V$ such that $s$ has support in $V'$.}
\Dem{Indeed, let $W$ be a neighborhood of infinity (in the sense of \ref{LemmeSoftExtensionInf}) such that $s|_W=0$ and $W \cup V=X$. As $V = \bigcup_{V' \ll V} V'$, one has $W \cup \bigcup_{V' \ll V} V' =X$. Let $U$ be an open subspace such that $U \cup W = X$ and $U \ll X$ (such a $U$ exists because $W$ is a neighborhood of infinity); as $U \ll X$ and the $V' \ll V$ are stable under finite unions, there exists a $V' \ll V$ such that $U \subset V' \cup W$. In particular $V' \cup W = X$, hence $s$ has support in $V'$ with $V' \ll V$.}
}
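\blockn{For illustration (our example, with $X$ a locally compact Hausdorff topological space and $\Fcal$ the sheaf of continuous real-valued functions with its zero section), these definitions recover the usual space of compactly supported continuous functions:
\[ \{ s \in \Fcal(X) \mid s \text{ has compact support} \} \; = \; C_c(X) \]
indeed $s$ has support in $V$ exactly when the closure of $\{x \mid s(x) \neq 0\}$ is included in $V$, and $V \ll X$ means that $V$ has compact closure, so the compactly supported sections are exactly the continuous functions vanishing outside a compact set.}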
\block{\label{Lem_cptSup_extension}\Lem{Let $X$ be a regular locally compact locale, $\Fcal$ a c-soft sheaf of groups on $X$. Let $U,V$ be two open subspaces of $X$ such that $U \ll V$ and let $s \in \Fcal(V)$. Then:
There exists a section $s' \in \Fcal(X)$ such that $s'|_U = s|_U$ and $s'$ has support in $V$. }
Using the exact same strategy as in lemma \ref{LemmeSoftExtensionInf}, this can also be extended to the case where $U \triangleleft V$ and $V$ is a neighborhood of infinity, but as we will be mostly interested in compactly supported sections in the rest of the paper we will not need this extension.
\Dem{Let $Y$ and $Z$ be such that $U \ll Y \ll Z \ll V$, and let $D$ and $D'$ be such that $Y \cap D = Z \cap D'=\emptyset$ and $Z \cup D = V \cup D'=X$, which exist because $Y \triangleleft Z$ and $Z \triangleleft V$. In particular, as $D' \cap Z= \emptyset$ and $Z \cup D = X$, one has $D' \triangleleft D$.
One can then take $s_1 \in \Fcal( Y \cup D ) = \Fcal(Y) \times \Fcal(D)$ to be equal to the restriction of $s$ in $\Fcal(Y)$ and to $0$ on $D$. Moreover $U \cup D' \triangleleft Y \cup D$, and $Y \cup D$ is a neighborhood of infinity because $D \cup Z =X$, hence by lemma \ref{LemmeSoftExtensionInf} there exists a section $s' \in \Fcal(X)$ which is equal to $s_1$ on $U \cup D'$, hence equal to $s$ on $U$ and to $0$ on $D'$. As $V \cup D' =X$ and $s'$ is $0$ on $D'$, the section $s'$ has support in $V$, which concludes the proof.
}
}
\block{\label{Lem_partition}\Lem{Let $X$ be a locally compact regular locale and $\Fcal$ a c-soft sheaf of abelian groups over $X$. Let $s$ be a compactly supported section of $\Fcal$, and let $(U_i)$ be a covering of the support of $s$.
Then there exists a finite sub-family $U_1,\dots, U_n$ of the $U_i$ covering the support of $s$ and a decomposition:
\[ s = \sum_{i=1}^n s_i \]
such that for all $i$, $s_i$ has support in $U_i$.
}
\Dem{The fact that $s$ has compact support immediately gives the finite sub-cover. We then proceed by induction on the cardinality of the covering. If $n=0$ or $n=1$ the result is obvious. Otherwise, let $W$ be an open subspace containing the support of $s$ with $W \ll \bigcup U_i$; one can then find $V_n$ such that $V_n \ll U_n$ and $W \leqslant V_n \cup \bigcup_{i=1}^{n-1} U_i$. Using the above lemma one can find a global section $s_n$ which agrees with $s$ on $V_n$ and has support in $U_n$. Then $s-s_n$ has support included in $U_1 \cup \dots \cup U_{n-1}$, hence by induction it admits a decomposition $s-s_n=s_1 + \dots + s_{n-1}$ with each $s_i$ having support in $U_i$. Writing $s=s_1+ \dots +s_n$ concludes the proof.
}
}
\section{Absolutely separated and absolutely locally compact toposes}
\label{Sec_topoProp}
\block{An object $X$ in a topos $\Tcal$ is said to be \emph{decidable} if internally in $\Tcal$ one has: for all $x,y \in X$, $x=y$ or $x \neq y$. Externally it means that $X \times X$ can be decomposed into a disjoint sum of its diagonal and another subobject, called the co-diagonal of $X$.
If $f:X \rightarrow Y$ is a map in a topos, $X$ is said to be \emph{relatively decidable} over $Y$ if $X$ is decidable as an object of $\Tcal_{/Y}$, or equivalently if $X \times_Y X$ can be decomposed into a disjoint sum of its diagonal sub-object and another sub-object.
}
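\blockn{For example (a direct verification, included only for illustration): for any decidable set $S$, the constant sheaf $\Delta_S$ over a locale is a decidable object, since the inverse image functor $\Delta$ preserves finite limits and disjoint sums, so that:
\[ \Delta_S \times \Delta_S \simeq \Delta_{S \times S} \simeq \Delta_{\{(x,x) \mid x \in S\}} \sqcup \Delta_{\{(x,y) \mid x \neq y\}} \]
which is exactly the required decomposition into the diagonal and the co-diagonal.}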
\block{\Def{ \begin{itemize}
\item An object $X$ in a topos is said to be \emph{quasi-decidable} if it admits a covering by a family of decidable objects.
\item A topos $\Tcal$ is said to be quasi-decidable if all its objects are quasi-decidable, i.e. if the decidable objects form a generating family.
\item A geometric morphism $f: \Ecal \rightarrow \Tcal$ is said to be quasi-decidable if internally in $\Tcal$, the $\Tcal$-topos corresponding to $\Ecal$ is quasi-decidable.
\end{itemize}
}
}
\blockn{The standard terminology is to call ``locally decidable'' a topos in which every object is a quotient of a decidable object. In general, if one wants to pass from a covering by a family of decidable objects to a covering by a single decidable object, it suffices to take the co-product of these decidable objects, but the co-product will again be decidable only if the indexing set is itself decidable; hence this notion is equivalent to our definition only if it is true in the ground topos that every set is a quotient of a decidable set. This induces a major difference when one applies this notion relatively: with our definition, isomorphisms are quasi-decidable (any set can be covered by singletons, which are decidable), and in fact any localic geometric morphism is quasi-decidable; while with the more classical definition, isomorphisms are ``locally decidable'' exactly when it is true internally in the target that every set is a quotient of a decidable set, and more generally localic morphisms may fail to be ``locally decidable''. This definition of quasi-decidability is hence the correct generalization of ``locally decidable'' to a more general base topos.
\bigskip
This small distinction is not the main reason for changing the name from locally decidable to quasi-decidable. It appears that neither of these notions is actually local: the ``absolutely locally compact'' toposes we are considering in the present paper are in general only locally quasi-decidable, and ``locally locally decidable'' seemed like a terminology to avoid at all costs.
}
\block{\Prop{Let $f : \Ecal \rightarrow \Tcal$ be a geometric morphism; then $f$ is quasi-decidable if and only if every object of $\Ecal$ admits a covering by an object $X \in \Ecal$ which is relatively decidable over an object of the form $f^*(I)$ for $I \in |\Tcal|$. }
\Dem{A naive external translation of the definition of quasi-decidable topos using Kripke-Joyal semantics and its extension with unbounded quantification presented in \cite{shulman2010stack} would give the following:
$f$ is quasi-decidable if and only if for all $X \in |\Tcal|$ and all $v : Y \rightarrow f^* X$ in $\Ecal$, there exist an object $S \in |\Tcal|$ such that $S \rightarrow 1_{\Tcal}$ is an epimorphism, an object $I \rightarrow S$ in $\Tcal_{/S}$ and an object $D \rightarrow f^* I$ in $\Ecal$ such that $D$ is relatively decidable over $f^* I$ and there is an epimorphism from $D$ to $Y \times f^* S$ compatible with the maps to $f^* S$.
Indeed, the pair $(X,Y)$ corresponds to the universal quantification ``for all objects $Y$ of $\Ecal$'' ($X$ being a stage of definition), $S$ to the existential quantification ``there exists a family...'', $I$ is the indexing set of the family, $D \rightarrow f^* I$ is the $I$-indexed family of decidable objects of $\Ecal$, and the epimorphism $D \twoheadrightarrow Y \times f^* S$ is the covering of $Y$, defined over $S$.
It is then easy to eliminate all the redundant objects in this formulation to obtain the one in the proposition. Assuming the condition above, taking $X=1_{\Tcal}$ and $Y$ arbitrary, one obtains $D$ relatively decidable over $f^* I$ and an epimorphism $D \twoheadrightarrow Y \times f^* S \twoheadrightarrow Y$. Conversely, assuming the condition in the proposition, let $X \in |\Tcal|$ and $v:Y \rightarrow f^*X$ in $\Ecal$; then there exist $I \in |\Tcal|$ and $D \in |\Ecal|$ relatively decidable over $f^*I$ with an epimorphism $D \twoheadrightarrow Y$, and taking $S=1_{\Tcal}$ gives all the objects of the condition above.
}
}
\block{\Cor{\begin{itemize}
\item Equivalences are quasi-decidable.
\item Localic morphisms are quasi-decidable.
\item Quasi-decidable morphisms are stable under compositions.
\end{itemize}
}
One could obviously give internal proofs of this as well, but especially for the last one the proofs using the external characterization are simpler.
\Dem{\begin{itemize}
\item If $f$ is an equivalence, then every object is isomorphic to an object of the form $f^* X$.
\item If $f$ is localic then any object is a subquotient of an object of the form $f^*X$, but monomorphisms $A \hookrightarrow B$ are automatically relatively decidable (their diagonal is an isomorphism and one can take the co-diagonal to be empty), hence a subquotient of $f^* X$ is a quotient of an object relatively decidable over $f^*X$.
\item Let $f:\Fcal \rightarrow \Ecal$ and $g: \Ecal \rightarrow \Tcal$ be two quasi-decidable geometric morphisms. Let $U$ be an object of $\Fcal$, then $U$ is a quotient of an object $U'$ relatively decidable over an object of the form $f^* V$, and $V$ is in turn a quotient of an object $V'$ relatively decidable over an object $g^* W$.
Applying $f^*$ to the objects of $\Ecal$ above, one obtains that $f^* V$ is a quotient of $f^* V'$, which is relatively decidable over $(g \circ f)^* W$. Pulling back $U'$ from $f^* V$ to $f^* V'$ one obtains an object $U''$ which still covers $U$ but is now relatively decidable over $f^*V'$, which is relatively decidable over $(g\circ f)^* W$; hence $U''$ is relatively decidable over $(g \circ f)^* W$, which concludes the proof.
\end{itemize}
}}
\block{\Prop{A pullback of a quasi-decidable geometric morphism is again quasi-decidable.}
\Dem{We work internally in the target topos. We need to show that if $\Ecal$ is quasi-decidable then $\Ecal \times \Fcal \rightarrow \Fcal$ is quasi-decidable for any topos $\Fcal$. If $\Ecal$ is quasi-decidable then it has a generating family $(D_i)_{i \in I}$ of decidable objects (take for example all the subobjects of a family of decidable objects covering a bound of $\Ecal$). The pullbacks of the $D_i$ to $\Ecal \times \Fcal$ are again decidable objects and they form a family of generators of $\Ecal \times \Fcal$ as an $\Fcal$-topos; hence internally in $\Fcal$, $\Ecal \times \Fcal$ has a family of decidable generators, which concludes the proof.
}
}
\blockn{We also have a factorization system, which will play no role in the present paper and which we mention only for completeness:}
\block{\Prop{Let $f :\Ecal \rightarrow \Tcal$ be a geometric morphism. The full sub-category $\Ecal_{f-qd}$ of objects of $\Ecal$ which are quotients of objects relatively decidable over an object of the form $f^*X$, for $X$ an object of $\Tcal$, is stable under sub-objects, all colimits and all finite limits. It is in particular a topos, the inclusion $\Ecal_{f-qd} \rightarrow \Ecal$ is the $h^*$ part of a hyperconnected geometric morphism $h$, and $f$ can be factored canonically as $\Ecal \rightarrow \Ecal_{f-qd} \rightarrow \Tcal$ with the second map being quasi-decidable.}
The geometric morphisms obtained as $\Ecal \rightarrow \Ecal_{f-qd}$ are precisely those which are hyperconnected (hence $f^*$ is fully faithful and its essential image is stable under sub-objects) and such that if $X$ in $\Ecal$ is relatively decidable over $f^*Y$ then $X$ is itself of the form $f^* Y'$. These geometric morphisms will be called anti-decidable, and this gives a factorization system.
The proof is just a routine check, and we will not use this result anyway.
}
\block{Let us now recall the definition of proper and separated morphism from \cite{moerdijk2000proper}.
\Def{\begin{itemize}
\item A topos is said to be compact if its localic reflection is compact.
\item A geometric morphism $f :\Ecal \rightarrow \Tcal$ is said to be proper if internally in $\Tcal$ the $\Tcal$-topos corresponding to $\Ecal$ is compact.
\item A geometric morphism $f : \Ecal \rightarrow \Tcal$ is said to be separated if the diagonal morphism $\Delta_f : \Ecal \rightarrow \Ecal \times_{\Tcal} \Ecal$ is proper.
\item A topos $\Tcal$ is said to be separated if its morphism to the terminal topos is separated, or equivalently, if the geometric morphism $\Tcal \rightarrow \Tcal \times \Tcal$ is proper.
\end{itemize}
}
Good properties of these classes of geometric morphisms (stability under pullback and composition, characterization in terms of the hyperconnected/localic factorization, and so on) are proved in \cite{moerdijk2000proper}.
}
\block{\label{stability_prop_C'}Let $C$ be a class of maps in some category with finite limits, one says that $C$ is stable under composition if when $f \in C$ and $g \in C$ and $f \circ g $ exists then $f \circ g \in C$, and one says that $C$ is stable under pullback if when $f :X \rightarrow Y$ is in $C$ and $g: Z \rightarrow Y$ is any map then the natural projection map $\pi_2: X \times_Y Z \rightarrow Z$ is in $C$.
\Prop{Let $C$ be a class of maps which is stable under composition and pullback in a category with finite limits. Let $C'$ be the class of maps $f:X \rightarrow Y $ such that the diagonal map $\Delta_f :X \rightarrow X \times_Y X$ is in $C$; then:
\begin{enumerate}
\item $C'$ is stable under composition.
\item $C'$ is stable under pullback.
\item If $f \circ g$ is in $C$ and $f$ is in $C'$ then $g$ is in $C$.
\end{enumerate}
}
\Dem{These are all very simple ``diagram chasing'' proofs. They can be found for example in \cite{moerdijk2000proper} $II.2.1$ and $II.2.2$, written in the special case where $C$ is the class of proper geometric morphisms of toposes and hence $C'$ the class of separated geometric morphisms.}
}
\block{\Def{We will say that a geometric morphism $f : \Ecal \rightarrow \Tcal$ is $\Delta$-separated if the diagonal map $\Delta_f : \Ecal \rightarrow \Ecal \times_{\Tcal} \Ecal$ is a separated geometric morphism.}}
\block{\label{Prop_Deltasep}\Prop{\begin{itemize}
\item $\Delta$-separated morphisms are stable under composition and pullback.
\item If $f \circ g $ is separated and $f$ is $\Delta$-separated then $g$ is separated.
\item If $ f \circ g$ is $\Delta$-separated then $g$ is $\Delta$-separated.
\end{itemize}}
\Dem{This follows from proposition \ref{stability_prop_C'} and the well known fact that proper geometric morphisms are stable under composition and pullback (see for example \cite{moerdijk2000proper} section $I$). For the last property one also needs to use proposition B3.3.8 of \cite{sketches}, which says that if $f$ is any geometric morphism then $\Delta_f$ is localic, that if $f$ is localic then $\Delta_f$ is an inclusion, and that if $f$ is an inclusion then $\Delta_f$ is an isomorphism; as any isomorphism is proper, this implies in particular that any geometric morphism has a $\Delta$-separated diagonal map.}
}
\block{\label{P_quasiDecImpDeltaSep}\Prop{A quasi-decidable geometric morphism is $\Delta$-separated.}
\Dem{We will work internally in the target of the morphism and show that a quasi-decidable topos is $\Delta$-separated.
Let $\Tcal$ be a quasi-decidable topos; it has a generating family $(D_i)_{i \in I}$ of decidable objects. There exists a locale $\Lcal$ whose map to the terminal locale is an open surjection and such that, internally in $\Lcal$, $I$ is covered by a sub-object of $\mathbb{N}$ (for example, take $\Lcal$ to be the classifying locale of the theory of partial surjections from $\mathbb{N}$ to $I$; see \cite{joyal1984extension} V.3, just after proposition $2$, for the details).
Hence, internally in $\Lcal$, one has a (partially) $\mathbb{N}$-indexed generating family of decidable objects, hence internally in $\Lcal$, $\Tcal$ admits a decidable bound, hence it is localic over the Schanuel topos\footnote{The topos of sets endowed with a continuous action of the localic group of permutations of $\mathbb{N}$, see \cite[C5.4.4]{sketches}.}. Now the Schanuel topos is $\Delta$-separated because the localic group of automorphisms of $\mathbb{N}$ is separated and any localic geometric morphism is $\Delta$-separated (the diagonal of a localic geometric morphism is an inclusion, and inclusions are always separated because their diagonal is an isomorphism). Hence internally in $\Lcal$, the (pullback of the) topos $\Tcal$ is $\Delta$-separated. But $\Delta$-separated maps descend along open surjections because separated maps do (see \cite[C5.1.7]{sketches}), hence $\Tcal$ is $\Delta$-separated.
}
}
\block{\Def{We will say that a morphism is absolutely separated if it is separated and quasi-decidable.}
We have a work in progress whose goal is to show that the following conditions on a geometric morphism $f$ are equivalent:
\begin{itemize}
\item $f$ is absolutely separated.
\item $f$ is separated and $\Delta$-separated.
\item $f$ is strongly separated, i.e. the diagonal of $f$ is tidy (cf. \cite{moerdijk2000proper}).
\end{itemize}
The equivalence of the last two conditions is already in \cite{moerdijk2000proper}, and we proved above that they are implied by the first condition. We hope to be able to publish a proof of the converse implication soon.
}
\blockn{We recall the main result of \cite{henry2015finitness}:}
\block{\label{HCfinitnessTh}\Th{A topos which is hyperconnected and absolutely separated has a generating family of objects which are internally finite and decidable. }
Moreover this result is valid within the internal logic of a topos.
}
\block{\Def{An object $X$ of a topos $\Tcal$ is said to be quasi-finite of cardinal $n$ if internally in $\Tcal$ one has: $(\exists x \in X) \Rightarrow X$ is finite decidable of cardinal $n$.
One says that it is quasi-finite if it is quasi-finite of cardinal $n$ for some $n$.
}
I.e. quasi-finite objects are not necessarily finite, but there is a sub-terminal object, called their support, such that they are finite of cardinal $n$ over their support. The support is just the interpretation of the proposition $\exists x \in X$.
}
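\blockn{A basic example (included only for illustration): any sub-terminal object $U$ of $\Tcal$ is quasi-finite of cardinal $1$. Indeed, internally any two elements of $U$ are equal, hence:
\[ (\exists x \in U) \Rightarrow U \simeq \{ * \} \]
so $U$ is finite decidable of cardinal $1$ over its support, which is $U$ itself viewed as a sub-terminal object.}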
\block{\label{PropQfinExist}\Prop{An absolutely separated topos has a basis of quasi-finite objects.}
\Dem{If $\Tcal$ is an absolutely separated topos, then the hyperconnected geometric morphism to its localic reflection $\Lcal$ is also quasi-decidable by \cite[5.2]{henry2015finitness} and separated by \cite[II.2.5]{moerdijk2000proper}, hence one can apply theorem \ref{HCfinitnessTh} above to it and obtain, internally in $\Lcal$, a basis of finite objects. Local sections of the stack over $\Lcal$ of finite objects of $\Tcal$ are objects of $\Tcal$ that are finite over their domain of definition; one can then restrict to the open subspace of $\Lcal$ where their cardinal is equal to $n$ for any fixed $n$, and one gets quasi-finite objects of $\Tcal$, which clearly form a generating family. }
}
\block{\label{def_absloccpt}\Def{\begin{itemize}
\item A topos $\Tcal$ is said to be absolutely compact if it is compact and absolutely separated.
\item A topos $\Tcal$ is said to be absolutely locally compact if it has a generating family of objects $X$, such that the topos $\Tcal_{/X}$ is absolutely separated with a locally compact localic reflection.
\item An object $X$ of an absolutely locally compact topos $\Tcal$ is said to be separating if $\Tcal_{/X}$ is absolutely separated and has a locally compact localic reflection.
\end{itemize}
}}
\block{Under the conjecture mentioned above that absolutely separated is equivalent to strongly separated, absolutely compact would mean that the topos, its diagonal, and the diagonal of its diagonal are all proper. We also conjecture that a topos is absolutely locally compact if and only if it is exponentiable, its diagonal is exponentiable, and the diagonal of its diagonal is also exponentiable.}
\block{\label{Lem_ll_ext_int}\Lem{Let $\Tcal$ be a separated topos, let $X$ be an object of $\Tcal$ such that $\text{Loc}(\Tcal_{/X})$ is locally compact and Hausdorff. Let also $U$ be a subobject of $X$ such that $U \ll X$ (in $\text{Loc}(\Tcal_{/X})$). Then internally in $\Tcal$ there exists a finite object $F$ such that $U \subset F \subset X$. }
We insist on the fact that we do not claim that such an object exists externally: this is not true in most situations.
\Dem{$U$ can be seen as an open subspace of $\text{Loc}(\Tcal_{/X})$. Let $\overline{U}$ be the closure of $U$ in this locale; as the locale is locally compact and Hausdorff and $U \ll X$, the closure is compact. We denote by $\Tcal_{/\overline{U}}$ the pullback of $\Tcal_{/X} \rightarrow \text{Loc}(\Tcal_{/X})$ along $\overline{U} \rightarrow \text{Loc}(\Tcal_{/X})$. By \cite[C2.4.12]{sketches} localic morphisms and hyperconnected morphisms are stable under pullback, hence the morphism $\Tcal_{/\overline{U}} \rightarrow \Tcal_{/X}$ is localic and the morphism $\Tcal_{/\overline{U}} \rightarrow \overline{U}$ is hyperconnected; in particular, $\overline{U}$ is the localic reflection of $\Tcal_{/\overline{U}}$, hence (as $\overline{U}$ is compact) $\Tcal_{/\overline{U}}$ is proper as a topos, and hence, as $\Tcal$ is separated, the geometric morphism $\Tcal_{/\overline{U}} \rightarrow \Tcal$ is proper. As this morphism is localic, this means that it corresponds to a compact locale in the internal logic of $\Tcal$. Internally in $\Tcal$ one hence has an inclusion $U \subset \overline{U} \subset X$ with $U$ and $X$ two sets and $\overline{U}$ a compact sub-locale of $X$ (seen as a discrete locale). But (still internally in $\Tcal$) $X$ can be covered by the $\{ x \}$ for $x \in X$, hence there exists a finite collection of them which covers $\overline{U}$, and in particular $U$; the union of such a finite collection produces (internally) the desired finite set. }}
\block{\label{P_DeltaSepvsAbslocSep} The separating objects of an absolutely locally compact topos are not necessarily decidable. In fact:
\Prop{Let $\Tcal$ be an absolutely locally compact topos, then the following conditions are equivalent:
\begin{enumerate}
\item $\Tcal$ is quasi-decidable.
\item $\Tcal$ is $\Delta$-separated.
\item Every separating object of $\Tcal$ is decidable.
\item $\Tcal$ admits a generating family of decidable separating objects.
\end{enumerate}
}
\Dem{The implication $1. \Rightarrow 2.$ has been proved for a general topos in \ref{P_quasiDecImpDeltaSep}.
For the implication $2. \Rightarrow 3.$: if $X$ is a separating object of $\Tcal$ then $\Tcal_{/X}$ is separated, hence if $\Tcal$ is $\Delta$-separated the morphism from $\Tcal_{/X}$ to $\Tcal$ is separated by the third point of \ref{stability_prop_C'}; and by \cite[II.1.3]{moerdijk2000proper} the map $\Tcal_{/X} \rightarrow \Tcal$ is separated precisely when $X$ is decidable.
The implications $3. \Rightarrow 4.$ and $4. \Rightarrow 1.$ are trivial. }
}
\block{We conclude this section with an important example of the above proposition:
\Prop{Let $\Tcal$ be an étendue, i.e. the topos of equivariant sheaves on an étale localic groupoid $G_1 \rightrightarrows G_0$.
Assume additionally that $G_0$ is separated (Hausdorff), hence that $G_1$ is locally separated. Then the following conditions are equivalent:
\begin{enumerate}
\item $\Tcal$ satisfies the equivalent conditions of proposition \ref{P_DeltaSepvsAbslocSep} above.
\item $G_1$ is separated (Hausdorff).
\end{enumerate}
}
In the next section, we will see that the construction of the convolution algebra of a topos is more subtle in the case where the separating objects are not decidable. By this proposition, this corresponds (in the case of an étendue) to the situation of a non-Hausdorff étale groupoid. It will be rather evident to a reader familiar with the construction of the convolution algebra of an étale groupoid that these subtleties are exactly parallel to the fact that, when constructing the $C^*$-algebra of a non-Hausdorff étale groupoid, one needs to replace continuous functions on $G_1$ by linear combinations of functions which are compactly supported on Hausdorff open subsets of $G_1$ and extended by $0$ on the rest of $G_1$.
\Dem{We will check that condition $2.$ is equivalent to the fact that $\Tcal$ is $\Delta$-separated which is one of the conditions of proposition \ref{P_DeltaSepvsAbslocSep}.
The fact that $\Tcal$ is the topos of the groupoid $G_1 \rightrightarrows G_0$ can be translated in the fact that one has a pullback square:
\[\begin{tikzcd}[ampersand replacement = \&]
G_1 \arrow{r} \arrow{d} \& G_0 \arrow{d} \\
G_0 \arrow{r} \& \Tcal \\
\end{tikzcd} \]
This square gives rise to another pullback square:
\[\begin{tikzcd}[ampersand replacement = \&]
G_1 \arrow{r} \arrow{d} \& \Tcal \arrow{d} \\
G_0 \times G_0 \arrow{r} \& \Tcal \times \Tcal \\
\end{tikzcd} \]
So as $G_0 \times G_0 \rightarrow \Tcal \times \Tcal$ is an étale covering, $\Tcal$ is $\Delta$-separated if and only if $G_1 \rightarrow G_0 \times G_0$ is separated; but as $G_0 \times G_0$ is separated and $\Delta$-separated (every locale is $\Delta$-separated), this is equivalent to $G_1$ being separated by \ref{stability_prop_C'}.
}
}
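\blockn{To have a concrete picture of the non-Hausdorff situation, the reader may keep in mind the following standard example (which we only sketch): take $G_0 = \mathbb{R}$ and let $G_1$ be the groupoid of germs of the pseudogroup generated by a homeomorphism $\phi$ of $\mathbb{R}$ whose set of fixed points is exactly $(-\infty,0]$. The germ of $\phi$ at $0$ and the germ of the identity at $0$ are two distinct points of $G_1$ which admit no disjoint neighborhoods (both are limits of the germs of the identity at points $x < 0$), so $G_1$ is étale over $G_0$ and locally Hausdorff but not Hausdorff. In this situation a function on $G_1$ which is continuous and compactly supported on a Hausdorff open subset and extended by $0$ elsewhere is in general no longer continuous on $G_1$.}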
\section{Admissible sheaf of rings and the algebra of compactly supported functions }
\label{Sec_Main}
\block{\label{DefAdmissibleSR}\Def{Let $\Tcal$ be an absolutely locally compact topos and let $A$ be a (possibly non-commutative) unital ring object over $\Tcal$. One says that $A$ is admissible if there is a generating family $(X_i)_{i \in I}$ of separating objects of $\Tcal$ such that for each $i \in I$:
\begin{itemize}
\item When seen as a sheaf on the locale $Loc(\Tcal_{/X_i})$, by restriction to the sub-objects of $X_i$, $A$ is c-soft.
\item $\Tcal_{/X_i}$ admits a generating family of quasi-finite objects of cardinals invertible in $A$.
\end{itemize}
}
Separating objects $X_i$ such that these two conditions hold will be called $A$-separating objects of $\Tcal$. Of course, if every integer is invertible in $A$, the second condition always holds.
The key example we have in mind is when $A$ is the sheaf of Dedekind real numbers or complex numbers, or possibly some algebra over these, for example an internal $C^*$-algebra or internal Banach algebra:
}
\block{\Prop{Assuming the axiom of dependent choice, for every absolutely locally compact topos, if the ring $A$ contains the ring of Dedekind real numbers then $A$ is admissible and every separating object is $A$-separating. }
If we do not assume the axiom of dependent choice, one needs to assume additionally that there is a generating family of separating objects $X_i$ such that the localic reflection of $\Tcal_{/X_i}$ is completely regular in addition to being locally compact and Hausdorff; this becomes automatic, by Urysohn's lemma, if one assumes the axiom of dependent choice.
\Dem{In this case, every integer is invertible in $A$, hence the second condition of definition \ref{DefAdmissibleSR} is automatic because of \ref{PropQfinExist}. The first condition follows easily from complete regularity (which comes from Urysohn's lemma, see \cite[XIV.5 and 6]{picado2012frames}), which allows one to construct, for any two subobjects $U \ll V$ of $X$, a positive real valued function $f$ which is $1$ on $U$ and zero outside of $V$. Then any section defined on $V$ can be extended to $X$ by multiplying by $f$ and extending by $0$ outside of $V$. }
}
\block{\label{Lem_partition2} If $X$ is a separating object of a topos $\Tcal$, or more generally any object such that $\Tcal_{/X}$ is locally compact, an arrow $s:X \rightarrow M$, for $M$ some abelian group object in $\Tcal$, can be said to be compactly supported on $X$ if it is compactly supported when seen as a section in $Loc(\Tcal_{/X})$ of the restriction of $M$ to $Loc(\Tcal_{/X})$, or, to put it more explicitly, if $X=U \cup W$ with $U \ll X$ and $s|_W=0$.
\Lem{Let $\Tcal$ be an absolutely locally compact topos, $A$ an admissible ring object over $\Tcal$, $M$ an $A$-module in $\Tcal$ and $X$ an $A$-separating object.
Let $m$ be a compactly supported section of $M$ over $X$ and let $(U_i \hookrightarrow X)_{i=1 \dots n}$ be a covering of the support of $m$ by sub-objects of $X$. Then there exist $(\lambda_i)_{i=1 \dots n}$ compactly supported sections of $A$ on $X$ such that:
\begin{itemize}
\item for all $i$, $\lambda_i$ has support in $U_i$,
\item $\displaystyle \sum_{i=1}^n \lambda_i m =m. $
\end{itemize}
}
\Dem{Let $V$ be a sub-object of $X$ such that $V \ll \bigcup U_i$ and $V$ contains the support of $m$. Pick any $W$ such that $V \ll W \ll X$, as $A$ is c-soft over $X$, one has by lemma \ref{Lem_cptSup_extension} a section $\lambda \in A(X)$ such that $\lambda|_V=1$ and $\lambda$ has support in $W$ (in particular $\lambda$ is compactly supported),
and by lemma \ref{Lem_partition} one can find $\lambda_1,\dots,\lambda_n$ such that $\lambda_i$ has support in $W \wedge U_i$ and $\sum \lambda_i = \lambda$.
As $\lambda$ is equal to $1$ on $V$ and $m$ has support in $V$ one has $\lambda m =m$ and hence $\sum \lambda_i m = m$ which concludes the proof.
}
}
\block{\label{Prop_gammac_colimit}Let $\Tcal$ be an absolutely locally compact topos, and $A$ be an admissible ring object over $\Tcal$.
Let $V$ be a sheaf of $A$-module over $\Tcal$.
For any $A$-separating object $X$ of $\Tcal$ we define $\Gamma_c(X; V)$ to be the set of sections of $V$ over $X$ which are compactly supported.
If $f : V \rightarrow W$ is a map between two sheaves of $A$-modules, then post-composing with $f$ induces a map $f :\Gamma_c(X,V) \rightarrow \Gamma_c(X,W)$, which makes $\Gamma_c(X, \_ )$ into a functor.
\Prop{Let $A$ be an admissible sheaf of rings over an absolutely locally compact topos. Then for any $A$-separating object $X$, the functor $\Gamma_c(X, \_ )$ from the category of $A$-modules in $\Tcal$ to the category of abelian groups commutes with all colimits and all finite limits.}
\Dem{
\begin{itemize}
\item As $\Gamma_c(X, \_)$ is an additive functor between additive categories, it commutes with the zero object and with biproducts, hence with both finite coproducts and finite products.
\item $\Gamma_c(X, \_)$ commutes with kernels (and hence with all finite limits): let $f:V \rightarrow W$ be a morphism of sheaves of $A$-modules and let $P \hookrightarrow V$ be the kernel of $f$. A map with compact support from $X$ to $P$ is exactly a map $a$ from $X$ to $V$ which has compact support and such that $f \circ a = 0$, hence $\Gamma_c(X,P)$ is exactly the kernel of $f: \Gamma_c(X,V) \rightarrow \Gamma_c(X,W)$.
\item $\Gamma_c(X, \_)$ commutes with all finite colimits: as we already know that it is a left exact functor between abelian categories, it is enough, for example by \cite[1.11.2 and 1.11.4]{borceux2}, to show that it sends epimorphisms to epimorphisms.
Let $f: V \twoheadrightarrow W$ be an epimorphism between two sheaves of $A$-modules, and let $s: X \rightarrow W$ be a map with compact support; our goal is to lift it to a compactly supported function from $X$ to $V$.
As $f$ is an epimorphism, there exists a covering $v_i:X_i \rightarrow X$ together with, for each $i$, a map $s_i : X_i \rightarrow V$ such that $f \circ s_i = s \circ v_i$. Because $X$ is $A$-separating, one can assume that for each $i$, $X_i$ is quasi-finite of cardinal $k_i$, that its support is relatively compact in $X$ and that $k_i$ is invertible in $A$, and moreover that there is a finite family $X_1, \dots, X_n$ of such objects that covers the support of $s$.
One can then (by lemma \ref{Lem_partition2}) find functions $\lambda_1, \dots , \lambda_n$ from $X$ to $A$ such that for each $i$, $\lambda_i$ is supported in the support of $X_i$ and:
\[ \sum_{i=1}^n \lambda_i s = s \]
Finally, we define internally in $\Tcal_{/X}$:
\[ s' := \sum_{i = 1}^ n \left\lbrace
\begin{array}{ccc}
\frac{\lambda_i}{k_i} \displaystyle \sum_{v \in X_i } s_i(v) & \mbox{if} & \exists x \in X_i\\
0 & \mbox{if} & \lambda_i =0 \\
\end{array}\right.\]
It is well defined: for each $i$, $\lambda_i$ is supported in the support of $X_i$, hence internally in $\Tcal_{/X}$ one has $\exists x \in X_i$ or $\lambda_i=0$. In the case $\exists x \in X_i$, the object $X_i$ of $\Tcal_{/X}$ is finite and decidable, hence the summation over it is well defined, and if the two conditions hold simultaneously they both give the result $0$.
$s'$ is a map from $X$ to $V$; its support is included in the union of the supports of the $X_i$, hence it is a compactly supported map from $X$ to $V$ and:
\[ f(s') := \sum_{i = 1}^n \left\lbrace
\begin{array}{ccc}
\sum_{v \in X_i } \frac{\lambda_i s}{k_i} & \mbox{if} & \exists x \in X_i\\
0 & \mbox{if} & \lambda_i =0 \\
\end{array}\right.
= \sum_{i=1}^n \lambda_i s =s
\]
hence $f : \Gamma_c(X,V) \rightarrow \Gamma_c(X,W)$ is also an epimorphism.
\item $\Gamma_c(X, \_)$ commutes with arbitrary coproducts: in the case of decidable coproducts there is a very simple proof: by compactness, any compactly supported map to the coproduct factors through a finite (decidable) sub-coproduct, and we already have commutation with finite coproducts. Unfortunately, this argument is not sufficient for a non-decidable coproduct, and unless the base topos is assumed to be locally decidable, decidable coproducts will not suffice. For the general case we will need lemma \ref{Lem_gammac_indexedcoprod} below.
Let $(V_i)_{i \in I}$ be a family of $A$-modules. The comparison map:
\[ \bigoplus \Gamma_c(X,V_i) \rightarrow \Gamma_c \left( X, \bigoplus_i V_i \right) \]
is always a monomorphism because we already know that $\Gamma_c(X,\_)$ preserves monomorphisms, hence each component is a monomorphism; so it is enough to show that it is an epimorphism, i.e. that each element of $\Gamma_c \left( X, \bigoplus V_i \right)$ can be written as a finite combination of elements of the $\Gamma_c(X,V_i)$.
The family $(V_i)_{i \in I}$ can be seen as an $A$-module $V$ over $Y=p^* I$, whose internal direct sum $W$ is just the ordinary direct sum. Hence any compactly supported map $s$ from $X$ to $W$ can be described, as in lemma \ref{Lem_gammac_indexedcoprod} below, by an object $D$ with a map $D \rightarrow p^* I$, a map $D \rightarrow X$ and a compactly supported section $s'$ from $D$ to $V$ in $\Tcal_{/p^*I}$.
Such a map $D \rightarrow p^* I$ can be interpreted as a decomposition of $D$ into $D =\coprod D_i$. The function $s' :D \rightarrow \coprod V_i$ is hence a collection of compactly supported functions $s_i : D_i \rightarrow V_i$, all but finitely many of which are zero. Using the formula in the lemma, those functions $s_i$ can then be turned back into compactly supported functions from $X$ to $V_i$ whose sum is the initial function $s$ from $X$ to $W$, and this concludes the proof.
\end{itemize}
}
}
\block{\label{Lem_gammac_indexedcoprod}\Lem{Let $X$ be an $A$-separating object of $\Tcal$ and let $Y$ be any object, let $V$ be a sheaf of $A$-modules in $\Tcal_{/Y}$, and let $W$ be the sheaf of $A$-modules defined internally in $\Tcal$ by:
\[ W= \bigoplus_{y \in Y} V_y \]
Let also $s:X \rightarrow W$ be a compactly supported function.
Then there exists an $A$-separating object $D$ with two maps $p_1 :D \rightarrow X$ and $p_2: D \rightarrow Y$, and a compactly supported function $s'$ from $(D,p_2)$ to $V$ in $\Tcal_{/Y}$ such that (internally) for all $x \in X$:
\[ s(x) = \sum_{p_1(d)=x} s'(d) \]
}
\bigskip
This lemma is key for several results in the paper. It roughly says that a compactly supported function to an internal coproduct can be written, in some sense, as a ``compactly indexed sum'' of sections of the components.
\Dem{We will first explain why the formula for $s(x)$ is meaningful. As $D$ and $X$ are separating, any map from $D$ to $X$ is fiberwise decidable, and there is a subobject $D' \subset D$ such that $D'$ is finite over $X$ and contains the support of $s'$, hence the sum can be seen (internally) as a finite sum. Moreover $s'(d)$ is an element of $V_{p_2(d)}$, but as $W= \bigoplus_{y\in Y} V_y$, it can be seen as an element of $W$ too.
We will now prove the lemma. Let $X,Y,V,W,s$ be as in the lemma. By definition of $W$, internally in $\Tcal$, for all $x \in X$ there exist an integer $n$, elements $y_1, \dots ,y_n \in Y$ and $v_1 \in V_{y_1}, \dots, v_n \in V_{y_n}$ such that $s(x) = \sum v_{y_i}$.
This internal statement can be attested by a covering $v_i: X_i \rightarrow X$ of $X$ with, for each $i$, an integer $n_i$, $n_i$ maps $y^i_1, \dots, y^i_{n_i} : X_i \rightarrow Y$ and $n_i$ maps $v^i_j: X_i \rightarrow V$ over $Y$ such that:
\[ \sum_{j=1}^{n_i} \iota(v^i_j) = s \circ v_i \]
where $\iota$ denote the canonical arrow from $V$ to $W$.
Moreover, because $X$ is $A$-separating, one can freely assume that each $X_i$ is quasi-finite of cardinal $m_i$ over $X$, and that the image of $X_i$ in $X$ is relatively compact in $X$. Moreover one can extract a finite family $X_1,\dots,X_k$ which covers the support of $s$, as well as (by lemma \ref{Lem_partition2}) a family of functions $\chi_1,\dots, \chi_k : X \rightarrow A$ such that $\chi_i$ has its support contained in the support of $X_i$ and $\sum \chi_i s =s $; in fact we can (and will) further assume that $\chi_i$ is compactly supported within the support of $X_i$.
Let then:
\[ D =\coprod_{i=1}^k \coprod_{j=1}^{n_i} X_i \]
$D$ is a decidable coproduct of separating objects, hence it is a separating object. Let $p_1 : D \rightarrow X$ be the natural map that sends each $X_i$ to $X$ by $v_i$, let $p_2 :D \rightarrow Y$ be the map that sends the $(i,j)$ component $X_i$ to $Y$ by $y^i_j$, and let $s': D \rightarrow V $ be the map that sends the $(i,j)$ component $X_i$ of $D$ to $V$ by $ \frac{v_i^j \chi_i}{m_i}$.
As the $\chi_i$ are compactly supported within the image of $X_i$ in $X$ and $X_i$ is finite over its image, this function from $D$ to $V$ is indeed compactly supported and:
\begin{multline*} \sum_{p_1(d)= x} s'(d) = \sum_{i=1}^k \sum_{j=1}^{n_i} \sum_{v_i(d)=x} \frac{\chi_i(x) v_i^j(d)}{m_i} = \sum_{i=1}^k\sum_{v_i(d)=x} \frac{\chi_i(x)}{m_i} \sum_{j=1}^{n_i} v_i^j(d) \\ = \sum_{i=1}^k\sum_{v_i(d)=x} \frac{\chi_i(x)}{m_i} s \circ v_i(d) = \sum_{i=1}^k \chi_i(x) s(x) = s(x) \end{multline*}
}
}
\block{\label{Gamma_functo_left}We will now describe the functoriality in $X$ of $\Gamma_c(X;V)$. Let $f : X \rightarrow Y$ be an arrow between two $A$-separating objects of an absolutely locally compact topos $\Tcal$. As $\Tcal_{/X}$ is separated and $\Tcal_{/Y}$ is $\Delta$-separated, the map $\Tcal_{/X} \rightarrow \Tcal_{/Y}$ is separated by \ref{stability_prop_C'}, and hence $(X,f)$ is a decidable object of $\Tcal_{/Y}$ (see for example \cite[II.1.3]{moerdijk2000proper}). Let $v:X \rightarrow V$ be a compactly supported section on $X$ of some abelian group object $V$. There exist a sub-object $U \ll X$ and a $W \subset X$ such that $U \cup W = X$ and $v|_W=0$; by lemma \ref{Lem_ll_ext_int}, internally in $\Tcal_{/Y}$ there exists a finite object $F$ such that $U \subset F \subset X$, and in particular internally in $\Tcal_{/Y}$ one can define:
\[ \sum_{x \in X} v(x) \]
as the sum over $x \in F$: indeed $F$ is finite and decidable and contains the support of $v$, and the result does not depend on the (internal) choice of $F$. In fact one has:
\Prop{Let $\Tcal$ be an absolutely locally compact topos, let $f :X \rightarrow Y$ be an arrow between two separating objects of $\Tcal$, and let $v:X \rightarrow V$ be a compactly supported arrow to an abelian group. Then, internally in $\Tcal$, the formula:
\[ w(y) := \sum_{f(x) = y} v(x) \]
defines a compactly supported function from $Y$ to $V$.
}
$w$ will be denoted $\Sigma_f v$ and this turns $\Gamma_c( \_ , \_)$ into a bi-functor.
\Dem{In the discussion above, we proved that\footnote{proving something internally in $\Tcal_{/Y}$ is exactly the same as proving that internally in $\Tcal$ the same thing holds for all $y \in Y$.} internally in $\Tcal$, for each $y \in Y$ there exists a finite set $F \subset f^{-1}(\{y\})$ such that for all $x \in f^{-1}(\{y\})$ either $v(x)=0$ or $x \in F$, which proves that the above sum is well defined and defines a function from $Y$ to $V$. We just have to prove that it is compactly supported. The set of the $f^*(V')$ for $V' \ll Y$ forms a directed covering of $X$, hence, as $v$ is compactly supported, there exists $V' \ll Y$ such that $U$ (the ``support'' of $v$) is included in $f^*(V')$. Then for all $y \in Y$, let $F$ be a finite set as above: either $v=0$ at every element of $F$, in which case $w(y)=0$, or there exists an element of $F$ which is in $U$, in which case $y \in V'$. Hence for all $y \in Y$, $w(y)=0$ or $y \in V'$ with $V' \ll Y$ externally, so $w$ is compactly supported, which concludes the proof. }
}
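\blockn{To relate this to the classical construction: if $S$ is a locally compact Hausdorff space, $\Tcal = \text{Sh}(S)$ and $A$ is the sheaf of Dedekind real numbers (i.e. of continuous real valued functions), then a separating object $X$ corresponds to an étale map $E \rightarrow S$ with $E$ locally compact Hausdorff, and $\Gamma_c(X;A)$ is the usual space $C_c(E)$ of compactly supported continuous functions on $E$. A map $f : X \rightarrow Y$ of separating objects is a local homeomorphism $E \rightarrow E'$ over $S$, and $\Sigma_f$ becomes the familiar ``summation over the fibers''
\[ (\Sigma_f v)(e') = \sum_{f(e) = e'} v(e), \]
a finite sum for each $e'$ because $v$ is compactly supported and $f$ is étale; this is exactly the operation used in the construction of convolution algebras of étale groupoids.}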
\block{\label{Lem_WeakseparatingTransport}
Let $\Tcal$ be an absolutely locally compact topos in which the terminal object $1$ is separating. Let also $X$ be a separating object of $\Tcal$; in particular, $\Tcal_{/X}$ is separated and, as $1$ is separating, the topos $\Tcal = \Tcal_{/1}$ is $\Delta$-separated, hence $\Tcal_{/X} \rightarrow \Tcal$ is separated (by \ref{Prop_Deltasep}), which means that $X$ is decidable (by \cite[II.1.3]{moerdijk2000proper}). For any abelian group object $V$ of $\Tcal$ and any object $X$ of $ \Tcal$ one can define the abelian group object $\bigoplus_{x \in X} V$, and in the special case where $X$ is decidable it can be identified with the group of finitely supported functions from $X$ to $V$.
In particular both $\Gamma_c(X,V)$ and $\Gamma_c(1, \bigoplus_{x \in X} V)$ correspond to subgroups of the group of all functions from $X$ to $V$.
\Lem{In the situation above, $\Gamma_c(X,V)$ and $\Gamma_c(1, \bigoplus_{x \in X} V)$ are equal as subgroups of $Hom(X,V)$; moreover this identification of $\Gamma_c(X,V)$ and $\Gamma_c(1, \bigoplus_{x \in X} V)$ is functorial in both $X$ and $V$.}
\Dem{Let $v: X \rightarrow V$ be a compactly supported function and let $U \ll X$ contain the support of this function. By lemma \ref{Lem_ll_ext_int}, internally in $\Tcal$ there exists a finite object $F$ such that $U \subset F \subset X$, i.e. $v$ is a finitely supported function from $X$ to $V$ and hence corresponds to a map $1 \rightarrow \bigoplus_{x \in X} V$; it is compactly supported by the exact same argument as at the end of the proof of \ref{Gamma_functo_left}.
Conversely, if $v:1 \rightarrow \bigoplus_{x \in X} V$ is a compactly supported function, then one can apply lemma \ref{Lem_gammac_indexedcoprod}, and one obtains an object $D$ with a compactly supported map $\lambda:D \rightarrow V$, a map $p_2:D \rightarrow X$ and the map $p_1:D \rightarrow 1$ such that:
\[ v = \sum_{d \in D} i_{p_2(d)}(\lambda(d)) \]
where for $x \in X$, $i_{x}$ denotes the corresponding map $V \rightarrow \bigoplus_{x \in X} V$. In particular seeing $v$ as a function from $X$ to $V$ one has exactly:
\[ v(x) = \sum_{p_2(d)=x} \lambda(d) \]
hence $v= \Sigma_{p_2} \lambda$ is indeed an element of $\Gamma_c(X;V)$ by \ref{Gamma_functo_left}.
The fact that this identification is functorial is immediate.
}}
\block{\label{Th_def_cptsection}\Th{Let $\Tcal$ be an absolutely locally compact topos and $A$ an admissible ring object of $\Tcal$. Then one has a unique bifunctor $\Gamma_c(X;V)$, for $X \in |\Tcal|$ and $V$ a sheaf of $A$-modules on $\Tcal$, such that:
\begin{itemize}
\item For all $V$, $\Gamma_c(\_ ,V)$ is a cosheaf of abelian groups on $\Tcal$.
\item When $X$ is $A$-separating this coincides with the definition in \ref{Prop_gammac_colimit}.
\end{itemize}
Moreover, for all $X$, $\Gamma_c(X,\_)$ commutes with all colimits.
}
This theorem will be our definition of a ``compactly supported section of $V$ on $X$'' when $X$ is not separating: they are the elements of $\Gamma_c(X;V)$.
\Dem{For the first part of the proposition, it is enough to prove that for any sheaf of $A$-modules $V$, $\Gamma_c(X;V)$ as defined in \ref{Prop_gammac_colimit} for $X$ an $A$-separating object defines a cosheaf of abelian groups for the canonical topology of $\Tcal$. Indeed the category of $A$-separating objects endowed with the canonical topology of $\Tcal$ is a site of definition for $\Tcal$, so this constructs a cosheaf of abelian groups $\Gamma_c( \_ ;V )$ on $\Tcal$ for all $V$.
Let $X$ be a separating object, and let $U_i$ be a covering of $X$ by separating objects; we need to prove the cosheaf condition, which can be formulated as follows: $X$ can be written as a certain colimit of the $U_i$ and their fiber products, and one needs to show that $\Gamma_c(\_,V)$ commutes with this colimit. As the colimit is computed in $\Tcal_{/X}$, one can freely assume that $X=1$ by working in $\Tcal_{/X}$. But then $\Gamma_c(Y,V) = \Gamma_c(1, \bigoplus_{y \in Y} V)$ obviously commutes with all colimits: $Y \mapsto \bigoplus_{y \in Y} V$ commutes with all colimits because it is left adjoint to the functor which sends a sheaf of $A$-modules $W$ to the sheaf (of sets) of morphisms from $V$ to $W$, and $\Gamma_c(1,\_)$ commutes with all colimits because of proposition \ref{Prop_gammac_colimit}.
For the last claim of the proposition, colimits in the category of cosheaves over a site are computed objectwise, so the fact that $\Gamma_c(X; \_)$ commutes with all colimits when $X$ is separating shows that the functor $V \mapsto \Gamma_c( \_;V)$ commutes with colimits as a functor from sheaves of $A$-modules to cosheaves of abelian groups, and hence that for all $X$, $\Gamma_c(X,\_)$ commutes with all colimits.
}
}
\block{While the definition of $\Gamma_c(X;V)$ for a general $X$ may sound very abstract, it is not hard to give explicit formulas to compute it using the cosheaf property: let $D$ be any separating object covering $X$ and let $D'$ be a separating object covering $D \times_X D$. Then $X$ is the coequalizer in $\Tcal$ of the two maps from $D'$ to $D$, hence $\Gamma_c(X;V)$ is the coequalizer of $\Gamma_c(D';V) \rightrightarrows \Gamma_c(D;V)$. If the topos is quasi-decidable, $D \times_X D$ will itself be separating and can be used instead of $D'$.
The above description works only if we are able to find a single separating object covering an object $X$, which is the case as soon as the base topos is boolean or if in the base topos every object can be covered by a decidable object. If it is not the case, one has the following more general description: pick a covering family $D_i$ of $X$ by separating objects, and for all $i,j$ pick\footnote{This can always be done without invoking the axiom of choice by using the collection axiom introduced in \cite{shulman2010stack}.} a covering family $D'_{i,j,k}$ of $D_i \times_X D_j$; then $\Gamma_c(X;V)$ can be computed as the coequalizer:
\[ \bigoplus_{i,j,k} \Gamma_c(D'_{i,j,k};V) \rightrightarrows \bigoplus_i \Gamma_c(D_i;V) \twoheadrightarrow \Gamma_c(X;V) \]
}
\block{\label{Prop_Gammac_exchange}\Prop{If $\Tcal$ is an absolutely locally compact topos with $A$ an admissible sheaf of rings, then one has an isomorphism, functorial in $V$, $X$ and $Y$:
\[ \Gamma_c( Y \times X;V) \simeq \Gamma_c \left( Y,\bigoplus_{x \in X} V \right) \]
}
\Dem{When $Y=1$ and both $1$ and $X$ are $A$-separating, this is lemma \ref{Lem_WeakseparatingTransport}. Assuming $1$ is $A$-separating, the two sides define cosheaves of abelian groups in $X$ and are functorially isomorphic when $X$ is separating, hence one has such an isomorphism for all $X$.
In particular, for a general absolutely locally compact topos $\Tcal$, for any $A$-separating object $Y$ and any object $X$ one has an isomorphism:
\[ \Gamma_c(Y \times X, V) \simeq \Gamma_c(Y, \bigoplus_{x \in X} V) \]
by applying the above result in $\Tcal_{/Y}$ to the object $X\times Y$. But here again, the two sides are cosheaves in $Y$, hence the isomorphism extends to all $Y$.
}
}
\block{\label{Prop_MatrixRep}Let $X$ be an object of $\Tcal$, we denote by $X_A$ the free $A$-module generated by $X$, i.e., internally in $\Tcal$:
\[ \bigoplus_{x \in X} A =X_A \]
A morphism from $X_A$ to any other $A$-module $M$ is the same as a map from $X$ to $M$. If $X$ is separating, it hence makes sense to ask whether such a map $X_A \rightarrow M$ is compactly supported (depending on whether the corresponding map $X \rightarrow M$ is compactly supported or not). The result above shows that:
\Th{Let $X,Y$ be two $A$-separating objects of an absolutely locally compact topos with an admissible ring object $A$. Then a compactly supported map $X_A \rightarrow Y_A$ is the same as a compactly supported section in $\Gamma_c(X \times Y; A)$.
\bigskip
The correspondence between the two is as follows:
\bigskip
If $\gamma \in \Gamma_c(X \times Y;A)$ and if one has $p=(p_1,p_2):D \rightarrow X \times Y$ with $D$ a separating object such that $\gamma = \Sigma_p \gamma_0$, then the map $F: X \rightarrow Y_A$ corresponding to $\gamma$ is described internally as:
\[ F(x) = \sum_{p_1(d)=x} i_{p_2(d)}(\gamma_0(d)) \]
where the $i_y$ for $y \in Y$ are the structural maps from $A$ to $Y_A$.
}
This theorem is one of the key results of this paper. It should be understood as a description of compactly supported maps from $X_A$ to $Y_A$ by matrix elements $X \times Y \rightarrow A$, but with the subtlety that the matrix elements form not a function from $X \times Y$ to $A$, as one might expect, but a compactly supported section, and that in the case where $X \times Y$ is not decidable, those compactly supported sections are not ``sections which are compactly supported''.
\Dem{The isomorphism of \ref{Prop_Gammac_exchange} gives us directly that:
\[ \Gamma_c( X;Y_A) \simeq \Gamma_c(X \times Y; A) \]
We just need to show that it is indeed as described in the theorem, which amounts to understanding the composite:
\[ \Gamma_c(D;A) \rightarrow \Gamma_c(X \times Y;A) \overset{\simeq}{\rightarrow} \Gamma_c(X;Y_A) \]
for $p:D \rightarrow X \times Y $ a separating object over $X \times Y$ as in the theorem. The isomorphism corresponds to the one of \ref{Prop_Gammac_exchange} when $X$ is $A$-separating, hence it is essentially the isomorphism of \ref{Lem_WeakseparatingTransport} applied to the topos $\Tcal_{/X}$ and to the object $X \times Y$, by cosheaf extension from the separating objects of $\Tcal_{/X}$; but $D$ is one of the separating objects of $\Tcal_{/X}$, and hence one has a diagram (all the $\Gamma_c$ of the left square being computed in the topos $\Tcal_{/X}$):
\[\begin{tikzcd}[ampersand replacement = \&]
\Gamma_c(D;p_X^* A) \arrow{r} \arrow{d}{\simeq} \& \Gamma_c(X \times Y;p_X^* A) \arrow{d}{\simeq} \arrow{r}{\simeq} \& \Gamma_c(X \times Y;A) \arrow{d}{\simeq}\\
\Gamma_c(X;p_X^*(A)D) \arrow{r} \& \Gamma_c(X, p_X^*(A)(X \times Y) ) \arrow{r}{\simeq} \& \Gamma_c(X;Y_A)
\end{tikzcd} \]
But the explicit description of the leftmost vertical arrow given in \ref{Lem_WeakseparatingTransport} allows one to give an explicit description of the total diagonal map, which is exactly the one presented in the theorem.
}
}
\block{
\Def{ We fix a set $(X_i)_{i \in I}$ of $A$-separating objects of $\Tcal$ such that the $X_i$ and their sub-objects form a generating family of $\Tcal$, and we define $\Ccal_c(\Tcal;A)$ to be the additive pseudo-category whose objects are the separating objects $X_i$ of $\Tcal$ and whose morphisms are the compactly supported maps between the $A X_i$.
}
If the ground topos is locally decidable, one can find such a family formed of a single object $X$, in which case $\Ccal_c(\Tcal;A)$ will simply be a (non-unital) algebra. Because of the result above, this algebra should be thought of as an algebra of (finite) matrices indexed by $X$, in the sense that its elements correspond to compactly supported functions on $X \times X$.
But if we want to treat the case of a general basis we need a category.
}
\block{\Prop{Assume that $*$ is a linear involution on $A$ such that $(xy)^*=y^*x^*$ for all $x,y \in A$. Then $\Ccal_c(\Tcal;A)$ is a $*$-category for the $*$-operation which takes an arrow $f: X_A \rightarrow Y_A$, represented by a compactly supported function $f \in \Gamma_c(X \times Y,A)$, exchanges the variables of $f$ and applies $*$.
}
\Dem{This operation is clearly linear, so we just have to check that $(fg)^*=g^* f^*$, but this follows easily from the description of a morphism $X_A \rightarrow Y_A$ in terms of a compactly supported section on $X \times Y$ given in proposition \ref{Prop_MatrixRep}, using an easy computation very similar to the proof that the $*$-transpose is a $*$-operation on matrix algebras with coefficients in a $*$-algebra.}
}
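As a sanity check, here is the computation in the simplified case where the matrix elements are honest functions $F \in \Gamma_c(X \times Y;A)$ and $G \in \Gamma_c(Y \times Z;A)$ and all the sums involved are finite (these simplifying assumptions, and the kernel notation $F$, $G$, are ours and not part of the general formalism):

```latex
% With f(x) = \sum_y F(x,y)\, y and g(y) = \sum_z G(y,z)\, z, the kernel of
% the composite g \circ f is (x,z) \mapsto \sum_y F(x,y) G(y,z), and:
\begin{align*}
(g \circ f)^*(z,x) &= \Big( \sum_{y} F(x,y)\, G(y,z) \Big)^{\!*}
                    = \sum_{y} G(y,z)^*\, F(x,y)^* \\
                   &= \sum_{y} g^*(z,y)\, f^*(y,x)
                    = (f^* \circ g^*)(z,x),
\end{align*}
```

exactly as for the $*$-transpose on matrix algebras over a $*$-algebra.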
{\Prop{If $V$ is any sheaf of $A$-modules on $\Tcal$ then $X \mapsto \Kcal(A X, V)$ defines a non-degenerate right $\Ccal_c(\Tcal,A)$-module. }
\Dem{Let $X$ be one of the chosen $A$-separating objects. Let $f :X \rightarrow V$ be a compactly supported function, then by lemma \ref{Lem_cptSup_extension} one can find a compactly supported section $\lambda$ of $A$ on $X$ such that $\lambda$ is equal to $1$ on the support of $f$. Multiplication by $\lambda$ defines a compactly supported endomorphism of $X_A$ hence it is an element of $\Ccal_c(\Tcal,A)$ and $f \circ \lambda = f$.}
}
\block{\Th{The functor $V \mapsto \Kcal(A X_i,V)$ defines an equivalence of categories between the category of sheaves of $A$-modules and the category of right non-degenerate $\Ccal_c(\Tcal,A)$-modules.}
\Dem{Let $A$-Mod denote the category of sheaves of $A$-modules, let $\Ccal_c(\Tcal,A)$-Mod be the category of non-degenerate right modules over $\Ccal_c(\Tcal,A)$, and let $S: A$-Mod $ \rightarrow \Ccal_c(\Tcal,A)$-Mod be the functor defined in the theorem. We will first construct a left adjoint $T$.
Let $M \in \Ccal_c(\Tcal,A)$-Mod. It suffices to construct an object $T(M)$ which satisfies the universal property (naturally in $V$):
\[ \hom(T(M),V) = \hom(M,S(V)) \]
The functoriality of $T$ and the fact that it is adjoint to $S$ then follow from general categorical nonsense. A morphism $v$ from $M$ to $S(V)$ is the data, for each $ X \in \Ccal_c(\Tcal,A)$ and each $m \in M(X)$, of a compactly supported morphism $v_{X,m}$ from $A X$ to $V$, satisfying some equations translating the naturality of $v$. The key point is that the fact that the $v_{X,m}$ are compactly supported can be deduced from those equations. Indeed, as $M$ is non-degenerate, there exist an arrow $f \in \Ccal_{c} (\Tcal,A)$ from $X$ to some $Y$ and an $n \in M(Y)$ such that $m=n.f$, hence by naturality $v_{X,m} = v_{Y,n} \circ f$, and as $f$ is compactly supported, $v_{X,m}$ is automatically compactly supported. Once this condition is removed, a morphism in $\hom(M,S(V))$ is described as the data of functions from $A X$ to $V$ satisfying some relations, which can be translated as a map from some colimit $C$ to $V$. Hence, as the category $A$-Mod is cocomplete, there is indeed such an object $T(M)$, which concludes the proof of the existence of the adjoint.
Now, as $S$ commutes with colimits (this follows immediately from proposition \ref{Prop_gammac_colimit}), $S \circ T(M)$ is defined by the same colimit as $T(M)$ but computed in $\Ccal_c(\Tcal,A)$-Mod, hence it is isomorphic to $M$; moreover the unit of the adjunction $M \rightarrow S \circ T(M)$ is clearly an inverse of this isomorphism.
It remains to prove that the counit $c_N : T \circ S (N) \rightarrow N$ is also an isomorphism, but $S(c_N)$ is isomorphic to the identity of $S(N)$, hence it is enough to check that $S$ detects isomorphisms.
Let $f : N \rightarrow M$ be a map between two sheaves of $A$-modules such that $S(f)$ is an isomorphism.
We will first prove that $f$ is a monomorphism: let $v_1,v_2$ be two functions from $V \subset X \in \Ccal_c(\Tcal,A)$ to $N$ such that $f \circ v_1 =f \circ v_2$. Let $U \ll V$ and let $\chi$ be a compactly supported function from $X$ to $A$ which is equal to one on $U$ and supported in $V$. The functions $v_1 \chi$ and $v_2 \chi$ are compactly supported functions from $X$ to $N$, hence the action of $f$ on them is the same as that of $S(f)$, so $v_1 \chi = v_2 \chi$, hence $v_1 = v_2$ at least on $U$. But as this is true for any $U \ll V$, this shows that $v_1= v_2$ on $V$, and as sub-objects of objects in $\Ccal_c(\Tcal,A)$ form a generating set by assumption, it proves that $f$ is a monomorphism. A completely similar argument shows that $f$ is also an epimorphism, and this concludes the proof.
}
}
\blockn{When $A$ is the ring $\mathbb{R}$ or $\mathbb{C}$ of real or complex Dedekind numbers on $\Tcal$, the algebra $\Ccal_c(\Tcal,A)$ corresponds roughly to the convolution algebra of compactly supported functions on a groupoid (see for example \ref{Ex_etalegpd}). As for groupoid algebras, there are several ways to complete it into a Banach algebra or $C^*$-algebra using different norms on this algebra. We will conclude this section by presenting the most important of these norms. For simplicity we will focus on the case where one uses a single separating bound $X$, so that $\Ccal_c(\Tcal,\mathbb{R})$ is the algebra of endomorphisms of $X_{\mathbb{R}}$ with compact support on $X$. All the norms defined below (both internal and external) take values in the upper semi-continuous real numbers (i.e. upper Dedekind cuts).}
\block{We start with the $L^1$-norm or $I$-norm. Internally in $\Tcal$ the object $X_\mathbb{R}$ can be endowed with the $l^1$ norm:
\[ \Vert x \Vert_1 = \inf_{x=\sum \lambda_i x_i} \sum_i |\lambda_i| \]
it is easy to check that this defines internally a pre-norm on $X_{\mathbb{R}}$, and one can use it to put an operator norm on $\Ccal_c(\Tcal,\mathbb{R})$: for $f \in \Ccal_c(\Tcal,\mathbb{R})$,
\[ \Vert f \Vert_{I,l} = \sup_{x \in X_{\mathbb{R}}, \Vert x \Vert_1 \leqslant 1} \Vert f(x) \Vert_1 \]
in the sense that $\Vert f \Vert_{I,l}<q$ if and only if there is a $q'<q$ such that, internally in $\Tcal$, for all $x \in X_{\mathbb{R}}$ with $\Vert x \Vert_1 \leqslant 1$ one has $\Vert f(x) \Vert_1 \leqslant q'$.
One easily sees that this is a norm on $\Ccal_c(\Tcal,\mathbb{R})$ which satisfies $\Vert x y \Vert_{I,l} \leqslant \Vert x \Vert_{I,l} \Vert y \Vert_{I,l} $. The involution is not isometric for this norm. To fix that, one generally defines:
\[ \Vert f \Vert_I = \text{max} (\Vert f \Vert_{I,l},\Vert f^* \Vert_{I,l}) \]
which is the so-called $I$-norm or $L^1$-norm. The completion of $\Ccal_c(\Tcal,\mathbb{R})$ for this norm is denoted $L^1(\Tcal,\mathbb{R})$ and is a Banach algebra.
}
\block{One can try to define the $L^2$-norm or reduced norm in a similar way, but we need to deal with an additional difficulty: if one tries to define the $l^2$ norm on $X_{\mathbb{R}}$ using the formula:
\[ \Vert x \Vert_2 = \left( \inf_{x=\sum \lambda_i x_i} \sum_i |\lambda_i|^2 \right)^\frac{1}{2} \]
then this gives norm $0$ to every element of $X_{\mathbb{R}}$: indeed, for any generator $x \in X$ one can write the corresponding element of $X_{\mathbb{R}}$ as:
\[ x = \sum_{i=1}^n \frac{1}{n} x \]
and deduce that, with the definition above, the norm of $x$ is smaller than $\frac{1}{\sqrt{n}}$, and hence that it is zero. To avoid this, one needs to add, in the infimum defining $\Vert \cdot \Vert_2$, the requirement that the generators used in the expression are pairwise distinct, but this works well only if $X$ is decidable. So one can construct the $L^2$-norm in a way similar to the $L^1$-norm above only if one can make our algebra act on $X_{\mathbb{R}}$ for $X$ a decidable object.
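Explicitly, the decomposition above gives, for every $n$:

```latex
\begin{align*}
\Vert x \Vert_2 \;\leqslant\; \Big( \sum_{i=1}^{n} \frac{1}{n^2} \Big)^{\!\frac{1}{2}}
  \;=\; \frac{1}{\sqrt{n}} \;\xrightarrow[n \to \infty]{}\; 0 .
\end{align*}
```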
\bigskip
We proceed as follows: one chooses a surjection $s:\Bcal \rightarrow \Tcal$ from a topos $\Bcal$ such that $s^*(X)$ is decidable in $\Bcal$; $\Bcal$ can for example be a boolean cover, or the topos freely generated by adding a co-diagonal to $X$. Any $f \in \Ccal_c(\Tcal,\mathbb{R})$ is then an endomorphism of $s^*(X)_{\mathbb{R}}$, and as $s^*(X)$ is decidable one can use this to define its $L^2$ operator norm.
Because the $l^2$ norm on $s^*(X)_{\mathbb{R}}$ is a Hilbert norm, the $L^2$-operator norm is preserved by the involution and satisfies the $C^*$-identity $\Vert x^* x \Vert = \Vert x \Vert^2$ and the $C^*$-inequality $\Vert x \Vert ^2 \leqslant \Vert x^* x +y^* y \Vert$. Hence the completion of $\Ccal_c(\Tcal,\mathbb{R})$ for this norm is a real $C^*$-algebra called the reduced $C^*$-algebra of the topos and denoted $\Ccal^*_{red}(\Tcal,\mathbb{R})$.
}
\block{Finally, one can define the maximal $C^*$-algebra of the topos as the universal real $C^*$-algebra $\Ccal^*_{max}(\Tcal,\mathbb{R})$ generated by the involutive algebra $\Ccal_c(\Tcal,\mathbb{R})$. In order to show that it exists, one exactly needs to prove the following lemma:
\Lem{For any $f \in \Ccal_c(\Tcal,\mathbb{R})$ there exists a constant $K$ such that for any involutive morphism $h: \Ccal_c(\Tcal,\mathbb{R}) \rightarrow A$ into a $C^*$-algebra one has $\Vert h(f) \Vert \leqslant K$.}
Indeed, the max norm of $f$ is then defined as the infimum of all such constants $K$.
\Dem{We start with the case where $f$ is multiplication by a compactly supported function (also denoted $f$) on $X$, and assume that $0 \leqslant f \leqslant 1$.
Any positive function on $X$ can be written as $g^* g$, so for any morphism $h$ it is sent to a positive element of the $C^*$-algebra $A$; hence in this case $h(f)$ is a positive element, and for the same reason, if $g \leqslant f$ are two positive functions on $X$ then $h(g) \leqslant h(f)$. As $f \leqslant 1$ one has $f^2 \leqslant f $, and hence $h(f)^2 = h(f^2) \leqslant h(f)$, which proves that $\Vert h(f) \Vert \leqslant 1$, hence $K=1$ works.
For a general compactly supported function $f$ on $X$, if $K$ is a constant larger than $|f|$ then $f^*f/K^2$ is a positive function between $0$ and $1$, hence for every morphism $h$ one has $\Vert h(f) \Vert \leqslant K$. So this $K$ works for $f$.
Let now $U \subset X$ be any sub-object, $s: U \rightarrow X$ be any map, and $\lambda$ be a compactly supported function on $U$. One can then define an element $s \lambda \in \Ccal_c(\Tcal,\mathbb{R})$, internally as a function from $X$ to $X_{\mathbb{R}}$, by:
\[ s\lambda(x)= \lambda(x) s(x) \]
which is compactly supported (its support is the support of $\lambda$, which is compact in $U \subset X$). One easily checks that its adjoint is:
\[ (s\lambda)^* (x) = \sum_{s(y)=x} \lambda(y) y \]
and that $(s\lambda)(s\lambda)^*$ satisfies:
\[ (s\lambda)(s\lambda)^*(x) = \left(\sum_{s(y)=x} \lambda(y)^2 \right) x \]
hence $(s\lambda)(s\lambda)^*$ is multiplication by a (compactly supported) function, so for any morphism $h$ one has indeed a constant $K$ such that $\Vert h((s\lambda)(s\lambda)^*) \Vert \leqslant K$, and hence $\Vert h(s \lambda) \Vert \leqslant \sqrt{K}$ by the $C^*$-identity; so one also has a bound for this type of element.
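For completeness, the verification of the formula for $(s\lambda)(s\lambda)^*$ displayed above is a direct computation, extending $s\lambda$ linearly:

```latex
\begin{align*}
(s\lambda)\big( (s\lambda)^*(x) \big)
  &= (s\lambda)\Big( \sum_{s(y)=x} \lambda(y)\, y \Big)
   = \sum_{s(y)=x} \lambda(y)\, \lambda(y)\, s(y) \\
  &= \Big( \sum_{s(y)=x} \lambda(y)^2 \Big)\, x ,
\end{align*}
% using that s(y) = x on every term of the sum.
```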
Take now a general element $f$ of $\Ccal_c(\Tcal,\mathbb{R})$, i.e. a compactly supported function from $X_{\mathbb{R}}$ to $X_{\mathbb{R}}$.
Using theorem \ref{Prop_MatrixRep}, the morphism $f$ can be represented by an element $\lambda_0$ of $\Gamma_c(X \times X,A)$; moreover, as $X$ is a bound, there is a finite co-product of sub-objects $U_i$ of $X$:
\[ D=\coprod_{i=1}^n U_i \]
endowed with a map $(p_1,p_2) : D \rightarrow X \times X$ and a compactly supported function $\lambda \in \Gamma_c(D,A)$ such that $\lambda_0$ is the image of $\lambda$ by the map $\sigma_{(p_1,p_2)}$. Following theorem \ref{Prop_MatrixRep}, this means that $f$ can be described as:
\[ f(x) = \sum_{p_1(d)=x} \lambda(d) p_2(d) \]
If $D$ is just one sub-object $U \subset X$, then the above formula for $f$ can be re-written as:
\[ f = (p_2 \lambda_2) (p_1 \lambda_1)^* \]
where $\lambda_2$ and $\lambda_1$ are two compactly supported functions on $U$ such that $\lambda= \lambda_1 \lambda_2$.
In the more general case where $D$ is a co-product of $U_i \subset X$, this corresponds to a decomposition of $f$ as:
\[ f = \sum_{i=1}^n (p^i_2 \lambda^i_2) (p^i_1 \lambda^i_1)^* \]
but we already know that one has constants $K^i_1$ and $K^i_2$ that control the norms of $h((p^i_1 \lambda^i_1)^*)$ and $h(p^i_2 \lambda^i_2)$ for any morphism $h$ as above, and hence one has:
\[ \Vert h(f) \Vert \leqslant \sum K^i_1 K^i_2 \]
which concludes the proof.
}
}
\section{Examples}
\label{Sec_examples}
\blockn{We will conclude this paper by giving some examples of toposes to which the above formalism applies, and the corresponding algebras. Some of the following examples are only sketched.}
\block{Let $\Lcal$ be a Hausdorff (equivalently regular) locally compact locale. Then $\Lcal$ is absolutely locally compact as a topos, the terminal object is separating, and the sub-objects of the terminal object form a basis of quasi-finite objects of cardinal $1$.
Hence any c-soft sheaf of rings on $X$ is admissible. Moreover the family $X_i$ can be chosen reduced to the single object $1$. It can also be checked that, conversely, any admissible sheaf of rings has to be c-soft.
Hence our main result becomes:
\Prop{Let $X$ be a locally compact Hausdorff locale and $A$ a c-soft sheaf of rings over $X$. The category of sheaves of $A$-modules is equivalent to the category of non-degenerate modules over the ring $\Gamma_c(A)$ of compactly supported sections of $A$.}
The theorem applies in particular when $A$ is the sheaf of continuous functions with values in $\mathbb{R}$ or $\mathbb{C}$ (or any unital topological $\mathbb{R}$-algebra), as soon as we assume either the axiom of dependent choice or that $X$ is completely regular. One can of course take $X$ to be any locally compact Hausdorff topological space or any Hausdorff manifold (in which case one does not need the axiom of choice).
If $A$ is endowed with an involution (for example the identity if $A$ is commutative, or complex conjugation if $A = \mathbb{C}$), then the involution on $\Gamma_c(A)$ is just the ``pointwise'' application of the involution on $A$.
}
\block{Let now $\Lcal$ be a locale which is only locally a locally compact Hausdorff locale.
Let $(U_i)_{i \in I}$ be an open covering of $\Lcal$ by open sublocales which are Hausdorff and locally compact. For each $i$, $\Lcal_{/U_i}$ is just $U_i$ and is a locally compact Hausdorff locale. In particular $\Lcal$ is absolutely locally compact, and each $U_i$, as well as $\coprod_{i \in I} U_i$ (if $I$ is decidable), is a separating object.
As in the previous example, any sheaf of rings $A$ which is c-soft on each $U_i$ is admissible, with the $U_i$ being $A$-separating, and one can take the $(U_i)_{i \in I}$ as our family $(X_i)$.
\Prop{In this situation, with $A$ a sheaf of rings which is c-soft on each $U_i$:
\begin{itemize}
\item $\Ccal_c(\Lcal,A)$ can be chosen to be the additive pseudo-category whose objects are the $i \in I$ and whose morphisms from $i$ to $j$ are compactly supported sections of $A$ on $U_i \wedge U_j$; composition is just the multiplication in $A$.
\item If $A$ is involutive the involution is just pointwise application of the involution of $A$ and exchange of the source and the target.
\item If $I$ is decidable (or if we assume the law of excluded middle), one can take $\coprod U_i$ as the only object in the family $X_i$. In this case $\Ccal_c(\Lcal, A)$ is the algebra of finitely supported matrices indexed by $I$ whose $(i,j)$ component is a compactly supported section of $A$ on $U_i \wedge U_j$, multiplication being defined by matrix multiplication and the multiplication in $A$.
\item In this case the involution is matrix transposition together with pointwise application of the involution in $A$.
\item In both cases, non-degenerate modules over $\Ccal_c(\Lcal,A)$ are the same as sheaves of $A$-modules over $\Lcal$.
\end{itemize}
}
Here again this applies when $A$ is the sheaf of real or complex valued continuous functions, as soon as we assume the axiom of dependent choice or that each $U_i$ is completely regular.
}
\block{\label{Ex_etalegpd}Let now $G=(G_0,G_1,s,t,\mu)$ be an étale topological groupoid. Let $\Tcal$ be the topos of $G$-equivariant sheaves, i.e. sheaves over $G_0$ endowed with an action of $G$ and $G$-equivariant maps between them.
The forgetful functor from $G$-equivariant sheaves to sheaves over $G_0$ is the $f^*$ part of an étale surjective geometric morphism $f: G_0 \rightarrow \Tcal$ which corresponds to the object $X$ which is $G_1$ over $G_0$ endowed with its multiplication action on itself.
Moreover $G_1$ can be described as $G_0 \times_{\Tcal} G_0$, and the groupoid structure on $G$ can be recovered as the obvious ``pair groupoid'' structure coming from this description of $G_1$. Any topos admitting an étale covering by a locale $G_0$ is of this form; such a topos is called an étendue.
If $G_0$ is locally compact and Hausdorff, then any ring object $A$ of $\Tcal$ which is c-soft when seen as a sheaf over $G_0$ is admissible, with $X$ as an $A$-separating object. The sub-objects of $X$ form a generating family of $\Tcal$, hence one can take $X$ as the single element of the family $(X_i)_{i \in I}$.
By theorem \ref{Prop_MatrixRep}, compactly supported endomorphisms of $X_A$ are the same as compactly supported sections of $A$ on $X \times X$, but $\Tcal_{/X \times X}$ is exactly the locale $G_1$, hence we need to distinguish two cases:
Either $G_1$ is Hausdorff, in which case ``compactly supported sections'' on $X$ do mean sections which are compactly supported. Or $G_1$ is not Hausdorff, in which case compactly supported sections on $G_1$ are computed using the co-sheaf construction of \ref{Th_def_cptsection}. As $G_1$ is always locally Hausdorff (it is étale over $G_0$, which is Hausdorff), compactly supported sections on $G_1$ can be computed using the cosheaf property on the covering of $G_1$ by some Hausdorff open sub-objects, and will be exactly linear combinations of compactly supported functions on Hausdorff open subspaces, as is usual in non-commutative geometry. In both cases it is not very hard to see that multiplication and involution are the usual multiplication and involution of the étale groupoid algebra.
}
\block{In the special case of an étale groupoid whose space of objects is locally compact Hausdorff and totally disconnected (i.e. with a basis of compact open subspaces stable under intersection), any sheaf over the space of objects is c-soft, because any open can be covered by compact clopen subspaces. Hence in this case any sheaf of rings is admissible. Applying the above machinery to such a groupoid with a constant sheaf of rings gives exactly the algebra constructed by B. Steinberg in \cite{steinberg2010groupoid}, and the equivalence between modules over the algebra and sheaves of modules over the topos is the main result of his second paper on the subject \cite{steinberg2014modules}.}
\block{Let us now look at a simple example where the divisibility condition in the definition of admissible sheaf of rings is not vacuous. Let $G$ be a pro-finite group; in fact, for simplicity, take $G = \mathbb{Z}_p$ the additive group of $p$-adic integers.
Let $\Tcal$ be the topos of smooth $G$-sets, i.e. the category of sets endowed with a continuous (i.e. smooth, or locally constant) action of $\mathbb{Z}_p$. This is an absolutely compact topos and $1$ is separating. A ring object $A$ in $\Tcal$ is admissible if and only if $p$ is invertible in $A$. Indeed, as the topos is atomic, the softness condition is vacuous (all the $Loc(X)$ are discrete, so one always has compactly supported functions), but the quasi-finite generators correspond to $G$-orbits whose cardinal is always a power of $p$, so one needs to have an inverse for $p$.
Again, for simplicity, we will focus on the case where $A$ is a ring with trivial $\mathbb{Z}_p$ action in which $p$ is invertible. We take the $(X_k = \mathbb{Z}/p^k \mathbb{Z})$ as our family of generators. A morphism from $X_k$ to $X_{k'}$ in $\Ccal_c(\Tcal,A)$ is a compactly supported section on $X_k \times X_{k'}$; the orbits of $X_k \times X_{k'}$ correspond to the double cosets $(p^{k'}\mathbb{Z}_{p}) \setminus \mathbb{Z}_p /(p^k \mathbb{Z}_{p})$, so morphisms from $X_k$ to $X_{k'}$ correspond to linear combinations of these with coefficients in $A$, and the composition can be seen to be the multiplication of the double coset algebra.
Note that in this case the invertibility of $p$ appears to be unimportant for the definition of the algebra, but it is important for the proof that modules over this algebra are the same as module objects in $\Tcal$. The main result relating module objects over $A$ and modules over $\Ccal_c(\Tcal,A)$ is, in this case, the well-known relation between representations of the double coset algebra and representations of the group.
}
\block{\label{example_Graph}We now consider a finite\footnote{Having, for each vertex, a finite number of edges targeting it would be enough, but it is simpler to assume the graph is finite for certain details below.} directed graph $\mathbb{G} = (\mathbb{G}_0,\mathbb{G}_1)$, i.e. $\mathbb{G}_0$ is a finite set (its elements are called vertices), $\mathbb{G}_1$ is a finite set whose elements are called arrows, and there are two maps $s,t : \mathbb{G}_1 \rightrightarrows \mathbb{G}_0$ giving respectively the source and the target of each arrow.
\bigskip
If $\mathbb{G}$ is a graph, a $\mathbb{G}$-presheaf $\Fcal$ is the data of:
\begin{itemize}
\item For each vertex $x \in \mathbb{G}_0$ a set $\Fcal(x)$.
\item For each arrow $a \in \mathbb{G}_1$, $a:x \rightarrow y$, i.e. $x=s(a)$ and $y=t(a)$, one has a function $\Fcal(a):\Fcal(y) \rightarrow \Fcal(x)$.
\end{itemize}
Morphisms of $\mathbb{G}$-presheaves are naturally defined as collections of functions $f_x: \Fcal(x) \rightarrow \Fcal'(x)$ for $x \in \mathbb{G}_0$ such that for all $a \in \mathbb{G}_1$ the induced square commutes.
A $\mathbb{G}$-sheaf is a $\mathbb{G}$-presheaf such that for all $x \in \mathbb{G}_0$ one has:
\[ \Fcal(x) \simeq \prod_{a: y \rightarrow x \in \mathbb{G}_1} \Fcal(y) \]
where the isomorphism is induced by the natural map which is $\Fcal(a) : \Fcal(x) \rightarrow \Fcal(y)$ on the component $a : y \rightarrow x$.
The category of $\mathbb{G}$-presheaves is equivalent to the category of presheaves on the category $\mathbb{G}^p$ freely generated by $\mathbb{G}$, i.e. the category of paths in $\mathbb{G}$. We will show that the $\mathbb{G}$-sheaves are the sheaves for a Grothendieck topology on $\mathbb{G}^p$.
For any vertex $x$ of $\mathbb{G}$, let $x^-$ be the sieve over $x$ of morphisms (i.e. paths) from $y$ to $x$ which are of length at least $1$. Equivalently, $x^-$ is the sieve generated by the covering family of the $y \rightarrow x$ for all arrows to $x$ in the graph. If $y \rightarrow x$ and $y' \rightarrow x$ are two arrows in the graph, it is not hard to see that the pullback of these two arrows (in the category of presheaves) is $\emptyset$ unless they are equal, in which case it is the arrow itself. In particular, the sheaf condition with respect to the sieve $x^- \hookrightarrow x$ is exactly the condition that the map:
\[ \Fcal(x) \simeq \prod_{a: y \rightarrow x \in \mathbb{G}_1} \Fcal(y) \]
is an isomorphism.
The pullback of $x^{-} \hookrightarrow x$ by any morphism $y \rightarrow x$ is the maximal sieve $y \hookrightarrow y$ as soon as the morphism $y \rightarrow x$ has length at least one, and is $y^{-} \hookrightarrow y$ if it is the identity. In particular, the family of all sieves $x^{-} \hookrightarrow x$ and $x \hookrightarrow x$ is stable under pullback. Hence, by \cite[Corollary II.2.3]{SGA4I}, in order to check whether a pre-sheaf is a sheaf with respect to the topology generated by the covering sieves $x^{-} \hookrightarrow x$, one just has to check the sheaf condition for those sieves, i.e. $\mathbb{G}$-sheaves are exactly the sheaves for the topology generated by the $x^{-} \hookrightarrow x$. In particular, $\mathbb{G}$-sheaves form a topos.
\bigskip
We will denote by $\Tcal_{\mathbb{G}}$ the topos of $\mathbb{G}$-sheaves. For example, if $\mathbb{G}$ has one vertex and one arrow it is exactly the topos $B\mathbb{Z}$ of sets with an action of $\mathbb{Z}$; if $\mathbb{G}$ has one vertex and two arrows it is the so-called ``J\'onsson-Tarski'' topos of sets $X$ endowed with an isomorphism between $X$ and $X \times X$. The reader should immediately note that the site we used to define it is far from being sub-canonical: if one fixes a vertex $y$, then $\Fcal(x):=\{\text{Paths from $x$ to $y$}\}$ does not satisfy the sheaf condition.
\bigskip
Let $x$ be a vertex of the graph and let $P_x$ be the representable sheaf associated to the corresponding object of $\mathbb{G}^p$, i.e. the sheafification of the representable pre-sheaf. The topos $\Tcal_{\mathbb{G}}/P_x$ can be described by the slice site $\mathbb{G}^p/x$. As a category, it is the poset of finite paths in $\mathbb{G}$ that end at $x$, ordered by extension; the topology is generated by the cover $p^-$ of a path $p$ by all the paths of length one more than the length of $p$ which extend $p$. This is exactly a site for the locale of infinite paths in $\mathbb{G}$ ending at $x$ (i.e. indexed by $i \leqslant 0$ and such that $p_0=x$).
In particular, $\Tcal_{\mathbb{G}}/P_x$ is a locale, and even a Stone space. Hence $\Tcal_{\mathbb{G}}$ is absolutely locally compact, the $P_x$ are separating objects for $\Tcal_{\mathbb{G}}$, any sheaf of rings over $\Tcal_{\mathbb{G}}$ is admissible, and compactly supported sections over $P_x$ are just ordinary sections.
\bigskip
Let $K$ be a unital ring, and consider it as a constant sheaf over $\Tcal_{\mathbb{G}}$. One takes $X= \coprod_{x \in \mathbb{G}_0} P_x$ as a single generator of the topos to construct the algebra $K_{\mathbb{G}} = \Ccal_c(\Tcal_{\mathbb{G}},K)$.
If $M$ is a $K$-module over $\Tcal_{\mathbb{G}}$ then it is just a $\mathbb{G}$-sheaf of ordinary $K$-modules, and the corresponding $\Ccal_c(\Tcal_{\mathbb{G}},K)$-module is (at least at the level of the underlying $K$-module):
\[ \bigoplus_{x \in \mathbb{G}_0} M(x) \]
\bigskip
\Def{The Leavitt path algebra $L_K(\mathbb{G})$ is the involutive $K$-algebra generated by elements $v_a$ for $a \in \mathbb{G}_1$ and $p_e$ for $e \in \mathbb{G}_0$ with the relations:
\begin{itemize}
\item $p_e^*=p_e$ and $p_e p_{e'} = \delta_{e,e'} p_e$.
\item $v_a p_{s(a)} =p_{t(a)} v_a = v_a$
\item $v_a^* v_b = \delta_{a,b} p_{s(a)}$
\item $ p_e = \sum_{a \in \mathbb{G}_1, t(a)=e} v_a v_a^* $
\end{itemize}
}
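For instance (a standard example, not needed in what follows), for the graph with a single vertex $e$ and $n \geqslant 2$ loops $a_1, \dots, a_n$, one has $p_e = 1$ and the relations reduce to the classical Leavitt relations:

```latex
\begin{align*}
v_{a_i}^*\, v_{a_j} = \delta_{i,j} \cdot 1,
\qquad
\sum_{i=1}^{n} v_{a_i}\, v_{a_i}^* = 1,
\end{align*}
```

so that $L_K(\mathbb{G})$ is the Leavitt algebra $L_K(1,n)$; for $n=2$ this is the algebra attached to the J\'onsson-Tarski topos mentioned above.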
We will first see that a (non-degenerate) right $L_K(\mathbb{G})$-module is exactly a $K$-module over $\Tcal_{\mathbb{G}}$. First observe that $i=\sum_{e \in \mathbb{G}_0} p_e$ is a unit for this algebra: indeed, it follows easily from the relations above that $ip_e=p_e i = p_e$, $i v_a = v_a i =v_a$ and $iv_a^* = v_a^* i = v_a^*$, and a general element is a polynomial in those elements, hence $i$ is a unit. In particular the $p_e$ form a maximal family of orthogonal projections, hence give a decomposition of $M$ into:
\[ M = \bigoplus M(e) \]
where $M(e) = M.p_e$.
Moreover, right multiplication by $v_a$, for $a:e \rightarrow e'$ an arrow in $\mathbb{G}$, corresponds exactly to a linear map from $M(e')$ to $M(e)$: indeed, for an element $x$ in $M(e'')$ with $e'' \neq e'$ one has $x v_a= x p_{e''} v_a = 0$, and for $x \in M(e')$ one has $x v_a p_e = x v_a$, hence $x v_a \in M(e)$. If one considers the $K$-algebra generated by the $v_a$ and $p_e$ subject to the first two relations, a right module over this algebra would correspond exactly to a $\mathbb{G}$-presheaf of $K$-modules. Adding the existence of the $v_a^*$ subject to the last two relations exactly asserts that the natural map:
\[ M(e) \rightarrow \prod_{ a: e' \rightarrow e} M(e') \]
is an isomorphism, and hence a right $L_K(\mathbb{G})$-module can be identified with a $\mathbb{G}$-sheaf of $K$-modules.
\bigskip
A map from $\coprod_{e \in \mathbb{G}_0} P_e$ to a sheaf of $K$-modules $M$ corresponds exactly to an element of $\prod M(e)$, hence to an element of the $L_K(\mathbb{G})$-module corresponding to $M$; hence the free $K$-module on $\coprod_{e \in \mathbb{G}_0} P_e$ corresponds to the free $L_K(\mathbb{G})$-module on one generator. The algebra of the topos $\Tcal_{\mathbb{G}}$ is the algebra of compactly supported endomorphisms of this free module, but because of the compactness of $\coprod_{e \in \mathbb{G}_0} P_e$ it is exactly the set of all endomorphisms of $L_K(\mathbb{G})$, hence it is $L_K(\mathbb{G})$ itself. We have proved:
\Th{The convolution algebra $\Ccal_c(\Tcal_{\mathbb{G}},K)$ is the Leavitt path algebra $L_K(\mathbb{G})$.}
The result still holds in the case where the graph $\mathbb{G}$ is infinite, as long as any vertex has only a finite number of edges pointing to it. In this infinite situation the algebra is no longer unital, but the finite sums of the $p_e$ form an ``approximate unit'', and not all endomorphisms of $L_K(\mathbb{G})$ are compactly supported, but it is not very hard to make everything work. Also, in a constructive context it is useful to assume that the set $\mathbb{G}_0$ of vertices of $\mathbb{G}$ is decidable; otherwise one cannot consider the object $\coprod_{e \in \mathbb{G}_0} P_e$ as a separating object, and one needs to work with a ``Leavitt path algebroid'' whose set of objects is $\mathbb{G}_0$ instead.
}
\section{Introduction}
Let $g_1, \dots, g_s \in \mathbb{R}[X_1,\dots,X_n]$ and consider
the basic closed semialgebraic set
$$
S = \{x \in \mathbb{R}^n \ | \ g_1(x) \ge 0, \dots, g_s(x) \ge 0 \}.
$$
Given $f \in \mathbb{R}[X_1,\dots,X_n]$ such that $f$ is non-negative on $S$, a classical
question is whether there is a representation of $f$
which makes this fact evident.
Concerning this problem, there are two important algebraic objects associated to $g_1, \dots, g_s$:
the preordering
$$
T(g_1, \dots, g_s) = \Big\{\sum_{I \subset \{1, \dots, s \}} \sigma_I \prod_{i \in I}g_i
\ | \
\sigma_I \in \sum \mathbb{R}[X_1, \dots, X_n]^2 \hbox{ for every } I \subset \{1, \dots, s \} \Big\}
$$
and
the quadratic module
$$
M(g_1, \dots, g_s) = \left\{\sigma_0 + \sigma_1g_1 + \dots + \sigma_sg_s \ | \
\sigma_0, \sigma_1, \dots, \sigma_s \in \sum \mathbb{R}[X_1, \dots, X_n]^2\right\}.
$$
It is clear that $M(g_1, \dots, g_s) \subset T(g_1, \dots, g_s)$, but the equality only holds
in some special cases, for instance when $s = 1$.
It is also clear that every polynomial $f \in T(g_1, \dots, g_s)$ is non-negative on $S$,
but the converse is not true in general (see \cite[Example]{Ste}).
Schm\"udgen Positivstellensatz (\cite{Schm}) states that if $S$ is compact,
every polynomial $f \in \mathbb{R}[X_1, \dots,X_n]$ positive on $S$ belongs to $T(g_1, \dots, g_s)$.
On the other hand, Putinar Positivstellensatz (\cite{Put}) states that if $M(g_1, \dots, g_s)$ is archimedean,
every polynomial $f \in \mathbb{R}[X_1, \dots,X_n]$ positive on $S$ belongs to $M(g_1, \dots, g_s)$.
Recall that the quadratic module $M(g_1, \dots, g_s)$ is archimedean if
there exists $r \in \mathbb{N}$ such that
$$
r - X_1^2 - \dots - X_n^2 \in M(g_1, \dots, g_s).
$$
Note that if $M(g_1, \dots, g_s)$ is archimedean, then $S$ is compact, but again, the converse is not true in general
(see \cite[Example 4.6]{JacPres}).
In the case where $\dim S \ge 3$ or in the case where $n = 2$ and $S$ contains an affine full-dimensional cone,
there exist polynomials non-negative on $S$ which do not belong to $T(g_1, \dots, g_s)$ (\cite{Sche}).
In contrast, M. Marshall proved in \cite{Marsh} the following result for polynomials non-negative on the strip
$[0,1] \times \mathbb{R} \subset \mathbb{R}^2$:
\begin{theorem} \label{th:Marshall}
Let $f \in \mathbb{R}[X,Y]$ with $f \geq 0 $ on $[0,1]\times \mathbb{R}$. Then
\begin{align}\label{repr_f}
f= \sigma_0 + \sigma_1 X(1-X)
\end{align}
with $\sigma_0, \sigma_1 \in \sum \mathbb{R}[X,Y]^2$.
\end{theorem}
In other words, Theorem \ref{th:Marshall} states that every polynomial non-negative on the strip $[0,1]\times \mathbb{R}$
belongs to $M(X(1-X))$.
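For instance, for the toy polynomial $f = X(1-X)Y^2 + 1$ (an illustrative example of our own, not taken from \cite{Marsh}), one may take $\sigma_0 = 1$ and $\sigma_1 = Y^2$. A quick numerical sanity check of the identity $f = \sigma_0 + \sigma_1 X(1-X)$:

```python
# Numerical sanity check (toy example of ours, not from the paper):
# f = X(1-X)Y^2 + 1 is positive on [0,1] x R and admits the
# representation f = sigma_0 + sigma_1 * X(1-X) with sigma_0 = 1, sigma_1 = Y^2.
def f(x, y):
    return x * (1 - x) * y**2 + 1

def sigma0(x, y):
    return 1.0

def sigma1(x, y):
    return y**2

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    for y in [-10.0, -1.0, 0.0, 1.0, 10.0]:
        lhs = f(x, y)
        rhs = sigma0(x, y) + sigma1(x, y) * x * (1 - x)
        assert abs(lhs - rhs) < 1e-9
        assert lhs > 0  # f is positive on the strip
```

Note that both $\sigma_0$ and $\sigma_1$ are sums of squares, as required.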
This result was later extended to other two-dimensional
semialgebraic sets in \cite{NguPowers} and \cite{SchWen}.
In this paper, we present some results concerning effectivity issues around
the representation obtained in Theorem \ref{th:Marshall} in some
particular cases.
For instance, a natural question is whether it is possible to bound the degree of each term
in (\ref{repr_f}).
In Section \ref{sec:deg_2}, we
prove a degree bound for each term in the case
$\deg_Y f \leq 2$. To this end,
we first characterize all the extreme rays of a suitable cone
containing $f$
and study their representation as in
(\ref{repr_f}).
The main result in this section is the following.
\begin{theorem}\label{th:main_degY_2} Let $f \in \mathbb{R}[X,Y]$
with $f \ge 0$ on $[0,1]\times \mathbb{R}$ and
$\deg_Y f \le 2$.
Then $f$ can be written as in (\ref{repr_f}) with
$$
\deg (\sigma_0), \deg (\sigma_1 X(1-X)) \le \deg_X f + 3.
$$
\end{theorem}
In Section \ref{sec:al_app}, we deal again with the question of bounding the degrees of each term in
(\ref{repr_f}) in a different situation. First, in Section \ref{subsect:positiv},
we consider the case where $f$ is positive on $[0, 1] \times \mathbb{R}$
and
does not \emph{vanish at infinity}. To make this concept precise, we introduce the following definition
coming from \cite{Pow}:
\begin{defn}
Let $f \in \mathbb{R}[X,Y]$ and $m=\deg_Y f$.
The polynomial $f$ is \emph{fully} $m$-\emph{ic} on $[0,1]$ if for every $x \in [0,1]$, $f(x,Y) \in \mathbb{R}[Y]$ has degree $m$.
\end{defn}
Given
$$
f= \sum_{0 \leq i \leq m} \sum_{0 \leq j \leq d} a_{ji}X^jY^i \in \mathbb{R}[X,Y],
$$
define
$$
\bar f= \sum_{0 \leq i \leq m} \sum_{0 \leq j \leq d} a_{ji}X^jY^iZ^{m-i} \in \mathbb{R}[X,Y, Z].
$$
Note that if $f>0$ on $[0,1]\times \mathbb{R}$ and $f$ is fully $m$-ic on $[0,1]$ then $m$ is even and
$\bar f>0$ on
$\{(x, y, z) \ | \ x \in [0, 1], \, y^2 + z^2 = 1 \}$.
As usual, we write
$$
\| f \|_{\infty} = \max \{ |a_{ji}| \ | \ 0 \le i \le m, \ 0 \le j \le d\}.
$$
We prove the following result.
\begin{theorem}\label{thm:f_positivo}
Let $f \in \mathbb{R}[X,Y]$
with $f > 0$ on $[0,1]\times \mathbb{R}$, $f$ fully $m$-ic on $[0, 1]$, $d = \deg_X f \ge 2$ and
$$
f^{\bullet} = \min\{\bar f(x, y, z) \ | \ x \in [0, 1], \, y^2 + z^2 = 1 \} > 0.
$$
Then $f$ can be written as in (\ref{repr_f}) with
$$
\deg (\sigma_0), \deg(\sigma_1 X(1-X)) \leq \frac{d^3(m+1)\|f\|_{\infty}}{f^{\bullet}}.
$$
\end{theorem}
Note that the cases $\deg_X f = 0$ and $\deg_X f = 1$ are not covered by
Theorem \ref{thm:f_positivo}, but these cases are of a simpler nature.
If $\deg_X f = 0$,
then $f$ belongs to $\mathbb{R}[Y]$ and is non-negative on $\mathbb{R}$, so $f$ can simply be written as
a sum of squares in $\mathbb{R}[Y]$ with the degree of each term bounded by $m$
(see \cite[Proposition 1.2.1]{Marshall_book} and
\cite{MagSafSch}).
If $\deg_X f=1$, we have
$$
f(X,Y)= f(1,Y)X+f(0,Y)(1-X)
$$
and, since $f(0,Y)$ and $f(1,Y)$ are non-negative on $\mathbb{R}$,
again these polynomials can be written as
sums of squares in $\mathbb{R}[Y]$ with the degree of each term bounded by $m$;
then,
using the identities
$$
X = X^2 + X(1-X)
\qquad \hbox{ and } \qquad
1-X = (1-X)^2 + X(1-X),
$$
we take $\sigma_0 = f(1,Y)X^2 + f(0, Y)(1-X)^2$ and
$\sigma_1 = f(1,Y) + f(0, Y)$ and the identity $f = \sigma_0 + \sigma_1X(1-X)$ holds
with the degree of each term bounded by $m+2$.
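As a concrete instance of this construction (the boundary data below is our own illustrative choice), take $f(0,Y) = Y^2$ and $f(1,Y) = (Y-1)^2$, so that $f = (Y-1)^2X + Y^2(1-X)$; the identity $f = \sigma_0 + \sigma_1 X(1-X)$ can then be checked numerically:

```python
# Check f = sigma_0 + sigma_1 * X(1-X) for the degree-1-in-X construction,
# using the illustrative boundary data f(0,Y) = Y^2, f(1,Y) = (Y-1)^2.
def f0(y):            # f(0, Y)
    return y**2

def f1(y):            # f(1, Y)
    return (y - 1)**2

def f(x, y):          # f(X,Y) = f(1,Y) X + f(0,Y) (1-X)
    return f1(y) * x + f0(y) * (1 - x)

def sigma0(x, y):     # f(1,Y) X^2 + f(0,Y) (1-X)^2
    return f1(y) * x**2 + f0(y) * (1 - x)**2

def sigma1(x, y):     # f(1,Y) + f(0,Y)
    return f1(y) + f0(y)

for x in [i / 10 for i in range(11)]:
    for y in [-5.0, -0.5, 0.0, 0.5, 5.0]:
        assert abs(f(x, y) - (sigma0(x, y) + sigma1(x, y) * x * (1 - x))) < 1e-9
```

The check only relies on the two identities for $X$ and $1-X$ displayed above.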
To prove Theorem \ref{thm:f_positivo}, in Section \ref{subsect:positiv} we show a constructive way of producing
the representation in Theorem \ref{th:Marshall}
in the case of $f$ positive on $[0, 1] \times \mathbb{R}$ and fully $m$-ic on $[0, 1]$,
and then we bound the degrees of each term.
A similar constructive way of obtaining this representation was already given in
\cite[Proposition 3]{PowRez} under slightly different hypotheses.
The idea behind the construction is
to consider the unbounded variable as a parameter
and to produce
a uniform version of
a representation theorem for the segment $[0,1]$ using the effective version of P\'olya's Theorem
from \cite{Polya_bound}.
This technique was also used in related problems in \cite{Pow} and \cite{EscPer}.
Finally, in Section \ref{subsect:zeros}, we prove that the constructive method from the previous section
also works in the case of $f$ non-negative on the strip and having only a finite number of zeros, all of them
lying on the boundary,
and such that $\frac{\partial f}{\partial X}$ does not vanish at any of them.
\section{The case $\deg_Y f\leq 2$}\label{sec:deg_2}
In this section we
consider the problem of finding a degree bound for the representation in Theorem \ref{th:Marshall}
under the assumption $\deg_Y f \le 2$.
Since it will be more convenient to homogenize with respect to the unbounded variable,
we introduce the set
$${\cal S} = [0, 1] \times ( \mathbb{R}^2 \setminus \{(0, 0)\} ) \subseteq \mathbb{R}^3.
$$
It is easy to see that for $\bar f=f_2(X)Y^2+f_1(X)YZ+f_0(X)Z^2$ non-negative on ${\cal S}$ and $x_0 \in [0, 1]$,
$f_2(x_0) \ge 0$ and $f_0(x_0) \ge 0$ and either $\bar f(x_0, Y, Z) = 0$ or
$\deg_Y \bar f(x_0, Y, Z)$ and $\deg_Z \bar f(x_0, Y, Z)$ are even numbers;
therefore, if $X - x_0 \, | \, f_2$ or $X-x_0 \, | \, f_0$, then $X-x_0 \, | \, f_1$.
Moreover, if $x_0 \in (0, 1)$ and $X - x_0 \, | \, f_2$, then $(X - x_0)^2 \, |\, f_2$. Similarly,
if $x_0 \in (0, 1)$ and $X - x_0 \, |\, f_0$, then $(X - x_0)^2 \,|\, f_0$.
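To see why the divisibility condition on $f_1$ is forced, note that if, say, $f_2(0) = 0$ but $f_1(0) \ne 0$, then $\bar f(0, Y, Z)$ is linear in $Y$ for $Z \ne 0$ and hence changes sign. A numerical illustration on the polynomial $\bar f = XY^2 + YZ + Z^2$ (our own example, which indeed fails to be non-negative on ${\cal S}$):

```python
# If f_2(0) = 0 but f_1(0) != 0, the form \bar f(0, Y, Z) is linear in Y
# (for Z != 0) and must take negative values.  Illustration with the
# example (ours) \bar f = X Y^2 + Y Z + Z^2, which violates X | f_1.
def fbar(x, y, z):
    return x * y**2 + y * z + z**2

# At x = 0 the form is y*z + z^2 = z*(y + z): negative e.g. at (y, z) = (-2, 1).
assert fbar(0.0, -2.0, 1.0) < 0
# Hence this \bar f is NOT non-negative on S, as the text predicts.
```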
We introduce the following cone.
\begin{defn}
Given $d, e \in \mathbb{N}_0$, we define
$$
{\cal C}_{d, e}
= \Big\{
\bar f=f_2(X)Y^2+f_1(X)YZ+f_0(X)Z^2 \in \mathbb{R}[X,Y, Z]
\ | \
$$
$$
\bar f \ge 0 \hbox{ on } {\cal S}, \
\deg f_2 \leq d, \ \deg f_1 \leq \left \lfloor \frac12(d + e) \right \rfloor, \ \deg f_0 \leq e
\Big\}.
$$
\end{defn}
We can think of
${\cal C}_{d,e}$ as included in $\mathbb{R}^{d + \lfloor \frac12(d + e) \rfloor + e + 3}$ by identifying
each $\bar f \in {\cal C}_{d,e}$ with its vector of coefficients in some prefixed order.
It is easy to see that ${\cal C}_{d, e}$ is a closed cone which does not contain lines. Therefore, we can use the
following well-known result (see for instance \cite[Section 18]{Rockafellar}).
\begin{theorem}\label{th:extr_rays}
Let ${\cal C} \subseteq \mathbb{R}^N$ be a closed cone which does not contain lines, then
every element of ${\cal C}$ can be written as a sum of elements lying on extreme rays of ${\cal C}$.
\end{theorem}
For a given $f \in \mathbb{R}[X, Y]$ non-negative on $[0, 1] \times \mathbb{R}$, the strategy for proving
that Theorem \ref{th:main_degY_2} holds for $f$ is to use the classical idea of
characterizing the extreme rays of ${\cal C}_{d,e}$, then to study
the homogenized representation
as in Theorem \ref{th:Marshall} for the elements lying on these rays,
and finally to
decompose $\bar f$ as
a sum of them.
Under the additional hypothesis that $d$ and $e$ have the same parity, our characterization of the extreme rays of
${\cal C}_{d, e}$ is the following.
\begin{theorem}\label{th:main}
Let $d, e \in \mathbb{N}_0$ such that $d\equiv e \, (2)$.
The extreme rays of ${\cal C}_{d, e}$
are the rays generated by the polynomials of the form $r(X)(p(X)Y + q(X)Z)^2$
with
\begin{itemize}
\item $p$ and $q$ not simultaneously zero and $(p: q) = 1$,
\item $r \ne 0$, $r \ge 0$ on $[0,1]$ and $r$ has $\deg r$ real roots in $[0, 1]$ (counted with multiplicity),
\item $2 \deg p \le d, 2 \deg q \le e$ and $\deg r = \min \{d - 2\deg p, e - 2\deg q \}.$
\end{itemize}
\end{theorem}
To prove Theorem \ref{th:main}, the idea is to proceed inductively on a
sequence of cones ordered \emph{by inclusion}. To do so, we first need to show that,
given $\bar f=f_2(X)Y^2+f_1(X)YZ+f_0(X)Z^2 \in {\cal C}_{d,e}$,
certain factors of $f_2(X)$ or $f_0(X)$ are necessarily also factors of $f_1(X)$;
in this case,
after removing these factors we move to a smaller cone.
The following lemmas are some basic auxiliary results concerning extreme rays of ${\cal C}_{d, e}$.
\begin{lemma} \label{lem:se_anula}
Let $d, e \in \mathbb{N}_0$ and let $\bar f$ be a generator of an extreme ray of ${\cal C}_{d, e}$.
Then $\bar f$ vanishes at some point of ${\cal S}$.
\end{lemma}
\begin{proof}{Proof:}
Suppose $\bar f > 0$ on ${\cal S}$ and take
$$
c = \min\{\bar f(x, y, z) \, | \, x \in [0, 1], \, y^2 + z^2 = 1 \} > 0.
$$
Consider $cY^2$, $c(Y^2+Z^2) \in {\cal C}_{d,e}$. We have
$$
0 \le cY^2 \le c(Y^2 + Z^2) \le \bar f \ \hbox{ on } {\cal S},
$$
but since $\bar f$ generates an extreme ray of ${\cal C}_{d, e}$, $\bar f$ is a scalar multiple of both $cY^2$ and $c(Y^2 + Z^2)$ which is impossible.
\end{proof}
\begin{lemma} \label{lem:algun_coef_cero}
Let $d, e \in \mathbb{N}_0$ and let
$\bar f = f_2(X)Y^2 + f_1(X)YZ + f_0(X)Z^2$ be a generator of an extreme ray of ${\cal C}_{d, e}$.
If
$f_2 = 0$, $f_1 = 0$ or $f_0 = 0$, then
$\bar f$ is of the form
$$
r(X)Y^2 \hbox{ or } r(X)Z^2.
$$
\end{lemma}
\begin{proof}{Proof:}
If $f_2 = 0$ then $f_1 = 0$, $\bar f = f_0(X)Z^2$ and we take $r(X) = f_0(X)$. Similarly, if $f_0 = 0$
then $f_1 = 0$, $\bar f = f_2(X)Y^2$ and we take $r(X) = f_2(X)$.
On the other hand, if $f_1 = 0$ and $f_2, f_0 \ne 0$, then
$$
0 \le f_2(X)Y^2 \le f_2(X)Y^2 + f_0(X)Z^2 = \bar f \ \hbox{ on } {\cal S}
$$
which, proceeding similarly to the proof of Lemma \ref{lem:se_anula}, is impossible.
\end{proof}
The following lemma shows that the second and third conditions in the characterization of the extreme rays
in Theorem \ref{th:main} are in fact consequences of the first one.
\begin{lemma} \label{lem:conclu_1_var}
Let $d, e \in \mathbb{N}_0$.
If $r(X)(p(X)Y + q(X)Z)^2$
with $p$ and $q$ not simultaneously zero and $(p: q) = 1$
generates an extreme ray of ${\cal C}_{d, e}$,
then
\begin{itemize}
\item $r \ne 0$, $r \ge 0$ on $[0,1]$ and $r$ has $\deg r$ real roots in $[0, 1]$ (counted with multiplicity),
\item $2 \deg p \le d, 2 \deg q \le e$ and $\deg r = \min \{d - 2\deg p, e - 2\deg q \}.$
\end{itemize}
\end{lemma}
\begin{proof}{Proof:}
Let $\bar f = r(X)(p(X)Y + q(X)Z)^2$. Since $\bar f \ne 0$, $r \ne 0$, and
since $\bar f \geq 0$ on ${\cal S}$, $r \geq 0$ on $[0,1]$. If $r$ has a complex non-real root, or a
real root which does not belong to the
interval $[0,1]$, it is easy to see that $r$ can be written as $r=r_1+r_2$
with $r_1,r_2\in \mathbb{R}[X]\setminus\{0\}$,
$\deg r_1, \deg r_2 \le \deg r$,
$\deg r_1 \ne \deg r_2$ and
$r_1,r_2 \geq 0$ on $[0,1]$.
Then for $i = 1, 2$, we take $f_i=r_i(X)(p(X)Y + q(X)Z)^2 \in {\cal C}_{d, e}$ and we have
$$
0 \le f_i \le \bar f \ \hbox{ on } {\cal S},
$$
but since $\bar f$ generates an extreme ray of ${\cal C}_{d, e}$, $\bar f$ is a scalar multiple of both $f_1$ and $f_2$ which is impossible.
Since $\bar f \in {\cal C}_{d,e}$, we have
$2 \deg p \le d, 2 \deg q \le e$ and $\deg r \le \min \{d - 2\deg p, e - 2\deg q \}$.
If $\deg r < \min \{d - 2\deg p, e - 2\deg q \}$, we have
$X \bar f \in {\cal C}_{d,e}$ and
$$
0 \le X \bar f \le \bar f \ \hbox{ on } {\cal S}
$$
which is again impossible for similar reasons.
\end{proof}
In order to prove Theorem \ref{th:main}, we will perform several changes of variables.
The following three
lemmas summarize the properties we need. We omit their proofs since they are straightforward.
\begin{lemma} \label{lem:camb_variable}
Let $d, e \in \mathbb{N}_0$ with $d \le e$, $\bar f \in {\cal C}_{d, e}$, $\beta \in \mathbb{R}$
and
$h
\in \mathbb{R}[X, Y, Z]$
defined by
$$
h(X, Y, Z) =\bar f(X, Y + \beta Z, Z)= f_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
Then:
\begin{itemize}
\item $h$ belongs to ${\cal C}_{d, e}$.
\item If $\bar f$ generates an extreme ray of ${\cal C}_{d, e}$,
then $h$ generates an extreme ray of ${\cal C}_{d,e}$.
\item If $(x_0, y_0, z_0) \in {\cal S}$ with $z_0 \ne 0$ and $\bar f(x_0, y_0, z_0) = 0$
and $\beta = y_0/z_0$, then $h_0(x_0) = 0$.
\item If $h$ can be written as
$
r(X)(p(X)Y + q(X)Z)^2
$
with $p$ and $q$ not simultaneously zero and $(p:q)$ $= 1$,
then
$\bar f$
can be written as
$$
r(X)(p(X)Y + (-\beta p(X) + q(X))Z)^2
$$
with $p$ and $ -\beta p + q$ not simultaneously zero and
$(p : - \beta p + q) = 1$.
\end{itemize}
\end{lemma}
\begin{lemma} \label{lem:camb_variable_d_men_e}
Let $d, e \in \mathbb{N}_0$ with $d +2 \le e$,
$\bar f \in {\cal C}_{d, e}$,
$\ell \in \mathbb{R}[X]$ with $\deg \ell = 1$ and
$h \in \mathbb{R}[X, Y, Z]$ defined by
$$
h(X, Y, Z) = \bar f(X, Y + \ell(X) Z, Z)
= f_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
Then:
\begin{itemize}
\item $h$ belongs to ${\cal C}_{d, e}$.
\item If $\bar f$ generates an extreme ray of ${\cal C}_{d, e}$,
then $h$ generates an extreme ray of ${\cal C}_{d, e}$.
\item If $(x_0, y_0, z_0),$ $(x_1, y_1, z_1)
\in {\cal S}$
with $x_0 \ne x_1$, $z_0, z_1 \ne 0$, $y_0/z_0 \ne y_1/z_1$
and $\bar f(x_0, y_0, z_0) = \bar f(x_1, y_1, z_1) = 0$ and
$$
\ell(X) = \frac{y_1/z_1 - y_0/z_0}{x_1 - x_0}(X - x_0) + y_0/z_0,
$$
then $h_0(x_0) = h_0(x_1) = 0$.
\item If $h$ can be written as
$
r(X)(p(X)Y + q(X)Z)^2
$
with $p$ and $q$ not simultaneously zero and $(p:q)$ $= 1$,
then $\bar f$ can be written as
$$
r(X)(p(X)Y + (-\ell(X) p(X) + q(X))Z)^2
$$
with $p$ and $ - \ell p + q$ not simultaneously zero and
$(p : - \ell p + q) = 1$.
\end{itemize}
\end{lemma}
\begin{lemma} \label{lem:camb_variable_d_ig_e}
Let $d, e \in \mathbb{N}_0$ with $d = e$, $\bar f \in {\cal C}_{d, e}$,
$\beta_0, \beta_1 \in \mathbb{R}$
with $\beta_0 \ne \beta_1$ and
$h \in \mathbb{R}[X, Y, Z]$ defined by
$$
h(X, Y, Z) = \bar f(X, \beta_0 Y + \beta_1 Z, Y + Z)
= h_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
Then:
\begin{itemize}
\item $h$ belongs to ${\cal C}_{d, e}$.
\item If $\bar f$ generates an extreme ray of ${\cal C}_{d, e}$,
then $h$ generates an extreme ray of ${\cal C}_{d, e}$.
\item If $(x_0, y_0, z_0),$
$(x_1, y_1, z_1)
\in {\cal S}$
with $z_0, z_1 \ne 0$, $y_0/z_0 \ne y_1/z_1$ and $\bar f(x_0, y_0, z_0) = \bar f(x_1, y_1, z_1) = 0$ and
$\beta_0 = y_0/z_0$,
$\beta_1 = y_1/z_1$, then $h_2(x_0) = h_0(x_1) = 0$.
\item If $h$ can be written as
$
r(X)(p(X)Y + q(X)Z)^2
$
with $p$ and $q$ not simultaneously zero and $(p:q)$ $= 1$,
then $\bar f$
can be written as
$$
\frac1{(\beta_0 - \beta_1)^2}r(X)((p(X) - q(X))Y + (-\beta_1 p(X) + \beta_0q(X))Z)^2
$$
with $p- q$ and $-\beta_1 p + \beta_0q $ not simultaneously zero and
$(p- q :-\beta_1 p + \beta_0q ) = 1$.
\end{itemize}
\end{lemma}
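As a sanity check of the third item of Lemma \ref{lem:camb_variable}, consider the illustrative polynomial $\bar f = (Y - XZ)^2 = Y^2 - 2XYZ + X^2Z^2 \in {\cal C}_{0, 2}$ (our own example), which vanishes at $(1/2, 1, 2) \in {\cal S}$:

```python
# Check the third item of Lemma lem:camb_variable on the example (ours)
# \bar f = (Y - X Z)^2, which lies in C_{0,2} and vanishes at
# (x0, y0, z0) = (1/2, 1, 2).  After the substitution Y -> Y + beta*Z
# with beta = y0/z0, the coefficient h_0(X) of Z^2 vanishes at x0.
def fbar(x, y, z):
    return (y - x * z)**2

x0, y0, z0 = 0.5, 1.0, 2.0
assert fbar(x0, y0, z0) == 0.0

beta = y0 / z0
def h(x, y, z):
    return fbar(x, y + beta * z, z)

# h_0(x) is recovered as h(x, 0, 1).
assert h(x0, 0.0, 1.0) == 0.0
```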
We are ready to prove the characterization of the extreme rays of the
cone ${\cal C}_{d, e}$ given in Theorem \ref{th:main}.
\begin{proof}{Proof of Theorem \ref{th:main}:}
We begin by proving that if $\bar f = r(X)(p(X)Y + q(X)Z)^2$
with $r, p$ and $q$ as in the statement of Theorem \ref{th:main}, then
$\bar f$ generates an extreme ray of ${\cal C}_{d,e}$.
Consider
$$
g = g_2(X)Y^2 + g_1(X)YZ + g_0(X)Z^2 \in {\cal C}_{d, e}
$$
such that $0 \le g \le \bar f$ on ${\cal S}$. We want to show that $g$ is a scalar multiple of $\bar f$.
If $p=0$, since $(p:q) = 1$ we have $q = \lambda \in \mathbb{R}\setminus\{0\}$ and then $\deg r = e$. On the other hand,
for every $x \in [0,1]$, $\bar f(x,1,0)=0$. Then, for every $x \in [0,1]$, $g_2(x)=g(x,1,0)=0$
and this implies $g_2 = g_1=0$. Therefore, $g=g_0(X)Z^2$, but
since $0 \le g \le \bar f$ on ${\cal S}$, $0 \leq g_0 \leq \lambda^2 r $ on $[0,1]$. It is easy to see
that every root of $r$ is necessarily also a root of $g_0$ with at least the same multiplicity, then we have
$\deg r \le \deg g_0 \leq e=\deg r$, $g_0$ is a scalar multiple of $r$ and $g$ is a scalar multiple of $\bar f$.
If $p \neq 0$, we consider $G \in \mathbb{R}[X,Y,Z]$ defined by
$$
G(X,Y,Z)=p(X)^2g(X,Y,Z)=g_2(X)(p(X)Y+q(X)Z)^2+G_1(X)YZ+G_0(X)Z^2.
$$
We first see that $G_1=G_0=0$.
Take $x_0 \in [0,1]$ such that $p(x_0)\neq 0$. Since $\bar f(x_0, -q(x_0), p(x_0))=0$,
$G(x_0, -q(x_0), p(x_0))=0$ and then
\begin{align}\label{id_G_1}
-G_1(x_0)q(x_0)p(x_0)+G_0(x_0)p(x_0)^2=0.
\end{align}
Moreover, since $G \geq 0$ on ${\cal S}$,
\begin{align}\label{id_G_0}
\frac{\partial G}{\partial Y}(x_0, -q(x_0), p(x_0))= G_1(x_0)p(x_0)=0.
\end{align}
We conclude from (\ref{id_G_1}) and (\ref{id_G_0}) that $G_1(x_0)= G_0(x_0)=0$. This implies $G_1=G_0=0$
and then $p(X)^2g(X,Y,Z)=g_2(X)(p(X)Y+q(X)Z)^2$. Since $(p:q)=1$, $p^2 \, | \, g_2$ and $g=\tilde g_2(X)(p(X)Y+q(X)Z)^2$
for $\tilde g_2 = g_2/p^2 \in \mathbb{R}[X]$.
Reasoning similarly to the case $p =0$, we see that $\tilde g_2$ is a scalar multiple of $r$ and $g$
is a scalar multiple of $\bar f$.
Now we prove that if $\bar f = f_2(X)Y^2 + f_1(X)YZ + f_0(X)Z^2$ generates an extreme ray of ${\cal C}_{d,e}$ then
$\bar f$ can be written as in the statement of Theorem \ref{th:main}. To do so,
we use inductive arguments, considering the families of cones ordered \emph{by inclusion}, that is
to say,
$$
{\cal C}_{d_1, e_1} \le {\cal C}_{d_2, e_2} \qquad \text{ if } \qquad d_1 \le d_2\text{ and } e_1 \le e_2.
$$
Actually, for $(d,e)=(0,0)$, the result is easy to check using Lemma \ref{lem:se_anula}, so from now on
we assume $(d, e) \ne (0, 0)$.
Using Lemma \ref{lem:algun_coef_cero} and Lemma \ref{lem:conclu_1_var}, we can assume $f_2, f_1, f_0 \ne 0$.
First, we prove the result in two particular cases.
\begin{enumerate}
\item[A1.] There is $x_0 \in [0, 1]$ such that $(X-x_0)^2 \, | \, f_2$ or $(X-x_0)^2 \, | \, f_0$:
Without loss of generality, suppose $(X-x_0)^2 \, | \, f_2$, then $X-x_0 \, | \, f_1$.
Consider $h_2 = f_2/(X-x_0)^2, \, h_1 = f_1/(X-x_0) \in \mathbb{R}[X]$ and
$$
h = h_2(X)Y^2 + h_1(X)YZ + f_0(X)Z^2 \in \mathbb{R}[X, Y, Z],
$$
then
$$
h(X, (X-x_0)Y, Z) = \bar f(X, Y, Z) \qquad \hbox{ and } \qquad
h(X, Y, Z) =\bar f\left(X, \frac{Y}{X-x_0}, Z\right).
$$
Note that $h \in {\cal C}_{d-2, e}$. Indeed,
$h$ satisfies the degree bounds
and
$h \ge 0$ on $\{(x, y, z) \in {\cal S} \ | \ x\ne x_0 \}$;
by continuity, $h \ge 0$ on ${\cal S}$.
In order to apply the inductive hypothesis, let us prove that $h$ generates an extreme ray of ${\cal C}_{d-2, e}$.
Given
$$
g = g_2(X)Y^2 + g_1(X)YZ + g_0(X)Z^2 \in {\cal C}_{d-2, e}
$$
such that $0 \le g \le h$ on ${\cal S}$, we consider
$$
\tilde g = (X-x_0)^2g_2(X)Y^2 + (X-x_0)g_1(X)YZ + g_0(X)Z^2 \in \mathbb{R}[X, Y, Z],
$$
since $\tilde g(X, Y, Z) = g(X, (X-x_0)Y, Z)$, $\tilde g \in {\cal C}_{d,e}$ and $0 \le \tilde g \le \bar f$ on ${\cal S}$.
Therefore, $\tilde g$ is a scalar multiple of $\bar f$ and $g$ is a scalar multiple of $h$.
By the inductive hypothesis, $h$ is of the form
$$
h(X, Y, Z) = \tilde r(X)(\tilde p(X)Y + \tilde q(X)Z)^2
$$
with $\tilde p$ and $\tilde q$ not simultaneously zero and $(\tilde p: \tilde q) = 1$.
Then,
$$
\bar f(X, Y, Z) = \tilde r(X)((X-x_0) \tilde p(X)Y + \tilde q(X)Z)^2.
$$
If $X - x_0 \, \not| \, \tilde q$, we take $r = \tilde r$, $p = (X-x_0) \tilde p$ and $q = \tilde q$, and
if $X - x_0 \, | \, \tilde q$, we take $r = (X-x_0)^2 \tilde r$, $p = \tilde p$ and $q = \tilde q /(X-x_0) \in \mathbb{R}[X]$.
In both cases we have $(p:q) = 1$ and we conclude using Lemma \ref{lem:conclu_1_var}.
\item[A2.] There is $x_0 \in [0, 1]$ such that $X-x_0 \, | \, f_2, f_0$:
It is clear that $X-x_0 \, | \, f_1$.
If $x_0 \in (0, 1)$ it is easy to see that $(X-x_0)^2 \, | \, f_2$ and then we are in case A1,
so we can suppose $x_0 \in \{0, 1\}$. Without loss of generality assume $x_0 = 0$.
Consider $h = \bar f/X \in \mathbb{R}[X, Y, Z]$.
Proceeding as in case A1, it is easy to see that $h$ generates an extreme ray of ${\cal C}_{d-1, e-1}$,
and using the inductive hypothesis we obtain that $h$ is of the form
$$
h(X, Y, Z) = \tilde r(X)(\tilde p(X)Y + \tilde q(X)Z)^2
$$
with $\tilde p$ and $\tilde q$ not simultaneously zero and $(\tilde p: \tilde q) = 1$.
Then
we take $r = X\tilde r$, $p = \tilde p$ and $q = \tilde q$ and we conclude using Lemma \ref{lem:conclu_1_var}.
\end{enumerate}
We consider now an auxiliary list of cases in which we prove the result by reducing to cases A1 and A2.
\begin{enumerate}
\item[B1.] There are $x_0 \in \{0,1\}$ and $(y_0,z_0) \in \{(1,0), (0,1)\}$ such that
$\bar f(x_0,y_0,z_0)=0$ and $\bar f(x, y, z) \ne 0$ for every $(x, y , z) \in {\cal S}$ with $x \ne x_0$:
Without loss of generality, suppose $\bar f(0,1,0)=0$, then $f_2(0) = 0$ and $X \, | \, f_1$.
If $X^2 \, | \, f_2$ we are in case A1 and if $X \, | \, f_0$ we are in case A2.
Moreover, if
there is $x \in (0, 1]$ with $f_2(x) = 0$, then
$\bar f(x, 1, 0) = 0$ which contradicts the hypothesis.
Similarly, if
there is $x \in (0, 1]$ with $f_0(x) = 0$, then
$\bar f(x, 0, 1) = 0$ which also contradicts the hypothesis.
So from now on we assume $X^2\nmid f_2$, $f_2 > 0$ on $(0, 1]$ and $f_0 > 0$ on $[0, 1]$.
Consider $g_2 = f_2/X, g_1 = f_1/X \in \mathbb{R}[X]$
and note that $g_2 > 0$ on $[0, 1]$.
Since $\bar f(x,y,z) > 0$ for $(x,y,z) \in {\cal S}$ with $x \in (0,1]$,
$$
f_1(x)^2-4f_2(x)f_0(x)= x^2g_1^2(x) - 4xg_2(x)f_0(x) < 0,
$$
for $x \in (0, 1]$, and then
$$
xg_1^2(x) - 4g_2(x)f_0(x) < 0
$$
for $x \in (0, 1]$, but since $g_2(0) > 0$ and $f_0(0) >0$, this last inequality can be extended to $x \in [0, 1]$.
We take $\varepsilon > 0$ such that
$$
\frac{xg_1^2(x)}{4g_2(x)} - f_0(x) \le -\varepsilon
$$
for $x \in [0,1]$.
Therefore,
$$
f_1(x)^2-4f_2(x)(f_0(x) - \varepsilon)= x^2g_1^2(x) - 4xg_2(x)(f_0(x) - \varepsilon) \le 0
$$
for $x \in [0, 1]$.
Let
$
h = f_2(X)Y^2 + f_1(X)YZ + (f_0(X) - \varepsilon)Z^2 \in \mathbb{R}[X,Y,Z].
$
It follows easily that $h \in {\cal C}_{d,e}$ and $0 \le h \le \bar f$ on ${\cal S}$,
but then $h$ is a scalar multiple of $\bar f$ which is impossible.
\item[B2.] There is $(y_0,z_0) \in \{(1,0), (0,1)\}$ such that
$\bar f(0,y_0,z_0) =\bar f(1, y_0,z_0) = 0$ and $\bar f(x, y, z) \ne 0$ for every $(x, y , z) \in {\cal S}$ with $x \in (0, 1)$:
Without loss of generality, suppose $\bar f(0, 1, 0) =\bar f(1, 1, 0) = 0$, then $f_2(0) = f_2(1) = 0$
and therefore $X \, | \, f_1$ and $X-1 \, | \, f_1$.
If $X^2 \, | \, f_2$ or $(X-1)^2 \, | \, f_2$ we are in case A1 and
if $X \, | \, f_0$ or $X - 1 \, | \, f_0$ we are in case A2.
Moreover, if
there is $x \in (0, 1)$ with $f_2(x) = 0$,
then $\bar f (x, 1, 0) = 0$ which contradicts the hypothesis.
Similarly, if
there is $x \in (0, 1)$ with $f_0(x) = 0$, then $\bar f (x, 0, 1) = 0$ which also contradicts the hypothesis.
So from now on we assume $X^2 \nmid f_2$, $(X-1)^2 \nmid f_2$, $f_2 > 0$ on $(0, 1)$ and $f_0 > 0$ on $[0, 1]$.
Consider $g_2 = f_2/(X(X-1)), g_1 = f_1/(X(X-1)) \in \mathbb{R}[X]$
and note that $g_2 < 0$ on $[0, 1]$.
Since $\bar f(x,y,z) > 0$ for $(x,y,z) \in {\cal S}$ with $x \in (0,1)$,
$$
f_1(x)^2-4f_2(x)f_0(x)= x^2(x-1)^2g_1^2(x) - 4x(x-1)g_2(x)f_0(x) < 0,
$$
for $x \in (0, 1)$, and then
$$
x(x-1)g_1^2(x) - 4g_2(x)f_0(x) > 0
$$
for $x \in (0, 1)$, but since $g_2(0) <0, g_2(1) < 0, f_0(0) >0$ and $f_0(1)>0$,
this last inequality can be extended to $x \in [0,1]$.
We take $\varepsilon > 0$ such that
$$
\frac{x(x-1)g_1^2(x)}{4g_2(x)} - f_0(x) \le -\varepsilon
$$
for $x \in [0,1]$.
The proof is finished using the same arguments as in case B1.
\item[B3.] There are $(y_0,z_0), (y_1,z_1) \in \{(1,0), (0,1)\}$, $(y_0,z_0)\neq (y_1,z_1)$ such that
$\bar f(0, y_0,z_0) =\bar f(1, y_1,z_1) = 0$ and $\bar f(x, y, z) \ne 0$ for every $(x, y , z) \in {\cal S}$ with $x \in (0, 1)$:
Without loss of generality, suppose $\bar f(0, 1, 0) = \bar f(1, 0, 1) = 0$, then $f_2(0) = f_0(1) = 0$
and therefore $X \, | \, f_1$ and
$X-1 \, | \, f_1$.
If $X^2 \, | \, f_2$ or $(X-1)^2 \, | \, f_0$ we are in case A1
and if $X \, | \, f_0$ or $X-1 \, | \, f_2$ we are in case A2.
Moreover, if
there is $x \in (0, 1)$ with $f_2(x) = 0$,
then $\bar f (x, 1, 0) = 0$ which contradicts the hypothesis.
Similarly, if
there is $x \in (0, 1)$ with $f_0(x) = 0$, then $\bar f (x, 0, 1) = 0$ which also contradicts the hypothesis.
So from now on we assume $X^2 \nmid f_2$, $(X-1)^2 \nmid f_0$, $f_2 > 0$ on $(0, 1]$ and $f_0 > 0$ on $[0, 1)$.
Consider $g_2 = f_2/X$, $g_1 = f_1/(X(X-1))$, $g_0 = f_0/(X-1) \in \mathbb{R}[X]$
and note that $g_2 > 0$ on $[0, 1]$ and $g_0 < 0$ on $[0, 1]$.
Since $\bar f(x,y,z)>0$ for $(x,y,z) \in {\cal S}$ with $x \in (0,1)$,
$$
f_1(x)^2-4f_2(x)f_0(x)= x^2(x-1)^2g_1^2(x) - 4x(x-1)g_2(x)g_0(x) < 0
$$
for $x \in (0, 1)$, and then
$$
x(x-1)g_1^2(x) - 4g_2(x)g_0(x) > 0
$$
for $x \in (0, 1)$, but since $g_2(0) >0, g_2(1) > 0, g_0(0) <0$ and $g_0(1) < 0$, this last inequality can
be extended to $x \in [0, 1]$.
We take $\varepsilon > 0$ such that
$$
\frac{x(x-1)g_1^2(x)}{4g_2(x)} - g_0(x) \ge \varepsilon
$$
for $x \in [0,1]$.
Therefore,
$$
f_1(x)^2-4f_2(x)(x-1)(g_0(x) + \varepsilon)= x^2(x-1)^2g_1^2(x) - 4x(x-1)g_2(x)(g_0(x) + \varepsilon) \le 0
$$
for $x \in [0, 1]$.
Let
$
h = f_2(X)Y^2 + f_1(X)YZ + (X-1)(g_0(X) + \varepsilon)Z^2 \in \mathbb{R}[X,Y,Z].
$
It follows easily that $h \in {\cal C}_{d,e}$ and $0 \le h \le \bar f$ on ${\cal S}$, but
then $h$ is a scalar multiple of $\bar f$ which is impossible.
\end{enumerate}
We prove now the general case.
Without loss of generality we suppose $d \le e$.
By Lemma \ref{lem:se_anula}, $\bar f$ vanishes at some point of ${\cal S}$.
To prove the result we are going to consider three final cases.
\begin{enumerate}
\item[C1.] There is $(x_0, y_0, z_0) \in {\cal S}$ with $x_0 \in (0, 1)$ such that
$\bar f(x_0, y_0, z_0) = 0$:
If $z_0 = 0$, $X - x_0 \, | \, f_2$, then $(X - x_0)^2 \, | \, f_2$ and we are in case A1.
If $z_0 \ne 0$ we take $\beta = y_0/z_0$ and
consider
$$
h(X,Y,Z) =\bar f(X, Y + \beta Z, Z) = f_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
By Lemma \ref{lem:camb_variable}, $h$ generates an extreme ray of ${\cal C}_{d, e}$ and verifies $h_0(x_0) = 0$.
Then $(X - x_0)^2 \, | \, h_0$ and by case A1 applied to $h$ and
Lemma \ref{lem:camb_variable} the result follows.
\item[C2.] There are $x_0 \in \{0,1\}$ and $(y_0, z_0) \in {\cal S}$ such that
$\bar f(x_0, y_0, z_0) = 0$ and $\bar f(x, y, z) \ne 0$ for every $(x, y, z) \in {\cal S}$ with $x \ne x_0$:
Without loss of generality, suppose $x_0=0$.
If $z_0 = 0$, we can assume $y_0 = 1$ and we are in case B1.
If $z_0 \ne 0$, we take $\beta = y_0/z_0$ and consider
$$
h(X,Y,Z) =\bar f(X, Y + \beta Z, Z) = f_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
By Lemma \ref{lem:camb_variable},
$h$
generates an extreme ray of ${\cal C}_{d, e}$ and verifies $h_0(0) = 0$ and $h(0, 0, 1) = 0$.
In addition,
$h(x, y, z) \ne 0$ for every $(x, y, z) \in {\cal S}$ with $x \ne 0$.
By case B1 applied to $h$ and
Lemma \ref{lem:camb_variable} the result follows.
\item[C3.] There are $(y_0,z_0), (y_1,z_1) \in {\cal S}$ such that
$\bar f(0, y_0, z_0) =\bar f(1, y_1, z_1)= 0$
and $\bar f(x, y, z) \ne 0$ for every $(x, y, z) \in {\cal S}$ with $x \in (0, 1)$:
If $z_0 = z_1= 0$, we can assume $y_0 = y_1 = 1$ and we are in case B2.
If $z_0 \ne 0$ and $z_1 = 0$, we take $\beta = y_0/z_0$
and consider
$$
h(X,Y,Z) = \bar f(X, Y + \beta Z, Z) = f_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
By Lemma \ref{lem:camb_variable},
$h$
generates an extreme ray of ${\cal C}_{d, e}$ and verifies $h_0(0) = 0$ and $h(0, 0, 1) = 0$.
On the other hand,
since $\bar f(1, y_1, 0) = 0$, $f_2(1) = 0$ and
$h(1, 1, 0) = 0$. In addition,
$h(x, y, z) \ne 0$ for every $(x, y, z) \in {\cal S}$ with $x \in (0,1)$.
By case B3 applied to $h$
and
Lemma \ref{lem:camb_variable} the result follows.
If $z_0 = 0$ and $z_1 \ne 0$ we proceed similarly to the case $z_0 \ne 0$ and $z_1 = 0$.
The final case is $z_0,z_1 \ne 0$, but we need to split it into three subcases.
If $z_0, z_1 \ne 0$ and $y_0/z_0 = y_1/z_1$,
we take $\beta = y_0/z_0$ and consider
$$
h(X,Y,Z) =\bar f(X, Y + \beta Z, Z) = f_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
By Lemma \ref{lem:camb_variable},
$h$ generates an extreme ray of ${\cal C}_{d, e}$ and verifies $h_0(0) = h_0(1) = 0$, then
$h(0, 0, 1) = h(1, 0, 1) = 0$.
In addition,
$h(x, y, z) \ne 0$ for every $(x, y, z) \in {\cal S}$ with $x \in (0,1)$.
By case B2 applied to $h$ and
Lemma \ref{lem:camb_variable} the result follows.
If $z_0, z_1 \ne 0$ with $y_0/z_0 \ne y_1/z_1$ and $d = e$,
we take $\beta_0 = y_0/z_0$ and $\beta_1 = y_1/z_1$
and consider
$$
h(X,Y,Z) =\bar f(X, \beta_0 Y + \beta_1 Z, Y + Z) = h_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
By Lemma \ref{lem:camb_variable_d_ig_e},
$h$ generates an extreme ray of ${\cal C}_{d, e}$ and verifies $h_2(0) = h_0(1) = 0$, then
$h(0, 1, 0) = h(1, 0, 1) = 0$.
In addition,
$h(x, y, z) \ne 0$ for every $(x, y, z) \in {\cal S}$ with $x \in (0,1)$.
By case B3 applied to $h$ and
Lemma \ref{lem:camb_variable_d_ig_e} the result follows.
Finally,
if $z_0, z_1 \ne 0$ with $y_0/z_0 \ne y_1/z_1$
and $d < e$; since $d \equiv e \, (2)$, we have $d + 2 \le e$.
Then, we take
$$
\ell(X) = (y_1/z_1 - y_0/z_0)X + y_0/z_0
$$
and consider
$$
h(X,Y,Z) =\bar f(X, Y + \ell(X) Z, Z) = f_2(X)Y^2 + h_1(X)YZ + h_0(X)Z^2.
$$
By Lemma \ref{lem:camb_variable_d_men_e}, $h$
generates an extreme ray of ${\cal C}_{d, e}$ and verifies $h_0(0) = h_0(1) = 0$,
then, $h(0, 0, 1) = h(1, 0, 1) = 0$.
In addition,
$h(x, y, z) \ne 0$ for every $(x, y, z) \in {\cal S}$ with $x \in (0,1)$.
By case B2 applied to $h$ and
Lemma \ref{lem:camb_variable_d_men_e} the result follows.
\end{enumerate}
\end{proof}
Finally, we deduce Theorem \ref{th:main_degY_2}.
\begin{proof}{Proof of Theorem \ref{th:main_degY_2}:}
Take $d = e = \deg_X f$, then $\bar f = f_2(X) Y^2 + f_1(X) YZ + f_0(X)Z^2 \in {\cal C}_{d, e}$
(note that we \emph{homogenize to degree} 2 even in the case $\deg_Y f = 0$).
By Theorems \ref{th:extr_rays} and \ref{th:main},
$$
\bar f= \sum_{1 \leq i \leq s}r_i(p_iY+q_iZ)^2
$$
for some $r_i,p_i,q_i\in \mathbb{R}[X]$ as in Theorem \ref{th:main} for $1 \le i \le s$.
By studying the factorization in $\mathbb{C}[X]$ of each $r_i \in \mathbb{R}[X]$, it is easy to see that
the condition $r_i \ge 0$ on $[0, 1]$ implies that there exist
$t_i, u_i, v_i, w_i \in \sum \mathbb{R}[X]^2$
such that
$$
r_i = t_i + u_iX + v_i(1-X) + w_iX(1-X)
$$
with $\deg t_i, \deg u_iX , \deg v_i(1-X), \deg w_iX(1-X) \le \deg r_i$.
Using the identities
$$
X = X^2 + X(1-X)
\qquad \hbox{ and } \qquad
1-X = (1-X)^2 + X(1-X),
$$
we take
$$\sigma_0 = \sum_{1 \leq i \leq s}(t_i + u_iX^2 + v_i(1-X)^2 )(p_iY+q_i)^2$$
and
$$\sigma_1 = \sum_{1 \leq i \leq s}(u_i + v_i + w_i) (p_iY+q_i)^2$$
and the identity $f = \sigma_0 + \sigma_1X(1-X)$ holds.
Finally,
$$
\deg (\sigma_0) \le \max_{1 \le i \le s} \deg(t_i + u_iX^2 + v_i(1-X)^2 )(p_iY+q_i)^2
\le \max_{1 \le i \le s} \deg r_i(p_iY+q_i)^2 + 1 \le \deg_X f + 3
$$
and
$$
\deg (\sigma_1X(1-X)) \le \max_{1 \le i \le s} \deg(u_i + v_i + w_i) (p_iY+q_i)^2X(1-X)
\le
$$
$$
\le \max_{1 \le i \le s} \deg r_i(p_iY+q_i)^2 +1 \le \deg_X f + 3.
$$
\end{proof}
\section{A constructive approach}\label{sec:al_app}
In this section
we show, under certain hypotheses, a constructive approach which also provides a degree bound for each term in the
representation in Theorem \ref{th:Marshall}.
This approach works in the case that $f$ is positive on the strip and
fully $m$-ic on $[0, 1]$ (Section \ref{subsect:positiv})
and in the case that $f$ is non-negative on the strip, fully $m$-ic on $[0, 1]$, and has only a finite
number of zeros, all of them lying on the boundary of the strip and
such that $\frac{\partial f}{\partial X}$ does not vanish at any of them (Section \ref{subsect:zeros}).
Finally, we will see in Example \ref{ex:caso_malo} that
this approach does not work in the general case.
Roughly speaking, the main idea is to lift the interval $[0, 1]$ to the standard 1-dimensional simplex
$$
\Delta_1 = \{(w, x) \in \mathbb{R}^2 \ | \ w \ge 0, \, x \ge 0, \, w+x = 1\},
$$
to consider $Y$ as a parameter and to produce for each evaluation of $Y$
a certificate of non-negativity on $\Delta_1$
using the effective version of
P\'olya's Theorem from \cite{Polya_bound} in a suitable manner so that these certificates can be glued together.
We introduce a variable $W$ which is used to lift the interval $[0, 1]$ to the simplex $\Delta_1$ and, as before, a variable
$Z$ which is used to compactify $\mathbb{R}$.
\begin{notn}
Given
$$
f= \sum_{0 \leq i \leq m} \sum_{0 \leq j \leq d} a_{ji}X^jY^i \in \mathbb{R}[X,Y],
$$
define
$$
F = \sum_{0 \leq i \leq m} \sum_{0 \leq j \leq d} a_{ji}X^j (W+X)^{d-j} Y^iZ^{m-i}
\in \mathbb{R}[W,X,Y,Z].
$$
For $N \in \mathbb{N}_0$ and $0 \le j \le N+d$, we define the polynomials $b_j \in \mathbb{R}[Y, Z]$ as follows:
\begin{equation} \label{eq:reemplazo_fund}
(W+X)^N F= \sum_{0 \leq j \leq N+d}b_j (Y,Z) W^j X^{N+d-j}.
\end{equation}
\end{notn}
Note that $(W+X)^NF$
is homogeneous on $(W,X)$ and $(Y,Z)$ of degree $N+d$ and $m$ respectively.
Therefore, for $0 \le j \le N+d$, $b_j \in \mathbb{R}[Y, Z]$ is a homogeneous polynomial of degree $m$.
We introduce the notation
$$
C = \{(y, z) \in \mathbb{R}^2 \ | \ y^2 + z^2 = 1\}.
$$
\begin{proposition}\label{prop:metodo_gral} Let $f \in \mathbb{R}[X, Y]$ and
$N \in \mathbb{N}_0$ such that for $0 \le j \le N+d$, $b_j \ge 0$ on $C$.
Then $f$ can be written as in (\ref{repr_f}) with
$$
\deg(\sigma_0), \deg(\sigma_1 X(1-X)) \leq N + d + m +1.
$$
\end{proposition}
\begin{proof}{Proof:}
Substituting $W = 1-X$ and $Z = 1$ in (\ref{eq:reemplazo_fund}) we have
$$
f(X, Y) =\sum_{0 \leq j \leq N+d}b_j (Y,1) (1-X)^j X^{N+d-j}.
$$
For $0 \le j \le N + d$,
since $b_j(Y, Z) \ge 0$ on $C$ and $b_j$ is homogeneous,
we have $b_j(Y,1) \geq 0$ on $\mathbb{R}$ and therefore
$b_j(Y,1)$ is
a sum of squares in $\mathbb{R}[Y]$ (see \cite[Proposition 1.2.1]{Marshall_book})
with the degree of each term bounded by $m$.
If $N+d$ is even,
we take
$$\sigma_0 = \sum_{0 \leq j \leq N+d, \ j \hbox{ {\small even}}}b_j(Y, 1)(1-X)^jX^{N+d-j}$$
and
$$\sigma_1 = \sum_{1 \leq j \leq N+d-1, \ j \hbox{ {\small odd}}}b_j(Y, 1)(1-X)^{j-1}X^{N+d-j-1}$$
and the identity $f = \sigma_0 + \sigma_1X(1-X)$ holds.
In addition, we have
$$
\deg(\sigma_0), \deg(\sigma_1X(1-X)) \le N + d + m.
$$
If $N+d$ is odd,
using the identities
$$
X = X^2 + X(1-X)
\qquad \hbox{ and } \qquad
1-X = (1-X)^2 + X(1-X),
$$
we take
$$
\sigma_0 = \sum_{0 \leq j \leq N+d-1, \ j \hbox{ {\small even}}}b_j(Y, 1)(1-X)^jX^{N+d-j+1} +
\sum_{1 \leq j \leq N+d, \ j \hbox{ {\small odd}}}b_j(Y, 1)(1-X)^{j+1}X^{N+d-j}
$$
and
$$
\sigma_1 = \sum_{0 \leq j \leq N+d-1, \ j \hbox{ {\small even}}}b_j(Y, 1)(1-X)^jX^{N+d-j-1} +
\sum_{1 \leq j \leq N+d, \ j \hbox{ {\small odd}}}b_j(Y, 1)(1-X)^{j-1}X^{N+d-j}
$$
and the identity $f = \sigma_0 + \sigma_1X(1-X)$ holds.
In addition, we have
$$
\deg(\sigma_0), \deg(\sigma_1X(1-X)) \le N + d + m + 1.
$$
\end{proof}
In Section \ref{subsect:positiv} and Section \ref{subsect:zeros}, under certain
hypotheses,
we prove the
existence and find an upper bound for $N \in \mathbb{N}_0$ satisfying the hypothesis
of Proposition \ref{prop:metodo_gral}.
Then, to obtain the representation (\ref{repr_f})
we proceed as follows.
If it is possible to compute the upper bound,
we compute the expansion of the polynomial $(W + X)^NF$
and then we compute the representation of each $b_j(Y, 1)$ as a sum of
squares in $\mathbb{R}[Y]$ (see \cite{MagSafSch}).
If it is not possible to compute the upper bound, we
pick an initial value of $N$, increase it one by one, and check symbolically
at each step whether
$b_j(Y, 1)$ is non-negative on $\mathbb{R}$ for every $0 \le j \le N+d$
(see \cite[Chapter 4]{BPR} and \cite{PerRoy}),
and once this condition is satisfied
we compute the representation of each $b_j(Y, 1)$ as a sum of
squares in $\mathbb{R}[Y]$.
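The procedure just described is easy to prototype symbolically. The following Python sketch (using SymPy; the function names and the sample polynomial below are illustrative choices of ours, not taken from the text) computes the coefficients $b_j(Y,1)$, tests their non-negativity on $\mathbb{R}$ through the multiplicities of their real roots, and increases $N$ until a certificate is found.

```python
import sympy as sp
from collections import Counter

W, X, Y, Z = sp.symbols('W X Y Z')

def certificate_coeffs(f, N):
    """Coefficients b_j(Y, Z) of (W+X)^N F = sum_j b_j W^j X^(N+d-j)."""
    d = sp.Poly(f, X).degree()
    m = sp.Poly(f, Y).degree()
    # homogenisation F of f in (W, X) and (Y, Z)
    F = sum(c * X**j * (W + X)**(d - j) * Y**i * Z**(m - i)
            for (j, i), c in sp.Poly(f, X, Y).terms())
    e = sp.expand((W + X)**N * F)
    return [e.coeff(W, j).coeff(X, N + d - j) for j in range(N + d + 1)]

def nonneg_on_R(p):
    """p(Y) >= 0 on R iff p = 0, or p has even degree, positive leading
    coefficient, and every real root of even multiplicity."""
    p = sp.Poly(p, Y)
    if p.is_zero:
        return True
    if p.LC() < 0 or p.degree() % 2:
        return False
    return all(k % 2 == 0 for k in Counter(sp.real_roots(p)).values())

def find_N(f, Nmax=20):
    """Increase N one by one until every b_j(Y, 1) is non-negative on R."""
    for N in range(Nmax + 1):
        bs = certificate_coeffs(f, N)
        if all(nonneg_on_R(b.subs(Z, 1)) for b in bs):
            return N, bs
    raise ValueError('no suitable N found up to Nmax')
```

For instance, for $f = Y^2 + X^2 - X + 5/4$, which is positive on the strip and fully $2$-ic on $[0,1]$, the search succeeds already at $N = 0$, and substituting $W = 1-X$, $Z = 1$ recovers the identity used in the proof of Proposition \ref{prop:metodo_gral}.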
For a homogeneous polynomial
$$
g = \sum_{0 \le j \le d} c_jW^jX^{d-j}\in \mathbb{R}[W, X]
$$
we note, as in \cite{Polya_bound},
$$
\| g \| = \max \left\{ \frac{|c_{j}|}{\binom{d}{j} }
\ | \ 0 \le j \le d \right\}.
$$
One of the main tools we use is the effective version of P\'olya's Theorem from
\cite{Polya_bound}. In the case of a
homogeneous polynomial $g \in \mathbb{R}[W, X]$ which is positive on $\Delta_1$,
this theorem states that after
multiplying by a suitable power of $W+X$, every coefficient becomes positive.
Since we will need an explicit positive lower bound for these coefficients,
we present in Lemma \ref{lem:polya_local}
a slight adaptation of \cite[Theorem 1]{Polya_bound}.
We omit its proof since it can
be developed exactly as the proof
of \cite[Theorem 1]{Polya_bound} with only a minor modification at the final step.
\begin{lemma}\label{lem:polya_local}
Let $g \in \mathbb{R}[W,X]$ be homogeneous of degree $d$ with $g>0$ on $\Delta_1$ and
let $\lambda=\min_{\Delta_1} g >0$. For $0 \leq \epsilon<1$, if
$$
N+d \geq \frac{(d-1)d\|g\|}{2(1-\epsilon) \lambda},
$$
for $0 \leq j \leq N+d$ the coefficient of $W^jX^{N+d-j}$ in $(W+X)^Ng$ is greater than or equal to
$\frac{N!(N+d)^d}{j!(N+d-j)!} \epsilon \lambda$.
\end{lemma}
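As a toy illustration of the lemma (our own example, not part of the statement): for $g = W^2 - WX + X^2$ one has $d = 2$, $\|g\| = 1$ and $\lambda = 1/4$, so with $\epsilon = 1/5$ the hypothesis requires $N + d \ge 5$. The Python sketch below checks that $N = 3$ is indeed the first exponent making every coefficient of $(W+X)^N g$ strictly positive, and that the guaranteed lower bounds hold.

```python
import sympy as sp

W, X = sp.symbols('W X')
g = W**2 - W*X + X**2                      # toy example: min over Delta_1 is 1/4
d, lam, norm_g = 2, sp.Rational(1, 4), 1   # degree, lambda = min, ||g||

def coeffs(N):
    """Coefficients of W^j X^(N+d-j) in (W+X)^N g, including zeros."""
    e = sp.expand((W + X)**N * g)
    return [e.coeff(W, j).coeff(X, N + d - j) for j in range(N + d + 1)]

# first N for which every coefficient is strictly positive
N = next(n for n in range(10) if all(c > 0 for c in coeffs(n)))

# check the hypothesis and the conclusion of the lemma with eps = 1/5
eps = sp.Rational(1, 5)
assert N + d >= (d - 1) * d * norm_g / (2 * (1 - eps) * lam)
for j, c in enumerate(coeffs(N)):
    lower = (sp.factorial(N) * (N + d)**d
             / (sp.factorial(j) * sp.factorial(N + d - j)) * eps * lam)
    assert c >= lower
```

(For $N = 2$ the coefficient of $W^2X^2$ vanishes, so strict positivity first occurs at $N = 3$, consistently with the bound.)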
\subsection{The case of $f$ positive on the strip}\label{subsect:positiv}
In this section, we study the case of $f$ positive on $[0, 1] \times \mathbb{R}$
and fully $m$-ic on $[0, 1]$ and we prove Theorem \ref{thm:f_positivo}.
\begin{proposition}\label{prop:cota_para_N}
Let $f \in \mathbb{R}[X,Y]$
with $f > 0$ on $[0,1]\times \mathbb{R}$, $f$ fully $m$-ic on $[0, 1]$ and
$$
f^{\bullet} = \min\{\bar f(x, y, z) \ | \ x \in [0, 1], \, y^2 + z^2 = 1 \} > 0.
$$
Then, if
$$
N+d > \frac{(d-1)d(d+1)(m+1)\|f\|_{\infty}}{2 f^{\bullet}},
$$
for every $0 \le j \le N+d$, $b_j \ge 0$ on $C$.
\end{proposition}
\begin{proof}{Proof:}
Since for every $(w,x,y,z) \in \Delta_1 \times C$,
$F(w,x,y,z)=\bar f(x,y,z)$ we have $F \geq f^{\bullet} $ on $\Delta_1 \times C$.
On the other hand, it is easy to see that for $(y,z) \in C$,
$$
\|F(W,X,y,z)\| \leq (d+1)(m+1) \max_{\substack{0 \leq i \leq m \\ 0 \leq j \leq d}} \left\{ \|a_{ji}X^j (W+X)^{d-j} y^iz^{m-i} \| \right\}
\le
(d+1)(m+1) \|f\|_{\infty}.
$$
Using the bound for P\'olya's Theorem from \cite[Theorem 1]{Polya_bound},
if $N \in \mathbb{N}$ verifies
$$
N+d> \frac{(d-1)d(d+1)(m+1)\|f\|_{\infty}}{2 f^{\bullet}},
$$
all the coefficients of the polynomial
$$
(W+X)^N F(W,X,y,z) = \sum_{0 \le j \le N+d}b_j(y, z)W^jX^{N+d-j} \in \mathbb{R}[W, X]
$$
are positive. In other words, for $0 \le j \le N +d$, $b_j \ge 0$ on $C$ as we wanted to prove.
\end{proof}
We deduce easily Theorem \ref{thm:f_positivo}.
\begin{proof}{Proof of Theorem \ref{thm:f_positivo}:}
By Proposition \ref{prop:cota_para_N} if $N \in \mathbb{N}$ is the smallest integer number such that
$$
N+d > \frac{(d-1)d(d+1)(m+1)\|f\|_{\infty}}{2 f^{\bullet}},
$$
then for every $0 \le j \le N+d$, $b_j \ge 0$ on $C$.
By Proposition \ref{prop:metodo_gral}, we have that $f$ can be written as in (\ref{repr_f}) with
$$
\deg(\sigma_0), \deg(\sigma_1 X(1-X)) \leq N+d+m+1.
$$
Since
$$
\|f\|_{\infty}\geq |a_{00}|=|f (0,0)| = f(0, 0) =\bar f (0,0,1) \geq f^{\bullet},
$$
we have
$$
\deg(\sigma_0), \deg(\sigma_1 X(1-X)) \leq N+d+m+1
\leq \frac{(d-1)d(d+1)(m+1)\|f\|_{\infty}}{2 f^{\bullet}} +m+2 \le
\frac{d^3(m+1)\|f\|_{\infty}}{ f^{\bullet}}.
$$
\end{proof}
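To fix ideas on the size of this bound, consider the toy polynomial $f = Y^2 + X^2 - X + 5/4$ (an illustrative example of ours, with $d = m = 2$ and $\|f\|_{\infty} = 5/4$). A crude grid evaluation of $\bar f$ on $[0,1] \times C$, sketched below in Python, confirms $f^{\bullet} = 1$, so the theorem guarantees a representation with both degrees at most $d^3(m+1)\|f\|_{\infty}/f^{\bullet} = 30$.

```python
import numpy as np

# toy example: f = Y^2 + X^2 - X + 5/4, so d = 2, m = 2, ||f||_inf = 5/4
# and fbar(x, y, z) = y^2 + (x^2 - x + 5/4) z^2 on [0,1] x {y^2 + z^2 = 1}
x = np.linspace(0.0, 1.0, 201)[:, None]
t = np.linspace(0.0, 2 * np.pi, 401)[None, :]
y, z = np.cos(t), np.sin(t)
fbar = y**2 + (x**2 - x + 1.25) * z**2
f_bullet = fbar.min()                        # ~ 1, attained at z = 0
bound = 2**3 * (2 + 1) * 1.25 / f_bullet     # d^3 (m+1) ||f||_inf / f_bullet
print(f_bullet, bound)                       # f_bullet ~ 1, bound ~ 30
```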
\subsection{The case of $f$ with a finite number of zeros on the boundary of the strip}\label{subsect:zeros}
Next, we want to relax the hypothesis
$f > 0$ on $[0,1] \times \mathbb{R}$ to
$f \ge 0$ on $[0,1] \times \mathbb{R}$ and with a finite number of zeros on the boundary of the strip.
Consider
$$
C_+ = \{(y, z) \in \mathbb{R}^2 \ | \ y^2 + z^2 = 1, \ z \geq 0 \}.
$$
For $f$ non-negative on $[0, 1] \times \mathbb{R}$ and fully $m$-ic on $[0, 1]$, it is clear that $m$ is even. Then, since
each $b_j(Y, Z) \in \mathbb{R}[Y, Z]$ is homogeneous of degree $m$, to prove that
$b_j \ge 0$ on $C$
it is enough to prove that
$b_j \ge 0$ on $C_+$. The advantage of considering $C_+$ instead of $C$ is simply that
under the present hypotheses
there is a bijection between
the zeros of $f$ in $[0, 1] \times \mathbb{R}$ and the zeros of $F$ in $\Delta_1 \times C_+$ given by
$$
(x, \alpha) \mapsto (1-x,x, y_{\alpha}, z_{\alpha}) \qquad \hbox { with } \qquad
(y_{\alpha}, z_{\alpha}) = \left(\frac{\alpha}{\sqrt{\alpha^2+1}}, \frac{1}{\sqrt{\alpha^2+1}} \right).
$$
The idea is to consider separately,
for each zero $(x, \alpha)$ of $f$,
the polynomial $F(W,X,y_{\alpha},z_{\alpha}) \in \mathbb{R}[W,X]$ and
to find $N_{\alpha} \in \mathbb{N}_0$
such that $(W+X)^{N_{\alpha}}F(W,X,y_{\alpha},z_{\alpha})$ has non-negative coefficients
$b_j(y_\alpha, z_\alpha)$.
Then, we show that the same $N_\alpha$
works for $(y,z) \in C_+$ close to $(y_{\alpha},z_{\alpha})$. Finally,
in the rest of $C_+$ we use compactness arguments.
\begin{proposition}\label{prop:cota_para_N_con_ceros_bordes}
Let $f \in \mathbb{R}[X,Y]$ with
$f \geq 0$ on $[0,1]\times \mathbb{R}$, $f$ fully $m$-ic on $[0, 1]$ and
suppose that $f$ has a finite number of zeros in $[0,1]\times\mathbb{R}$, all of them lying on $\{0,1\}\times\mathbb{R}$,
and $\frac{\partial f}{\partial X}$
does not vanish at any of them.
Then, there is $N \in \mathbb{N}_0$ such that
for every $0 \le j \le N+d$, $b_j \ge 0$ on $C$.
\end{proposition}
\begin{proof}{Proof:}
For $0 \le h \le d$, we define the polynomials $c_h \in \mathbb{R}[Y, Z]$ as follows:
$$
F = \sum_{0 \le h \le d} c_h(Y, Z)W^hX^{d-h}.
$$
Then, for $0 \le h \le d$,
$$
c_h(Y,Z)= \sum_{0 \leq i \leq m} \sum_{0 \leq j \leq d-h}
a_{ji}\binom{d-j}{h} Y^iZ^{m-i}
$$
is a homogeneous polynomial in $\mathbb{R}[Y, Z]$ of
degree $m$, and for $(y, z) \in C_+$ we have
\begin{equation}\label{eq:cota_c_h}
|c_h(y,z)|\le (m+1)\|f \|_{\infty} \sum_{0 \leq j \leq d-h}\binom{d-j}{h}
=
(m+1)\binom{d+1}{h+1}\|f \|_{\infty}
\end{equation}
and
\begin{equation}\label{eq:cota_norma_F}
\|F(W, X, y, z)\| \le
\max \left \{ (m+1) \frac{\binom{d+1}{h+1}}{\binom{d}{h}}\|f \|_{\infty} \ | \ 0 \le h \le d \right\} \le
(m+1) (d+1)\|f \|_{\infty}.
\end{equation}
Now, since throughout the proof we will consider several values of $N$,
we add the index $N$ to the notation of polynomials $b_j$ in the following way:
$$
(W+X)^N F= \sum_{0 \leq j \leq N+d}b_{j,N} (Y,Z) W^j X^{N+d-j}.
$$
So we need to prove that there is $N \in \mathbb{N}_0$ such that
for every $0 \le j \le N+d$, $b_{j,N} \ge 0$ on $C_+$.
It is clear that, for a fixed $(y, z) \in C_+$, if $N \in \mathbb{N}_0$ satisfies that
for every $0 \le j \le N+d$, $b_{j,N}(y, z) \ge 0$, then
any $N' \in \mathbb{N}_0$ with $N' \ge N$ also satisfies
that for every $0 \le j \le N'+d$, $b_{j,N'}(y, z) \ge 0$.
For $N \in \mathbb{N}_0$ and $\alpha \in \mathbb{R}$, we have the identities
\begin{equation} \label{eq:f_en_1}
b_{0,N}(y_\alpha, z_\alpha) =
c_0(y_\alpha, z_\alpha) =
F(0, 1, y_\alpha, z_\alpha) =
\bar f(1, y_\alpha, z_\alpha) =
\frac{1}{\sqrt{\alpha^2 + 1}^m}f(1, \alpha)
\end{equation}
and
\begin{equation} \label{eq:f_en_0}
b_{N+d,N}(y_\alpha, z_\alpha) =
c_d(y_\alpha, z_\alpha) =
F(1, 0, y_\alpha, z_\alpha) =
\bar f(0, y_\alpha, z_\alpha) =
\frac{1}{\sqrt{\alpha^2 + 1}^m}f(0, \alpha).
\end{equation}
From (\ref{eq:f_en_1}) and (\ref{eq:f_en_0}) we deduce that
for every $N \in \mathbb{N}_0$, $b_{0, N}
\ge 0$ on $C_+$ and $b_{N+d, N} \ge 0$ on $C_+$.
So we need to prove that there is $N \in \mathbb{N}_0$ such that
for every $1 \le j \le N+d-1$, $b_{j,N} \ge 0$ on $C_+$.
We note
$$
\Pi_f= \{\alpha \in \mathbb{R} \ | \ f(x,\alpha)=0 \ \mbox{for some} \ x \in \{0,1\} \} \subseteq \mathbb{R}.
$$
We will show first that for each $\alpha \in \Pi_f$ there is
$N_{\alpha}\in \mathbb{N}_0$ such that for $1 \le j \le N_\alpha+d-1$,
$b_{j, N_\alpha}(y_{\alpha},z_{\alpha})$ is positive.
We consider three cases:
\begin{itemize}
\item $f(0,{\alpha})=0$ and $f(1, \alpha) \ne 0$:
From (\ref{eq:f_en_0}) we have
$b_{N+d, N}(y_\alpha, z_\alpha) = 0$ for every $N \in \mathbb{N}_0$ and
also $c_d(y_\alpha, z_\alpha) = 0$.
We consider the homogeneous polynomial of degree $d-1$
$$
\widetilde F_{{\alpha}}(W,X)= \frac{F(W,X,y_{{\alpha}},z_{{\alpha}})}
{X}
= \sum_{0 \le h \le d-1} c_h(y_{\alpha}, z_{\alpha})W^hX^{d-h-1}
\in \mathbb{R}[W,X].
$$
From (\ref{eq:cota_c_h}) we deduce
that for $0 \le h \le d-1$,
$$
\frac{|c_h(y_\alpha, z_\alpha)|}{\binom{d-1}{h}} \le
(m+1)\frac{\binom{d+1}{h+1}}{\binom{d-1}{h}}\|f \|_{\infty} =
(m+1)\frac{(d+1)d}{(h+1)(d-h)}\|f \|_{\infty} \le
(m+1)(d+1)\|f \|_{\infty}
$$
and we have
$\|\widetilde F_{\alpha}\| \le (m+1)(d+1)\|f \|_{\infty}$.
On the other hand, it is clear that
$\widetilde F_{{\alpha}}>0$ on $\Delta_1-\{(1,0)\}$ and, in addition,
$$
\widetilde F_{{\alpha}}(1,0) = \frac{\partial F(1, 0, y_{\alpha}, z_{\alpha})}{\partial X}
=
\frac{1}{\sqrt{{\alpha}^2+1}^m} \frac{\partial f}{\partial X}(0,{\alpha})
> 0
$$
therefore $\widetilde F_{{\alpha}}>0$ on $\Delta_1$. We note
$$
\lambda_{{\alpha}}= \min_{\Delta_1}\widetilde F_{{\alpha}}>0.
$$
By Lemma \ref{lem:polya_local} with $\epsilon = 1/2$, if
$N_{\alpha} \in \mathbb{N}_0$ satisfies
$$
N_{\alpha}+d -1 \geq \frac{(d-2)(d-1)(d+1)(m+1)\|f\|_{\infty}}{\lambda_{\alpha}}
$$
and
$$
(W+X)^{N_{\alpha}}\widetilde F_{{\alpha}} = \sum_{0 \le j \le N_{\alpha} + d-1}
c_{j}W^{j}X^{N_{\alpha}+d-1-j},
$$
for $0 \leq j \leq N_{\alpha}+d-1$ we have
$$
c_{j} \ge \frac{N_{\alpha}!(N_{\alpha}+d-1)^{d-1}}{j!(N_{\alpha}+d-1-j)!}
\frac{\lambda_{\alpha}}{2}.$$
But since
$$
\sum_{0 \le j \le N_{\alpha} + d-1}
c_{j}W^{j}X^{N_{\alpha}+d-j} =
(W+X)^{N_{\alpha}}X\widetilde F_{{\alpha}} =
$$
$$
=
(W+X)^{N_{\alpha}}F(W, X, y_\alpha, z_\alpha)
= \sum_{0 \le j \le N_\alpha+d-1}b_{j, N_\alpha}(y_\alpha, z_\alpha)W^jX^{N_\alpha+d-j}
$$
we conclude that for $0 \le j \le N_\alpha + d-1$,
$$
b_{j, N_\alpha}(y_\alpha, z_\alpha) = c_{j}\ge
\frac{N_{\alpha}!(N_{\alpha}+d-1)^{d-1}}{j!(N_{\alpha}+d-1-j)!}
\frac{\lambda_{\alpha}}{2}.
$$
\item $f(0,{\alpha})\ne0$ and $f(1, \alpha) = 0$:
From (\ref{eq:f_en_1}) we have
$b_{0, N}(y_\alpha, z_\alpha) = 0$ for every $N \in \mathbb{N}_0$ and
also $c_0(y_\alpha, z_\alpha) = 0$.
We consider the homogeneous polynomial of degree $d-1$
$$
\widetilde F_{{\alpha}}(W,X)= \frac{F(W,X,y_{{\alpha}},z_{{\alpha}})}
{W}
= \sum_{1 \le h \le d} c_h(y_{\alpha}, z_{\alpha})W^{h-1}X^{d-h}
\in \mathbb{R}[W,X].
$$
Then, proceeding similarly to the previous case we prove
$\|\widetilde F_{\alpha}\| \le \frac12(m+1)d(d+1)\|f \|_{\infty}$.
Moreover, since
$\widetilde F_{{\alpha}}>0$ on $\Delta_1-\{(0,1)\}$ and
$$
\widetilde F_{{\alpha}}(0,1) = \frac{\partial F(0, 1, y_{\alpha}, z_{\alpha})}{\partial W}
=
- \frac{1}{\sqrt{{\alpha}^2+1}^m} \frac{\partial f}{\partial X}(1,{\alpha})
> 0
$$
we have that $\widetilde F_{{\alpha}}>0$ on $\Delta_1$ and we note
$$
\lambda_{{\alpha}}= \min_{\Delta_1}\widetilde F_{{\alpha}}>0.
$$
Finally, using Lemma \ref{lem:polya_local} with $\epsilon = 1/2$,
we conclude that
if
$N_{\alpha} \in \mathbb{N}_0$ satisfies
$$
N_{\alpha}+d -1 \geq \frac{(d-2)(d-1)d(d+1)(m+1)\|f\|_{\infty}}{2\lambda_{\alpha}},
$$
for $1 \le j \le N_\alpha + d$,
$$
b_{j, N_\alpha}(y_\alpha, z_\alpha) \ge
\frac{N_{\alpha}!(N_{\alpha}+d-1)^{d-1}}{(j-1)!(N_{\alpha}+d-j)!}
\frac{\lambda_{\alpha}}{2}.
$$
\item $f(0,{\alpha})=0$ and $f(1, \alpha) = 0$:
From (\ref{eq:f_en_1}) and (\ref{eq:f_en_0}) we have
$b_{0, N}(y_\alpha, z_\alpha) = b_{N+d, N}(y_\alpha, z_\alpha) = 0$ for every $N \in \mathbb{N}_0$ and
also $c_0(y_\alpha, z_\alpha) = c_d(y_\alpha, z_\alpha) = 0$.
We consider the homogeneous polynomial of degree $d-2$
$$
\widetilde F_{{\alpha}}(W,X)= \frac{F(W,X,y_{{\alpha}},z_{{\alpha}})}
{WX}
= \sum_{1 \le h \le d-1} c_h(y_{\alpha}, z_{\alpha})W^{h-1}X^{d-h-1}
\in \mathbb{R}[W,X].
$$
Then, proceeding similarly to the previous cases we prove again
$\|\widetilde F_{\alpha}\| \le \frac12(m+1)d(d+1)\|f \|_{\infty}$.
We note
$$
\lambda_{{\alpha}}= \min_{\Delta_1}\widetilde F_{{\alpha}}>0.
$$
Finally, using Lemma \ref{lem:polya_local} with $\epsilon = 1/2$,
we conclude that
if
$N_{\alpha} \in \mathbb{N}_0$ satisfies
$$
N_{\alpha}+d -2 \geq \frac{(d-3)(d-2)d(d+1)(m+1)\|f\|_{\infty}}{2\lambda_{\alpha}},
$$
for $1 \le j \le N_\alpha + d-1$,
$$
b_{j, N_\alpha}(y_\alpha, z_\alpha) \ge
\frac{N_{\alpha}!(N_{\alpha}+d-2)^{d-2}}{(j-1)!(N_{\alpha}+d-1-j)!}
\frac{\lambda_{\alpha}}{2}.
$$
\end{itemize}
Now, our next goal is to compute a radius $r_\alpha > 0$ around each $(y_\alpha, z_\alpha)$ so that
for $(y, z) \in C_+$ with $\| (y, z) - (y_\alpha, z_\alpha) \| \le r_\alpha$,
for $1 \le j \le N_\alpha + d - 1$, we have
$b_{j, N_\alpha}(y, z) \ge 0$.
First, we do some auxiliary computations.
For $0 \le h \le d$ and $(y, z) \in \mathbb{R}^2$ with $y^2 + z^2 \le 1$ we have
\begin{align*}
\|\nabla c_h(y, z) \|
& \leq \left| \frac{\partial c_h}{\partial Y}(y, z) \right| + \left| \frac{\partial c_h}{\partial Z}(y, z) \right|\\
& \leq \sum_{1 \leq i \leq m} \sum_{0 \leq j \leq d-h} |a_{ji}| \binom{d-j}{h}i + \sum_{0 \leq i \leq m-1} \sum_{0 \leq j \leq d-h} |a_{ji}| \binom{d-j}{h}(m-i)\\
& \le m(m+1) \binom{d+1}{h+1}\|f\|_{\infty} \\
& \leq m(m+1) (d+1) \binom{d}{h} \|f\|_{\infty}.
\end{align*}
Then, for $(y, z) \in C_+$,
$$
|c_h(y,z)-c_h(y_{\alpha},z_{\alpha})| \leq m(m+1) (d+1) \binom{d}{h}\|f\|_{\infty} \|(y,z)-(y_{\alpha},z_{\alpha})\|.
$$
We introduce now some notation following \cite{Polya_bound}.
For $t \in \mathbb{R}$, $m\in \mathbb{N}_0$ and a variable $U$,
$$
(U)_t^m:=U(U-t)(U-2t)\cdots(U-(m-1)t)= \prod_{0\leq i \leq m-1}(U-it) \in \mathbb{R}[U].
$$
Also, for $t \in \mathbb{R}$
$$
F_{t}(W, X, Y, Z) = \sum_{0 \le h \le d}c_h(Y, Z)(W)_t^h(X)_t^{d-h}.
$$
By \cite[(4)]{Polya_bound}, for $N \in \mathbb{N}_0$ and $0 \le j \le N+d$ we have
$$
b_{j, N}(y,z)= \frac{N!(N+d)^d}{j!(N+d-j)!} F_{\frac{1}{N+d}}\left(\frac{j}{N+d}, \frac{N+d-j}{N+d},y,z \right).
$$
Then, using the Vandermonde-Chu identity (see \cite[(6)]{Polya_bound}), for $(y, z) \in C_+$ we have
\begin{eqnarray*}
& & \left | F_{\frac{1}{N+d}}\left(\frac{j}{N+d},\frac{N+d-j}{N+d},y,z \right) -
F_{\frac{1}{N+d}}\left(\frac{j}{N+d},\frac{N+d-j}{N+d},y_{\alpha},z_{\alpha} \right) \right |
\\[3mm]
& & \le
\sum_{0 \leq h \leq d} | c_h(y,z)-c_h(y_{\alpha},z_{\alpha}) |
\left(\frac{j}{N+d} \right)^h_{\frac{1}{N+d}}
\left(\frac{N+d-j}{N+d} \right)^{d-h}_{\frac{1}{N+d}}
\\[3mm]
& & \le
m(m+1)(d+1)\|f\|_{\infty}
\|(y,z)-(y_{\alpha},z_{\alpha})\|
\left(
\sum_{0 \leq h \leq d}
\binom{d}{h}
\left(\frac{j}{N+d} \right)^h_{\frac{1}{N+d}}
\left(\frac{N+d-j}{N+d} \right)^{d-h}_{\frac{1}{N+d}}
\right)
\\[3mm]
& & =
m(m+1)(d+1)\|f\|_{\infty}
\|(y,z)-(y_{\alpha},z_{\alpha})\|
(1)^d_{\frac{1}{N+d}}
\\[3mm]
& & \le
m(m+1)(d+1)\|f\|_{\infty}
\|(y,z)-(y_{\alpha},z_{\alpha})\|.
\end{eqnarray*}
Consider $\alpha \in \Pi_f$. If
$f(0, \alpha) = 0$ and $f(1, \alpha) \ne 0$
we take
$$
r_{\alpha}=\frac{\lambda_{\alpha}(N_{\alpha}+d-1)^{d-1}}{2(N_{\alpha}+d)^dm(m+1)(d+1)\|f\|_{\infty}}.
$$
Then, for $(y, z) \in C_+$ with $\|(y, z) - (y_\alpha, z_\alpha)\| \le r_\alpha$ and $1 \le j \le N_\alpha + d - 1$ we have
\begin{eqnarray*}
b_{j, N_\alpha}(y,z) &= & b_{j, N_\alpha}(y_{\alpha},z_{\alpha})+b_{j, N_\alpha}(y,z)-b_{j, N_\alpha}(y_{\alpha},z_{\alpha}) \\[3mm]
& \geq &
\frac{N_{\alpha}!(N_{\alpha}+d-1)^{d-1}}{j!(N_{\alpha}+d-1-j)!}
\frac{\lambda_{\alpha}}{2}
- \frac{N_\alpha!(N_\alpha+d)^d}{j!(N_\alpha+d-j)!} m(m+1)(d+1)\|f\|_{\infty} r_\alpha \\[3mm]
& \geq & 0.
\end{eqnarray*}
If
$f(0, \alpha) \ne 0$ and $f(1, \alpha) = 0$
we take again
$$
r_{\alpha}=\frac{\lambda_{\alpha}(N_{\alpha}+d-1)^{d-1}}{2(N_{\alpha}+d)^dm(m+1)(d+1)\|f\|_{\infty}}
$$
and if
$f(0, \alpha) = 0$ and $f(1, \alpha) = 0$
we take
$$
r_{\alpha}=\frac{\lambda_{\alpha}(N_{\alpha}+d-2)^{d-2}}{2(N_{\alpha}+d)^dm(m+1)(d+1)\|f\|_{\infty}}
$$
and in both cases we proceed in a similar way.
Now, consider $K \subseteq C_+$ defined by
$$
K=\left\{(y,z) \in C_+ : \|(y,z)-(y_{\alpha},z_{\alpha})\| \geq r_{\alpha} \ \mbox{for all} \ {\alpha} \in \Pi_f \right\}.
$$
Since $K$ is compact and $\lambda_K= \min_{\Delta_1 \times K} F >0$, by \cite[Theorem 1]{Polya_bound} using (\ref{eq:cota_norma_F}),
if
$$
N+d> \frac{(d-1)d(d+1)(m+1)\|f\|_{\infty}}{2 \lambda_K},
$$
for $0 \le j \le N+d$,
$b_{j,N}(y,z)\geq 0$ for every $(y,z) \in K$.
Finally, taking
$$
N = \max\left\{\Big\lfloor \frac{(d-1)d(d+1)(m+1)\|f\|_{\infty}}{2 \lambda_K} \Big \rfloor -d+1, \, \max \left \{ N_{\alpha} \, | \, {\alpha} \in \Pi_f \right \} \right\},
$$
we conclude that for $0 \le j \le N+d$, $b_{j, N} \geq 0$ on $C_+$.
\end{proof}
From Proposition \ref{prop:metodo_gral} and Proposition \ref{prop:cota_para_N_con_ceros_bordes} we deduce the
following result.
\begin{theorem}\label{thm:cota_con_ceros_bordes}
Let $f \in \mathbb{R}[X,Y]$ with
$f \geq 0$ on $[0,1]\times \mathbb{R}$, $f$ fully $m$-ic on $[0, 1]$ and
suppose that $f$ has a finite number of zeros in $[0,1]\times\mathbb{R}$, all of them lying on $\{0,1\}\times\mathbb{R}$,
and $\frac{\partial f}{\partial X}$
does not vanish at any of them.
Then, for $N \in \mathbb{N}_0$ as in Proposition \ref{prop:cota_para_N_con_ceros_bordes},
$f$ can be written as in (\ref{repr_f}) with
$$
\deg(\sigma_0), \deg(\sigma_1 X(1-X)) \leq N+d+m+1.
$$
\end{theorem}
We conclude with an example of a polynomial
$f \in \mathbb{R}[X,Y]$ with
$f \geq 0$ on $[0,1]\times \mathbb{R}$, $f$ fully $m$-ic on $[0, 1]$,
with only one zero in $[0,1]\times\mathbb{R}$ lying on $\{0,1\}\times\mathbb{R}$
but $\frac{\partial f}{\partial X}$ vanishing at it, and such that $f$
does not admit a value of $N \in \mathbb{N}_0$ as in Proposition \ref{prop:metodo_gral}.
Note that in this example, $f$ is itself a sum of
squares, so the representation as in (\ref{repr_f}) is already given; nevertheless, our purpose is to show that
there is no hope of applying the method underlying Proposition \ref{prop:metodo_gral} in full generality.
\begin{example}\label{ex:caso_malo}
Let
$$
f(X,Y)= (Y^2-X)^2+X^2=Y^4-2XY^2+2X^2.
$$
Then
$$
F(W,X,Y,Z)=(W+X)^2Y^4-2X(W+X)Y^2Z^2+2X^2Z^4
$$
and for $N \in \mathbb{N}$,
$$
(W+X)^NF(W,X,Y,Z) =
Y^4 W^{N+2} + Y^2 \left( (N+2)Y^2-2Z^2 \right) W^{N+1}X + \dots
$$
It is easy to see that there is no $N \in \mathbb{N}_0$ such that
$$
b_{N+1}(Y,Z)=Y^2 \left( (N+2)Y^2-2Z^2 \right)
$$
is non-negative on $C$.
\end{example}
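This computation can be verified symbolically. The sketch below (SymPy) reproduces the coefficient $b_{N+1}$ for several values of $N$ and exhibits, for each of them, a point of $C$ where it is negative.

```python
import sympy as sp

W, X, Y, Z = sp.symbols('W X Y Z')

# homogenisation of f = (Y^2 - X)^2 + X^2  (d = 2, m = 4)
F = (W + X)**2 * Y**4 - 2*X*(W + X) * Y**2 * Z**2 + 2*X**2 * Z**4

for n in range(7):
    e = sp.expand((W + X)**n * F)
    b = e.coeff(W, n + 1).coeff(X, 1)          # b_{n+1}: coeff of W^(n+1) X
    assert sp.expand(b - Y**2 * ((n + 2)*Y**2 - 2*Z**2)) == 0
    # b_{n+1} is negative on C at a point with small |y|: take y = 1/(n+3)
    y = sp.Rational(1, n + 3)
    z = sp.sqrt(1 - y**2)
    assert b.subs({Y: y, Z: z}) < 0
```

Since $b_{N+1}(y,z) = y^2\big((N+2)y^2 - 2z^2\big)$ is negative whenever $0 < y^2 < 2z^2/(N+2)$, no value of $N$ can make it non-negative on all of $C$.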
\textbf{Acknowledgments:} We want to thank the anonymous referee for her/his helpful suggestions.
\section{Introduction}
General Relativity is the favorite theory of gravitation, and it is
based on the equivalence principle, and in the end on the assumption
that the gravitational constant, $G$, is indeed constant. However,
this is just a hypothesis which needs to be verified. In fact, there
are several modern grand-unification theories that predict that the
value of $G$ is a varying function of a low-mass dynamical scalar
field \citep{LAea,mio}. Hence, we expect that if these theories are
true the gravitational constant should experience slow changes over
cosmological timescales. In recent years, several constraints have
been placed on the variation of the fine structure constant, and other
interesting constants of nature --- see \cite{uzan}, and \cite{mio}
for extensive reviews. However, very few works have been devoted to
study a hypothetical variation of $G$. The most tight bounds on the
variation of $G$ are those obtained using Lunar Laser Ranging ---
$\dot{G}/G = (0.2\pm0.7)\times 10^{-12}$~yr$^{-1}$ \citep{H10} ---
solar asteroseismology --- $\dot G/G \simeq -1.6\times
10^{-12}$~yr$^{-1}$ \citep{Demarque} --- and Big Bang nucleosynthesis
--- $-0.3 \times 10^{-12}$~yr$^{-1} \la \dot{G}/G \la 0.4\times
10^{-12}$~yr$^{-1}$ \citep{CO4,B05}. Nevertheless, both Lunar Laser
Ranging and asteroseismological bounds are eminently local, while Big
Bang limits are model-dependent. At intermediate cosmological ages
the Hubble diagram of Type Ia supernovae has also been used to put
constraints on the rate of change of $G$, but the constraints are
somewhat weaker, $\dot G/G\la 1\times 10^{-11}$~yr$^{-1}$ at $z\sim
0.5$~\citep{SNIa,IJMPD}. In this work we summarize how white
dwarfs can be used to place constraints on the rate of variation of a
rolling $G$.
\section{White dwarf cooling times}
\begin{figure}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{fig01.eps}}
\caption{\footnotesize Surface luminosity versus age for several
$0.609\, M_\odot$ white dwarf sequences, adopting different values
of $\dot{G}/G$.}
\label{fig1}
\end{figure}
A slowly rolling $G$ affects both the cooling timescales of white
dwarfs \citep{gold,gnew} and those of their progenitors \citep{Weiss}.
Consequently, the ages determined from the color-magnitude diagrams of
globular or open clusters --- namely, the main-sequence turn-off age
and the age determined from the termination of the white dwarf cooling
sequence --- change accordingly, and depend on the precise value of
$\dot G/G$ \citep{JCAP}. This allows to put upper bounds on the rate
of variation of $G$. To quantify the effects of a varying $G$ on the
derived ages, we computed the main sequence evolution of two model
stars of 1.0 and $2.0\, M_{\odot}$ considering three values for the
rate of change of $G$, namely $\dot{G}/G=-5 \times
10^{-11}$~yr$^{-1}$, $\dot{G}/G=-1 \times 10^{-11}$~yr$^{-1}$, and
$\dot{G}/G=-1 \times 10^{-12}$~yr$^{-1}$. All the evolutionary
calculations were done using the {\tt LPCODE} stellar evolutionary
code \citep{Althaus10,Renedo10}, appropriately modified to take into
account the effect of a varying $G$. Despite the small rates of
change of $G$ adopted here, the evolution of white dwarf progenitor
stars is severely modified. The evolutionary timescales can be
modelled using rather simple arguments. In particular, it turns out
that the main sequence lifetime when a varying $G$ is adopted is
\citep{JCAP}:
\begin{equation}
\tau_{\rm MS}=\frac{1}{\gamma\left|\frac{\dot G}{G}\right|}
\ln\left[\gamma\left|\frac{\dot G}{G}\right|\left(\frac{G_0}{G_i}\right)^\gamma
\tau_{\rm MS}^0+1\right].
\label{fit}
\end{equation}
\noindent with $\gamma=3.6$. The effect of a varying $G$ on the white
dwarf cooling times is displayed in Fig.~\ref{fig1}. As can be seen,
the cooling timescales are considerably modified, the cooling being
accelerated in the case of $\dot G<0$ \citep{gnew}. This can be
explained easily. A smaller value of $G$ implies a smaller
gravitational force, and thus a smaller degeneracy (and density) is
needed to balance gravity. Hence, for $\dot G<0$ the white dwarf
expands as it evolves, and the cooling is accelerated. The cooling
track shown in Fig.~\ref{fig1} is a representative example of a set of
white dwarf cooling sequences which incorporate the most up-to-date
physical inputs. Specifically, these cooling sequences consider
$^{22}$Ne diffusion and its associated energy release
\citep{nature,Althaus10,GB08}, and together with the main sequence
lifetimes given by Eq.~(\ref{fit}) allow to derive an age for any
cluster, and for each value of $\dot G/G$. Moreover, the grid of
models has been computed for several initial values of $G$, to take
into account that the evolutionary value of $G$ must match its present
value. With these sequences the effect of a running $G$ on the
color-magnitude diagram of any cluster can be studied.
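Equation (\ref{fit}) is straightforward to evaluate numerically. The following Python sketch shows the shortening of the main sequence lifetime for a typical rate; the fiducial lifetime $\tau_{\rm MS}^0 = 10$~Gyr and the simplification $G_0/G_i \simeq 1$ are illustrative assumptions of ours, not values taken from our evolutionary grid.

```python
import math

GAMMA = 3.6  # exponent of the fit in Eq. (fit)

def tau_ms(tau0_yr, gdot_over_g_per_yr, g0_over_gi=1.0):
    """Main-sequence lifetime for a slowly varying G, Eq. (fit).

    tau0_yr            : lifetime for constant G (yr)
    gdot_over_g_per_yr : |Gdot/G| (1/yr)
    g0_over_gi         : present-to-initial G ratio (illustrative default 1)
    """
    x = GAMMA * abs(gdot_over_g_per_yr)
    return math.log(x * g0_over_gi**GAMMA * tau0_yr + 1.0) / x

# a 10 Gyr star with |Gdot/G| = 1e-11 / yr lives only ~8.5 Gyr
print(tau_ms(1.0e10, 1.0e-11) / 1e9)   # ~8.54
```

In the limit $|\dot G/G| \to 0$ the expression reduces to $\tau_{\rm MS}^0$, since $\ln(1+x) \approx x$ for small $x$.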
\begin{figure}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{fig02.eps}}
\caption{\footnotesize Upper panel: periods of the several modes of
G117$-$B15A as a function of the value of $|\dot G/G|$ with $\dot{G}
< 0$. Lower panel: period derivatives of the same modes as a
function of the secular rate of change of $G$.}
\label{fig2}
\end{figure}
We chose to employ the old, metal-rich, well populated open cluster
NGC~6791, for which the ages derived from main sequence stars and from
the termination of the degenerate sequence agree very well in the case
of a constant $G$. When a varying $G$ is adopted, the resulting age
of NGC~6791 is modified, but then the position of the main sequence
turn-off in the color-magnitude diagram is also significantly
different if the same distance modulus is adopted. However, the
distance modulus derived using an independent and reliable method
(eclipsing binaries) which does not make use of theoretical models
turns out to be $13.46\pm 0.1$ \citep{Grundahl}. Thus, large errors
in the distance modulus seem to be quite implausible. Given the
uncertainty in the distance modulus ($\simeq 0.1^{\rm mag}$), and the
measured value, an upper limit to $\dot G/G$ can be placed
\citep{JCAP}. Since $\Delta t_{\rm MSTO}/\Delta (m-M)_{\rm
F606W}\approx 4$~Gyr/mag, the maximum age difference with respect to
the case in which a constant $G$ is adopted is $\sim 0.4$~Gyr, which
translates into an upper bound $\dot G/G\sim -1.8\times
10^{-12}$~yr$^{-1}$. This upper limit considerably improves the other
existing upper bounds to the rate of variation of $G$ and is
equivalent to the upper limit set by helioseismology.
\begin{figure}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{fig03.eps}}
\caption{\footnotesize Rate of temporal variation of the 215~s period
of G117$-$B15A as a function of $\dot{G}/G$, red solid line. The
observational value of the rate of change of the period ---
horizontal solid line --- along with its observed error bars ---
horizontal dashed lines --- is also displayed. The formal
theoretical errors are also shown as dashed lines.}
\label{fig3}
\end{figure}
\section{Pulsating white dwarfs}
Pulsations in white dwarfs are associated with nonradial
$g$(gravity)-modes which are a subclass of spheroidal modes whose main
restoring force is gravity. These modes are characterized by low
oscillation frequencies (long periods) and by a displacement of the
stellar fluid essentially in the horizontal direction. Hence, some
characteristics of the pulsations are sensitive to the precise value
of $G$, and to its rate of change. In particular, it can be easily
understood that measuring the rate of change of the period is
equivalent to measuring the evolutionary timescale of the white dwarf.
Thus, a slowly varying $G$ should have an impact on the rate of period
change of the observed periods. There are two white dwarfs for which
we have reliable determinations of the rate of period change of their
main periods, namely G117$-$B15A and R548. We performed an
asteroseismological analysis of these white dwarfs using a grid of
fully evolutionary DA models \citep{2012MNRAS.420.1462R} characterized
by consistent chemical profiles for both the core and the envelope,
and covering a wide range of stellar masses, thicknesses of the
hydrogen envelope and effective temperatures.
The pulsation periods for the modes with $\ell = 1$ and $k = 1, 2, 3$
and $4$ of the asteroseismological model of G117$-$B15A for increasing
values of $|\dot{G}/G|$ are shown in the upper panel of
Fig.~\ref{fig2}. The variation of the periods is negligible, implying
that a varying $G$ has negligible effects on the structure of the
asteroseismological model, and that, for a fixed value of the
effective temperature, the pulsation periods are largely independent
of the adopted value of $|\dot{G}/G|$. In the lower panel of
Fig.~\ref{fig2} we display the rates of period change for the same
modes. At odds with what happens with the pulsation periods, the
rates of period change are markedly affected by a varying $G$,
substantially increasing for increasing values of $|\dot{G}/G|$. This
is because, for a decreasing value of $G$ with time, the white
dwarf cooling process accelerates \citep{gold,gnew}, and this is
translated into a larger secular change of the pulsation periods as
compared with the situation in which $G$ is constant.
In Fig.~\ref{fig3} we plot the theoretical value of $\dot{\Pi}$ for the
mode with period $\Pi = 215$~s of G117$-$B15A for increasing values of
$|\dot{G}/G|$ (solid curve). The dashed curves embracing the solid
curve show the uncertainty in the theoretical value of $\dot{\Pi}$,
$\epsilon_{\dot{\Pi}} = 0.09 \times 10^{-15}$ s s$^{-1}$. This value
has been derived taking into account the uncertainty due to our lack
of knowledge of the $^{12}$C$(\alpha,\gamma)^{16}$O reaction rate ---
$\varepsilon_1 \sim 0.03 \times 10^{-15}$~s/s --- and that due to the
errors in the asteroseismological model --- $\varepsilon_2 \sim 0.06
\times 10^{-15}$~s/s \cite{2012MNRAS.424.2792C}. We assumed that the
uncertainty for the case in which $\dot{G} \neq 0$ is the same as that
computed for the case in which $\dot{G} = 0$, which is a reasonable
assumption. Considering that the theoretical solution should not
deviate more than one standard deviation from the observational value,
we conclude that the secular rate of variation of the gravitational
constant obtained using the variable DA white dwarf G117$-$B15A is
$\dot{G}/G= (-1.79^{+0.53}_{-0.49}) \times 10^{-10}$~yr$^{-1}$
\citep{varG}. The same analysis applied to the other star, R548,
results in $\dot{G}/G= (-1.29^{+0.71}_{-0.63}) \times
10^{-10}$~yr$^{-1}$, a very similar upper bound --- see \cite{varG}
for a detailed discussion. Clearly, these values are completely
compatible with each other, although currently less restrictive than those
obtained using other techniques, and compatible with a null result for
$\dot G/G$.
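The mutual compatibility of these two determinations can be verified with a quick numerical check; the numbers below are the one-sigma intervals quoted above, and treating the asymmetric errors as simple interval bounds is a simplification made only for this sketch.

```python
# One-sigma intervals for G-dot/G, in units of 1e-10 per yr, as quoted above:
# G117-B15A: -1.79 (+0.53/-0.49); R548: -1.29 (+0.71/-0.63).
g117 = (-1.79 - 0.49, -1.79 + 0.53)
r548 = (-1.29 - 0.63, -1.29 + 0.71)

# Two determinations are compatible if their one-sigma ranges overlap.
lo = max(g117[0], r548[0])
hi = min(g117[1], r548[1])
print(lo <= hi)  # True: the two bounds are mutually compatible
```

The overlap region is roughly $(-1.9,-1.3)\times 10^{-10}$~yr$^{-1}$, so the two stars indeed yield consistent constraints.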
\section{Discussion, conclusions and outlook}
In this work we have reviewed the several constraints that white
dwarfs can provide on the rate of change of a secularly varying
gravitational constant. Specifically, we have shown that when a
secularly evolving value of $G$ is adopted, the white dwarf cooling
tracks (and the main sequence evolutionary times of their progenitors
as well) are noticeably affected, and depend sensitively not only on
the value of $\dot G/G$ but also on the actual value of $G$ when the
white dwarf was born. We find that for negative values of $\dot G$ the
cooling is accelerated, due to a less intense gravitational interaction.
According to these results the main sequence turn-off age and the age
derived from the termination of the white dwarf cooling sequence
differ from those computed when a constant value of Newton's constant
is adopted. This can be used to constrain the rate of variation of a
rolling $G$. In particular, we have applied this technique to the
metal rich, well populated, old open cluster NGC~6791. It turns out
that the resulting age of NGC~6791 is considerably modified,
as is the position of the main sequence turn-off in the
color-magnitude diagram, if the same distance modulus is adopted.
Accordingly, the distance modulus necessary to fit the position in the
color-magnitude diagram of the main sequence turn-off of the cluster
needs to be changed as well. However, the distance modulus derived
using an independent and reliable method (eclipsing binaries) which
does not make use of theoretical models turns out to be $13.46\pm 0.1$
\citep{Grundahl}. Thus, large errors in the distance modulus seem to
be quite implausible. Given the uncertainty in the distance modulus
($\simeq 0.1^{\rm mag}$), and the measured value, an upper limit to
$\dot G/G$ can be placed. Since $\Delta t_{\rm MSTO}/\Delta
(m-M)_{\rm F606W}\approx 4$~Gyr/mag, the maximum age difference with
respect to the case in which a constant $G$ is adopted should be $\sim
0.4$~Gyr, which translates into an upper bound $\dot G/G\sim
-1.8\times 10^{-12}$~yr$^{-1}$. This upper limit considerably
improves the other existing upper bounds to the rate of variation of
$G$ and is equivalent to the upper limit set by helioseismology.
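The arithmetic behind this limit can be made explicit; the numbers are those quoted above, while the final conversion from the age margin to $\dot G/G$ relies on the cooling sequences computed with varying $G$ and is not reproduced in this simple sketch.

```python
# Sensitivity of the turn-off age to the distance modulus, and the
# distance-modulus uncertainty, both as quoted in the text.
age_sensitivity = 4.0      # Gyr per magnitude
dm_uncertainty = 0.1       # mag (eclipsing-binary distance modulus error)

max_age_shift = age_sensitivity * dm_uncertainty
print(max_age_shift)       # 0.4 (Gyr): maximum allowed age difference
```

Mapping this 0.4~Gyr margin onto $\dot G/G\sim -1.8\times 10^{-12}$~yr$^{-1}$ requires the model grid of cooling sequences discussed in the text.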
We have also shown that individual pulsating hydrogen-rich white
dwarfs can be useful in setting upper limits to the rate of
variation of $G$, although these upper bounds are currently less
restrictive than those obtained using the color-magnitude diagram of
clusters. In essence, we have found that the pulsation periods of the
two white dwarfs for which we have reliable observational
determinations of the rate of period change of their dominant modes
(G117--B15A and R548) are not affected, thus allowing us to derive an
excellent asteroseismological fit of their pulsational
spectra. However, the rates of period change of their dominant modes
are severely affected, as a consequence of the fact that the rates of
change of these periods reflect their evolutionary changes, and thus
allow one to measure their evolutionary time scales. Accordingly, in the
case of a hypothetical smooth variation of the gravitational
constant, and due to the sensitivity of the cooling time scales to the
precise value of $\dot G$, the rates of period change are
modified. Our calculations in the case of a running value of $G$ allow
us to compare the predictions of the theoretical models with the
observational rates of period change, and hence to derive constraints
on the value of $\dot G/G$. Using this technique we found that the
upper bounds are $\dot{G}/G= (-1.79^{+0.53}_{-0.49}) \times
10^{-10}$~yr$^{-1}$, and $\dot{G}/G= (-1.29^{+0.71}_{-0.63}) \times
10^{-10}$~yr$^{-1}$, for G117--B15A and R548 respectively. We
emphasize that although these upper limits are less restrictive than
those obtained using the previously described technique, they could be
much improved should we have reliable observational determinations of
the rate of change of the dominant periods of pulsating massive white
dwarfs, as the effect of a running $G$ is more evident for these white
dwarfs, due to their larger gravitational field.
Last, but not least, we would like to emphasize here that there is
still room for new (and possibly exciting) studies that have not been
addressed here, and that the results of such studies could translate
into interesting, improved constraints. To be precise, we now have
excellent observational luminosity functions of the white dwarf
population of the Galactic disk, which are the result of both
magnitude-limited large scale surveys --- like the Sloan Digital Sky
Survey \citep{SDSS1,SDSS2} or the SuperCOSMOS sky survey
\citep{SuperCOSMOS} --- and of volume-limited surveys
\citep{Bergeron}. The completeness of the large surveys is expected
to be high ($\sim 80\%$), while the volume-limited sample is thought
to be nearly complete. The white dwarf luminosity function reflects
the characteristic cooling time of the population of white dwarfs as a
function of the absolute bolometric magnitude, and has two distinctive
features. The first of these properties is a monotonic increase until
luminosities of the order of $\log (L/L_\odot) \sim -3.5$, which is
simply a consequence of the fact that, due to the absence of energy
sources other than the gravothermal energy of white dwarfs, the cooler
a white dwarf the longer it takes to cool further. The second --- and
for our purposes most important feature --- of the disk white dwarf
luminosity function is the presence of a sharp drop-off at
$\log(L/L_\odot) \sim -4.5$. This pronounced cut-off is the obvious
consequence of the finite age of the Galactic disk. The origin of this
deficit of cool stars is clear: white dwarfs have not had time enough
to cool down beyond this luminosity. Since the cooling process of
white dwarfs is sensitive to $\dot G/G$ it is rather evident that in
the case of a secularly varying $G$ the position of the cut-off should
be different. Such an analysis still remains to be done, as it
requires the calculation of an extensive set of cooling sequences for
different values of $\dot G/G$ and the initial value of $G$, which
requires considerable efforts, but it is one of our priorities for the
near future.
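As an illustration of why the cut-off traces the disk age, consider a toy population model; this is not the detailed calculation proposed above, but a sketch assuming a constant star-formation rate over an assumed 10~Gyr disk age and a Mestel-like cooling law, $t_{\rm cool}\propto L^{-5/7}$, normalized so that the faintest white dwarfs sit at the observed drop-off.

```python
import numpy as np

rng = np.random.default_rng(0)
t_disk = 10.0                                   # assumed disk age, Gyr
births = rng.uniform(0.0, t_disk, 100_000)      # constant star-formation rate
t_cool = t_disk - births                        # cooling ages today

# Mestel-like law t_cool = A * (L/Lsun)^(-5/7); A fixed so that the oldest
# (faintest) white dwarfs sit at log(L/Lsun) = -4.5, the observed drop-off.
A = t_disk * 10.0 ** (-4.5 * 5.0 / 7.0)
logL = np.log10((t_cool / A) ** (-7.0 / 5.0))

counts, edges = np.histogram(logL, bins=np.arange(-5.0, 0.0, 0.25))
cutoff = np.log10((t_disk / A) ** (-7.0 / 5.0))
# Bins fainter than the cut-off are empty, and the counts rise monotonically
# toward the cut-off, reproducing the two features described above.
print(round(cutoff, 1))  # -4.5
```

In such a toy model a faster cooling law (as induced by a decreasing $G$) shifts the cut-off to fainter luminosities for the same disk age, which is the effect the proposed analysis would exploit.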
In summary, we have demonstrated that due to their relative structural
simplicity, to the fact that the gravothermal cooling process that
governs their evolution is well understood, to their well-determined
individual and ensemble properties, and to the sensitivity of their
properties to the value of $\dot G/G$, white dwarf stars can be used
to constrain alternative theories of gravitation, and that future
efforts, both on the observational and on the theoretical sides, can
result in improved upper bounds on the rate of change of the
gravitational constant.
\begin{acknowledgements}
Part of this work was supported by AGENCIA through the Programa de
Modernizaci\'on Tecnol\'ogica BID 1728/OC-AR, by PIP 112-200801-00940
grant from CONICET, by MCINN grant AYA2011-23102, by the ESF EUROCORES
Program EuroGENESIS (MICINN grant EUI2009-04170), by the European
Union FEDER funds, and by the AGAUR.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Electrical conduction in organic semiconductors is typically interpreted in terms of temperature activated hopping of charge carriers. A seminal work\cite{Mot69} by Mott showed that the hopping conductivity follows a stretched exponential dependence on temperature,
\begin{align} \label{eq:1}
\sigma&=\sigma_0\exp\left[-\left(\frac{T_0}{T}\right)^{\alpha}\right],\\
T_0&=\frac{\beta}{\rho_0\xi^d},
\end{align}
where $\alpha=1/(1+d)$, $d$ is the dimensionality, $\rho_0$ is the density of localized states at the Fermi level, $\xi$ is the isotropic localization length proportional to the carrier wave function extent, and $\beta$ is a numerical coefficient ($\beta=21.2$ and 13.8 for $d=3$ and 2, respectively\cite{Shk_book}). In the derivation of \eqref{eq:1}, the charge transport was assumed to be dominated by the states within a narrow energy band close to the Fermi energy, and within that energy band the charge transport occurs by variable-range hopping (VRH).\cite{Shk_book} It becomes nearest-neighbor hopping (NNH) for $\alpha=1$, with $k_BT_0$ being the activation energy. Eq. \eqref{eq:1} has been routinely used to determine dimensionality,\cite{Kim12, Nar07_AM, Ash05, Ale04} $T_0$ and consequently $\xi$ if $\rho_0$ is known, or vice versa, from a separate measurement.\cite{Nar07_PRB, Ree99, Nar08, Shu12} Conductivity in a system having structural anisotropy is still expected to follow \eqref{eq:1} with the same $\alpha$ for all directions, but different $\sigma_0$, which becomes direction dependent and related to the carrier wave function anisotropy.\cite{Shk_book} However, surprisingly, in experiments by Nardes \textit{et al.}\cite{Nar07_PRB}, thin films of poly(3,4-ethylenedioxythiophene) (PEDOT), which were prepared by spin coating, showed $\alpha=0.25$ for $\sigma$ measured in the lateral direction ($\sigma_{\parallel}$) and $\alpha=0.81$ for measurement in the perpendicular (vertical) direction ($\sigma_{\bot}$), with a ratio $\sigma_{\parallel}/\sigma_{\bot}=10-10^3$. This led to a conclusion of VRH in the lateral and NNH in the vertical direction, but the microscopic origin of the co-existence of those two regimes remained an open question. Another uncertainty concerns the fractional value $\alpha=0.81$, which is less than the value of 1 expected for activated Arrhenius-like transport.
Fractional values of $\alpha$, which do not fit integral $d$, are commonly observed\cite{Kim12, Ale04} in conductivity measurements on organic semiconductors, which further lead to uncertainties in interpreting the morphology and nature of charge transport.
The extraction of Mott's exponent $\alpha$ from the temperature dependence of conductivity is known to be error prone. The extracted values commonly deviate from 1/4, 1/3 and 1/2, which are characteristic of 3D, 2D and 1D charge transport, respectively. This has led to conclusions of quasi-dimensional transport with morphology having no preferred dimensionality.\cite{Kim12} For $\alpha>1/2$, a transition between VRH and NNH was inferred.\cite{Nar07_PRB} A common method to obtain $\alpha$ is to plot $\sigma$ vs $T^{-\alpha}$ for different $\alpha$ and check whether the data fall onto a straight line. The linearity can then be quantified via the correlation coefficient.\cite{Nar07_PRB, Nar08} Another, more accurate method is based on computing the reduced activation energy $d\log(\sigma)/d\log(T)$, whose slope, when plotted as a function of $\log(T)$, directly gives $\alpha$.\cite{Zab84}
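The reduced-activation-energy method can be sketched in a few lines: for conductivity obeying \eqref{eq:1}, $W\equiv d\ln\sigma/d\ln T=\alpha(T_0/T)^{\alpha}$, so $\log W$ versus $\log T$ is a straight line of slope $-\alpha$. Below, synthetic data with arbitrary $\alpha$ and $T_0$ are used to recover the exponent.

```python
import numpy as np

alpha, T0 = 0.25, 1.0e4            # arbitrary test values
T = np.logspace(0.5, 2.5, 200)     # temperature grid (uniform in log T)
ln_sigma = -(T0 / T) ** alpha      # ln(sigma/sigma0) from Mott's law

# Reduced activation energy W = d ln(sigma) / d ln(T), evaluated numerically.
W = np.gradient(ln_sigma, np.log(T))

# For Mott's law, the slope of log W vs log T equals -alpha.
slope = np.polyfit(np.log10(T), np.log10(W), 1)[0]
print(round(-slope, 2))  # 0.25
```

Because the fit uses the whole temperature range rather than a single linearization, this procedure is less sensitive to an a-priori guess of $\alpha$ than plotting $\sigma$ vs $T^{-\alpha}$.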
In this paper, numerical calculations of charge hopping transport in anisotropic systems are presented with a focus on an analysis of powers $\alpha$ entering the Mott's law \eqref{eq:1}. As the localized states become progressively anisotropic, $\sigma$ in a direction, where the localization length is smaller, follows \eqref{eq:1} with $\alpha$ taking any values between 1/4 and 1 at low $T$. This implies changing of VRH to NNH as a result of the formation of a single conduction path that carries most of the current. This is demonstrated by current visualization and also explained using the percolation theory. At the same time, $\sigma$ in a perpendicular direction retains VRH for any degree of anisotropy, which is all consistent with experimental data\cite{Nar07_AM, Nar07_PRB} on anisotropic conduction in PEDOT.
\section{Model}
The hopping conduction between localized states in a disordered system is modeled by a resistor network.\cite{Mil60, McI79, Amb73} The resistance between two states $i$ and $j$ is\cite{Shk_book}
\begin{equation} \label{eq:2}
R_{ij}=\frac{k_BT}{e^2\Gamma_{ij}},
\end{equation}
where the average tunneling rate accounting for wave function anisotropy is
\begin{widetext}
\begin{equation} \label{eq:3}
\Gamma_{ij}=\gamma_0\exp\left(-2\sqrt{\frac{x_{ij}^2+z_{ij}^2}{\xi_{\parallel}^2}+\frac{y_{ij}^2}{\xi_{\bot}^2}} -\frac{|E_i-E_j|+|E_i|+|E_j|}{2k_BT}\right),
\end{equation}
\end{widetext}
with $\gamma_0$ being the electron-phonon coupling parameter, $\xi_{\parallel}$ ($\xi_{\bot}$) is the localization length in the $xz$ plane ($y$ direction), see the inset in Fig. \ref{fig:1}(b), ($x_{ij}$, $y_{ij}$, $z_{ij}$) are the coordinate components of the distance between states, and $E_i$ is the energy of the $i$-th state. The exponentially decaying wave functions are characterized by ellipsoids with semi-major and semi-minor axes $\xi_{\parallel}$ and $\xi_{\bot}$ (see inset in Fig. \ref{fig:1}(b)) that are centered on lattice sites of the cubic crystal that is assumed in the following. In this way $\xi_{\parallel}/\xi_{\bot}$ describes the degree of anisotropy; in the isotropic case $\xi_{\parallel}=\xi_{\bot}=\xi$ and \eqref{eq:3} reduces to the familiar expression for the tunneling rate.\cite{Shk_book}
The linear Ohmic regime is assumed in the following and the chemical potential is set to zero.
Applying the Kirchhoff's law to the resistor network, the resistance between two arbitrary nodes can be calculated\cite{Wu82} from the determinants of the conductance matrix $G$,
\begin{equation} \label{eq:4}
R_{ij}=\frac{|G^{ij}|}{|G^j|},
\end{equation}
where $|G^j|$ is the determinant of $G$ with the $j$-th row and column removed, and $|G^{ij}|$ is the same determinant but with the $i$-th and the $j$-th rows and columns removed. It is convenient to introduce two additional nodes serving as the source ($s$) and drain ($d$) electrodes and then connecting them to all nodes in the outer planes of the lattice by small resistances. Those nodes are substituted into \eqref{eq:4}, which is further used to compute conductivity,
\begin{equation}
\sigma=\frac{1}{R_{sd}Nl},
\end{equation}
where $N$ is the edge length and $l$ is the constant of a cubic lattice. This method allows one to account for resistances between all pairs of the nodes in the system and thus current branching without any cut-off, which is more accurate than commonly implemented methods\cite{Amb73} and also the critical subnetwork approximation\cite{Amb71} used in the percolation approach.\cite{OPA} Note that a weak $T$ dependence due to the pre-exponential factor in \eqref{eq:2} is explicitly taken into account. To visualize the currents, the system of equations $I=GV$ is solved for a small source-to-drain voltage, $eV_{sd}\ll k_BT$.\cite{Amb73}
In the following, the $y$ axis is assumed to be a direction in which the anisotropic localized states are squeezed (Fig. \ref{fig:1}(b)), and if the source and drain electrodes align with the $y$ axis, it is said to be out-of-plane transport. If the electrodes are in the $x$ (or $z$) direction, transport is denoted as in-plane.
\section{Results and discussion}
To analyze the influence of structural anisotropy on charge transport, the numerical calculations are performed for a system with parameters typical for organic semiconductors.\cite{Kim12_Pipe} In particular, $\xi=\xi_{\parallel}$ is chosen to be equal to $l$, a value large enough not to bring the system into the strong localization (insulating) regime. DOS is taken to be uniform (constant) with width $W$ (measured in units of Kelvin) that establishes an energy scale. The disorder is assumed to be only energetic; the effect of positional disorder will be commented on later. The system size for the results presented below is $20\times20\times20$. This allows averaging over 10000 different disorder realizations within the available computational resources.
The calculations were also performed for different sizes and $\xi$, with similar results obtained.
\begin{figure}[h]
\includegraphics[keepaspectratio,width=0.9\columnwidth]{fig1_6}
\caption{(Color online) Temperature dependence of (a) averaged conductivity and (b) reduced activation energy. The dotted lines show a fit to Eq. \eqref{eq:1} with $\alpha$ denoted in (b). In an isotropic system, the localized states are spheres centered in the nodes of a cubic lattice, while the states in an anisotropic system are oblate spheroids squeezed in the $y$ direction as shown in the inset in (b). $\sigma$ in the $xz$ plane (in-plane), where neighbor states overlap more, and in the $y$ direction (out-of-plane) are shown for two values of anisotropy: $\xi_{\parallel}/\xi_{\bot}=3$ and 6. The lattice size is $20\times20\times20$.}
\label{fig:1}
\end{figure}
Figure \ref{fig:1} shows the temperature dependence of conductivity for different morphologies, as the localized states change from isotropic to anisotropic, with the transport direction either in-plane or out-of-plane. There, several transport regimes can be traced, which are easy to distinguish by the slopes of $d\log(\sigma)/d\log(T)$ in Fig. \ref{fig:1}(b). At high temperatures ($T_c>0.1W$), conductivity follows activated behavior with $T_0/W\approx0.1$. This agrees with the traditional hopping theory\cite{Shk_book} that predicts activated transport for
\begin{equation} \label{eq:5}
T_c>0.29W\rho_0^{1/3}\xi.
\end{equation}
At lower temperatures, VRH is observed with $\sigma$ described by the Mott's law \eqref{eq:1}. For the isotropic structure, $\alpha=1/4$ and $T_0/W=18$ are derived, while $\alpha=1/3$ and $T_0/W=7$ are derived for the in-plane conduction, implying $\beta=18$ and $\beta=7$ for 3D and 2D hopping, respectively, for $\xi=l$. These values agree well with known values\cite{Shk_book}, which, along with $T_c$ obtained above, justify the validity of the method implemented. \textit{While isotropic and in-plane hopping conduction demonstrate an expected behavior, out-of-plane conduction surprisingly reveals a reentrance to activated behavior at low $T$ as the anisotropy degree of the localized states becomes stronger.} For $\xi_{\parallel}/\xi_{\bot}=6$, $\alpha=0.7$, and it approaches 1 as the ratio $\xi_{\parallel}/\xi_{\bot}$ increases further.
\begin{figure*}[th!]
\includegraphics[keepaspectratio,width=\textwidth]{fig2_1}
\caption{(Color online) Currents in (a) isotropic and (b) anisotropic $\xi_{\parallel}/\xi_{\bot}=6$ structures mapped onto a view stretched along the $y$ axis: For better visualization, the distance between the $xz$ planes is intentionally increased after calculation has been done; the original lattice is cubic. The dots mark the hopping sites, with the dot size being inversely proportional to the absolute value of energy of the localized state. Gray pads are the source and drain electrodes. Both structures have a $15\times15\times15$ lattice size and an identical energetic disorder. $T/W=0.001$.}
\label{fig:2}
\end{figure*}
To understand this, Fig. \ref{fig:2} compares the currents flowing through isotropic and anisotropic ($\xi_{\parallel}/\xi_{\bot}=6$) structures at $T/W=0.001$. Both structures have an identical energetic disorder. For the former, the current spans uniformly over the interior, and the conduction path acquires different distances, consistent with VRH theory.\cite{Shk_book} However, the anisotropic structure in Fig. 2(b) reveals nearest-neighbor inter-plane hopping along the transport direction. Conduction is dominated by a single path that consists of a chain of resistors connecting neighbor planes in a series. That path carries even more current (less branching) when compared to the isotropic structure.
Reentrance to the activation regime at low $T$ for out-of-plane transport can be also understood from the percolation theory with the following argument. In the percolation theory,\cite{Amb71, Shk_book} a critical subnetwork is constructed from bonds (resistors) that satisfy the inequality
\begin{equation} \label{eq:perc}
\frac{r_{ij}}{r_{max}}+\frac{|E_i|+|E_j|+|E_i-E_j|}{2E_{max}}<1,
\end{equation}
where
\begin{equation} \label{eq:perc2}
E_{max}=k_BT\ln(\frac{\gamma_0}{\Gamma_c})
\end{equation}
and
\begin{equation} \label{eq:perc25}
\left(\frac{r_{ij}}{r_{max}}\right)^2=\frac{x_{ij}^2+z_{ij}^2}{r_{max\parallel}^2}+\frac{y_{ij}^2}{r_{max\bot}^2}
\end{equation}
bounds an ellipsoid (oblate spheroid) with semi-major and semi-minor axes
\begin{align} \label{eq:perc3}
r_{max\parallel}&=\frac{\xi_{\parallel}}{2}\ln\frac{\gamma_0}{\Gamma_c},\\
r_{max\bot}&=\frac{\xi_{\bot}}{2}\ln\frac{\gamma_0}{\Gamma_c}. \label{eq:perc33}
\end{align}
$\Gamma_c$ is chosen such that the set of connected bonds is just enough for the subnetwork to span through the device, from the source to drain electrodes. This percolation criterion is satisfied at
\begin{equation} \label{eq:perc4}
nr_{max\parallel}^2r_{max\bot}=v_c
\end{equation}
where $n=2\rho_0E_{max}$ is the total number of states per unit volume with $|E_i|<E_{max}$. $v_c$ is a dimensionless constant related to the critical density of the percolation problem. For a given site $i$, the factor $r_{max\parallel}^2r_{max\bot}$ allows all the states contained inside the ellipsoid centered at $i$ to create a bond. Note that the elliptical shape of $r_{max}$ results from the wave function anisotropy in \eqref{eq:3}. For the isotropic case, this ellipsoid transforms into a sphere of radius $r_{max}$, and the coordinate terms in \eqref{eq:perc4} are replaced by $r_{max}^3$.\cite{Amb71} If the localized states are strongly anisotropic $\xi_{\parallel}/\xi_{\bot}\gg1$ and positional disorder is weak $\Delta r<\xi_{\bot}$, the states in the $y$-direction, which fall inside the ellipsoid \eqref{eq:perc25} and are thus allowed to create a bond at the percolation threshold,
belong to the nearest-neighbor $xz$-planes. This allows one to replace $r_{max\bot}$ in \eqref{eq:perc4} by the lattice constant $l$, the minimal bond length at percolation, so that \eqref{eq:perc4} becomes
\begin{equation} \label{eq:perc5}
\ln\left(\frac{\gamma_0}{\Gamma_c}\right)\approx\frac{v_c}{2\rho_0k_BT\xi_{\parallel}^2l}.
\end{equation}
Since $y$ is the transport direction and the $xz$ tails of the wave functions from different planes do not overlap, $r_{max\parallel}\approx \xi_{\parallel}$. Within the $xz$ planes there are many strongly coupled states available to adjust the subnetwork, such that the pair of states from nearest-neighbor planes with the smallest energy difference is chosen to form a bond. For an electron traversing the system this means that it is energetically favorable to hop in the $xz$ plane until the next vacant site on the other plane becomes closest in energy. From \eqref{eq:perc5}, an activated $T$ dependence of conductivity ($\sigma\propto \Gamma_c$) is obtained.
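Numerically, the critical rate $\Gamma_c$ can be found with a union-find search: bonds are added in order of decreasing rate until the network first spans from source to drain. The sketch below does this on a small two-dimensional toy lattice, with random bond rates standing in for \eqref{eq:3}; it illustrates the criterion only, not the anisotropic three-dimensional calculation.

```python
import random

def find(p, x):                       # union-find with path halving
    while p[x] != x:
        p[x] = p[p[x]]
        x = p[x]
    return x

def union(p, a, b):
    p[find(p, a)] = find(p, b)

random.seed(2)
n = 5                                 # n x n toy lattice of sites
site = lambda i, j: i * n + j
bonds = []                            # (rate, site_a, site_b)
for i in range(n):
    for j in range(n):
        if i + 1 < n:
            bonds.append((random.random(), site(i, j), site(i + 1, j)))
        if j + 1 < n:
            bonds.append((random.random(), site(i, j), site(i, j + 1)))

# Virtual source/drain nodes attached to the left and right columns.
src, drn = n * n, n * n + 1
parent = list(range(n * n + 2))
for i in range(n):
    union(parent, site(i, 0), src)
    union(parent, site(i, n - 1), drn)

# Add bonds from fastest to slowest; the rate of the bond that first
# connects source to drain is the percolation threshold Gamma_c.
gamma_c = None
for rate, a, b in sorted(bonds, reverse=True):
    union(parent, a, b)
    if find(parent, src) == find(parent, drn):
        gamma_c = rate
        break
print(0.0 < gamma_c < 1.0)  # True
```

In the anisotropic model the bond rates would instead be computed from \eqref{eq:3}, so that $\Gamma_c$, and hence $\sigma\propto\Gamma_c$, acquires the temperature dependence discussed above.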
\begin{figure}[h]
\includegraphics[keepaspectratio,width=0.9\columnwidth]{fig3_2}
\caption{(Color online) Probability distribution function of conductance fluctuations for $T/W=0.001$. $\xi_{\parallel}/\xi_{\bot}=6$ for in-plane and out-of-plane $\sigma$.}
\label{fig:3}
\end{figure}
Additional information on the conduction mechanism in the activation regime can be obtained from the probability distribution function (PDF) of the conductance fluctuations.\cite{Hug96} Hopping transport generally implies strong fluctuations as any external parameter (e.g., the chemical potential) varies, because of the extremely broad distribution of elementary resistors composing the network.\cite{He03} In the activated regime, however, fluctuations are expected to be smaller than those in the VRH regime, since the bond length does not fluctuate. To check whether this holds for the low-$T$ activated regime, Fig. \ref{fig:3} shows the PDF of the $\ln\sigma$ fluctuations for isotropic and anisotropic structures at $T/W=0.001$. In the activated (NNH) regime, $\sigma$ reveals strong fluctuations, comparable in magnitude with the fluctuations in the VRH regime. This might be understood as a result of an additional constraint imposed by the wave function anisotropy (anisotropic breaks) on the current path, which has to adjust in the way shown in Fig. \ref{fig:2}(b). Note that the geometrical constraint due to reduced dimensionality generally enhances fluctuations, see Ref. \onlinecite{Hug96} and references therein, and ultimately leads to large non-self-averaging fluctuations in 1D.\cite{Rod09}
PDF is asymmetric and skewed to the right, which indicates that the samples with large $\sigma$ dominate the ensemble averaged $\sigma$. As $N\rightarrow\infty$, fluctuations decrease (not shown) and become negligible compared to the average value; PDF approaches a Gaussian distribution, in agreement with the central limit theorem. For isotropic and in-plane transport, PDF is already closely approximated by a Gaussian, which indicates that $N$ chosen is sufficiently large.
Relevant results were obtained by Nardes \textit{et al}.\cite{Nar07_PRB, Nar07_AM} in experiments on anisotropic PEDOT films where co-existing activated and VRH transport regimes were found. Their samples were prepared by spin coating and confirmed by scanning tunnel microscopy to contain elongated PEDOT grains aligned in horizontal layers and separated by poly(4-styrenesulfonate) (PSS) lamellas.\cite{Nar07_AM, Nar08} PEDOT grains possess good electrical conduction while PSS acts as an insulating barrier.\cite{Kim12, Nar07_PRB, Nar07_AM, Nar08, Cri06} Experimentally\cite{Nar07_PRB} extracted in-plane $T_0=3.2\times10^5$ K exceeds out-of-plane $T_0=70$ K, which is consistent with the result obtained above. Additional non-Ohmic measurements revealed the characteristic hopping length $\approx 1$ nm for the out-of-plane direction. This agrees with the plane-to-plane separation of PEDOT layers obtained for relaxed geometries in the first-principles calculations.\cite{Len11} Thus, activated out-of-plane conduction and low values of $\sigma_{\bot}$ ($\sim 10^{-6}$ S/cm) in experiment might be related to a strong charge localization and short-range order in the PEDOT layer across the thin film.\cite{Nar07_PRB, Nar07_AM} In-plane VRH in the measurement of the same sample, along with a larger $\sigma_{\parallel}$ ($\sim10^{-4}$ S/cm), might be explained by weaker localization, where the wave function extends along the polymer backbone and couples strongly with another state in a neighbor polymer unit. Note that, in experiment, $\alpha=1/4$ indicating 3D VRH, while the above theoretical results predict 2D. This might be attributed to the fact that for in-plane electrical measurements the electrodes were placed 1 mm apart from each other, thus including many ($\sim10^6$) localized states composing a conductive network that is unlikely to maintain long-range order, in contrast to theoretical results where the long-range order (no positional disorder) was realized. 
A quantitative agreement with experiment\cite{Nar07_PRB, Nar07_AM} might be achieved for other parameters: $k_BW$ = 0.25-1.25 eV, which is of the order of the band gap of pristine PEDOT\cite{Len11}; $\gamma_0=10^{13}$ $s^{-1}$ --- a typical value for organic semiconductors\cite{Kim12_Pipe}; $l=1$ nm.
Finally, several comments are as follows. First, the results presented above were obtained for a constant DOS, which might be a poor approximation for the DOS in real polymeric systems.\cite{dos} Eq. \eqref{eq:1} was derived under the assumption that transport occurs in a narrow energy band where the DOS can be regarded as constant for sufficiently low $T$.\cite{Mot69, Shk_book} For sufficiently low $T$, \eqref{eq:1} is still expected to hold true, even for a DOS of strongly varying Gaussian shape.\cite{Zvy08} Because an overwhelming number of experiments support Mott's law \eqref{eq:1}, the above results are expected to stay qualitatively the same for different DOS shapes, provided the low-$T$ condition is fulfilled. Second, if positional disorder is added to the modeling with a deviation of 80\% relative to $l$,\cite{Jac11} the activated regime disappears, consistent with the traditional VRH theory.\cite{Shk_book} In this case of strong positional disorder, charge carriers propagate in a zigzag fashion through the network. Third, to reproduce the absolute values of $\sigma$ in Fig. \ref{fig:1}, converting arbitrary units to S/cm, $\gamma_0=10^{13}$ s$^{-1}$ should be used. Fourth, the above theory does not include the Coulomb interaction, which is known\cite{Efr75} to create a soft gap in the DOS near the Fermi energy and make $\alpha=1/2$ in \eqref{eq:1}. Electron interactions are expected to become important at low $T$, below the range where VRH occurs, and also if screening is not strong. This effect might be a topic of a separate study. Fifth, the hopping rates \eqref{eq:3} assume electrons or holes as charge carriers. These rates are modified when polaron effects become important,\cite{Mar56} which also deserves a separate study.
In conclusion, numerical calculations of hopping conduction have shown that both an activated temperature dependence and the stretched exponential dependence of Mott's law \eqref{eq:1} should be observable in anisotropic structures at low temperatures. This implies nearest-neighbor and variable-range hopping for different transport directions. Both are characterized by conductance fluctuations of comparable amplitudes. Activated behavior (nearest-neighbor hopping) is a result of the formation of a single conduction path that adjusts within the planes where the wave functions strongly overlap. This has been demonstrated by current path visualization and by using the percolation theory. These findings provide a microscopic explanation of the anisotropic hopping conduction in PEDOT thin films observed by Nardes \textit{et al.}\cite{Nar07_PRB, Nar07_AM}
\section{Acknowledgement}
This work was supported by the Energimyndigheten and NSC (SNIC 2015/4-20). It is a pleasure to acknowledge discussions with M. Kemerink.
\section{Introduction}
The ECG is often corrupted by different types of noise, namely, power line interference, electrode contact and motion artifacts, respiration, electrical activity of muscles in the vicinity of the electrodes and interference from other electronic devices. Analysis of noisy ECGs is difficult for humans and for computer programs.
In this work we place ourselves in the context of automatic and semi-automatic ECG analysis: denoising should facilitate automatic ECG analysis.
General denoising signal processing methods have been applied to the ECG. Low-pass linear filters are used for high-frequency noise removal, namely power line interference and muscle activity artifacts. High-pass linear filters can be applied to cancel baseline wander. The use of neural networks for ECG denoising has been, to our knowledge, limited to the removal of these two types of noise. Other denoising tools are the median filter, wavelet transform methods, empirical mode decomposition, morphological filters, nonlinear Bayesian filtering and template matching. We will focus on noise introduced by electrode motion, which causes the most difficulties in ECG analysis\cite{Moodynoisestress}.
Our method adapts to each particular ECG channel and learns how to reproduce it from a noisy version of the different channels available.
In the Physionet/Cinc Challenge 2010 it was shown that we can use some physiological signals to reconstruct another physiological signal, in particular an ECG~\cite{Moody2010,fillinginthegap, missingphysiological}.
This approach to reconstructing the noisy ECG channel is a simplified, but equally effective, version of the winning entry in that Challenge. We show that the procedure is robust against noise in the input signals and can include, as an input, the channel we want to denoise.
This noise removal method is another example of the power of deep neural networks~\cite{hintonReducingdimension, hintonFastLearning, hintonRecognizeshapes, BengioLecun2007:scaling}, in this case, applied to ECG signals.
\section{Method}
\begin{figure*}[!ht]
\centering
\includegraphics[height=8cm, width=13cm]{figure1.pdf}
\caption{Reconstructing the first channel from record 105 (MIT-BIH Arrhythmia Database), SNR=-6 db. In the lower section, the noisy ECG; in the
upper section, the clean channel 1 and the denoised channel 1.}
\label{fig:rec105}
\end{figure*}
If an ECG channel we want to use for ECG analysis is contaminated with noise in some time segment, we call it the {\bf target} channel of our denoising process.
The method uses a feedforward neural network.
A prerequisite for applying it is that the target channel be free from noise for some minutes, in order to train the neural network. The other channels used by the procedure may contain noise at training time. If one channel has much more noise than the others, it might be better not to use it for the reconstruction, even if it is the target channel.
The neural network receives as input the samples, from one or more channels, corresponding to time segments of a fixed length. The output consists of the samples from the target channel corresponding to the time segment used in the input. We used time segments with lengths between one and three seconds, depending on the channels used for reconstruction: one second if we use the target channel and another channel, two seconds if we do not use the target channel, and three seconds if we only use the target channel.
To reconstruct one ECG channel we collect time segments $T_k$ from the ECG, each one starting 16 samples after the preceding one. After obtaining the output of the neural network corresponding to each $T_k$, the value of the reconstruction at sample $t_0$ is the average of the outputs at $t_0$, over all segments $T_k$ that contain $t_0$.
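As an illustration, this overlap-and-average step can be sketched as follows (in Python rather than the Octave of our implementation; \texttt{predict}, which stands in for the trained network, and the other names are ours):

```python
import numpy as np

def reconstruct(noisy, predict, seg_len, hop=16):
    """Average the network outputs of overlapping segments.

    noisy   : 1-D array holding the channel to reconstruct
    predict : maps a segment of length seg_len to a denoised segment
    hop     : new segments start every `hop` samples (16 above)
    """
    n = len(noisy)
    acc = np.zeros(n)   # sum of predictions covering each sample
    cnt = np.zeros(n)   # number of segments covering each sample
    for start in range(0, n - seg_len + 1, hop):
        out = predict(noisy[start:start + seg_len])
        acc[start:start + seg_len] += out
        cnt[start:start + seg_len] += 1
    cnt[cnt == 0] = 1   # trailing samples may be uncovered
    return acc / cnt
```

With an identity \texttt{predict}, the reconstruction returns the input unchanged wherever at least one segment covers a sample.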
The proposed method could be applied to a Holter record, reconstructing those time segments where a significant level of noise
is identified and using the remaining time of the Holter record for training.
\subsection{Neural network architecture and training}
We used a neural network with three hidden layers. The number of units on each of those layers was approximately 1000 in all experiments.
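To fix the shapes involved, here is a sketch of such a network's forward pass (Python/NumPy; the weights are random here only for illustration, whereas we learn them as described below, and the 720-in/360-out sizes are merely an example, e.g.\ two channels of one second at 360 Hz):

```python
import numpy as np

def mlp_forward(x, sizes=(720, 1000, 1000, 1000, 360), rng=None):
    """Forward pass of a feedforward net with three hidden layers.

    `sizes` lists the input, three hidden (about 1000 units each,
    as in our experiments) and output layer widths; the weights are
    drawn at random here only to illustrate the shapes.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    h = np.asarray(x, float)
    for i in range(len(sizes) - 1):
        W = rng.normal(0.0, 0.01, (sizes[i], sizes[i + 1]))
        b = np.zeros(sizes[i + 1])
        h = h @ W + b
        if i < len(sizes) - 2:                 # sigmoid hidden units,
            h = 1.0 / (1.0 + np.exp(-h))      # linear output layer
    return h
```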
To train the neural network, we constructed a sequence of time segments, each one starting five samples after the beginning of the previous one. There is no need to use fiducial points to create the input data for the neural network.
We applied Geoffrey Hinton's method~\cite{hintonroyalsocietyreview,hintonReducingdimension,hintonMultipleLayers} to learn the neural network weights: following initialization using a stack of Restricted Boltzmann Machines, we applied the backpropagation algorithm to fine-tune the weights.
For details on the training procedure for Restricted Boltzmann Machines, we refer to Hinton~\cite{hintonrbmpracticalguide}.
As usual, when using feedforward neural networks, we normalized the input data, to accelerate the learning process.
First we applied a moving average filter, with the window size equal to the sampling rate, and subtracted the result from the signal, thus reducing the baseline wander. On the output signal, instead of the moving average filter we applied a median filter, which is more effective in the removal of baseline wander.
Finally, we scaled the output signal to have unit variance and multiplied the input signals by the same scale factor.
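A minimal sketch of this normalization stage, assuming one target channel and a list of input channels (the function and argument names are ours):

```python
import numpy as np

def remove_baseline(x, win, use_median=False):
    """Subtract a sliding-window baseline estimate from the signal.

    win : window size in samples (the sampling rate above).  The
    inputs use a moving average; the output uses a median filter.
    """
    half = win // 2
    base = np.empty(len(x))
    for i in range(len(x)):
        w = x[max(0, i - half):i + half + 1]
        base[i] = np.median(w) if use_median else np.mean(w)
    return x - base

def normalize(inputs, target, win):
    """Baseline-correct all signals, then scale the target to unit
    variance and the inputs by the same factor."""
    ins = [remove_baseline(np.asarray(c, float), win) for c in inputs]
    out = remove_baseline(np.asarray(target, float), win, use_median=True)
    scale = 1.0 / out.std()
    return [c * scale for c in ins], out * scale
```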
We implemented our method in the GNU Octave language and, to reduce training and reconstruction times, we ran the most time-consuming code on a Gra\-phics Processing Unit. Our code is available at the first author's web page.
\subsection{Evaluating the method}
\begin{figure*}[!ht]
\centering
\includegraphics[height=8cm, width=13cm]{figure2.pdf}
\caption{Segment of the first channel from record 202 (MIT-BIH Arrhythmia Database): the RMSE of the reconstructed signal is largely due to the noise in the original signal: baseline shift and high-frequency noise.}
\label{rec202}
\end{figure*}
Evaluating ECG denoising methods is not an obvious task. A common way of doing it is to add noise to an existing signal and measure the Root Mean Square Error (RMSE) of the denoised signal relative to the original signal. This approach has some disadvantages. Firstly, when using a large database of ECGs, it is difficult to avoid noise in the original signal, and we do not want to punish the denoising method for not reconstructing the noise in the original signal. Secondly, RMSE does not always reflect the difficulties in analysing a noisy ECG. For instance, a constant baseline shift in the reconstructed signal is not very disturbing, but might correspond to a high RMSE value.
In this study we report the RMSE of the reconstructed signal when we artificially add noise to the ECG, but we also evaluate our method using some publicly available programs that analyse the ECG: we compare the results of applying these programs with and without denoising the corrupted ECG. Although those programs already have a preprocessing stage to deal with noise, we show that, in the presence of noise, our denoising method improves their results.
\subsubsection{Programs used to test this method}
\begin{description}
\item[gqrs] is a recent QRS detector, not yet published, written by George Moody. This program is open source and available with the WFDB software, from PhysioNet. There is an accompanying post-processor, 'gqpost', intended to improve positive predictivity at a cost of reduced sensitivity. We report the results of 'gqrs' with and without 'gqpost'. 'gqpost' uses a configuration file 'gqrs.conf'; we kept the default values of 'gqrs.conf'.
The results of 'gqrs' depend on the value of a threshold parameter; as we did not find a systematic way of determining the best threshold value for each record, we used the parameter's default value. For this reason, we do not report the best possible results of this detector on the different records, and therefore this study should not be used to compare the performance of different QRS detectors.
\item[EP Limited] is an open source program written by Patrick S. Hamilton~\cite{Eplimitedqrs}. It performs QRS detection and classifies beats as 'normal' or 'ventricular ectopic' (VEB).
\item[ecgpuwave] is an open source QRS detector and waveform limit locator, available as part of the PhysioToolkit~\cite{waveboudaries,evaluationwaveformlimits}. The authors are Pablo Laguna, Raimon Jané, Eudald Bogatell, and David Vigo Anglada.
\end{description}
All the programs listed above act on a single ECG channel; we did not find publicly available methods using more than one channel.
\subsection{Statistics used to describe the results of QRS detectors and the beat classifier}
For QRS detectors we used the following statistics:
$$\mbox{Sensitivity}=\frac{TP}{TP+FN}\, ,\,\,\,\,\mbox{Positive Predictivity}=\frac{TP}{TP+FP}$$
$$\mbox{Error rate}=\frac{FP+FN}{TP+FN}\mbox{, \cite{Eplimitedqrs}}$$
where $TP$ is the number of correctly detected beats, $FP$ is the number of false detections and $FN$ is the number of missed beats.
For the beat classifier, we use, as in~\cite{PatientAdaptable}, the Sensitivity, Positive Predictivity,
$$\mbox{False Positive Rate}=\frac{FP}{TN+FP}$$
$$\mbox{and}\,\, \,\,\, \, \mbox{Classification Rate}=\frac{TN+TP}{TN+TP+FN+FP}$$
where $TP$, $TN$, $FP$ and $FN$ are defined as follows:
\begin{itemize}
\item $\mathbf{TP}$ is the number of beats correctly classified as VEB.
\item $\mathbf{TN}$ is the number of non VEBs correctly classified.
\item $\mathbf{FP}$ is the number of beats wrongly classified as VEB, excluding fusion and unclassifiable beats.
\item $\mathbf{FN}$ is the number of true VEB not classified as such.
\end{itemize}
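For convenience, these statistics can be computed as follows (a straightforward transcription of the formulas above; the function names are ours):

```python
def detector_stats(tp, fp, fn):
    """Statistics used for the QRS detectors."""
    return {"sensitivity": tp / (tp + fn),
            "positive_predictivity": tp / (tp + fp),
            "error_rate": (fp + fn) / (tp + fn)}

def classifier_stats(tp, tn, fp, fn):
    """Statistics used for the VEB beat classifier."""
    return {"sensitivity": tp / (tp + fn),
            "positive_predictivity": tp / (tp + fp),
            "false_positive_rate": fp / (tn + fp),
            "classification_rate": (tn + tp) / (tn + tp + fn + fp)}
```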
\subsection{Adding noise to an existing ECG}
In most experiments, to test the behavior of our denoising method, we start with a 'clean' ECG and add noise to it.
For this we use the program {\bf nst}, written by George Moody~\cite{Moodynoisestress}.
The standard definition of signal to noise ratio (SNR), in decibels, is:
$$SNR=10\log_{10}\frac{S}{N}$$
where $S$ and $N$ are the powers of the signal and of the noise. We used slightly different values for $S$ and $N$, following the method used by the program 'nst'. Next we quote the 'nst' man page~\cite{nstmanpage}:
\begin{quotation} \em
`` A measurement based on mean
squared amplitude, for exam\-ple, will be proportional to the square of
the heart rate. Such a measurement bears little relationship to a
detector's ability to locate QRS complexes, which is typically related
to the size of the QRS complex. A less significant problem is that
unweighted measurements of noise power are likely to overestimate the
importance of very low frequency noise, which is both common and (usually) not troublesome for detectors. In view of these issues, nst
defines S as a function of the QRS amplitude, and N as a frequency-weighted noise power measurement.''
\end{quotation}
More details on the way 'nst' computes SNR can be found on the man page of 'nst'.
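For reference, the unweighted textbook definition reads as follows in code; 'nst' replaces $S$ and $N$ by the weighted quantities quoted above, so its SNR figures differ from this plain version:

```python
import math

def snr_db(signal, noise):
    """Unweighted SNR in decibels: 10*log10(S/N), with S and N the
    mean squared amplitudes of the signal and of the noise."""
    s = sum(v * v for v in signal) / len(signal)
    n = sum(v * v for v in noise) / len(noise)
    return 10.0 * math.log10(s / n)
```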
\section{Experiments}
\subsection{MIT-BIH Arrhythmia Database}
\label{main_exper}
\begin{figure*}[!ht]
\centering
\includegraphics[height=8cm, width=13cm]{figure3.pdf}
\caption{Reconstructing channel 1 from record 103 using only that same channel, SNR=0 db. At the bottom, the noisy signal; in the middle, the denoised signal;
and at the top, the clean signal.}
\label{fig:rec103}
\end{figure*}
We added noise to both channels in all the 48 records from Physionet MIT-BIH Arrhythmia Database~\cite{mitbih-Arrhythmia, PhysioNet}, and applied our method to reconstruct the first channel.
As is well known~\cite{Moodynoisestress, similar}, of the three types of noise (baseline wander, muscle artifact and electrode motion artifact), it is the last that creates the most difficulties for ECG analysis programs. We contaminated both channels of each record with electrode motion artifact noise, using the corresponding noise record from the MIT-BIH Noise Stress Test Database~\cite{Moodynoisestress}.
In all but one record, both channels were used as input to reconstruct the first channel. In record 103, the noise in the second channel is already very high; therefore, we chose to use only the target channel in the reconstruction.
The clean record and noisy versions of the same record were used as input for training the neural network. We always used the clean target channel for the output.
The default behavior of the program 'nst' was followed to add noise to the records used in the tests: starting after the first five minutes from the beginning of each record, we added noise for two minutes, followed by two minutes without noise, repeating the process until the end of the record.
In order to train the neural network, we used all the time segments where noise was not added at test time. In this way, the portions of noise used during training and testing do not overlap: we kept the neural network from learning the noise used in the test. The amount of noise used for testing corresponds to SNR values of 24 db, 18 db, 12 db, 6 db, 0 db and -6 db.
\begin{table*}[!ht]
\centering
\caption{Reconstruction Error: for each level of noise, we report the value of RMSE(denoised signal)/RMSE(noisy signal) over the 48 records of MIT-BIH Arrhythmia Database}
\begin{tabular}[l]{c c c c c c c} \cline{1-7}
SNR&24db&18db&12db&6db&0db&-6db\\
&1.014&0.507&0.257&0.133&0.071&0.041\\ \cline{1-7}
\end{tabular}
\label{RMSE}
\end{table*}
In table~\ref{RMSE} we report the fraction of the noisy signal's RMSE that remains in the reconstructed signal: RMSE(denoised signal)/RMSE(noisy signal). As the table shows, there is no visible advantage, in terms of RMSE, in applying the denoising method for low noise (SNR=24 db); in fact, situations like the one in figure~\ref{rec202} introduce high values of RMSE, because the method is not intended to learn to reproduce the noise of the original signal, but just its main pattern. When the amount of added noise increases, the errors in the reconstructed signal due to noise in the original signal lose their relative importance: for higher values of noise we notice an important reduction in the RMSE of the reconstructed signal. As supplementary material to this article, we present the detailed results for each record.
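The reported quantity can be computed as follows (our notation; \texttt{clean} is the original signal before adding noise):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two equally long signals."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def rmse_fraction(clean, noisy, denoised):
    """RMSE(denoised)/RMSE(noisy), both relative to the clean signal."""
    return rmse(denoised, clean) / rmse(noisy, clean)
```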
\begin{table*}[!ht]
\centering
\caption{Results of 'gqrs' applied to the first channel, corrupted with noise and denoised using our method (MIT-BIH Arrhythmia, 48 records).}
\begin{tabular}[l]{c c c c c}
\multirow{2}{*}{SNR}& \multirow{2}{*}{channel 1}&\multirow{2}{*}{sensitivity}& positive& error\\
& & &predictivity&rate \\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{24 db}}
& noisy & 0.9973 & 0.9963& 0.0064\\
&\bf denoised&0.9963 &0.9992 & 0.0045\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{18 db}}
& noisy & 0.9973 & 0.9834 & 0.0195 \\
&\bf denoised& 0.9964 & 0.9991& 0.0045\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{12 db}}
& noisy & 0.9957 & 0.9054& 0.1084\\
&\bf denoised& 0.9959 & 0.9991& 0.0049\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{6 db}}
& noisy & 0.9881 & 0.7605& 0.3231\\
&\bf denoised& 0.9941 & 0.9989& 0.0071\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{0 db}}
& noisy & 0.9680 & 0.6471& 0.5599 \\
&\bf denoised& 0.9826 & 0.9922& 0.0251 \\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{-6 db}}
& noisy & 0.9523 & 0.5806&0.7357\\
&\bf denoised&0.9470 & 0.9466 & 0.1064\\
\cline{1-5}
\end{tabular}
\label{tablegqrsmitbih}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Results of 'gqrs' with post-processor 'gqpost' applied to the first channel, corrupted with noise and denoised using our method (MIT-BIH Arrhythmia, 48 records).}
\begin{tabular}[l]{c c c c c}
\multirow{2}{*}{SNR}& \multirow{2}{*}{channel 1} &\multirow{2}{*}{sensitivity}& positive&error\\
& & &predictivity&rate \\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{24 db}}
& noisy & 0.9970 & 0.9970 & 0.0060\\
&\bf denoised& 0.9961 & 0.9994& 0.0045\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{18 db}}
& noisy & 0.9969 & 0.9868 & 0.0165\\
&\bf denoised& 0.9960 & 0.9993 & 0.0047\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{12 db}}
& noisy & 0.9921 & 0.9318 & 0.0806\\
&\bf denoised& 0.9957 & 0.9992 & 0.0050\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{6 db}}
& noisy & 0.9659 & 0.8282 & 0.2345\\
&\bf denoised& 0.9942 & 0.9988& 0.0069\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{0 db}}
& noisy & 0.9121 & 0.7037& 0.4720\\
&\bf denoised& 0.9838 & 0.9947&0.0215\\
\cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{-6 db}}
& noisy & 0.8767 & 0.6301 & 0.6380\\
&\bf denoised&0.9493 & 0.9568 & 0.0935 \\
\cline{1-5}
\end{tabular}
\label{tablegqrsgqpmitbih}
\end{table*}
We applied the programs 'gqrs' and 'EPlimited' to the first channel, in noisy versions of each record and in the reconstructed signal, to verify whether, after applying our method, there were significant improvements in the performance of those programs.
The results are reported in tables~\ref{tablegqrsmitbih}, \ref{tablegqrsgqpmitbih}, \ref{tableEPLimitedmitbihqrs} and \ref{mitbihbeatclassification}. The first column indicates the SNR of the resulting ECG, the same value for both channels, after corrupting it with noise. The second column refers to the signal used when applying the program to the first channel: {\bf denoised} means the noisy first channel reconstructed using our method. The tables present the sensitivity, positive predictivity and error rate, in the case of the QRS detectors, and the VEB sensitivity, positive predictivity, false positive rate and classification rate, for the 'EP Limited' beat classification. We used the following programs to report the results: 'bxb', from the WFDB software~\cite{standards}, in the case of 'gqrs', and 'bxbep', in the case of 'EPLimited'.
The numbers refer to all 48 records from the MIT-BIH Arrhythmia Database, 91225 beats, of which 6101 are VEBs: we started the test after the first 5 minutes and stopped one second before the end. In the case of EP Limited we had to start the test one second later, because we could not configure 'bxbep' to behave in another way.
\begin{table*}[!ht]
\centering
\caption{Results of EP Limited (QRS detection) applied to the first channel, corrupted with noise and denoised using our method (MIT-BIH Arrhythmia, 48 records).}
\begin{tabular}[l]{c c c c c}
\multirow{2}{*}{SNR}& \multirow{2}{*}{channel 1}&\multirow{2}{*}{sensitivity}& positive& error \\
& & & predictivity&rate\\ \cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{24 db}}
& noisy & 0.9977 & 0.9981& 0.0042 \\
&\bf denoised&0.9961 &0.9996 & 0.0043\\ \cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{18 db}}
& noisy & 0.9977 & 0.9945& 0.0079 \\
&\bf denoised&0.9957 & 0.9995& 0.0048\\ \cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{12 db}}
& noisy & 0.9969 &0.9342& 0.0733 \\
&\bf denoised&0.9955 & 0.9995& 0.0050\\ \cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{6 db}}
& noisy& 0.9857 &0.7943& 0.2696 \\
&\bf denoised&0.9944 &0.9993&0.0063\\ \cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{0 db}}
& noisy & 0.9432& 0.7110& 0.4401\\
&\bf denoised&0.9865 & 0.9951 &0.0184\\ \cline{1-5}
\multicolumn{1}{c}{\multirow{2}{*}{-6 db}}
& noisy& 0.8557& 0.6568&0.5915 \\
&\bf denoised&0.9531 &0.9493&0.0977\\ \cline{1-5}
\end{tabular}
\label{tableEPLimitedmitbihqrs}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Beat classification as Normal or Ventricular Ectopic Beat (VEB): results of EP Limited applied to the first channel, corrupted with noise and denoised using our method (MIT-BIH Arrhythmia, 48 records).}
\begin{tabular}[l]{cccccc}
\multirow{2}{*}{SNR}& \multirow{2}{*}{signal}& VEB & VEB positive& VEB false & classification \\
& &sensitivity & predictivity & positive rate& rate \\ \cline{1-6}
\multicolumn{1}{c}{\multirow{2}{*}{24 db}}
& noisy&0.9147 & 0.9589 & 0.0028 & 0.9916\\
&denoised &0.9142 & 0.9815& 0.0013& 0.9930 \\ \cline{1-6}
\multicolumn{1}{c}{\multirow{2}{*}{18 db}}
& noisy & 0.8873 & 0.9260 & 0.0051&0.9876\\
&denoised& 0.9089 & 0.9759&0.0016&0.9923\\ \cline{1-6}
\multicolumn{1}{c}{\multirow{2}{*}{12 db}}
& noisy & 0.8190 &0.5935 & 0.0380& 0.9530\\
&denoised&0.9032 & 0.9785&0.0014&0.9921\\ \cline{1-6}
\multicolumn{1}{c}{\multirow{2}{*}{6 db}}
& noisy & 0.6977&0.2369&0.1291&0.8615\\
&denoised&0.8901 & 0.9778&0.0015&0.9912\\ \cline{1-6}
\multicolumn{1}{c}{\multirow{2}{*}{0 db}}
& noisy & 0.6083& 0.1308& 0.2160&0.7751\\
&denoised&0.8567 & 0.9513& 0.0032&0.9873\\ \cline{1-6}
\multicolumn{1}{c}{\multirow{2}{*}{-6 db}}
& noisy & 0.5720& 0.0939& 0.2994&0.6940\\
&denoised&0.7663 &0.7689 &0.0166& 0.9688\\ \cline{1-6}
\end{tabular}
\label{mitbihbeatclassification}
\end{table*}
For QRS detectors, after applying our denoising procedure there is always an improvement in the positive predictivity of the tested programs but, for high values of SNR, a small reduction in sensitivity: above 12 db for 'gqrs' and above 6 db for EP Limited. Besides some ectopic beats not being well reconstructed, the reduction in sensitivity is due to a smaller amplitude of the QRS complex in the reconstructed signal; this occurs in the first beat of figure~\ref{fig:rec103}. We could improve the sensitivity on the reconstructed signal, at the cost of a reduction in positive predictivity, by multiplying the reconstructed signal by a factor greater than 1.0, but we chose not to do so.
For beat classification there is always a clear improvement after using the proposed method.
As supplementary material to this article, we present the detailed results for each program and record.
\subsection{Record mgh124}
The MGH/MF Waveform Database~\cite{mghdatabase, PhysioNet} is a collection of electronic recordings of hemodynamic and electrocardiographic waveforms. Typical records include three ECG leads. In the case of record mgh124, the first two ECG channels are sometimes strongly contaminated with noise, while the third ECG channel maintains a relatively good quality; therefore we have reliable QRS annotations. Using record mgh124, we tested our denoising method on a real ECG, without having to artificially add noise. In this case we reconstructed the second ECG channel, using only that same channel as input: we trained a neural network to produce a clean segment of the second channel given a corrupted version of the same segment. The clean parts of channel 2 were used to obtain training data for the neural network.
\begin{figure*}[!ht]
\centering
\includegraphics[height=8cm, width=13cm]{figure4.pdf}
\caption{Reconstruction of the noisy channel 2 from record mgh124: channel 3 is shown as a reference; it is not used for denoising. We notice that although the first VEB is recognized as such, the next two are not. This ECG segment starts at sample 853318.}
\label{fig:recmgh124}
\end{figure*}
Tables~\ref{tablegqrsmgh124} and~\ref{tableEPLmgh124} show the results of 'gqrs' and 'EP Limited' on the original second ECG channel and on the reconstructed version. The total number of beats during testing time is 8573, of which 458 are classified as VEBs.
\begin{table*}[!ht]
\centering
\caption{Results of 'gqrs' applied to the second ECG channel, corrupted with noise and denoised using our method (record mgh124).}
\begin{tabular}[l]{ c c c c c}
\cline{1-5}
\multirow{2}{*}{lead 2}& detection &error&\multirow{2}{*}{sensitivity}& positive\\
&errors &rate& &predictivity \\
\cline{1-5}
noisy & 1344 & 0.1568 & 0.8506 & 0.9914\\
\bf denoised& 890& 0.1038& 0.8971 &0.9990\\
\cline{1-5}
\end{tabular}
\label{tablegqrsmgh124}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{EP Limited: QRS detection and beat classification as Normal or Ventricular Ectopic Beat (VEB), results on lead 2, corrupted with noise and denoised using our method (record mgh124).}
\begin{tabular}[l]{ccccc}
\multirow{2}{*}{signal}& QRS detection & QRS error& \multirow{2}{*}{ QRS sensitivity}& QRS positive \\
&errors &rate& &predictivity\\ \cline{1-5}
noisy & 553&0.0645 &0.9472& 0.9878\\
\bf denoised& 463 &0.0540& 0.9470 &0.9989\\ \cline{1-5}
& & &\\ \cline{1-4}
\multirow{2}{*}{signal}& \multirow{2}{*}{VEB sensitivity}& VEB positive& VEB false\\
& & predictivity & positive rate \\\cline{1-4}
noisy & 0.2314 & 0.1782 & 0.0631\\
\bf denoised& 0.3668 & 0.8750 & 0.0031\\ \cline{1-4}
\end{tabular}
\label{tableEPLmgh124}
\end{table*}
\subsection{Record sele0106 from the QT database}
Determination of the peaks and limits of ECG waves is very important for ECG analysis: they are necessary for clinically relevant ECG measurements, namely, the PQ interval, QRS duration, ST segment and QT interval.
The PhysioNet QT database was created to test QT detection algorithms~\cite{QTdatabase}. Each record contains at least 30 beats with manual annotations identifying the beginning, peak and end of the P-wave, the beginning and end of the QRS-complex, and the peak and end of the T-wave.
We used the program 'ecgpuwave' to show that, in some situations, we can improve automatic ECG delineation by using a clean channel to reconstruct a very noisy one.
Typically, the accuracy of ecgpuwave in detecting the limits or the peak of an ECG characteristic wave is better in one of the channels. The best channel for locating one of those reference points changes with the different characteristic points and also from record to record.
Table~\ref{sele0106:clean leads} shows the results of
ecgpuwave, on the two channels, when it locates the P wave peak, the P wave end, the QRS beginning and the QRS ending. We use the first annotator as reference. We can conclude that the error is smaller when ecgpuwave is applied to the second channel.
\begin{table*}[!ht]
\centering
\caption{Accuracy of ecgpuwave, in ms, in locating some characteristic points of record sele0106 (QT database): comparing results for the two leads.}
\begin{tabular}[l]{cccc}
\cline{1-4}
reference point& channel& mean error & std error\\ \cline{1-4}
\multirow{2}{*}{P peak} & 1& 9.33 & 4.54\\
& 2 & 2.93 & 2.72\\ \cline{1-4}
\multirow{2}{*}{P off} & 1 & 15.73 & 10.06 \\
& 2 & 7.47 & 9.28\\ \cline{1-4}
\multirow{2}{*}{QRS on} & 1 & 18.80 & 6.62\\
& 2& 7.87 & 4.32 \\ \cline{1-4}
\multirow{2}{*}{QRS off} & 1& 13.33 & 4.17\\
& 2 & 4.67 & 4.27 \\ \cline{1-4}
\cline{1-4}
\end{tabular}
\label{sele0106:clean leads}
\end{table*}
At this point we consider an easily imaginable situation, in which the second channel is so highly corrupted with noise that it is better to use only the first channel for the reconstruction of the second channel. In this case we trained a neural network to produce a segment of channel 2 when it receives the corresponding segment of channel 1 as input.
\begin{figure*}[!ht]
\centering
\includegraphics[height=6cm, width=12cm]{figure5.pdf}
\caption{Reconstructing channel 2 from record sele0106 (QT database) using only channel 1.}
\label{fig:recsele0106}
\end{figure*}
We followed this procedure and applied ecgpuwave to the reconstructed channel~2. The results are in table~\ref{sele0106:reclead2}.
One can see that we still get better results using channel~2 reconstructed from channel 1 than when applying ecgpuwave to the clean channel~1.
\begin{table*}[!ht]
\centering
\caption{Comparing the results, in ms, of ecgpuwave for channel 1 and for the reconstructed channel 2}
\begin{tabular}[l]{cccc}
\cline{1-4}
reference point& channel& mean error & std error\\ \cline{1-4}
\multirow{2}{*}{P peak} & 1& 9.33 & 4.54\\
& reconstructed channel 2 & 2.80 & 2.76\\ \cline{1-4}
\multirow{2}{*}{P off} & 1 & 15.73 & 10.06 \\
& reconstructed channel 2 & 7.20 & 6.54\\ \cline{1-4}
\multirow{2}{*}{QRS on} & 1 & 18.80 & 6.62\\
& reconstructed channel 2& 7.33 & 4.63 \\ \cline{1-4}
\multirow{2}{*}{QRS off} & 1& 13.33 & 4.17\\
& reconstructed channel 2 & 6.40 & 4.21 \\ \cline{1-4}
\cline{1-4}
\end{tabular}
\label{sele0106:reclead2}
\end{table*}
\section{Discussion of results and conclusions}
By adding noise to existing records, we carried out extensive experiments on all the records of the MIT-BIH Arrhythmia Database. In the presence of high noise, SNR equal to 12 db and lower, the programs we tested showed much better performance when we applied our denoising method to the ECGs. For low noise, SNR above 12 db, after applying our method QRS detectors show a slight reduction in sensitivity, although there is an improvement in positive predictivity.
The experiments with records mgh124 and sele0106, without artificially adding noise in the test, confirm the advantages of using our method on a real ECG, a Holter record, for example. The experiment with record sele0106 also shows that the result of reconstructing a noisy channel can be exceptionally good when clean channels are available.
\bibliographystyle{plain}
\section{Preliminaries}\label{sec:Setting}
\input{Generalities}
\section{On the continuity}\label{sec:RegularValues}
\input{Regular_Values}
\section{On the smoothness}\label{sec:Consequences}
\input{Consequences}
\subsection{The end-point map}In what follows we fix a point $x_0\in M$ and a time $T>0$.
\begin{defi}[end-point map]
The \emph{end-point map} at time $T$ is the map
\begin{equation}\label{eq:end-point}
\Endp{}:\Omega_{x_0}^T \to M,\quad \Endp{(u)}=x_u(T),
\end{equation}
where $x_u(\cdot)$ is the admissible trajectory driven by the control $u$.
\end{defi}
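Although we work at this level of generality, a concrete feeling for $\Endp{}$ can be obtained numerically; the following Euler scheme (our illustration, not part of the theory above) approximates the end-point map of a control-affine system $\dot x = X_0(x)+\sum_i u_i X_i(x)$:

```python
import numpy as np

def end_point(u, x0, X0, Xs, T):
    """Euler approximation of the end-point map E_{x0}^T(u).

    u  : array of shape (n_steps, d), sampling the control on [0, T]
    X0 : drift vector field, mapping x to a vector
    Xs : list of d control vector fields, each mapping x to a vector
    """
    n, d = u.shape
    dt = T / n
    x = np.asarray(x0, float)
    for k in range(n):
        v = X0(x) + sum(u[k, i] * Xs[i](x) for i in range(d))
        x = x + dt * v
    return x
```

For a single integrator ($X_0=0$, $X_1$ a constant field) this recovers $x_0+\int_0^T u\,dt$ up to discretization error.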
The end-point map is smooth on $\Omega_{x_{0}}^{T}\subset L^{2}([0,T],\mathbb{R}^{d})$. The computation of its Fr\'echet differential is classical and can be found for example in \cite{nostrolibro,rifbook,Tre00}:
\begin{prop}
The differential $\dEndp{u}:L^{2}([0,T],\mathbb{R}^{d})\to T_{x_{u}(T)}M$ of the end-point map at $u\in\Omega_{x_0}^T$ is given by the formula:
\begin{equation}\label{eq:DiffEndp}
\dEndp{u}(v)=\int_0^T\sum_{i=1}^d v_i(s)\left(\Flow{s,T}{u}\right)_*X_i(x_u(s))ds.
\end{equation}
\end{prop}
Let us consider a sequence of admissible controls $\{u_n\}_{n\in\mathbb{N}}$, which weakly converges to some element $u\in
L^2([0,T],\mathbb{R}^d) $. Then the sequence $\{u_n\}_{n\in\mathbb{N}}$ is bounded in $L^{2}$ and, thanks to our assumption (H2), there exists a compact set $K_{T}$ such that $x_{u_n}(t)\in K_{T}$ for all $n\in \mathbb{N}$ and $t\in [0,T]$.
This yields that the family of trajectories $\{x_{u_n}(\cdot)\}_{n\in\mathbb{N}}$ is uniformly bounded, and from here it is classical to deduce that the weak limit $u$ is an admissible control, and that $x_u(\cdot)=\lim_{n\to\infty}x_{u_n}(\cdot)$ (in the uniform topology) is its associated trajectory (see for example \cite{trelatbook}).
This proves that the end-point map $\Endp{}$ is \emph{weakly continuous}. Indeed, one can prove that the same holds true for its differential $\dEndp{u}$. More precisely:
if $\{u_n\}_{n\in\mathbb{N}}$ is a sequence of admissible controls which weakly converges in $L^2([0,T],\mathbb{R}^d)$ to $u$ (which is admissible by the previous discussion), we have both that
\begin{equation}\label{eq:Convergence}
\lim_{n\to\infty}\Endp{(u_n)}=\Endp{(u)}\quad\textrm{ and }
\quad \lim_{n\to\infty}\dEndp{u_n}=\dEndp{u},
\end{equation} and the last convergence is in the (strong) operator norm (see \cite{Tre00}).
\begin{remark}
There are other possible assumptions ensuring that the weak limit of a sequence of admissible controls is again an admissible control; for example, as suggested in \cite{CanRif08}, one could assume a sublinear growth condition on the vector fields $X_0,\dotso, X_d$. In this case the uniform bound on the trajectories (equivalent to (H2)) follows as a consequence of the Gronwall inequality and the observation that a weakly convergent sequence in $L^2$ is necessarily bounded.
\end{remark}
\begin{defi}[Attainable set]\label{def:AttSet}
For a fixed final time $T>0$, we denote by $A_{x_0}^T$ the image of the end-point map at time $T$, and we call it the \emph{attainable set} (from the point $x_{0}$).
\end{defi}
In general the inclusion $A_{x_0}^T\subset M$ can be proper, that is, the end-point map $\Endp{}$ may fail to be surjective on $M$; nevertheless, the weak H\"ormander condition \eqref{eq:WeakHormander} implies that for every initial point $x_0$ one has $\Int{A_{x_0}^T}\neq \emptyset$ \cite[Ch. 3, Thm.\ 3]{jurdjevicbook}.
\subsection{Value function and optimal trajectories}\label{s:limit}
Let $Q:M\to \mathbb{R}$ be a smooth function, which in what follows plays the role of a potential; if we introduce the Tonelli Lagrangian
\begin{equation}\label{eq:TonLag}
L:M\times \mathbb{R}^d\to \mathbb{R}, \qquad L(x,u)=\frac{1}{2}\left(\sum_{i=1}^d u_i^2-Q(x)\right),
\end{equation}
then the \emph{cost} $C_T:\Omega_{x_0}^T\to \mathbb{R}$ is written as:
\begin{equation}\label{eq:Cost}
\Costu{(u)}=\int_0^T L(x_u(t),u(t))dt=\frac{1}{2}\int_0^T\left(\sum_{i=1}^d u_i(t)^2-Q(x_u(t))\right)dt.
\end{equation}
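Numerically, with the trajectory sampled at the grid points of a discretized control, the cost can be approximated by a Riemann sum (our illustration; \texttt{xs[k]} stands for $x_u(t_k)$):

```python
import numpy as np

def cost(u, xs, Q, T):
    """Riemann-sum approximation of
    C_T(u) = (1/2) * int_0^T ( |u(t)|^2 - Q(x_u(t)) ) dt."""
    n = len(u)
    dt = T / n
    total = 0.0
    for k in range(n):
        total += (float(np.dot(u[k], u[k])) - Q(xs[k])) * dt
    return 0.5 * total
```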
The differential $d_u\Costu{}$ of the cost can be computed similarly to that of the end-point map, and is given, for every $v\in L^2([0,T],\mathbb{R}^d) $, by the formula
\begin{equation}\label{eq:diffCost}
d_u\Costu{(v)}=\int_0^T\langle u(t),v(t)\rangle dt-\frac{1}{2}\int_0^TQ'(x_u(t))\left(\int_0^t\sum_{i=1}^{d}v_i(s)(\Flow{s,t}{u})_*X_i(x_u(s))ds\right)dt,
\end{equation}
that is obtained by writing $x_u(t)=E_{x_0}^{t}(u)$ and applying \eqref{eq:DiffEndp}.
\smallskip
Fix two points $x_0$ and $x$ in $M$. The problem of describing optimal trajectories steering $x_0$ to $x$ in time $T$ can be naturally reformulated in the following way: introducing the value function $\Cost{}:M\to\mathbb{R}$ by setting
\begin{equation}\label{eq:ValueFunct}
\Cost{(x)}:=\inf\left\{\Costu{(u)}\mid u\in \Omega_{x_0}^T \cap \left(\Endp\right)^{-1}{(x)}\right\},
\end{equation}
with the agreement that $\Cost{(x)}=+\infty$ if the preimage $\left(\Endp{}\right)^{-1}(x)$ is empty,
then, for any fixed $x\in M$, the \emph{optimal control problem} consists in looking for elements $u\in L^2([0,T],\mathbb{R}^d) $ realizing the infimum in \eqref{eq:ValueFunct}. Accordingly, from now on we call \emph{optimal control} any admissible control $u$ which solves the optimal control problem.
In this paper we will always concentrate on the case that the final point $x$ of an admissible trajectory belongs to the interior of the attainable set $A_{x_0}^T$. Indeed, it is a general fact that $\Int{A_{x_0}^T}$ is dense in $A_{x_0}^T$ \cite{agrachevbook,jurdjevicbook}, and the weak H\"ormander condition ensures that $\Int{A_{x_0}^T}$ is non-empty; moreover, for every point $x\in \Int{A_{x_0}^T}$ we trivially have $\Cost{(x)}<+\infty$, since by definition there exists at least one admissible control $v$ steering $x_0$ to $x$.
Existence of minimizers under our main assumptions (H1)-(H3) follows from classical arguments.
\begin{prop}[Existence of minimizers]\label{prop:optimal}
Let $x\in A_{x_0}^T$. Then there exists an optimal control $u\in \Omega_{x_0}^T$ satisfying:
\begin{equation}
\Endp{(u)}=x,\quad\textrm{and}\quad\Costu{(u)}=\Cost{(x)}.
\end{equation}
\end{prop}
\begin{remark}
The assumptions (H2)-(H3) play a crucial role for the existence of optimal controls. An equivalent approach would be to work directly inside a given compact set (see \cite{MemAMS}), or to take $M$ itself a compact manifold. In some specific cases, as in the classical case of the harmonic oscillator, one is able to integrate Hamilton's equations directly (cf.\ Section \ref{s:ham}), and the existence of optimal trajectories can be proved with ad hoc arguments.
\end{remark}
As already pointed out in the introduction, one cannot expect global continuity of the value function. Nevertheless, it is well known that, under our assumptions, the following holds.
\begin{prop}\label{prop:Semicontinuous}
The map $\Cost:A_{x_0}^T\to \mathbb{R}$ is lower semicontinuous.
\end{prop}
Proofs of Propositions \ref{prop:optimal} and \ref{prop:Semicontinuous} are classical and follow from standard arguments in the literature; hence they are omitted and left to the reader.
\subsection{Lagrange multipliers rule}
In this section we briefly recall the classical necessary condition satisfied by optimal controls $u$ realizing the infimum in \eqref{eq:ValueFunct}; it is indeed a restatement of the classical Lagrange multiplier rule (see \cite{agrachevbook,nostrolibro,pontryaginbook}).
\begin{prop}\label{prop:Lagrange}
Let $u\in L^2([0,T],\mathbb{R}^d) $ be an optimal control with $x=\Endp{(u)}$. Then at least one of the following statements is true:
\begin{itemize}
\item [a)] $\exists\, \lambda_T\in T^*_xM$ such that $\lambda_T \dEndp{u}=d_u\Costu{}$,
\item [b)] $\exists\, \lambda_T \in T^*_xM$, with $\lambda_T\neq 0$, such that $\lambda_T\dEndp{u}=0$.
\end{itemize}
\end{prop}
Here $\lambda_T \dEndp{u}:L^{2}([0,T])\to \mathbb{R}$ denotes the composition of the linear maps $\dEndp{u}:L^{2}([0,T])\to T_{x}M$ and $\lambda_{T}:T_{x}M\to \mathbb{R}$.
A control $u$ satisfying the necessary conditions for optimality stated in Proposition \ref{prop:Lagrange} is said to be \emph{normal} in case (a) and \emph{abnormal} in case (b); moreover, directly from the definition we see that $\dEndp{u}$ is not surjective in the abnormal case. We stress again that the two possibilities are not mutually exclusive, and we define accordingly a control $u$ to be \emph{strictly normal} (resp.\ \emph{strictly abnormal}) if it is normal but not abnormal (resp.\ abnormal but not normal). Slightly abusing notation, we extend this terminology to the associated optimal trajectories $t\mapsto x_u(t)$.
\subsection{Normal extremals and exponential map} \label{s:ham}
Let us denote by $\pi:T^*M\to M$ the canonical projection of the cotangent bundle, and by $\langle \lambda,v\rangle$ the duality pairing between a covector $\lambda\in T^*_xM$ and a vector $v\in T_xM$. In canonical coordinates $(p,x)$ on the cotangent bundle, the Liouville form reads $s=\sum_{i=1}^mp_idx_i$ and the standard symplectic form becomes $\sigma=ds=\sum_{i=1}^m dp_i\wedge dx_i$. We denote by $\ovr{h}$ the Hamiltonian vector field associated with a smooth function $h:T^*M\to \mathbb{R}$, which in these coordinates reads:
\begin{equation}\label{eq:SymplLift}
\ovr{h}=\sum_{i=1}^m\frac{\partial h}{\partial p_i}\frac{\partial}{\partial x_i}-\frac{\partial h}{\partial x_i}\frac{\partial}{\partial p_i}.
\end{equation}
The Pontryagin Maximum Principle \cite{pontryaginbook,agrachevbook} tells us that candidate optimal trajectories are projections of extremals, which are integral curves of the constrained Hamiltonian system:
\begin{equation}
\dot{x}(t)=\frac{\partial\mc{H}}{\partial p}(u(t),\nu,p(t),x(t)),\quad \dot{p}(t)=-\frac{\partial\mc{H}}{\partial x}(u(t),\nu,p(t),x(t)),\quad 0=\frac{\partial\mc{H}}{\partial u}(u(t),\nu,p(t),x(t)),
\end{equation}
where the (control-dependent) Hamiltonian $\mc{H}:\mathbb{R}^d\times (-\infty,0]\times T^*M\to\mathbb{R}$, associated with the system \eqref{eq:contrsyst0}, is defined by:
\begin{equation}\label{eq:Hamiltonian}
\mc{H}(u,\nu,p,x)=\langle p,X_0(x)\rangle+\sum_{i=1}^d u_i\langle p,X_i(x)\rangle+\frac{\nu}{2}\sum_{i=1}^d u_i^2-\frac{\nu}{2}Q(x).
\end{equation}
In particular, the non-positive real number $\nu$ is constant along extremals; recalling the result of Proposition \ref{prop:Lagrange}, there holds either $(p(T),\nu)=(\lambda_T,0)$ in the case of abnormal extremals, or $(p(T),\nu)=(\lambda_T,-1)$ for the normal ones. Moreover, we see that under the previous normalizations, the optimal control $u(t)$ along normal extremals can be recovered from the equality:
\begin{equation}\label{eq:NormControl}
u_i(t)=\langle p(t),X_i(x(t))\rangle,\qquad \textrm{for }i=1,\dotso,d.
\end{equation}
Normal extremals are therefore solutions to the differential system:
\begin{equation}
\dot{x}(t)=\frac{\partial H}{\partial p}(p(t),x(t)),\quad \dot{p}(t)=-\frac{\partial H}{\partial x}(p(t),x(t)),
\end{equation}
where the Hamiltonian $H$ has the expression:
\begin{equation}\label{eq:Hamiltonian2}
H(p,x)=\langle p,X_0(x)\rangle+\frac{1}{2}\sum_{i=1}^d \langle p,X_i(x)\rangle^2+\frac{1}{2}Q(x).
\end{equation}
In particular, being the solution to a smooth autonomous system of differential equations, the pair $(x(t),p(t))$ is smooth, which implies by \eqref{eq:NormControl} that the control $u_i(t)=\langle p(t),X_i(x(t))\rangle$ associated with normal trajectories is smooth as well. It is well known that, under our assumptions, small pieces of normal trajectories are optimal among all the admissible curves that connect their end-points (see for instance \cite{agrachevbook}): that is, if $x_1=x_u(t_1)$ and $x_2=x_u(t_2)$ are sufficiently close points on the normal trajectory $x_u(\cdot)$, then the cost-minimizing admissible trajectory between $x_1$ and $x_{2}$ that solves \eqref{eq:ValueFunct} is precisely $x_u(\cdot)$.
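As a simple example, consider on $M=\mathbb{R}^2$ the double integrator $\dot{x}_1=x_2$, $\dot{x}_2=u$, that is $X_0=x_2\,\partial_{x_1}$ and $X_1=\partial_{x_2}$, with zero potential $Q=0$; assumptions (H1)-(H3) are readily checked. The Hamiltonian \eqref{eq:Hamiltonian2} becomes
\begin{equation*}
H(p,x)=p_1x_2+\frac{1}{2}p_2^2,
\end{equation*}
and the associated Hamiltonian system gives $\dot{p}_1=0$ and $\dot{p}_2=-p_1$, so that $p_1(t)\equiv a$ and $p_2(t)=b-at$ for some constants $a,b\in\mathbb{R}$. By \eqref{eq:NormControl}, the normal optimal controls are the affine functions $u(t)=p_2(t)=b-at$, in accordance with the smoothness just discussed.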
\begin{defi}[Exponential map]\label{def:ExponentialMap}
The exponential map $\mc{E}$ with base point $x_0$ is defined as
\begin{equation}\label{eq:ExpMap}
\mc{E}_{x_0}(\cdot,\cdot):[0,T]\times T_{x_0}^*M\to M,\quad \mc{E}_{x_0}(s,\lambda)=\pi(e^{s\ovr{H}}(\lambda)).
\end{equation}
When the first argument is fixed, we employ the notation $\mc{E}_{x_0}^s:T_{x_0}^*M\to M$ to denote the exponential map with base point $x_0$ at time $s$; that is to say, we set $\mc{E}_{x_0}^s(\lambda):=\mc{E}_{x_0}(s,\lambda)$.
\end{defi}
The exponential map thus parametrizes normal extremals; moreover, mimicking the classical notion in the Riemannian setting, it allows one to define \emph{conjugate points} along these trajectories.
\begin{defi}
We say that a point $x=\mc{E}_{x_0}(s,\lambda)$ is conjugate to $x_{0}$ along the normal extremal $t\mapsto \mc{E}_{x_0}(t,\lambda)$ if $(s,\lambda)$ is a critical point of $\mc{E}_{x_0}$, i.e. if the differential $d_{(s,\lambda)}\mc{E}_{x_0}$ is not surjective.
\end{defi}
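As an example, consider the one-dimensional system $\dot{x}=u$ on $M=\mathbb{R}$ with potential $Q(x)=\omega^2x^2$, $\omega>0$; this is the harmonic oscillator mentioned above, for which $Q$ is not bounded from above but Hamilton's equations can be integrated explicitly. The Hamiltonian \eqref{eq:Hamiltonian2} is $H(p,x)=\frac{1}{2}\left(p^2+\omega^2x^2\right)$, and for $x_0=0$ one computes
\begin{equation*}
\mc{E}_0(s,p_0)=\frac{p_0}{\omega}\sin(\omega s),\qquad p_0\in T_0^*\mathbb{R}\simeq\mathbb{R}.
\end{equation*}
The differential of the time-$s$ map $\mc{E}_0^s$ equals $\sin(\omega s)/\omega$, which vanishes exactly at $s=k\pi/\omega$, $k\in\mathbb{N}$: at these times $\mc{E}_0^s$ is critical and all normal extremals starting from the origin focus back at the origin, recovering the classical focusing times of the harmonic oscillator.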
\section{Introduction}\label{sec:Intro}
The regularity of the value function associated with an optimal control problem is a classical topic of investigation in control theory, and it has been studied in depth over the last decades, extensively using tools from geometric control theory and non-smooth analysis. It is well known that the value function associated with an optimal control problem fails to be everywhere differentiable, and this is typically the case at those points where the uniqueness of minimizers is not guaranteed. Actually, the value function is not even continuous, in general, as soon as singular minimizers are allowed (see for instance \cite{AL09,Tre00}).
In this paper we investigate the regularity of the value function associated with affine optimal control problems, whose cost is written as a quadratic term plus a potential.
The key starting point of our work is the characterization of the points where the value function is continuous. As we said, in the presence of singular minimizers for the control problem one cannot expect the value function to be continuous.
Indeed, for a fixed final time $T>0$ and initial point $x_{0}$, the continuity of the value function $\Cost$ at a point $x$ is strictly related to the openness of the end-point map at the optimal controls steering the initial point $x_0$ to $x$ in time $T$. Here by end-point map we mean the map that associates with every control $u$ the final point of the corresponding trajectory (cf.\ Section \ref{s:general} for precise definitions).
Without assuming any condition on singular minimizers, we focus on the set of points in the interior of the attainable set, which we call \emph{tame points}, such that the end-point map is open \emph{and} a submersion at every optimal control. The main result of this paper is that the set of tame points is open and dense in the interior of the attainable set. Since tame points are points of continuity for the value function, we deduce that $\Cost$ is continuous on an open and dense subset of the interior of the attainable set.
Adapting then the arguments of \cite{agrachevsmooth,trelatrifford}, we prove that the value function is actually smooth on a (possibly smaller) open dense subset of the interior of the attainable set.
The main novelty with respect to the known results, which hold in the drift-less case with zero potential, is the following: in the drift-less case the value function is everywhere continuous as a consequence of the openness of the end-point map, even in the presence of singular minimizers. The absence of this property for affine control systems makes the study of the continuity of the value function more delicate in our context.
Let us briefly introduce the notation and present the main results in more detail.
\subsection{Setting and main results}
Let $M$ be a smooth, connected $m$-dimensional manifold and let $T>0$ be a given \emph{fixed} final time. A smooth \emph{affine control system} is a dynamical system which can be written in the form:
\begin{equation}\label{eq:contrsyst0}
\dot{x}(t)=X_0(x(t))+\sum_{i=1}^d u_i(t)X_i(x(t)),
\end{equation}
where $X_0,X_1,\dotso, X_d$ are smooth vector fields on $M$, and the map $t\mapsto u(t)=(u_1(t),\dotso,u_d(t))$ belongs to the Hilbert space $L^2([0,T],\mathbb{R}^d)$.
Given $x_{0}\in M$ we define:
\begin{itemize}
\item[(i)] the set of \emph{admissible controls} $\Omega^{T}_{x_{0}}$ as the subset of $u\in L^2([0,T],\mathbb{R}^d)$, such that the solution $x_u(\cdot)$ to \eqref{eq:contrsyst0} satisfying $x_{u}(0)=x_{0}$ is defined on the interval $[0,T]$. If $u\in \Omega^{T}_{x_{0}}$ we say that $x_u(\cdot)$ is an \emph{admissible trajectory}. By classical results of ODE theory, the set $\Omega^{T}_{x_{0}}$ is open.
\item[(ii)] the \emph{attainable set} $A_{x_0}^T$ (from the point $x_{0}$, in time $T>0$), as the set of points of $M$ that can be reached from $x_{0}$ by admissible trajectories in time $T$, i.e.,
$$A_{x_0}^T=\{x_{u}(T) \mid u\in \Omega_{x_0}^{T}\}.$$
\end{itemize}
For a given smooth function $Q:M\to \mathbb{R}$, we are interested in those trajectories minimizing the
\emph{cost} given by:
\begin{equation}\label{eq:Cost0}
C_T:\Omega^{T}_{x_{0}}\to \mathbb{R}, \qquad
\Costu{(u)}=\frac{1}{2}\int_0^T\left(\sum_{i=1}^d u_i(t)^2-Q(x_u(t))\right)dt.
\end{equation}
More precisely, given $x_0\in M$ and $T>0$, we are interested in the regularity properties of the \emph{value function} $\Cost{}:M\to\mathbb{R}$ defined as follows:
\begin{equation}\label{eq:ValueFunctI}
\Cost{(x)}=\inf\left\{\Costu{(u)}\mid u\in \Omega^{T}_{x_{0}},\, x_{u}(T)=x\right\};
\end{equation}
with the understanding that $\Cost{(x)}=+\infty$ if $x$ cannot be attained by admissible curves in time $T$.
We call \emph{optimal control} any control $u$ which solves the optimal control problem \eqref{eq:ValueFunctI}.
\begin{main} For the rest of the paper we make the following assumptions:
\begin{itemize}
\item[(H1)] The \emph{weak H\"ormander condition} holds on $M$. Namely, we require for every point $x\in M$ the equality
\begin{equation}\label{eq:WeakHormander}
\textrm{Lie}_x\left\{\left(\mathrm{ad}\,X_0\right)^jX_i\mid j\geq 0,\, i=1,\dotso,d\right\}=T_xM,
\end{equation}
where $(\mathrm{ad}\, X)Y=[X,Y]$, and $\textrm{Lie}_x\mathcal{F}\subset T_{x}M$ denotes the evaluation at the point $x$ of the Lie algebra generated by a family $\mathcal{F}$ of vector fields.
\item[(H2)] For every bounded family $\mc{U}$ of admissible controls, there exists a compact subset $K_T\subset M$ such that $x_u(t)\in K_T$, for every $u\in\mc{U}$ and $t\in[0,T]$.
\item[(H3)] The potential $Q$ is a smooth function bounded from above.
\end{itemize}
\end{main}
The assumption (H1) is needed to guarantee that the attainable set has at least non-empty interior, i.e., $\Int{A_{x_0}^T}\neq \emptyset$ (cf.\ \cite[Ch. 3, Thm.\ 3]{jurdjevicbook}).
The second assumption (H2) is a completeness/compactness assumption on the dynamical system that, together with (H3), is needed to guarantee the existence of optimal controls. We stress that (H2) and (H3) are automatically satisfied when $M$ is compact. When $M$ is not compact, (H2) holds true under a sublinear growth condition on the vector fields $X_0,\dotso, X_d$. We refer to Section 2 for more details on the role of these assumptions.
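For instance, consider a linear control system on $M=\mathbb{R}^m$, namely $X_0(x)=Ax$ and $X_i(x)=b_i$, with $A\in\mathbb{R}^{m\times m}$ and $b_1,\dotso,b_d\in\mathbb{R}^m$. A direct computation gives $(\mathrm{ad}\,X_0)^jX_i=(-1)^jA^jb_i$, so that \eqref{eq:WeakHormander} reduces to the classical Kalman rank condition
\begin{equation*}
\mathrm{rank}\left[B\ AB\ \cdots\ A^{m-1}B\right]=m,\qquad B=\left[b_1\ \cdots\ b_d\right].
\end{equation*}
Moreover, the vector fields have linear growth, hence (H2) holds, and (H1)-(H3) are all satisfied as soon as the potential $Q$ is smooth and bounded from above.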
As already anticipated, the key starting point of our work is the characterization of the points where the value function is continuous, through the study of the set of \emph{tame points}. This is the set $\Sigma_{t}\subset \Int{A_{x_0}^T}$ of all points $x$ such that the end-point map is open \emph{and} a submersion at every optimal control steering $x_0$ to $x$. The main result of this paper, whose proof constitutes its technical core, is that the set of tame points is open and dense in $\Int{A_{x_0}^T}$.
\begin{thm}\label{t:main2}
Fix $x_{0}\in M$ and let $\Cost{}$ be the value function associated with an optimal control problem of the form \eqref{eq:contrsyst0}-\eqref{eq:Cost0} satisfying assumptions \emph{(H1)-(H3)}. Then the set $\Sigma_t$ of tame points is open and dense in $\Int{A_{x_0}^T}$ and $\Cost{}$ is continuous on $\Sigma_{t}$.
\end{thm}
In the drift-less case (more precisely, when $X_{0}=0$ and $Q=0$) the end-point map is open at every control, even though it is not a submersion in the presence of singular minimizers. This openness, however, suffices for the sub-Riemannian distance to be continuous everywhere. Moreover, this remains true for any $L^p$-topology on the space of controls, $p<+\infty$; see \cite{BL}.
This is no longer true once a drift field is introduced: both the characterization of the set of points where the end-point map is open and the choice of the topology on the space of controls become more delicate.
The proof of Theorem \ref{t:main2} is inspired by the arguments, dealing with the sub-Riemannian case, presented among others by the first author in \cite[Chapter 11]{nostrolibro}, and starts by characterizing the set of points reached by a unique minimizing trajectory that is not strictly singular (called \emph{fair} points). The classical argument proves that this set is dense in the attainable set; however, while in the drift-less case each of these points is also a continuity point for the value function, in our setting it could in principle happen that the set of fair points and the set of continuity points, both dense, have empty intersection.
Bridging this gap requires new ad hoc arguments, developed in Section \ref{s:tamet}.
Once Theorem \ref{t:main2} is proved, an adaptation of arguments from \cite{agrachevsmooth,trelatrifford} allows us to derive the following result.
\begin{thm}\label{t:main}
Under the assumptions of Theorem \ref{t:main2}, $\Cost{}$ is smooth on a non-empty open and dense subset of $\Int{A_{x_0}^T}$.
\end{thm}
In \cite{agrachevsmooth}, the author proves the analogue of Theorem \ref{t:main} for the value function associated with sub-Riemannian optimal control problems, i.e., drift-less systems with zero potential. Notice that in this case (H1) reduces to the classical H\"ormander condition, and the value function (at time $T$) coincides with one half of the square of the sub-Riemannian distance (divided by $T$) associated with the family of vector fields $X_{1},\ldots,X_{d}$.
Let us further mention that, even in the sub-Riemannian situation, it still remains an open question to establish whether the set of smoothness points of the value function has full measure in $\Int{A_{x_0}^T}$ or not.
\subsection{Further comments}
Regularity of the value function for these kinds of control systems, via techniques of geometric control, has also been studied in \cite{CanRif08,Tre00}, where the authors assume that there are no abnormal optimal controls, a condition which yields the openness of the end-point map already at the first order, while in \cite{AL09} the authors obtain the openness of the end-point map at optimal controls with second-order techniques, assuming that no optimal Goh abnormal controls exist.
For more details on Goh abnormals we refer the reader to \cite[Chapter 20]{agrachevbook} (see also \cite{nostrolibro,rifbook}). Let us mention that in \cite{CJT} the authors prove that the system \eqref{eq:contrsyst0} admits no optimal Goh trajectories for a generic choice of the $(d+1)$-tuple $X_0,\dotso, X_d$ (in the Whitney topology). Finally, in \cite{prandi} the author proves the H\"older continuity of the value function under a strong bracket-generating assumption, when one considers the $L^{1}$ cost.
\smallskip
For different approaches investigating the regularity of the value function through techniques of non-smooth analysis, one can see for instance the monographs \cite{bardibook,Cla98,cannarsabook,frankowskanotes}.
\subsection{Structure of the paper}
In Section \ref{s:general} we recall some properties of the end-point map, establish the existence of minimizers in our setting, and recall their characterization in terms of the Hamiltonian equations. Section \ref{s:regular} introduces the different sets of points that are relevant in our analysis. Section \ref{s:tamet} is devoted to the study of tame points and the proof of Theorem \ref{t:main2}. In the final Section \ref{s:cons} we complete the proof of Theorem \ref{t:main}. Finally, in Appendix \ref{sec:appendix} we present, for the reader's convenience, the proofs of a few technical facts, adapted with minor modifications to our setting.
\subsection{Fair points} We start by introducing the set of fair points.
\begin{defi}\label{def:FairP}
A point $x\in\Int{A_{x_0}^T}$ is said to be a \emph{fair point} if there exists a unique optimal trajectory steering $x_0$ to $x$, and this trajectory admits a normal lift. We call $\Sigma_f$ the set of all fair points contained in the interior of the attainable set.
\end{defi}
We stress that only the uniqueness of the optimal trajectory matters in the definition of a fair point; abnormal lifts are admitted as well for the moment.
The lower semicontinuity of $\Cost{}$ allows one to find a great abundance of fair points; their existence is related to the notion of proximal subdifferential (see for instance \cite{Cla98,trelatrifford} for more details).
\begin{defi}\label{defi:ProxSubd}
Let $F:\Int{A_{x_0}^T}\to\mathbb{R}$ be a lower semicontinuous function. For every $x\in\Int{A_{x_0}^T}$, the \emph{proximal subdifferential} of $F$ at $x$ is the subset of $T_x^*M$ defined by:
\begin{equation}\label{eq:ProxSubd}
\partial_P F(x)=\left\{\lambda=d_x\phi\in T^*_xM\mid \phi\in C^\infty(\Int{A_{x_0}^T})\textrm{ and } F-\phi\textrm{ attains a local minimum at }x\right\}.
\end{equation}
\end{defi}
The proximal subdifferential is a convex subset of $T^*_xM$, which for a lower semicontinuous function is non-empty at sufficiently many points \cite[Theorem 3.1]{Cla98}, as the following proposition makes precise.
\begin{prop}\label{prop:NotEmpty}
Let $F:\Int{A_{x_0}^T}\to\mathbb{R}$ be a lower semicontinuous function. Then the proximal subdifferential $\partial_PF(x)$ is not empty for a dense set of points $x\in\Int{A_{x_0}^T}$.
\end{prop}
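A one-dimensional example (with $M=\mathbb{R}$ and the identification $T^*_x\mathbb{R}\simeq\mathbb{R}$) may clarify the definition. For $F(x)=|x|$ one has $\partial_PF(0)=[-1,1]$: indeed, for $|\lambda|\leq 1$ the choice $\phi(y)=\lambda y-y^2$ makes $F-\phi$ attain a local minimum at $0$, while no smooth $\phi$ with $|\phi'(0)|>1$ does. On the contrary, for $F(x)=-|x|$ the required inequality $\phi(y)-\phi(0)\leq -|y|$ forces both $\phi'(0)\leq -1$ and $\phi'(0)\geq 1$, hence $\partial_PF(0)=\emptyset$, although $\partial_PF(x)=\{F'(x)\}$ for every $x\neq 0$, in accordance with Proposition \ref{prop:NotEmpty}.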
We showed in Proposition \ref{prop:Semicontinuous} that the value function $\Cost{}:\Int{A_{x_0}^T}\to \mathbb{R}$ is lower semicontinuous. By classical arguments, the proximal subdifferential machinery yields the following result (cf.\ also \cite{trelatrifford,agrachevsmooth}).
\begin{prop}\label{thm:DensityFair}
Let $x\in\Int{A_{x_0}^T}$ be such that $\partial_P\Cost{(x)}\neq \emptyset$. Then there exists a unique optimal trajectory $x_u(\cdot):[0,T]\to M$ steering $x_0$ to $x$, which admits a normal lift. In particular $x$ is a fair point.
\end{prop}
\begin{proof} Fix any $\lambda \in \partial_P\Cost{(x)}$. Let us prove that every optimal trajectory steering $x_0$ to $x$ admits a normal lift having $\lambda$ as final covector.
Indeed, if $\phi$ is a smooth function such that $\lambda=d_x\phi\in \partial_P\Cost{(x)}$, by definition the map
\begin{equation}\label{eq:Psi}
\psi:\Int{A_{x_0}^T}\to \mathbb{R},\quad \psi(y)=\Cost{(y)}-\phi(y)
\end{equation}
has a local minimum at $x$, i.e. there exists an open neighborhood $O\subset \Int{A_{x_0}^T}$ of $x$ such that $\psi(y)\geq \psi(x)$ for every $y\in O$. Then, let $t\mapsto x_u(t)$, $t\in[0,T]$ be an optimal trajectory from $x_0$ to $x$, let $u$ be the associated optimal control, and define the smooth map:
\begin{equation}\label{eq:Phi}
\Phi:\Omega_{x_0}^T\to \mathbb{R},\quad \Phi(v)=\Costu{(v)}-\phi(\Endp{(v)}).
\end{equation}
There exists a neighborhood $\mc{V}\subset \Omega_{x_0}^T$ of $u$ such that $\Endp{(\mc{V})}\subset O$, and since $\Costu{(v)}\geq \Cost{(\Endp{(v)})}$ we have the following chain of inequalities:
\begin{align}
\Phi(v)=\Costu{(v)}-\phi(\Endp{(v)})&\geq \Cost{(\Endp{(v)})}-\phi(\Endp{(v)})\\&\geq \Cost{(\Endp{(u)})}-\phi(\Endp{(u)})=\Costu{(u)}-\phi(\Endp{(u)})=\Phi(u),\quad \forall v\in\mc{V}.
\end{align}
Then:
\begin{equation}
0=d_u\Phi=d_u\Costu{}-\left(d_{x}\phi\right)\dEndp{u},
\end{equation}
and therefore we see that the curve $\lambda(t)=e^{(t-T)\ovr{H}}(\lambda)$ is the desired normal lift of the trajectory $x_u(\cdot)$.
In particular, since any two normal extremal lifts sharing the same final covector $\lambda$ have to coincide, we see that there can be only one optimal trajectory between $x_0$ and $x$, which precisely means that $x\in \Sigma_f$ is a fair point.
\end{proof}
\begin{remark}Notice that from the previous proof it follows that, when $\partial_P\Cost{(x)}\neq \emptyset$, then the unique normal trajectory steering $x_{0}$ to $x$ is strictly normal if and only if $\partial_P\Cost{(x)}$ is a singleton.
\end{remark}
\begin{cor}[Density of fair points]\label{cor:DensityFair}
The set $\Sigma_f$ of fair points is \emph{dense} in $\Int{A_{x_0}^T}$.
\end{cor}
In particular we have that all differentiability points of $\Cost{}$ are fair points.
\begin{cor}\label{cor:differentiability}
Suppose that $\Cost{}$ is differentiable at some point $x\in\Int{A_{x_0}^T}$. Then $x$ is a fair point, and its normal covector is $\lambda=d_x\Cost{}\in T^*_xM$.
\end{cor}
\begin{proof}
Indeed, let $u$ be any optimal control steering $x_0$ to $x$; then it is sufficient to consider the non-negative map
\begin{equation}
v\mapsto \Costu{(v)}-\Cost{(\Endp{(v)})},
\end{equation}
which has by definition a local minimum at $u$ (equal to zero). Then
\begin{equation}
0=d_u\Costu{}-\left(d_x\Cost{}\right)\dEndp{u},
\end{equation}
and the uniqueness of $u$ (hence the claim) follows as in the previous proof.
\end{proof}
\subsection{Continuity points}
We are also interested in the subset $\Sigma_c$ of the \emph{points of continuity} for the value function.
It is a fact from general topology that a lower semicontinuous function has plenty of continuity points.
\begin{lemma}\label{prop:DensityContinuity}
The set $\Sigma_c$ is a residual subset of $\Int{A_{x_0}^T}$.
\end{lemma}
Recall that a residual subset of a topological space $X$ is the complement of a union of countably many nowhere dense subsets of $X$. This fact is well-known but the proof is often presented for functions defined on complete metric spaces. For the sake of completeness, we give a proof in the Appendix.
The existence of points of continuity is tightly related to the compactness of optimal controls, as shown in the next lemma.
\begin{lemma}\label{lemma:Contx}
Let $x\in\Int{A_{x_0}^T}$ be a continuity point of $\Cost{}$. Let $\{x_n\}_{n\in\mathbb{N}}\subset \Int{A_{x_0}^T}$ be a sequence converging to $x$ and let $u_n$ be an optimal control steering $x_0$ to $x_n$. Then there exists a subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}\subset\{x_n\}_{n\in\mathbb{N}}$ whose associated sequence of optimal controls $\{u_{n_k}\}_{k\in\mathbb{N}}$ converges strongly in $L^2([0,T],\mathbb{R}^d)$ to some optimal control $u$ which steers $x_0$ to $x$.
\end{lemma}
\begin{proof}
Let $\{x_n\}_{n\in\mathbb{N}}\subset \Int{A_{x_0}^T}$ be a sequence converging to $x$ and let $\{u_n\}_{n\in\mathbb{N}}$ be the corresponding sequence of optimal controls. Since $x$ is a continuity point of the value function, it is not restrictive to assume that the sequence of norms $\{\|u_n\|_{L^2}\}_{n\in\mathbb{N}}$ is uniformly bounded; thus we can extract a subsequence $\{u_{n_k}\}_{k\in\mathbb{N}}\subset \{u_n\}_{n\in\mathbb{N}}$ such that $u_{n_k}\rightharpoonup u$ weakly in $L^2([0,T],\mathbb{R}^d)$. By a standard argument, the weak convergence of the controls implies the uniform convergence on $[0,T]$ of the corresponding trajectories, which in turn gives
\begin{equation}
\lim_{k\to\infty}\int_0^TQ(x_{u_{n_k}}(t))dt=\int_0^TQ(x_{u}(t))dt.
\end{equation}
Then we have
\begin{align}
\frac{1}{2}\|u\|^2_{L^2}-\frac{1}{2}\int_0^TQ(x_{u}(t))dt&\leq \liminf_{k\to\infty}\frac{1}{2}\|u_{n_k}\|^2_{L^2}-\frac{1}{2}\int_0^TQ(x_{u_{n_k}}(t))dt\\%&\leq \limsup_{k\to\infty}\frac{1}{2}\|u_{n_k}\|^2_{L^2}-\frac{1}{2}\int_0^TQ(x_{u_{n_k}}(t))dt\\
&=\lim_{k\to\infty}\Cost{(\Endp{(u_{n_k})})}=\lim_{k\to\infty}\Cost{(x_{n_k})}\\
&=\Cost{(x)}=\Cost{(\Endp{(u)})}\\&\leq\frac{1}{2}\|u\|^2_{L^2}-\frac{1}{2}\int_0^TQ(x_u(t))dt,
\end{align}
which readily means both that $\lim_{k\to\infty}\|u_{n_k}\|_{L^2}=\|u\|_{L^2}$ (from which the convergence in $L^{2}$ follows), and that $\Costu{(u)}=\Cost{(\Endp{(u)})}=\Cost{(x)}$.
\end{proof}
\subsection{Tame points}
We have introduced so far two subsets of $\Int{A_{x_0}^T}$: the set $\Sigma_c$ of continuity points of $\Cost{}$, and the set $\Sigma_f$ of fair points, which are essentially points well-parametrized by the exponential map. Both these sets are dense in $\Int{A_{x_0}^T}$, yet their intersection could be empty. Here lies the main difference with respect to the arguments of \cite{agrachevsmooth}: in that context every fair point is a point of continuity. In our setting, to relate $\Sigma_c$ and $\Sigma_f$, we introduce the following set.
\begin{defi}[Tame Points]\label{defi:tameP}
Let $x\in\Int{A_{x_0}^T}$. We say that $x$ is a \emph{tame point} if for every optimal control $u$ steering $x_0$ to $x$ there holds
\begin{equation}
\Rank{\dEndp{u}}=\dim{M}=m.
\end{equation}
We call $\Sigma_t$ the set of tame points.
\end{defi}
Tame points single out open sets on which the value function $\Cost{}$ is continuous. The precise statement is contained in the following lemma, the first part of whose proof is an adaptation of the arguments of \cite[Theorem 4.6]{Tre00}; a complete proof is contained in Appendix \ref{sec:appendix}.
\begin{lemma}\label{lemma:ContNearReg}
Let $x\in\Int{A_{x_0}^T}$ be a tame point. Then
\begin{itemize}
\item [(i)] $x$ is a point of continuity of $\Cost{}$;
\item [(ii)] there exists a neighborhood $O_x$ of $x$ such that every $y\in O_x$ is a tame point. In particular, the restriction $\Cost{}\big|_{O_x}$ is a continuous map.
\end{itemize}
\end{lemma}
The previous lemma can be restated as follows.
\begin{cor}\label{cor:TameP}
The set $\Sigma_t$ of tame points is open. Moreover $\Sigma_t\subset \Sigma_c$.
\end{cor}
\section{Density of tame points}\label{s:tamet}
This section is devoted to the proof that the set of tame points is open and dense in the interior of the attainable set. We start with the observation that the set of optimal controls reaching a fixed point $x$ is compact in the $L^2$-topology.
\begin{lemma}\label{lemma:Compx}
For every $x\in A_{x_0}^T$, the set
\begin{equation}\label{eq:Ux}
\mc{U}_x=\left\{u\in \Omega_{x_0}^T\mid u\textrm{ is an optimal control steering $x_0$ to $x$}\right\}
\end{equation}
is strongly compact in $L^2([0,T],\mathbb{R}^d)$.
\end{lemma}
\begin{proof}
Let $\{u_n\}_{n\in\mathbb{N}}\subset \mc{U}_x$. Then we have $\Cost{(x)}=\Costu{(u_n)}$ for every $n\in\mathbb{N}$, and consequently there exists $C>0$ such that $\|u_n\|_{L^2}\leq C$ for every $n\in\mathbb{N}$. Thus we may assume that there exists a subsequence $\{u_{n_k}\}_{k\in\mathbb{N}}\subset \{u_n\}_{n\in\mathbb{N}}$, and a control $u$ steering $x_0$ to $x$, such that $u_{n_k}\rightharpoonup u$ weakly in $L^2([0,T],\mathbb{R}^d)$. This, on the other hand, implies that
\begin{align}
\frac{1}{2}\|u\|^2_{L^2}-\frac{1}{2}\int_0^TQ(x_{u}(t))dt&\leq \liminf_{k\to\infty}\frac{1}{2}\|u_{n_k}\|^2_{L^2}-\frac{1}{2}\int_0^TQ(x_{u_{n_k}}(t))dt\\
&=\liminf_{k\to\infty}\Costu{(u_{n_k})}=\Cost{(x)}\\
&=\Costu{(u)}=\frac{1}{2}\|u\|^2_{L^2}-\frac{1}{2}\int_0^TQ(x_{u}(t))dt,
\end{align}
therefore $\|u\|_{L^2}=\lim_{k\to\infty}\|u_{n_k}\|_{L^2}$, and the claim is proved.
\end{proof}
We introduce now the notion of the \emph{class} of a point. Heuristically, the class of a point $x\in\Int{A_{x_0}^T}$ measures how much that point ``fails'' to be tame (see Definition \ref{defi:tameP}).
\begin{defi}
Let $x\in A_{x_0}^T$. We define
\begin{equation}\label{eq:Rankx}
\Class{(x)}=\min_{u\in\mc{U}_x}\Rank{\dEndp{u}}.
\end{equation}
\end{defi}
Any point $x\in\Int{A_{x_0}^T}$ satisfying $\Class{(x)}=m$ is necessarily a tame point.
\begin{defi}\label{defi:closedMinRank}
We also define the subset $\mc{U}_x^{\min}\subset\mc{U}_x$ as follows:
\begin{equation}\label{eq:Umin}
\mc{U}_x^{\min}=\left\{u\in\mc{U}_x\mid \Rank{\dEndp{u}}=\Class{(x)}\right\}.
\end{equation}
By the lower semicontinuity of the rank function, the set $\mc{U}_x^{\min}$ is closed in $\mc{U}_x$, hence (strongly) compact in $L^2([0,T],\mathbb{R}^d)$.
\end{defi}
We can now state the main result of this section.
\begin{thm}\label{thm:Reg}
The set $\Sigma_t$ of tame points is dense in $\Int{A_{x_0}^T}$.
\end{thm}
We postpone the proof of Theorem \ref{thm:Reg} to the end of the section, since we first need a series of preliminary results.
\begin{defi}\label{defi:Xi}
Pick $x$ in $\Int{A_{x_0}^T}$ and let $u\in \mc{U}_x^{\min}$. If $u$ is not strictly abnormal, then we choose a normal covector $\eta_x\in T^*_xM$ associated to $u$ and we define
\begin{equation}\label{eq:PiHatx}
\wh{\Xi}_x^u=\left\{\xi\in T^*_xM\mid \xi\dEndp{u}=\eta_x\dEndp{u}\right\}=\eta_x+\ker\left(\dEndp{u}\right)^*\subset T^*_xM.
\end{equation}
If instead $u$ is strictly abnormal, we simply set $\wh{\Xi}_x^u=\ker\left(\dEndp{u}\right)^*\subset T^*_xM$.
\end{defi}
Notice that whenever $u$ is strictly abnormal, then $\wh{\Xi}_x^u$ is a linear subspace, while if $u$ admits at least one normal lift, $\wh{\Xi}_x^u$ is affine; also, the dimension of these subspaces equals $m-\Class{(x)}\geq 0$.
We denote by $\wh{Z}_u\subset T^*_xM$ the subspace orthogonal to $\ker\left(\dEndp{u}\right)^*$, of dimension equal to $\Class{(x)}$, so that:
\begin{equation}\label{eq:splitting}
T_x^*M=\ker\left(\dEndp{u}\right)^*\oplus \wh{Z}_u;
\end{equation}
moreover we let $\pi_{\wh{Z}_u}:T^*_xM\to \wh{Z}_u$ be the orthogonal projection subordinated to this splitting, that is, satisfying:
\begin{equation}
\ker(\pi_{\wh{Z}_u})=\ker\left(\dEndp{u}\right)^*.
\end{equation}
Finally, by means of the adjoint map $\left(\Flow{0,T}{u}\right)^*$, we can pull the spaces $\wh{\Xi}_x^u$ ``back'' to $T^*_{x_0}M$, and set
\begin{equation}
\Xi_x^u:=\left(\Flow{0,T}{u}\right)^*\wh{\Xi}_x^u\subset T_{x_0}^*M.
\end{equation}
\begin{center}
\begin{figure}[ht!]
\includegraphics[scale=0.8]{dis1.pdf}
\caption{We set $y=\Endp{(v)}$. The subspace $\wh{\Xi}_y^v$ is linear if $v$ is strictly abnormal, and affine otherwise; $\wh{Z}_v$ and $\ker\left(\dEndp{v}\right)^*$ are orthogonal. The point $\wh{\xi}_v$ belongs to $T^*_yM$, and is then pulled back to $T^*_{x_0}M$.}
\label{fig:fig1}
\end{figure}
\end{center}
The following estimate will be crucial in what follows.
\begin{prop}\label{prop:constant}
Let $O\subset \Int{A_{x_0}^T}$ be an open set, and assume that:
\begin{equation}
\Class{(z)}\equiv k_O< m,\quad\textrm{for every }z\in O.
\end{equation}
Let $x\in O$ and $u\in\mc{U}_x^{\min}$. Then there exists a neighborhood $\mc{V}_u\subset \Omega_{x_0}^T$ of $u$ such that, for every $\lambda_u\in\Xi_x^u\subset T_{x_0}^*M$, there exists a constant $K=K(\lambda_u)>1$ such that, for every $v\in\mc{V}_u\cap\mc{U}_{\Endp{(v)}}^{\min}$, there is $\xi_v\in\Xi_{\Endp{(v)}}^v\subset T_{x_0}^*M$ satisfying:
\begin{equation}
|\lambda_u-\xi_v|\leq K.
\end{equation}
\end{prop}
\begin{proof}
Let us choose a neighborhood $\mc{V}_u\subset\Omega_{x_0}^T$ of $u$, such that all the endpoints of admissible trajectories driven by controls in $\mathcal{V}_u$ belong to $O$.
Then, if $y=\Endp{(v)}$ for some $v\in\mathcal{V}_u$, it follows that $y\in O$; moreover, if also $v\in\mathcal{U}^{\min}_y$, we can define the $(m-k_O)$-dimensional subspace $\Xi_y^v\subset T^*_{x_0}M$ as in Definition \ref{defi:Xi}. Therefore we can assume from the beginning that all such subspaces $\Xi_y^v$ have dimension constantly equal to $m-k_O>0$.
Fix $\lambda_u\in\Xi_x^u$, and set
\begin{equation}
\wh{\lambda}_u^v=(\Flow{T,0}{v})^*\lambda_u\in T^*_yM,\quad v\in\mc{V}_u\cap \mc{U}_y^{\min},\quad y=\Endp{(v)}.
\end{equation}
The intersection $(\wh{\lambda}_u^v+\wh{Z}_v)\cap \wh{\Xi}_y^v$ (cf.\ \eqref{eq:splitting} and Figure \ref{fig:fig1}) consists of the single point $\wh{\xi}_v$; since both $\wh{\lambda}_u^v$ and $\wh{\xi}_v$ belong to the affine subspace $\wh{\lambda}_u^v+\wh{Z}_v$, in order to estimate the norm $|\wh{\lambda}_u^v-\wh{\xi}_v|$ it is sufficient to evaluate the norm $|\pi_{\wh{Z}_v}(\wh{\lambda}_u^v)-\pi_{\wh{Z}_v}(\wh{\xi}_v)|$ of the projections onto the linear space $\wh{Z}_v=(\ker(\dEndp{v})^*)^{\perp}$. The key point is the computation of the norm $|\pi_{\wh{Z}_v}(\wh{\xi}_v)|$: in fact, since $\ker(\dEndp{v})^*=(\textrm{Im}\, \dEndp{v})^{\perp}$, this amounts to evaluating
\begin{equation}\label{eq:Main}
|\pi_{\wh{Z}_v}(\wh{\xi}_v)|=\sup_{f\in\textrm{Im}\, \dEndp{v}}\frac{|\langle \wh{\xi}_v,f \rangle|}{|f|}.
\end{equation}
We deduce immediately from \eqref{eq:Main} that, whenever $v$ is strictly abnormal, then $\pi_{\wh{Z}_v}(\wh{\xi}_v)=0$, while from the expression for the normal control \eqref{eq:NormControl}
\begin{equation}
v_i(t)=\langle \wh{\xi}_v(t),X_i(x_v(t))\rangle=\langle \wh{\xi}_v,(\Flow{T,t}{v})_*X_i(x_v(t))\rangle,
\end{equation}
we see that $\langle v,w\rangle_{L^2}=\langle\wh{\xi}_v,\dEndp{v}(w)\rangle$, and we can continue from \eqref{eq:Main} as follows ($W_v$ denotes the $k_O$-dimensional subspace of $L^2([0,T],\mathbb{R}^d)$ on which the restriction $\dEndp{v}\big|_{W_v}$ is invertible):
\begin{align}\label{eq:Est}
|\pi_{\wh{Z}_v}(\wh{\xi}_v)|&=\sup_{w\in W_v}\frac{|\langle\wh{\xi}_v,\dEndp{v}(w)\rangle|}{|\dEndp{v}(w)|}\\
&\leq \sup_{w\in W_v}\frac{|\langle\wh{\xi}_v,\dEndp{v}(w)\rangle|}{\|w\|_{L^2}}\|(\dEndp{v}\big|_{W_v})^{-1}\|\\
&=\sup_{w\in W_v}\frac{|\langle v,w\rangle|}{\|w\|_{L^2}}\|(\dEndp{v}\big|_{W_v})^{-1}\|\\
&\leq \|v\|_{L^2}\|(\dEndp{v}\big|_{W_v})^{-1}\|.
\end{align}
It is not restrictive to assume that the $L^2$-norm of any element $v\in\mc{V}_u\cap\mc{U}_y^{\min}$ remains bounded; moreover, since all the subspaces have the same dimension, the map $v\mapsto W_v$ is continuous, which implies that so is the map $v\mapsto (\dEndp{v}\big|_{W_v})^{-1}$. This, on the other hand, guarantees that the operator norm $\|(\dEndp{v}\big|_{W_v})^{-1}\|$ remains bounded for all $v\in\mc{V}_u\cap\mc{U}_y^{\min}$, and then from \eqref{eq:Est} we conclude that for some $C>1$ the estimate $|\pi_{\wh{Z}_v}(\wh{\xi}_v)|\leq C$ holds true, which implies as well, by the triangle inequality, that:
\begin{equation}\label{eq:est}
|\wh{\lambda}_u^v-\wh{\xi}_v|\leq |\wh{\lambda}_u^v|+C.
\end{equation}
Finally, the continuity of both the map $v\mapsto \Flow{0,T}{v}$ and its inverse implies that for another real constant $C>1$ we have:
\begin{equation}
\sup_{v\in\mc{V}_u}\left\{\|(\Flow{0,T}{v})^*\|,\|(\Flow{T,0}{v})^*\|\right\}\leq C.
\end{equation}
Thus, setting $\xi_v=(\Flow{0,T}{v})^*\wh{\xi}_v\in T^*_{x_0}M$ (cf.\ Figure \ref{fig:fig1}) we can compute (here $C$ denotes a constant that can change from line to line):
\begin{align}
|\lambda_u-\xi_v|&\leq C|\wh{\lambda}_u^v-\wh{\xi}_v|\\
&\leq C|\wh{\lambda}_u^v|+C^2\\
&\leq C^2\left(|\lambda_u|+1\right)\\
&\leq 2C^2\max\{|\lambda_u|,1\}.
\end{align}Setting $K(\lambda_u):=2C^2\max\{|\lambda_u|,1\}$ the claim is proved.
\end{proof}
\begin{remark}
Let us fix $\lambda_u\in\Xi_x^u\subset T^*_{x_0}M$ and consider the $k_O$-dimensional affine subspace
\begin{equation}
(\Flow{0,T}{v})^*(\wh{\lambda}_u^v+\wh{Z}_v)=\lambda_u+(\Flow{0,T}{v})^*\wh{Z}_v,
\end{equation}
with $\wh{Z}_v$ defined as in \eqref{eq:splitting}. Then if we call $Z_v:=(\Flow{0,T}{v})^*\wh{Z}_v\subset T^*_{x_0}M$, the map
\begin{equation}\label{eq:contmap}
v\mapsto \lambda_u+Z_v,\quad v\in\mc{V}_u\cap\mc{U}_y^{\min},\: y=\Endp{(v)}
\end{equation}
is continuous; moreover, $Z_v$ is by construction transversal to $\Xi_y^v$, and $\xi_v\in (\lambda_u+Z_v)\cap \Xi_y^v$.
\end{remark}
Having in mind this remark, we deduce the following:
\begin{cor}\label{cor:ball}
Let $O\subset \Int{A_{x_0}^T}$ be an open set, and assume that
\begin{equation}
\Class{(z)}\equiv k_O< m,\quad\textrm{for every }z\in O.
\end{equation}
Let $x\in O$, $u\in\mc{U}_x^{\min}$, and consider $\mc{V}_u\subset \Omega_{x_0}^T$ as in Proposition \ref{prop:constant}. Then, for every $\lambda_u\in \Xi_x^u$, there exists a $k_O$-dimensional compact ball $A_u$, centered at $\lambda_u$ and transversal to $\Xi_x^u$, such that:
\begin{equation}
A_u\cap\Xi_y^v\neq\emptyset\quad\textrm{for every }v\in\mc{V}_u\cap\mc{U}_y^{\min},\quad\textrm{where }y=\Endp{(v)}.
\end{equation}
\end{cor}
\begin{center}
\begin{figure}[ht!]
\includegraphics[scale=.6]{dis2.pdf}
\caption{On the fiber $T^*_{x_0}M$, the point $\eta$ denotes the intersection between $T_v$ and the affine space $\lambda_u+Z_u$.}
\label{fig:fig2}
\end{figure}
\end{center}
\begin{proof}
Let $\lambda_u\in\Xi_x^u$ be chosen, and assume without loss of generality that $\mc{V}_u$ is relatively compact. For every $v\in\mc{V}_u$, we can construct an $m$-dimensional ball $B_u^v$, of radius $C_0^v$ strictly greater than $K=K(\lambda_u)$ (given by Proposition \ref{prop:constant}), and centered at $\lambda_u$.
Then, the existence of an element $\xi_v\in\left(\lambda_u+Z_v\right)\cap \Xi_y^v$ satisfying $|\lambda_u-\xi_v|\leq K$, proved in Proposition \ref{prop:constant}, implies that the intersection of $B_u^v$ with $\Xi_y^v$ is a compact submanifold $T_v$ (with boundary); moreover, since the radius of $B_u^v$ is strictly greater than $|\lambda_u-\xi_v|$, it is also true that the intersection of $\lambda_u+Z_v$ with $\Int{T_v}$ is not empty.
Let us consider as before (cf.\ the previous remark) the $k_O$-dimensional affine subspace $\lambda_u+Z_u$, which is transversal to $\Xi_x^u$: possibly increasing the radius $C_0^v$, the continuity of the map $w\mapsto \lambda_u+Z_w$ ensures that $\lambda_u+Z_u$ remains transversal to $T_v$, and in particular that the intersection $T_v\cap (\lambda_u+Z_u)$ is not empty (see Figure \ref{fig:fig2}). Moreover, it is clear that this conclusion is local, that is, with the same choice of $C_0^v$ it can be drawn on some full neighborhood $\mc{W}_v$ of $v$. Then, to find a ball $B_u$ and a radius $C_0$ uniformly for the whole set $\mc{V}_u$, it is sufficient to extract a finite sub-cover $\mc{W}_{v_1},\dotso, \mc{W}_{v_l}$ of $\mc{V}_u$, choose $C_0$ as the maximum among $C_0^{v_1},\dotso, C_0^{v_l}$, and let $B_u$ be the ball of radius $C_0$ centered at $\lambda_u$.
We conclude the proof setting $A_u=B_u\cap (\lambda_u+Z_u)$; indeed $A_u$ is a compact $k_O$-dimensional ball by construction, and moreover if we call $\eta_v$ any element in the intersection $T_v\cap (\lambda_u+Z_u)$, for $v\in\mc{V}_u$, then it follows that:
\begin{equation}
\eta_v\in \Xi_y^v\cap B_u\cap (\lambda_u+Z_u)=\Xi_y^v\cap A_u,
\end{equation}
that is, the intersection $\Xi_y^v\cap A_u$ is not empty for every $v\in\mc{V}_u\cap\mc{U}_y^{\min}$.
\end{proof}
\begin{lemma}\label{lemma:neigh}
Let $O\subset\Int{A_{x_0}^T}$ be an open set, and let
\begin{equation}\label{eq:kO}
k_O=\max_{x\in\Sigma_c\cap O}\Class{(x)}.
\end{equation}
Then there exists a neighborhood $O'\subset O$, such that $\Class{(y)}=k_O$, for every $y\in O'$.
\end{lemma}
\begin{proof}
Let $x\in\Sigma_c\cap O$ be a point of continuity for the value function $\Cost{}$, having the property that $\Class{(x)}=k_O$. Assume by contradiction that we can find a sequence $\{x_n\}_{n\in\mathbb{N}}$ converging to $x$ and satisfying $\Class{(x_n)}\leq k_O-1$ for every $n\in\mathbb{N}$. Accordingly, let $u_n\in \mc{U}_{x_n}^{\min}$ be an associated sequence of optimal controls; in particular, for every $n\in\mathbb{N}$, we have by definition that $\Class{(x_n)}=\Rank{\dEndp{u_n}}$.
By Lemma \ref{lemma:Contx}, we can extract a subsequence $\{u_{n_k}\}_{k\in\mathbb{N}}\subset \{u_n\}_{n\in\mathbb{N}}$ which converges to some optimal control $u$ steering $x_0$ to $x$, strongly in the $L^2$-topology, and write:
\begin{equation}\label{eq:ConvRank}
\Class{(x)}\leq \Rank{\dEndp{u}}\leq \liminf_{k\to\infty}\Rank{\dEndp{u_{n_k}}}=\liminf_{k\to\infty}\Class{(x_{n_k})}\leq k_O-1,
\end{equation}
which is absurd by construction, and the claim follows.
\end{proof}
Collecting all the results we can now prove Theorem \ref{thm:Reg}.
\begin{proof}[Proof of Theorem \ref{thm:Reg}]
Let $O$ be an open set in $\Int{A_{x_0}^T}$ and define
\begin{equation}
k_O=\max_{x\in\Sigma_c\cap O}\Class{(x)};
\end{equation}
notice that this definition makes sense, since points of continuity are dense in $\Int{A_{x_0}^T}$ by Proposition \ref{prop:DensityContinuity}. Then we may suppose that $k_O$ is strictly less than $m$, for otherwise there would be nothing to prove. Moreover, by Lemma \ref{lemma:neigh} it is not restrictive to assume that $\Class{(y)}=k_O$ for every $y\in O$.
Fix then a point $x\in\Sigma_c\cap O$; since the hypotheses of Proposition \ref{prop:constant} are satisfied, for every $u\in\mc{U}_x^{\min}$ we can find a neighborhood $\mc{V}_u\subset\Omega_{x_0}^T$ of $u$, fix $\lambda_u\in\Xi_x^u$, and construct accordingly a compact $k_O$-dimensional ball $A_u$, centered at $\lambda_u$ and transversal to $\Xi_x^u$, such that (Corollary \ref{cor:ball})
\begin{equation}
A_u\cap\Xi_y^v\neq\emptyset\quad\textrm{for every }v\in\mc{V}_u,\:\,\textrm{and with }y=\Endp{(v)}.
\end{equation}
Since $\mc{U}_x^{\min}$ is compact (Definition \ref{defi:closedMinRank}), we can choose finitely many elements $u_1,\dotso, u_l$ in $\mc{U}_x^{\min}$ such that
\begin{equation}
\mc{U}_x^{\min}\subset \bigcup_{i=1}^l\mc{V}_{u_i}.
\end{equation}
The union $A_{u_1}\cup\dotso\cup A_{u_l}$ is again of positive codimension. Moreover, for every sequence $x_n$ of fair points converging to $x$, whose associated sequence of optimal controls $u_n$ (by uniqueness of the optimal control, necessarily $u_n\in\mc{U}_{x_n}^{\min}$) converges to some $v\in \mc{V}_{u_i}\cap \mc{U}_x^{\min}$, we have that $A_{u_i}$ is also transversal to $\Xi_{x_n}^{u_n}$. In particular, possibly enlarging the ball $A_{u_i}$, we can assume that
\begin{equation}
A_{u_i}\cap\Xi_{x_n}^{u_n}\neq \emptyset,\quad\textrm{for every }n\in\mathbb{N}.
\end{equation}
For any fair point $z\in\Sigma_f\cap O$, the optimal control admits a normal lift, and we have the equality
\begin{equation}
\mc{E}_{x_0}^T(\Xi_{z}^{u})=z,
\end{equation}
where $\mc{E}_{x_0}^T$ is the exponential map with base point $x_0$ at time $T$ of Definition \ref{def:ExponentialMap}, so that we eventually deduce the inclusion:
\begin{equation}\label{eq:Sard}
\Sigma_f\cap O\subset \mc{E}_{x_0}^T\left(A_{u_1}\cup\dotso\cup A_{u_l}\right).
\end{equation}
The set on the right-hand side is closed, being the image of a compact set; moreover, it is of measure zero by the classical Sard Lemma \cite{Ste64}, as it is the image of a set of positive codimension by construction. Since the set $\Sigma_f\cap O$ is dense in $O$ by Corollary \ref{cor:DensityFair}, passing to the closures in \eqref{eq:Sard} we conclude that $\textrm{meas}(O)=0$, which is impossible.
\end{proof}
Combining now Lemma \ref{lemma:ContNearReg} and Theorem \ref{thm:Reg} we obtain the following (cf.\ Theorem \ref{t:main2}).
\begin{cor}\label{cor:ContOpenDense}
The set $\Sigma_t$ of tame points is open and dense in $\Int{A_{x_0}^T}$.
\end{cor}
\section{Main theorems}
\setcounter{equation}{0}
\subsection{ The steady Euler equations with force}
Here we are concerned with the steady Euler equations on $\Bbb R^N$ with
force.
\bb\label{main}
\left\{ \aligned & ( v\cdot \nabla ) v = -\nabla p +\Phi ,\\
&\mathrm{div} \, v=0,
\endaligned
\right.
\ee
where $v=v(x)=(v_1(x),\cdots, v_N(x))$ is the velocity, and $p=p(x)$ is the pressure.
The force function $\Phi[v]:\Bbb R^N\to \Bbb R^N$ satisfies the single-signedness condition described below.
We study Liouville-type property of the solutions to
(\ref{main}) under this condition.
Let us fix $N\geq 2$, $k\geq 0$. Here we assume that the continuous function
$$\Phi[v] (x):=\Phi_k\left(x, v(x), Dv(x), \cdots, D^k v(x)\right)$$ for some
$\Phi_k: \Bbb R^M \to \Bbb R^N$ for the appropriate $M(N,k)$,
satisfies the following single-signedness condition:
\bb\label{da1}
\mbox{either $ \Phi[v] (x)\cdot v(x)\geq 0$ or $ \Phi[v](x)\cdot
v(x)\leq 0$ for all $x\in \Bbb R^N$,}
\ee
and
\bb\label{da2}
\mbox{ $\Phi[v](x)\cdot v(x)=0$ if and only if $v(x)=0$}.
\ee
For such
given $\Phi$ we consider the system (\ref{main}).
Note that when $\Phi[v]=-v$ the system (\ref{main})-(\ref{da2}) becomes the usual steady
Euler equations with a damping term. We remark that the damped
Euler equations correspond to a special case of the self-similar
Euler equations (see the Appendix below for more details).
More generally, $\Phi[v](x)= G(x, v(x),\cdots , D^k v(x))v(x)$ with a
scalar function
$G(x, v(x),\cdots , D^k v(x))\lessgtr 0$ satisfies (\ref{da1})-(\ref{da2}). We will prove
a Liouville-type property for the system
(\ref{main})-(\ref{da2}) under quite mild decay conditions at
infinity on the solutions. More specifically we will prove the
following.
\begin{theorem}
Let $k\geq 0$, and $v$ be a $C^k(\Bbb R^N)$ solution of (\ref{main})-(\ref{da2}) with $\Phi=\Phi[v]$. Suppose there exists $q\in
(\frac{3N}{N-1} , \infty)$ such that
\bb\label{13a}
|v|^2+|p|\in L^{\frac{q}{2}}(\Bbb R^N).
\ee
Then, $v=0$.
\end{theorem}
{\em Remark 1.1 } If $\Phi$ satisfies the extra condition
$\mathrm{div}\, \Phi =0$, then the condition $p\in L^{\frac{q}{2}} (\Bbb R^N)$ can be replaced by
the well-known velocity-pressure relation of the incompressible Euler and Navier-Stokes equations,
$$p(x)=\sum_{j,k=1}^N R_j R_k (v_jv_k)(x) $$
with the Riesz transform $R_j$, $j=1,\cdots,N,$ in $\Bbb R^N$
(\cite{ste}), which holds under the condition that $p(x)\to 0$ as $|x|\to \infty$.
In this case the $L^{\frac{q}{2}}$ estimate of the pressure follows from the
$L^q$ estimate for the velocity by the Calderon-Zygmund inequality,
\bb\label{cz}\|p\|_{L^{\frac{q}{2}}}\leq C\sum_{j,k=1}^N \|R_jR_k v_jv_k \|_{L^{\frac{q}{2}}}\leq C
\|v\|_{L^{q}}^2 \quad 2<q<\infty.
\ee
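As a side remark, the representation formula $p=\sum_{j,k}R_jR_k(v_jv_k)$ can be obtained by a formal sketch (assuming enough decay to invert the Laplacian), taking the divergence of the first equation of (\ref{main}) when $\mathrm{div}\,\Phi=0$:

```latex
% Taking the divergence of (v.\nabla)v = -\nabla p + \Phi,
% with div v = 0 and div \Phi = 0:
\sum_{j,k=1}^N\partial_j\partial_k(v_jv_k)=-\Delta p
\quad\Longrightarrow\quad
p=(-\Delta)^{-1}\sum_{j,k=1}^N\partial_j\partial_k(v_jv_k)
=\sum_{j,k=1}^N R_jR_k(v_jv_k),
% since R_jR_k = \partial_j\partial_k(-\Delta)^{-1}; the harmonic ambiguity is
% removed by the condition p(x) -> 0 as |x| -> \infty.
```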
{\em Remark 1.2 } The theorem implies that $\mathrm{curl}\, \Phi[0]=0$ is a necessary condition for the well-posedness of the
problem, namely for $v=0$ to be the unique solution of the equations.
\subsection{ The steady Navier-Stokes equations in $\Bbb R^3$}
Here we study the following system of steady Navier-Stokes equations in $\Bbb R^3$.
$$
(NS)\left\{ \aligned & ( v\cdot \nabla ) v = -\nabla p +\Delta v,\\
&\mathrm{div} \, v=0,
\endaligned
\right.
$$
We consider here the generalized solutions of the system (NS),
satisfying
\bb\label{diri}
\int_{\Bbb R^3} |\nabla v|^2dx <\infty,
\ee
and
\bb\label{12c}
\lim_{|x|\to \infty} v(x)= 0.
\ee
It is well-known that a generalized solution to (NS) belonging to $ W^{1,2}_{loc}(\Bbb R^3)$ is
smooth (see e.g.\ \cite{gal}). Therefore, without loss of
generality, we can assume that our solutions to (NS) satisfying (\ref{diri}) are smooth.
The uniqueness question, or equivalently the question of Liouville
property of solution for the system (NS) under the assumptions
(\ref{diri}) and (\ref{12c}) is a long standing open problem.
On the other hand, it is well-known that uniqueness of the solution holds in the
class $L^{\frac92} (\Bbb R^3)$; namely, a
smooth solution to (NS) satisfying
$v\in L^{\frac92} (\Bbb R^3)$ must be $v=0$ (see Theorem 9.7 of \cite{gal}). We assume here a slightly stronger condition than
(\ref{diri}), but one with the same scaling property, to deduce our Liouville-type result.
\begin{theorem}
Let $v$ be a smooth solution of (NS) satisfying (\ref{12c}) and
\bb\label{sdiri}
\int_{\Bbb R^3} |\Delta v|^{\frac65}\, dx<\infty.
\ee
Then, $v=0$ on
$\Bbb R^3$.
\end{theorem}
{\em Remark 1.3 } Under the assumption (\ref{12c}) we have the
inequalities with the norms of the {\em same scaling properties,}
$$
\|v\|_{L^6}\leq C\|\nabla v\|_{L^2} \leq C \|D^2 v\|_{L^{\frac65}}
\leq C\|\Delta v\|_{L^\frac65}<\infty
$$
due to the Sobolev and the Calderon-Zygmund inequalities. Thus,
(\ref{sdiri}) implies (\ref{diri}). There is, however, no mutual
implication between Theorem 1.2 and
the above-mentioned $L^{\frac92}$ result, although our assumption (\ref{sdiri}) corresponds to $L^6(\Bbb R^3)$
at the level of scaling. \\
\section{Proof of the Main Theorems }
\setcounter{equation}{0}
\noindent{\bf Proof of Theorem 1.1 } We denote
$$[f]_+=\max\{0, f\}, \quad [f]_-=\max\{0, -f\},$$
and
$$
D_\pm:=\left\{ x\in \Bbb R^N\, \Big|\,\left[p(x)+\frac12
|v(x)|^2\right]_\pm >0\right\}
$$
respectively. We introduce the radial cut-off function $\sigma\in
C_0 ^\infty(\Bbb R^N)$ such that
\bb\label{16}
\sigma(|x|)=\left\{ \aligned
&1 \quad\mbox{if $|x|<1$},\\
&0 \quad\mbox{if $|x|>2$},
\endaligned \right.
\ee
and $0\leq \sigma (x)\leq 1$ for $1<|x|<2$. Then, for each $R
>0$, we define
$$
\s \left(\frac{|x|}{R}\right):=\s_R (|x|)\in C_0 ^\infty (\Bbb R^N).
$$
We multiply the first equation of (\ref{main}) by $v$ to obtain
\bb\label{main1}
v\cdot \Phi =v\cdot \nabla \left(p+\frac12 |v|^2\right).
\ee
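For completeness, (\ref{main1}) follows by dotting the first equation of (\ref{main}) with $v$ and using a standard identity for the convective term (summation over repeated indices understood):

```latex
% Dotting (v.\nabla)v = -\nabla p + \Phi with v:
\left[(v\cdot\nabla)v\right]\cdot v
= v_j(\partial_j v_i)\,v_i
= v\cdot\nabla\left(\tfrac12|v|^2\right),
\qquad\mbox{hence}\qquad
v\cdot\Phi = v\cdot\nabla\left(p+\tfrac12|v|^2\right).
```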
Next, we multiply (\ref{main1}) by
$ \left[p+\frac12 |v|^2\right]_+
^{\frac{qN-q-3N}{2N}}\s_R \, \mathrm{sign}\{v\cdot \Phi \} $ and
integrate over $\Bbb R^N$ to have
\bq\label{euler1}
\lefteqn{\int_{\Bbb R^N} \left[p+\frac12
|v|^2\right]_+^{\frac{qN-q-3N}{2N}}\left|v\cdot \Phi\right| \s_{R}\,
dx}\n \\
&&=\mathrm{sign}\{ v\cdot \Phi\}\int_{\Bbb R^N} \left[p+\frac12
|v|^2\right]_+^{\frac{qN-q-3N}{2N}}\s_R v \cdot\nabla
\left(p+\frac12 |v|^2\right) \,dx\n \\
&&:=I
\eq
We estimate $I$ as follows.
\bqn
|I|&=&\left|\int_{\Bbb R^N} \left[p+\frac12 |v|^2\right]_+^{\frac{qN-q-3N}{2N}} \s_R
v\cdot\nabla \left(p+\frac12 |v|^2 \right)\, dx\right| \n \\
&=&\left|\int_{D_+} \left[p+\frac12
|v|^2\right]_+^{\frac{qN-q-3N}{2N}} \s_R
v\cdot\nabla \left[p+\frac12 |v|^2 \right]_+\, dx\right|\n \\
&=& \frac{2N}{qN-q-N} \left|\int_{D_+}\si v\cdot\nabla \left[p+\frac12
|v|^2 \right]_+^{\frac{qN-q-N}{2N}} \, dx \right|\n \\
&=&\frac{2N}{qN-q-N} \left| \int_{D_+}\left[p+\frac12 |v|^2
\right]_+^{\frac{qN-q-N}{2N}} v\cdot\nabla \si \, dx\right| \n \\
&\leq&
\frac{C\|\nabla\s\|_{L^\infty} }{R} \left(\int_{\Bbb R^N}
(|p|+|v|^2)^{\frac{q}{2}} \, dx\right)^{\frac{qN-q-N}{qN}}
\|v\|_{L^q(R\leq |x|\leq 2R)} \times\n \\
&&\hspace{.5in} \times\left(\int_{\{ R\leq |x|\leq 2R\}} \,
dx \right)^{\frac1N}\n \\
&\leq& C\|\nabla\s\|_{L^\infty}
\left(\|p\|_{L^{\frac{q}{2}}} +\|v\|_{L^q}^2
\right)^{\frac{qN-q-N}{qN}}\|v\|_{L^q(R\leq |x|\leq 2R)}\to 0
\eqn
as $R\to \infty$.
Therefore, passing $R\to \infty$ in (\ref{euler1}), we obtain
\bb\label{116a}
\int_{\Bbb R^N} \left[p+\frac12
|v|^2\right]_+^{\frac{qN-q-3N}{2N}} \left|v\cdot \Phi\right| \, dx
=0 \ee by the Lebesgue Monotone Convergence Theorem.
Similarly, multiplying (\ref{main1}) by $ \left[p+\frac12
|v|^2\right]_-
^{\frac{qN-q-3N}{2N}} \s_R $ and integrating over $\Bbb R^N$, we
deduce as above,
\bq\label{116aa}
\lefteqn{\int_{\Bbb R^N}
\left[p+\frac12 |v|^2\right]_-^{\frac{qN-q-3N}{2N}} \left|v\cdot \Phi\right| \s_{R}\, dx}\hspace{.3in}\n \\
&&=-\int_{\Bbb R^N} \left[p+\frac12 |v|^2\right]_-^{\frac{qN-q-3N}{2N}} \s_R
v\cdot\nabla \left(p+\frac12 |v|^2 \right)\, dx \n \\
&&=\int_{\Bbb R^N} \left[p+\frac12
|v|^2\right]_-^{\frac{qN-q-3N}{2N}} \s_R
v\cdot\nabla \left[p+\frac12 |v|^2 \right]_-\, dx\n \\
&&\leq C\|\nabla\s\|_{L^\infty}\left(\|p\|_{L^{\frac{q}{2}}}
+\|v\|_{L^q}^2 \right)^{\frac{qN-q-N}{qN}}\|v\|_{L^q(R\leq |x|\leq
2R)}\to 0\n \\
\eq
as $R\to \infty$. Hence,
\bb\label{116ab}\int_{\Bbb R^N}
\left[p+\frac12 |v|^2\right]_-^{\frac{qN-q-3N}{2N}}\left|v\cdot
\Phi\right|\, dx =0 \ee by the Lebesgue Monotone Convergence Theorem
again.
Let us define
$$ \mathcal{S}=\{ x\in \Bbb R^N\, |\, v(x)\neq0\}. $$
We note that $\mathcal{S}$ is an open set in $\Bbb R^N$.
Suppose $\mathcal{S}\neq \emptyset$. Then, (\ref{116aa}) and
(\ref{116ab}) together with (\ref{da1})-(\ref{da2}) imply
$$
\left[p(x)+\frac12 |v(x)|^2\right]_+=\left[p(x)+\frac12
|v(x)|^2\right]_-=0\quad \forall x\in \mathcal{S}.
$$
Namely,
$$
p(x)+\frac12 |v(x)|^2=0\quad \forall x\in \mathcal{S}.
$$
Since $p+\frac12 |v|^2$ vanishes identically on the open set $\mathcal{S}$, we also have
$ \nabla (p+\frac12 |v|^2) (x)=0$ for all $x\in \mathcal{S}$.
From (\ref{main1}) this implies
\bb\label{119}
\Phi[v](x)\cdot v(x)=0\qquad \forall x\in \mathcal{S}.
\ee
By (\ref{da1})-(\ref{da2}), however, $\Phi[v](x)\cdot v(x)\neq 0$ whenever $v(x)\neq 0$, so (\ref{119}) contradicts the definition of $\mathcal{S}$. Therefore $\mathcal{S}=\emptyset$, namely
$v=0$ on $\Bbb R^N$.
$\square$\\
\ \\
Next, in order to prove Theorem 1.2, we recall the following result proved by Galdi (see Theorem X.5.1 of \cite{gal} for a more general version).
\begin{theorem}
Let $v(x)$ be a generalized solution of (NS) satisfying (\ref{diri}) and (\ref{12c}) and $p(x)$ be the associated pressure, then there exists $p_1\in \Bbb R$ such that
$$\lim_{|x|\to \infty} |D^\alpha v(x)|+\lim_{|x|\to \infty} |D^\alpha \left(p(x)-p_1\right)|=0 $$
uniformly for all multi-index $\alpha=(\alpha_1, \alpha_2, \alpha_3)\in [\Bbb N\cup\{0\}]^3$.
\end{theorem}
\noindent{\bf Proof of Theorem 1.2 }
Under the
assumption (\ref{sdiri}) and Remark 1.1, Theorem IX.6.1 of
\cite{gal} implies
that
\bb\label{decay}
\lim_{|x|\to \infty}|p(x)-p_1 |=0
\ee
for some constant $p_1$.
Therefore, if we set
$$
\label{ber}Q(x):=\frac12 |v(x)|^2 +p(x)-p_1,
$$
then
\bb\label{13}
\lim_{|x|\to \infty} |Q(x)|=0.
\ee
As before we denote $[f]_+=\max\{0, f\}, \quad [f]_-=\max\{0,
-f\}.$ Given $\vare > 0$, we define
\bqn
D_+^\vare&=&\left\{ x\in \Bbb
R^3\, \Big|\,\left[Q(x)-\vare\right]_+>0\right\},\n
\\
D_-^\vare&=&\left\{ x\in \Bbb R^3\,
\Big|\,\left[Q(x)+\vare\right]_->0\right\}.
\eqn
respectively. Note
that (\ref{13}) implies that $D_\pm^\vare$ are bounded sets in $\Bbb
R^3$. Moreover,
\bb\label{obs} Q\mp\vare
=0\quad\mbox{on}\quad \partial D_\pm^\vare
\ee
respectively.
Also, thanks to the Sard theorem combined with the implicit function
theorem, the boundaries $\partial D_\pm^\vare$ are smooth level surfaces in $\Bbb
R^3$, except for those values of $\vare>0$ corresponding to the
critical values of $z=Q(x)$, which form a set of zero Lebesgue
measure. It is understood that our values of $\vare$ below avoid these
exceptional ones. We write the system (NS) in the form,
\bb\label{17}
-v\times\mathrm{ curl} \, v =-\nabla Q +\Delta v.
\ee
Let us
multiply (\ref{17}) by $ v \left[Q-\vare\right]_+$, and integrate it
over $\Bbb R^3$. Then, since
$
v\times \mathrm{curl}\,v \cdot v =0, $
we have
\bq\label{18}
0&=& -\int_{\Bbb R^3} \left[Q-\vare\right]_+ v \cdot\nabla
\left(Q-\vare\right) \,dx+\int_{\Bbb R^3}\left[Q-\vare\right]_+ v\cdot \Delta v\, dx\n\\
&:=&I_1 +I_2 .
\eq
Integrating by parts, using (\ref{obs}), we obtain
\bqn
I_1=-\int_{D_+^\vare} \left(Q-\vare\right)
v\cdot\nabla \left(Q -\vare\right)\, dx= -\frac{1}{2}
\int_{D_+^\vare}
v\cdot\nabla \left(Q-\vare \right)^{2} \, dx
=0.
\eqn
Using
\bb\label{f2aa}
v\cdot \Delta v=\Delta (\frac12|v|^2)-|\nabla v|^2,
\ee
and the well-known formula for the Navier-Stokes equations,
\bb\label{f2a}
\Delta p=|\o|^2-|\nabla v |^2,
\ee
we have
\bq\label{110}
I_2&=&-\int_{\Bbb R^3} |\nabla v|^2\left[ Q-\vare\right]_+ \, dx +\int_{\Bbb R^3}\Delta \left(\frac12 |v|^2\right)
\left[Q-\vare\right]_+ \, dx \n \\
&=&-\int_{\Bbb R^3}|\o|^2\left[Q-\vare\right]_+\, dx +\int_{\Bbb R^3} \Delta \left(Q-\vare\right)
\left[ Q-\vare\right]_+
\, dx\n \\
&:=&J_1+J_2.
\eq
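The identity (\ref{f2a}) used in the last step can be checked by taking the divergence of (NS); a sketch, writing $\nabla v=S+A$ with $S$ the symmetric and $A$ the antisymmetric part:

```latex
% Taking the divergence of (NS): the viscous term is divergence-free
% since div v = 0, and div[(v.\nabla)v] = \partial_i v_j \partial_j v_i, so
\Delta p=-\partial_i v_j\,\partial_j v_i=-\left(|S|^2-|A|^2\right),
% since \partial_i v_j \partial_j v_i = tr((\nabla v)^2) = |S|^2-|A|^2;
% then, using |A|^2 = \tfrac12|\o|^2 and |\nabla v|^2 = |S|^2+|A|^2,
\Delta p=|A|^2-|S|^2=2|A|^2-\left(|S|^2+|A|^2\right)=|\o|^2-|\nabla v|^2 .
```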
Integrating by parts, we transform $J_2$ into
\bq\label{110a}
J_2=\int_{D_+^\vare} \Delta\left(Q-\vare\right)
\left( Q-\vare\right)
\,dx=-
\int_{D_+^\vare}\left|\nabla\left(Q-\vare\right)\right|^2
\,dx.
\eq
Thus, the derivations (\ref{18})-(\ref{110a})
lead us to
\bb\label{dom}
0=\int_{D_+^\vare} |\o|^2\left| Q-\vare\right|\,
dx+\int_{D_+^\vare}\left|\nabla\left(Q-\vare\right)\right|^2
\,dx
\ee for all $\vare>0$. The vanishing of the second term of
(\ref{dom}) implies
$$
\left[Q-\vare\right]_+=C_0\quad \mbox{on}\quad D_+^\vare
$$ for a constant $C_0$. From the fact (\ref{obs})
we have $C_0=0$, and $[Q-\vare]_+=0$ on $\Bbb R^3$, which holds for all $\vare >0.$
Hence,
\bb\label{plus}
\left[Q\right]_+=0 \quad\mbox{on $\Bbb R^3$}.
\ee
This shows that $Q\leq 0$ on $\Bbb R^3$. Suppose $Q=0$ on $\Bbb
R^3$. Then, from (\ref{17}), we have $v\cdot\Delta v=0$ on $\Bbb
R^3$. Hence,
$$ \Delta p=-\frac12\Delta |v|^2=-v\cdot\Delta v-|\nabla v|^2=-|\nabla
v|^2.
$$
Comparing this with (\ref{f2a}), we have $\o=0$. Combining this with
div $v=0$, we find that $v$ is a harmonic function in $\Bbb R^3$.
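The harmonicity just invoked follows from the componentwise vector Laplacian identity:

```latex
% For a vector field with curl v = 0 and div v = 0:
\Delta v = \nabla(\mathrm{div}\, v)-\mathrm{curl}\,(\mathrm{curl}\, v)
= \nabla\, 0-\mathrm{curl}\,\o = 0 .
```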
Thus, by (\ref{12c}) and the Liouville theorem for the harmonic
function, we have $v=0$, and we are done. Hence, without loss of generality, we may assume
$$0>\inf_{x\in \Bbb R^3} Q(x).
$$
Given $\delta>0$, we multiply (\ref{17}) by $ v
\left[Q+\vare\right]_- ^{\delta} $, and integrate it over $\Bbb
R^3$. Then, similarly to the above
we have
\bq\label{m18}
0&=& -\int_{\Bbb R^3} \left[Q+\vare\right]_-^{\delta} v \cdot\nabla
\left(Q+\vare\right) \,dx+\int_{\Bbb R^3} \left[Q+\vare\right]_-^{\delta} v\cdot \Delta v\, dx\n\\
&:=&I_1' +I_2' .
\eq
Observing $Q(x)+\vare=- \left[Q(x)+\vare\right]_-$ for all $x\in
D_-^\vare$, and integrating by parts, we obtain
\bqn
I_1'&=&\int_{D_-^\vare}\left[Q+\vare\right]_-^{\delta}
v\cdot\nabla \left[Q+\vare\right]_-\, dx\n \\
&=&\frac{1}{\delta+1} \int_{D_-^\vare} v\cdot\nabla \left[Q+\vare \right]_-^{\delta+1} \, dx =0.
\eqn
Thus, using (\ref{f2aa}), we have
\bq\label{m110}
0=-\int_{D_-^\vare} |\nabla v|^2\left[Q+\vare\right]_-^{\delta} \,
dx+\frac12 \int_{D_-^\vare}\left[ Q+\vare\right]_- ^{\delta}\Delta |v|^2
\, dx \eq
Now, we have the point-wise convergence
$$\left[Q(x)+\vare\right]_-^\delta \to 1 \quad \forall x\in D_-^\vare
$$
as $\delta\downarrow 0$. Since
\bqn
\int_{\Bbb R^3}| v\cdot \Delta v|\, dx&\leq& \|v\|_{L^6}\|\Delta
v\|_{L^{\frac65}}\leq C \|\nabla
v\|_{L^2}\|\Delta v\|_{L^{\frac65}}\n \\
&\leq &C \|\Delta v\|_{L^{\frac65}}^2<\infty,
\eqn
we have
\bb\label{elone}\Delta |v|^2=2 v\cdot \Delta v +2 |\nabla v|^2 \in L^1(\Bbb R^3).
\ee
Hence, passing $\delta\downarrow 0$ in
(\ref{m110}),
by the dominated convergence
theorem, we obtain
\bb\label{m110a}
\int_{D_-^\vare} |\nabla v|^2\,dx= \frac12\int_{D_-^\vare}
\Delta |v|^2\, dx,
\ee
which holds for all $\vare>0$. For a sequence $\{\vare_n\}$ with
$\vare_n \downarrow 0$ as $n\to \infty$, we observe
$$ D_-^{\vare_n }\uparrow \cup_{n=1}^\infty D_-^{\vare_n}=D_-:=
\{ x\in \Bbb R^3\, |\, Q(x)<0\}.
$$
Thus, observing (\ref{elone}) again, we can apply the dominated
convergence theorem in passing $\vare \downarrow 0$ in (\ref{m110a})
to deduce
\bb\label{mmdom1}
\int_{D_-} |\nabla v|^2\, dx=\frac12\int_{D_-}\Delta |v|^2\, dx.
\ee
Now, thanks to (\ref{plus}) the set
$$ S=\{ x\in \Bbb R^3\, |\, Q(x)=0\} $$
consists of critical (maximum) points of $Q$, and hence
$ \nabla Q(x)=0$ for all $x\in S,$ and the system (\ref{17}) reduces to
\bb\label{redu}
-v\times \o=\Delta v \quad\mbox{on}\quad S.
\ee
Multiplying (\ref{redu}) by $v$, we have that
$$ 0=v\cdot \Delta v=\frac12\Delta |v|^2-|\nabla v|^2\quad\mbox{on}\quad S.
$$
Therefore, one can extend the domain of integration in
(\ref{mmdom1}) from $D_-$ to $D_-\cup S=\Bbb R^3$, and therefore
\bb\label{mmdom2}
\int_{\Bbb R^3} |\nabla v|^2\, dx=\frac12\int_{\Bbb R^3}\Delta
|v|^2\, dx.
\ee
We now claim the right hand side of (\ref{mmdom2}) vanishes.
Indeed, since $\Delta |v|^2\in L^1(\Bbb R^3)$ from (\ref{elone}), applying the dominated
convergence theorem, we have
\bqn
\left|\int_{\Bbb R^3} \Delta |v|^2\, dx\right|&=&\lim_{R\to \infty} \left|\int_{\Bbb R^3}
\Delta |v|^2 \si \, dx\right|
=\lim_{R\to \infty} \left|\int_{\Bbb R^3}
|v|^2 \Delta\si \, dx\right|\n \\
&\leq&\lim_{R\to \infty}\int_{\Bbb R^3}
|v|^2 |\Delta\si| \, dx\n \\
&\leq& \lim_{R\to \infty}\frac{\|D^2\s\|_{L^\infty}}{R^2}
\|v\|_{L^6(R\leq |x|\leq
2R)}^2
\left(\int_{\{R\leq |x|\leq
2R\}} dx\right)^{\frac23}\n \\
&\leq &C\|D^2\s\|_{L^\infty}\lim_{R\to \infty}
\|v\|_{L^6(R\leq |x|\leq
2R)}^2=0
\eqn
as claimed.
Thus (\ref{mmdom2}) implies that
$$
\nabla v=0 \quad\mbox{on} \quad \Bbb R^3,
$$
and $v=$ constant. By (\ref{12c}) we have $v=0$.
$\square$\\
\ \\
\noindent{\em Remark after the proof of Theorem 1.2: } The first part of the
above proof, showing $ [Q]_+=0 $ can be also done by applying the
maximum principle, which follows from the following identity for $Q$,
$$ -\Delta Q+v\cdot \nabla Q =-|\o|^2 \leq 0.
$$
I do not think, however, that the maximum principle can also be applied
to the proof of the second part, showing $[Q]_-=0$, which is more
subtle than the first part. The above proof shows that the
argument I used for this second part can also be
adapted to the first part without using the
maximum principle, which exhibits consistency.\\
\[ \mbox{\large\bf Appendix}\]
\section*{\refname\markboth{\MakeUppercase{\refname}}{\MakeUppercase{\refname}}}%
\addcontentsline{toc}{section}{\refname}%
}
\def\<{\langle}
\def\>{\rangle}
\deft_w{t_w}
\deft_p{t_p}
\newcommand{\textgr}[1]{\textcolor{red}{#1}}
\newcommand{\textgx}[1]{\textcolor{black}{#1}}
\title{Spatial correlations of elementary relaxation events in glass--forming liquids}
\author[1,*]{Raffaele Pastore}
\author[1]{Antonio Coniglio}
\author[2,1]{Massimo Pica Ciamarra}
\affil[1]{
CNR--SPIN, Dipartimento di Scienze Fisiche,
Universit\'a di Napoli Federico II, Italy
}
\affil[2]{
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, \newline Nanyang Technological University, Singapore
}
\affil[*] {Corresponding author: [email protected]}
\date{}
\begin{document}
\twocolumn[
\maketitle
\begin{onecolabstract}
The dynamical facilitation scenario, by which localized relaxation events promote
nearby relaxation events in an avalanching process, has been suggested as
the key mechanism connecting the microscopic and the macroscopic dynamics of structural glasses.
Here we investigate the statistical features of this process via the numerical
simulation of a model structural glass.
First we show that the relaxation dynamics of the system occurs through particle jumps
that are irreversible, and that cannot be decomposed into smaller irreversible events.
Then we show that each jump does actually trigger an avalanche. The characteristics
of these avalanches change on cooling, suggesting that the relaxation
dynamics crosses over from a noise-dominated regime, where jumps do not
trigger other relaxation events, to a regime dominated by the facilitation
process, where each jump triggers further relaxation events.
\end{onecolabstract}
]
\clearpage
\section{Introduction}
Structural glasses, which are amorphous solids obtained by
cooling liquids below their melting temperature while avoiding crystallization,
pose an array of questions that have been challenging researchers
over the last decades~\cite{RevBerthier,BiroliGarrahn,Kirkpatrick2015}.
These include the nature of the glass transition,
the origin of the extraordinary sensitivity of the relaxation time
to temperature, the Boson peak, and the relaxation dynamics.
Here we focus on the fact that there is not
yet an established connection between the short-time single-particle motion
and the overall macroscopic dynamics.
When observed at the scale of a single particle,
the motion of structural glasses is well known to be intermittent.
This is commonly rationalized considering each particle
to rattle in the cage formed by its neighbors, until
it jumps to a different cage~\cite{Intermittence}.
Conversely, when the motion is observed at the macroscale,
a spatio-temporal correlated dynamics emerges~\cite{DHbook}.
Dynamical facilitation~\cite{GarrahanChandle2002,GarrahanChandle2003,GarrahanChandle2010},
by which a local relaxation event facilitates the occurrence
of relaxation events in its proximity, has been suggested as a key mechanism
connecting the microscopic and the macroscopic dynamics. Indeed,
kinetically constrained lattice models~\cite{RitortSollich}, which provide the conceptual
framework of the dynamical facilitation scenario, reproduce much of the
glassy phenomenology and are at the basis of a purely dynamical interpretation of
the glass transition. Different numerical approaches have
tried to identify irreversible relaxation
events~\cite{Heuer, Vollmayr, Bing, Onuki, WidmerCooper, Procaccia, Yodh, Baschnagel, Makse, Arenzon, del gado},
and both numerical~\cite{Candelier2, Chandler_PRX} and experimental works~\cite{Candelier1, Sood2014}
revealed signatures of a dynamical facilitation scenario.
Here we provide novel insights into the dynamical facilitation
mechanisms through the numerical investigation of a model glass former.
We show that it is possible to identify single particle jumps
that are {\it elementary} relaxations, being short-lasting
irreversible events that cannot be decomposed
into a sequence of smaller irreversible events.
We then clarify that these jumps lead to spatio-temporal correlations
as each jump triggers subsequent jumps in an avalanching process.
The statistical features of the avalanches change on cooling.
Around the temperature where the Stokes-Einstein relation first breaks down,
the dynamics shows a crossover from
a high temperature regime, in which the avalanches do not
spread and the dynamics is dominated by thermal noise,
to a low temperature regime, where the avalanches percolate.
These results suggest interpreting dynamical facilitation as a spreading
process~\cite{spread}, and might open the way to the development of dynamical
probabilistic models describing the relaxation of glass formers.
\section{Methods\label{sec:method}}
We have performed NVT molecular dynamics
simulations~\cite{LAMMPS} of a two-dimensional
50:50 binary mixture of $2N = 10^3$ disks,
with a diameter ratio $\sigma_{L}/\sigma_{S} =1.4$,
known to inhibit crystallization, at a fixed area fraction $\phi = 1$
in a box of side $L$.
Particles interact via a soft potential~\cite{Likos}, $V(r_{ij}) = \epsilon
\left((\sigma_{ij}-r_{ij})
/\sigma_L\right)^\alpha \Theta(\sigma_{ij}-r_{ij})$, with $\alpha=2$ (Harmonic).
Here $r_{ij}$ is the interparticle separation and $\sigma_{ij}$ the average
diameter of the interacting particles.
This interaction and its variants (characterized by different values of $\alpha$)
are largely used to model dense colloidal systems,
such as foams\cite{Durian}, microgels\cite{Zaccarelli} and glasses\cite{Berthier_Witten, Manning_Liu}.
Units are reduced so that $\sigma_{L}=m=\epsilon=k_B=1$,
where $m$ is the mass of both particle species and $k_B$ the Boltzmann
constant. The two species behave in a qualitatively analogous way,
and all data presented here refer to the smallest component.\\
{\it Cage--jump detection algorithm.}
We segment the trajectory of each particle in a series of cages
interrupted by jumps using the algorithm of Ref.~\cite{SM14},
following earlier approaches~\cite{Vollmayr}.
\textgx{Briefly, we consider that,
on a timescale $\delta$ of few particle collisions,
the fluctuation $S^2(t)$ of a caged particle position
is of the order of the Debye--Waller factor (DWF) $\<u^2\>$.
By comparing $S^2(t)$ with $\<u^2\>$ we therefore consider a particle as caged
if $S^2(t) < \<u^2\>$, and as jumping otherwise. Practically,
we compute $S^2(t)$ as $\< (r(t)-\<r(t)\>_\delta)^2\>_\delta$,
where the averages are computed in the time interval $[t-\delta:t+\delta]$,
with $\delta \simeq 10t_b$, and $t_b$ is the ballistic time.
At each temperature DWF is defined according to Ref.~\cite{Leporini},
$\<u^2\> = \<r^2(t_{DW})\>$,
where $t_{DW}$ is the time of minimal diffusivity of the system, i.e. the time
at which the derivative of $\log \<r^2(t)\>$ with respect to $\log(t)$ is minimal.
At each instant the algorithm allows us to identify the jumping particles and the caged ones.
We stress that in this approach a jump is a process with a finite duration.
Indeed, by monitoring when $S^2$ equals $\<u^2\>$, we are able to
identify the time at which each jump (or cage) starts and ends.
We thus have access to the time $t_p$ a particle
persists in its cage before making its first jump after an arbitrarily chosen $t=0$ (persistence time),
to the waiting time $t_w$ between subsequent jumps of the same particle (cage duration),
and to the duration $\Delta t_J$ and the length $\Delta r_J$ of each jump.
}
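As a concrete illustration, the segmentation described above can be sketched in a few lines. The following is a minimal NumPy version, not the analysis code used for the figures; the window half-width \texttt{delta} and the Debye--Waller factor \texttt{u2} are assumed inputs rather than the calibrated values used in the paper.

```python
import numpy as np

def segment_trajectory(r, u2, delta):
    """Label each frame of a single-particle trajectory as caged (False)
    or jumping (True), comparing S^2(t) with the Debye-Waller factor <u^2>.

    r     : array of shape (T, d), particle positions over T frames
    u2    : Debye-Waller factor <u^2> (scalar)
    delta : half-width (in frames) of the averaging window
    """
    T = len(r)
    jumping = np.zeros(T, dtype=bool)
    for t in range(T):
        lo, hi = max(0, t - delta), min(T, t + delta + 1)
        window = r[lo:hi]
        mean = window.mean(axis=0)
        # S^2(t): positional fluctuation within the window [t-delta, t+delta]
        s2 = ((window - mean) ** 2).sum(axis=1).mean()
        jumping[t] = s2 > u2
    return jumping
```

From the boolean array one can read off cage durations $t_w$ (runs of \texttt{False} between jumps), persistence times $t_p$, and the duration and length of each jump (runs of \texttt{True}).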
\section{Results}
\subsection{Jumps as irreversible elementary processes \label{res1}}
The idea of describing the relaxation of structural glasses
as consisting of a sequence of irreversible processes is not new,
and different approaches have been followed to identify these events.
For instance, irreversible events have been
associated to change of neighbors~\cite{Onuki,WidmerCooper,Procaccia},
to displacements overcoming a threshold in a fixed time laps~\cite{Chandler_PRX},
to processes identified through clustering algorithm applied to the particle trajectories~\cite{Candelier2, Candelier1},
or to more sophisticated approaches~\cite{Baschnagel}.
We notice that since at long time particles move diffusively, all procedures that
coarse grains the particle trajectory enough will eventually identify
irreversible events. Here we show that the jumps we have identified
are irreversible, and we give evidence suggesting that these
can be considered as `elementary' irreversible events, i.e
that they are the smallest irreversible single--particle move,
at least in the range of parameters we have investigated.
Investigating both the model considered here~\cite{SM14},
as well as the 3d Kob-Andersen Lennard-Jones (3d KA LJ) binary mixture~\cite{SciRep} and
experimental colloidal glass~\cite{SM15}, we have previously shown that
the protocol defined in Sec.~\ref{sec:method} leads to the identification
of irreversible events. Indeed, the mean square displacement
of the particles increases linearly with the number of jumps,
allowing us to describe the dynamics as a continuous time random walk (CTRW)~\cite{CTRW}.
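The linear growth of the mean square displacement with the number of jumps is the hallmark of the CTRW picture, and is easy to reproduce in a toy setting. The sketch below uses synthetic fixed-length jumps in two dimensions (not the simulation data of the paper) and recovers $\<r^2\> \propto n\,\<\Delta r_J^2\>$ after $n$ jumps.

```python
import numpy as np

def msd_vs_jumps(n_particles, n_jumps, jump_len, rng):
    """Mean square displacement as a function of the number of jumps,
    for particles performing a 2d random walk of fixed-length steps."""
    angles = rng.uniform(0, 2 * np.pi, size=(n_particles, n_jumps))
    steps = jump_len * np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    positions = np.cumsum(steps, axis=1)   # position after each jump
    return (positions ** 2).sum(axis=-1).mean(axis=0)

rng = np.random.default_rng(1)
msd = msd_vs_jumps(n_particles=20000, n_jumps=50, jump_len=1.0, rng=rng)
# msd[n-1] is close to n * jump_len**2, i.e. linear in the number of jumps
```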
\begin{figure}[t!]
\begin{center}
\includegraphics*[scale=0.33]{fig1_bis.eps}
\end{center}
\caption{\label{fig:times}
Average persistence time, $\<t_p\>$, cage duration, $\<t_w\>$ and jump duration,
$\<\Delta t_J\>$, as a function of the temperature.
$\<t_w\>$ grows following the Arrhenius law $\<t_w\> \propto \exp\left(A/T\right)$ (red full line),
whereas $\<t_p\>$ is compatible with several super--Arrhenius laws.
\textgx{The black full line is, for example, a fit $\<t_p\> \propto \exp\left(A/T^2\right)$,
while the black dashed line is a Vogel--Fulcher law $\<t_p\> \propto \exp\left(B/(T-T_{0})\right)$,
predicting a divergence at a finite temperature $T_{0}\simeq 0.001$.
The arrow indicates the temperature $T_x=0.002$ where $\<t_p\>$ and $\<t_w\>$
decouple and the SE relation breaks down.}
Conversely, $\<\Delta t_J\>$ remains roughly constant on cooling.
}
\end{figure}
Within this approach two fundamental timescales are found,
the average persistence time $\<t_p\>$ and the average cage
duration $\<t_w\>$.
The former corresponds to the relaxation time at the wavelength of the order
of the jump length $\<\Delta r_J\>$, while the latter is related to the self diffusion constant,
$D\propto \<\Delta r_J^2\>/\<t_w\>$.
Fig.\ref{fig:times} shows that the two timescales are equal at high temperature, but decouple
at a temperature $T_x\simeq 0.002$,
\textgx{which marks the onset of the Stokes-Einstein (SE) breakdown
at the wavelength of the jump length.}
We find that $\<t_w\>$ shows an Arrhenius temperature dependence $\<t_w\> \propto \exp\left(A/T\right)$,
while $\<t_p\>$ increases with a faster super--Arrhenius behaviour
\textgx{(see the caption of Fig.\ref{fig:times})}.
It is worth noticing that the decoupling between the average persistence and waiting times
is known to control the breakdown of the SE relation at generic wavelengths,
and to induce temporal heterogeneities \cite{Garrahan_CTRW, SciRep}.
These findings suggest that $T_x$ may represent a crossover from
a localized to a more correlated relaxation process.
\textgx{A similar scenario has been recently reported for models
of atomic glass-forming liquids, where the SE relation breaks down and the
growth of dynamical heterogeneities markedly accelerates below a well-defined temperature $T_x$~\cite{Jaiswal}.}
\begin{figure}[t!]
\begin{center}
\includegraphics*[scale=0.33]{fig2.eps}
\end{center}
\caption{\label{fig:jump}
Mean squared jump length $\<\Delta r_{J}^2\>$ as a function of the jump
duration $\Delta t_J$
at different temperatures.
}
\end{figure}
We performed two investigations supporting the elementary nature of the jumps we have identified.
First, we have considered the change of the average jump duration $\<\Delta t_J\>$ on cooling,
as the duration of elementary relaxations is expected not to grow with the relaxation time.
Fig.~\ref{fig:times} shows that $\<\Delta t_J\>$ is essentially constant,
despite the relaxation time $\<t_p\>$ varying by orders of magnitude.
Indeed, at low temperature $\<t_p\>/\<\Delta t_J\> \gg 1$,
clarifying why we call them `jumps'.
Then we have considered how particles move while making a jump.
Fig.~\ref{fig:jump} illustrates that the mean squared jump length
grows subdiffusively as a function of the jump duration, with a subdiffusive
exponent that decreases on cooling.
Conversely, one would expect a diffusive behaviour
if jumps were decomposable into a series of irreversible steps.
These results support the identification of the jumps we have defined
with the elementary relaxations leading to the macroscopic relaxation of the particle system.
\subsection{Correlations between jumps}
\begin{figure}[t!]
\begin{center}
\includegraphics*[scale=0.33]{fig3.eps}
\end{center}
\caption{\label{fig:Pe_r_0}
{\bf }
Excess probability to observe contemporary jumps, $C_J(r,0)$, as
function of the distance and at different temperature, as indicated.
The dashed line is a guide to the eyes $\propto \exp(-1.35/r)$.
}
\end{figure}
While each particle behaves as a random walker as it performs subsequent
jumps, jumps of different particles could be spatially and temporally
correlated.
We investigate these correlations focusing on the properties of a jump birth
scalar field, defined as
\begin{equation}
\label{eq:dn_j}
b(r,t) = \frac{1}{N} \sum_{i}^N b_i(t) \delta(r-r_i(t)).
\end{equation}
Here $b_i(t)=1$ if particle $i$ starts a jump between $t$ and $t+\delta t$,
where $\delta t$ is our temporal resolution, $b_i(t)=0$ otherwise.
The scalar field $b$ allows us to investigate
the statistical features of the facilitation process
by which a jump triggers subsequent ones.
To this end, we indicate with
$\<b(r,t)\>_{b(0,0)=1}$ the probability that a jump starts in $(t,r)$
given a jump in $(t=0,r=0)$, and investigate the correlation function
\begin{equation}
\label{eq:Pe_r_t}
C_J(r,t)= \left[ \frac{\<b(r,t)\>_{b(0,0)=1}- \<b\>}{g(r,t)} \right].
\end{equation}
Here $g(r,t)$ is a time dependent generalization
of the radial distribution function
\begin{equation}
\label{eq:g_r_t}
g(r,t)dr=\frac{1}{2\pi r \rho (N-1)} \sum_{i\neq j} \delta (r-|r_j(t)-r_i(0)|),
\end{equation}
through which we avoid the appearance of spurious oscillations in the correlation function
$C_J(r,t)$ due to the short range ordering of the system.
In Eq.~\ref{eq:Pe_r_t}, $\<b\>$ is the spatio-temporal average of the jump birth field,
which decreases on cooling as $\<b\>=(\<t_w\>+\<\Delta t_J\>)^{-1}$
(at low temperature $\<b\>\simeq\<t_w\>^{-1}$, as $\<t_w\> \gg \<\Delta t_J\>$).
Accordingly, the correlation function $C_J(r,t)$ is the probability that a
jump triggers a subsequent one at a distance $r$ after a time $t$.
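A schematic estimator of this conditional birth probability, computed from a list of jump-birth events, is sketched below. For brevity it normalizes by the shell area in a periodic box rather than by the full $g(r,t)$ of Eq.~\ref{eq:g_r_t}, which is a reasonable proxy beyond the first coordination shells; all names are illustrative and this is not the analysis code used for the figures.

```python
import numpy as np

def birth_correlation(events, t_lag, r_bins, box, n_frames):
    """Excess probability that a jump starts at distance r, a time t_lag
    after another jump, from a list of (frame, position) birth events
    in a periodic square box of side `box`.

    Returns (shell centers, excess birth rate per unit area)."""
    by_frame = {}
    for frame, pos in events:
        by_frame.setdefault(frame, []).append(np.asarray(pos, dtype=float))
    counts = np.zeros(len(r_bins) - 1)
    n_origins = 0
    for frame, origins in by_frame.items():
        later = by_frame.get(frame + t_lag)
        if not later:
            continue
        later = np.array(later)
        for origin in origins:
            d = later - origin
            d -= box * np.round(d / box)      # minimum-image convention
            counts += np.histogram(np.linalg.norm(d, axis=1), bins=r_bins)[0]
            n_origins += 1
    shell_area = np.pi * (r_bins[1:] ** 2 - r_bins[:-1] ** 2)
    rate = counts / max(n_origins, 1) / shell_area  # conditional birth density
    mean_rate = len(events) / n_frames / box ** 2   # uncorrelated expectation
    return 0.5 * (r_bins[1:] + r_bins[:-1]), rate - mean_rate
```

For uncorrelated (Poissonian) births this excess vanishes; a positive short-distance excess signals facilitation.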
\begin{figure}[t!]
\begin{center}
\includegraphics*[scale=0.36]{fig4.eps}
\end{center}
\caption{\label{fig:Pe_r_t}
Evolution of the spatial correlation between jumps with time. Each panel refers
to a different temperature, as indicated. Within each panel,
the different curves correspond to $t=0, 10, 20, 30, 100, 500, 10^3$ and $10^5$,
from top to bottom. At high temperature data corresponding to the largest
times are missing as the correlation is too small to be measured.
}
\end{figure}
We first consider the spatial correlations between contemporary jumps,
where two jumps are considered contemporary if occurring within our temporal
resolution.
Fig.~\ref{fig:Pe_r_0} shows that $C_J(r,0)$ decays exponentially,
with a temperature independent correlation length $\xi_J(0,T)\simeq
1.35$.
This result clarifies that jumps aggregate in clusters of roughly
$N_{corr}\simeq\rho \pi \xi_J^2(0)\simeq 5$ events.
A similar scenario has been observed in a different model system, where jumps
have been observed to aggregate in clusters of roughly $7.6$
particles~\cite{Candelier2}.
Our results also support previous findings suggesting~\cite{Chandler_PRX} that
the elementary excitations
of structural glasses have a temperature-independent length not larger than few
particle diameters and are consistent with a recently introduced first principle
extension of the Mode Coupling Theory~\cite{Rizzo}.
The investigation of the displacements of the particles jumping in each cluster
does not reveal characteristic spatial features. Structured particle motion,
such as string-like particle displacements~\cite{String} or displacements
reminiscent of T1 events~\cite{Zhoua2015} must therefore result
from a succession of events rather than a single one.
\begin{figure}[t!]
\begin{center}
\includegraphics*[scale=0.33]{fig5.eps}
\end{center}
\caption{\label{fig:Axi}
Panel (a) shows the amplitude $A(t)$ of the jump correlation function $C_J(r,t)$.
Panels (b) and (c) clarify that a first
exponential decay is followed, at low temperature,
by a slower decay that approximately follows a power law.
}
\end{figure}
We now consider the time evolution of the spatial correlation between jumps.
Fig.~\ref{fig:Pe_r_t} illustrates that at all temperatures and times the decay of the correlation function
is compatible with an exponential, $C_J(r,t)\propto A(t) \exp(-r/\xi_J(t))$.
The time dependence of the amplitude is illustrated in Fig.~\ref{fig:Axi}.
At all temperatures the short time decay of the amplitude is exponential,
$A(t,T) = A(0,T) \exp(-t/\tau_A(T))$, the characteristic decay time slightly increasing on cooling.
While no other decay is observed at high temperatures, at low temperatures the
exponential decay crosses over to a much slower power-law decay
$A(t) \sim t^{-a}$, with $a \simeq 0.4$.
Fig.~\ref{fig:xi_t} shows that the correlation length slowly grows in time,
approximately as $\xi_J(t) \sim t^b$, with $b \simeq 0.1$.
\begin{figure}[t!]
\begin{center}
\includegraphics*[scale=0.33]{fig6_bis.eps}
\end{center}
\caption{\label{fig:xi_t}
Time dependence of the jump correlation length, at different temperatures.
The data suggest that at low temperature
the correlation length slowly grows in time, as $\xi_J(t) \propto t^{0.1}$.
}
\end{figure}
The initial fast decrease of the amplitude makes it difficult to obtain
reliable estimates of its time dependence and of the correlation length,
despite intense computational efforts.
Nevertheless, our data clearly show the
reported exponential to power--law crossover in the decay of the amplitude of $C_J(r,t)$.
The highest temperature at which this decay exhibits a power-law tail
is consistent with the temperature $T_x$ where
$\<t_w\>$ and $\<t_p\>$ first decouple,
and the SE relation breaks down (see Sec.\ref{res1}).
This suggests that the breakdown of the SE relation
is related to a crossover in the features of the facilitation process.
We investigate this crossover by focusing on the number of jumps triggered by a given jump.
This is given by $N_{\rm tr}(T) \propto \int_0^\infty n(t,T)\,dt$,
where $n(t,T)=\int C_J(r,t)\, 2\pi r\, dr \propto A(t,T)\, \xi_J^2(t,T)$
is the number of jumps triggered at time $t$.
As at high temperature the variation of the correlation
length is small with respect to that of the amplitude,
one can assume $\xi(t,T) \simeq \xi(0,T)$
and estimate $N_{\rm tr}(T) \propto A(0,T) \xi^2(0,T) \tau_A(T)$.
At low temperature, the integral is dominated by the long-time power-law behavior of
the amplitude and of the correlation length, and the number
of triggered events diverges as
$N_{\rm tr}(T,t) \propto \int_0^t A(t')\, \xi_J^2(t')\, dt' \propto t^{-a+2b+1} \propto t^{0.8}$.
\section{Discussion}
We conclude by noticing that the above scenario suggests interpreting facilitation
as an infection-spreading process, in which a particle is infected each time it jumps.
Since each particle can be infected more than once, the relevant infection model is of susceptible--infected--susceptible (SIS) type.
In this framework, the exponential to power--law crossover in the decay of the amplitude of
$C_J(r,t)$ signals a transition from a high-temperature resilient regime, in which
a single infected site only triggers a finite number of infections,
to a low-temperature regime in which the number of triggered infections diverges.
A complementary interpretation can be inspired by the diffusing defect paradigm \cite{defects, RevBerthier}.
We suggest that the correlation length of contemporary jumps, $\xi_J(0)$, is akin to
the typical defect size, which, according to our results, is temperature independent.
In the high temperature regime, this is the only relevant correlation length,
as defects are rapidly created and destroyed by noisy random fluctuations,
before they can sensibly diffuse. At low temperature, the effect of noise becomes smaller:
the short time correlation length is still dominated by the defect size, $\xi_J(t<\tau_A) \simeq \xi_J(0)$,
whereas its long time behaviour, $\xi_J(t>>\tau_A)$, is controlled by the typical distance defects have moved up to time $t$.
Further studies are necessary to investigate which of the two interpretations is more appropriate.
\bigskip
\noindent{{\bf Acknowledgement}\\
We acknowledge financial support
from MIUR-FIRB RBFR081IUK,
from the SPIN SEED 2014 project {\it Charge separation and charge transport in hybrid solar cells},
and from the CNR--NTU joint laboratory {\it Amorphous materials for energy harvesting applications}.
}
\section{Introduction}
\input{sections/introduction}
\section{A model for sequence validity}
\input{sections/model_description}
\section{Online generation of synthetic training data}
\input{sections/active_learning}
\section{Experiments}
\input{sections/python}
\input{sections/molecules}
\section{Discussion}
\input{sections/discussion}
\subsection{Active learning}
\label{sec:active}
Let $x_{1:T}$ denote an arbitrary sequence and let $y$ be the unknown binary label
indicating whether $x_{1:T}$ is valid or not. Our model's predictive
distribution for $y$, that is, $p(y|x_{1:T},w)$ is given by (\ref{eq:predictions}).
The amount of information on $w$ that we expect to gain by
labeling and adding $x_{1:T}$ to $\mathcal{D}$ can be measured
in terms of the expected reduction in the entropy of the posterior distribution
$p(w|\mathcal{D})$. That is,
\begin{align}
\alpha(x_{1:T}) = \text{H}[p(w |\mathcal{D})] - \mathbb{E}_{p(y|x_{1:T},w)} \text{H}[{p(w | \mathcal{D} \cup (x_{1:T}, y)}]\,,
\label{eq:info-gain}
\end{align}
where $\text{H}(\cdot)$ computes the entropy of a distribution. This formulation
of the entropy-based active learning criterion is, however, difficult to
approximate, because it requires us to compute the entropy of the updated posterior for each candidate $x_{1:T}$ and label $y$ -- effectively retraining the model for every candidate. To obtain a simpler expression we follow
\citet{houlsby2011bayesian} and note that $\alpha(x_{1:T})$ is equal to the
mutual information between $y$ and $w$ given $x_{1:T}$ and $\mathcal{D}$
\begin{align}
\alpha(x_{1:T}) =
\text{H}\{\mathbb{E}_{p(w | \mathcal{D})} [ p(y|x_{1:T},w)]\} - \mathbb{E}_{p(w|\mathcal{D})}\{ \text{H}[p(y|x_{1:T},w)]\}\,,
\label{eq:mutual-info}
\end{align}
which is easier to work with as the required entropy is now that of Bernoulli
predictive distributions, an analytic quantity. Let $\mathcal{B}(p)$
denote a Bernoulli distribution with probability $p$, and with probability mass
$p^z(1-p)^{1-z}$ for values $z \in \{0, 1\}$. The entropy of $\mathcal{B}(p)$
can be easily obtained as
\begin{align}
\text{H}[\mathcal{B}(p)] = -p \log p - (1-p) \log(1-p)\equiv g(p)\,.\label{eq:bernoulli-entropy}
\end{align}
The expectation with respect to $p(w | \mathcal{D})$ can be easily approximated by
Monte Carlo. We could attempt to sequentially construct $\mathcal{D}$ by optimising
(\ref{eq:mutual-info}). However, this optimisation process would still be
difficult, as it would require evaluating $\alpha(x_{1:T})$ exhaustively on all
the elements of $\mathcal{X}$. To avoid this, we follow a greedy approach and
construct our informative sequence in a sequential manner. In particular, at
each time step $t=1,\dots,T$, we select $x_t$ by optimising the mutual
information between $w$ and $Q^\star(x_{<t}, x_t)$, where $x_{<t}$ denotes here
the prefix already selected at previous steps of the optimisation process. This
mutual information quantity is denoted by $\alpha(x_t | x_{<t})$ and its
expression is given by
\begin{align}
\label{eq:1}
\alpha(x_t | x_{<t}) = \text{H}\{\mathbb{E}_{p(w | \mathcal{D})} [\mathcal{B}(\mathrm{y}(x_t|x_{<t},\weights))]\} -
\mathbb{E}_{p(w|\mathcal{D})}\{ \text{H}[\mathcal{B}(\mathrm{y}(x_t|x_{<t},\weights))] \}.
\end{align}
The generation of an informative sequence can then be performed
efficiently by sequentially optimising (\ref{eq:1}), an operation that requires
only $|\mathcal{C}|\times T$ evaluations of $\alpha(x_t | x_{<t})$.
To obtain an approximation to (\ref{eq:1}), we first approximate the posterior
distribution $p(w|\mathcal{D})$ with $q(w)$ and then estimate the expectations
in (\ref{eq:1}) by Monte Carlo using $K$ samples drawn from $q(w)$. The
resulting estimator is given by
\begin{align}
\hat \alpha(x_t \mid x_{<t}) = g\bigg[ \frac{1}{K} \sum_{k=1}^K \mathrm{y}(x_t | x_{<t}, w_k)\bigg] - \frac{1}{K} \sum_{k=1}^K g\left[ \mathrm{y}(x_t | x_{<t}, w_k) \right]\,,
\end{align}
where $w_1,\ldots,w_K \sim q(w)$ and $g(\cdot)$ is defined in
(\ref{eq:bernoulli-entropy}). The nonlinearity of $g(\cdot)$ means that our
Monte Carlo approximation is biased, but still consistent. We found that
reasonable estimates can be obtained even for small $K$. In our experiments we
use $K = 16$.
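The estimator $\hat\alpha$ is straightforward to implement. A minimal sketch follows; names are illustrative, and \texttt{probs} is assumed to hold the $K$ sampled predictive probabilities $\mathrm{y}(x_t|x_{<t}, w_k)$.

```python
import numpy as np

def bernoulli_entropy(p):
    """g(p) = -p log p - (1-p) log(1-p), clipped for safety at p in {0, 1}."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log1p(-p)

def bald_score(probs):
    """Monte Carlo BALD estimate from K sampled Bernoulli probabilities.

    probs : array of shape (K,), predictive probabilities y(x_t | x_<t, w_k).
    Returns H[mean prediction] - mean[H(prediction)], i.e. the estimated
    mutual information between the label and the weights.
    """
    probs = np.asarray(probs, dtype=float)
    return bernoulli_entropy(probs.mean()) - bernoulli_entropy(probs).mean()
```

Confident agreement among the posterior samples (all probabilities equal) yields a score of zero, while strong disagreement (half near 0, half near 1) yields a score approaching $\log 2$.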
The iterative procedure just described is designed to produce a single
informative sequence. In practice, we would like to generate a batch of
informative and diverse sequences. The reason for this is that, when training
neural networks, processing a batch of data is computationally more efficient
than individually processing multiple data points. To construct a batch with $L$
informative sequences, we propose to repeat the previous iterative procedure $L$
times. To introduce diversity in the batch-generation process, we ``soften'' the
greedy maximisation operation at each step by injecting a small amount of noise
in the evaluation of the objective function \citep{finkel2006solving}. Besides
introducing diversity, this can also lead to better overall solutions than those
produced by the noiseless greedy approach \citep{cho2016noisy}. We introduce
noise into the greedy selection process by sampling from
\begin{align}
p(x_t| x_{<t},\theta) = \frac{\exp\{\alpha(x_t |x_{<t})/\theta\}}{\sum_{x_t' \in \mathcal{C}} \exp\{\alpha(x_t' | x_{<t})/\theta\}}\,
\end{align}
for each $t=1,\dots,T$, which is a Boltzmann distribution with sampling
temperature $\theta$. By adjusting this temperature parameter, we can trade off
the diversity of samples in the batch vs. their similarity.
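The tempered selection step can be sketched as follows, assuming a \texttt{scores} vector holding $\alpha(x_t'|x_{<t})$ for every symbol in the alphabet (names are illustrative):

```python
import numpy as np

def boltzmann_sample(scores, theta, rng):
    """Sample a symbol index from the Boltzmann distribution over
    acquisition scores; theta -> 0 recovers the greedy argmax,
    large theta approaches uniform sampling."""
    scores = np.asarray(scores, dtype=float)
    logits = scores / theta
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return rng.choice(len(scores), p=p)
```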
\subsection{Data augmentation}
\label{sec:aug}
In some settings, such as the molecule domain we will consider later, we have
databases of known-valid examples (e.g.~collections of known drug-like
molecules), but rarely are sets of invalid examples available. Obtaining invalid
sequences may seem trivial, as invalid samples may be obtained by sampling
uniformly from $\mathcal{X}$, however these are almost always so far from any
valid sequence that they carry little information about the boundary of
valid and invalid sequences. Using just a known data set also carries the danger
of overfitting to the subset of $\mathcal{X}_+$ covered by the data.
We address this by perturbing sequences from a database of valid sequences, such
that approximately half of the thus generated sequences are invalid. These
perturbed sequences $x'_{1:T}$ are constructed by setting each $x'_t$ to be a
symbol selected independently from $\mathcal{C}$ with probability $\gamma$,
while retaining the original $x_t$ with probability $1-\gamma$. In expectation
this changes $\gamma T$ entries in the sequence. We choose $\gamma = 0.05$,
which results in synthetic data that is approximately 50\% valid.
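The perturbation procedure amounts to a few lines; the sketch below is a minimal version (symbol-level resampling only, without the padding handling a full pipeline would need):

```python
import random

def perturb(seq, alphabet, gamma, rng=random):
    """Independently resample each symbol from `alphabet` with probability
    gamma, keeping the original symbol with probability 1 - gamma."""
    return [rng.choice(alphabet) if rng.random() < gamma else x for x in seq]
```

With $\gamma = 0.05$ roughly $5\%$ of positions are resampled per sequence, which for SMILES-like grammars empirically leaves about half of the perturbed sequences valid.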
\section{Coverage estimation}
\label{appendix:coverage}
Ideally, we would like to check that the learned model $\mathrm{y}(x_t|x_{<t},\weights)$ assigns positive probability to exactly those points
which may lead to valid sequences, but for large discrete spaces this is impossible to compute or even accurately estimate.
A simple check for accuracy could be to evaluate whether the model correctly identifies points as valid in a known, held-out validation or test set of real data, relative to randomly sampled sequences (which are nearly always invalid).
However, if the validation set is too ``similar'' to the training data, even showing 100\% accuracy in classifying these as valid may simply indicate having overfit to the training data: a discriminator which identifies data as similar to the training data needs to be accurate over a much smaller space than a discriminator which estimates validity over all of $\mathcal{X}$.
Instead, we propose to evaluate the trade-off between accuracy on a validation set, and an approximation to the size of the effective support of $\prod_t \mathrm{y}(x_t|x_{<t},\weights)$ over $\mathcal{X}$.
Let $\mathcal{X}_+$ denote the valid subset of $\mathcal{X}$.
Suppose we estimate the valid fraction $f_+ = \nicefrac{|\mathcal{X}_+|}{|\mathcal{X}|}$ by simple Monte Carlo, sampling uniformly from $\mathcal{X}$. We can then estimate $N_+ = |\mathcal{X}_+|$ by $f_+|\mathcal{X}|$, where $|\mathcal{X}| = C^T$, a known quantity. A uniform distribution over $N_+$ sequences would have an entropy of $\log N_+$. We denote the entropy of the model's output distribution by $H$. If our model were perfectly uniform over the sequences it can generate, it would then be capable of generating $N_{\textrm{model}} = e^H$ distinct sequences. As our model at its optimum is far from uniform over sequences $x \in \mathcal{X}$, this is very much a lower bound on coverage.
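The resulting bound is cheap to compute once $H$ and $f_+$ are estimated. A sketch, with illustrative numbers in place of the paper's models:

```python
import math

def coverage_lower_bound(entropy_nats, valid_fraction, alphabet_size, length):
    """Lower-bound the fraction of valid sequences a generative model can
    produce: e^H distinct sequences out of N_+ = f_+ * C^T valid ones."""
    n_valid = valid_fraction * alphabet_size ** length
    n_model = math.exp(entropy_nats)
    return min(n_model / n_valid, 1.0)

# Example (hypothetical numbers): C = 4, T = 10, f_+ = 0.01 gives
# N_+ ~ 10486 valid sequences; a model with H = 8 nats covers at least
# e^8 / N_+ ~ 28% of them.
```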
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth,trim={14cm 0 6cm 0},clip]{figs/heatmap.pdf}
\caption{Full heatmap showing predictions $\mathrm{y}(x_t|x_{<t},\weights)$ for the molecule in Figure~\ref{fig:example-mol}.}
\label{fig:full-heatmap}
\end{figure}
\subsection{SMILES molecules}
SMILES strings \citep{weininger1970smiles} are one of the most common
representations for molecules, consisting of an ordering of atoms and
bonds. The representation is attractive for many applications because it maps the graphical
representation of a molecule to a sequential one, capturing not just
its chemical composition but also its structure.
is captured by intricate dependencies in SMILES strings based on chemical
properties of individual atoms and valid atom connectivities. For
instance, the atom Bromine can only bond with a single other atom, meaning that it may only occur at the beginning or end of a SMILES string, or within a
so-called `branch', denoted by a bracketed expression \texttt{(Br)}. We
illustrate some of these rules, including a Bromine branch, in
figure~\ref{fig:example-mol}, with a graphical representation of a molecule alongside its corresponding SMILES string. There, we also show examples of how a string
may fail to form a valid SMILES molecule representation. The full SMILES
alphabet is presented in table~\ref{tab:smiles-alphabet}.
\begin{table}[hb]
\centering
\caption{SMILES alphabet}
\label{tab:smiles-alphabet}
\begin{tabular}{c c c c}
\toprule
atoms/chirality & bonds/ringbonds & charges & branches/brackets \\
\midrule
\texttt{B} \texttt{C} \texttt{N} \texttt{O} \texttt{S} \texttt{P} \texttt{F} \texttt{I} \texttt{H} \texttt{Cl} \texttt{Br} \texttt{@} & \texttt{=\#/\textbackslash12345678} & \texttt{-+} & \texttt{()[]} \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/molecule_structure.pdf}
\caption{Predictions $\mathrm{y}(x_t|x_{<t},\weights)$ of the agent at each step $t$ for the valid test
molecule shown in the top left figure, for a subset of possible actions
(selecting as next character \texttt{C}, \texttt{F}, \texttt{)}, or
\texttt{]}). Each column shows which actions the trained agent believes are
valid at each $t$, given the characters $x_{<t}$ preceding it. We see that the
validity model has learned basic valence constraints: for example the oxygen
atom \texttt{O} at position 10 can form at most 2 bonds, and since it is
preceded by a double bond, the model knows that neither carbon \texttt{C} nor
fluorine \texttt{F} can immediately follow it at position 11; we see the same
after the bromine \texttt{Br} at position 18, which can only form a single
bond. The model also correctly identifies that closing branch symbols
\texttt{)} cannot immediately follow opening branches (after positions 6, 8,
and 17), as well as that closing brackets \texttt{]} cannot occur until an
open bracket has been followed by at least one atom (at positions 32--35). The
full output heatmap for this example molecule is shown in
Figure~\ref{fig:full-heatmap} in the appendix. }
\label{fig:example-mol}
\end{figure}
The intricacy of SMILES strings makes them a suitable testing ground for our
method. There are two technical distinctions to make between this experimental
setup and the previously considered Python 3 mathematical expressions. First, as
databases of SMILES strings exist, we leverage them by using the data
augmentation technique described in section \ref{sec:aug}. The main data source
considered is the ZINC data set \cite{zinc}, as used in \cite{kusner17}. We also
use the USPTO 15k reaction products data \citep{uspto} and a set of molecule
solubility information \citep{solubility} as withheld test data.
Secondly, whereas we used fixed length Python 3 expressions in order to obtain
coverage bounds, molecules are inherently of variable length. We deal with this by
padding all molecules to a fixed length.
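Such padding can be sketched as follows (the pad symbol and maximum length here are illustrative assumptions, not choices stated in the paper):

```python
def pad_smiles(smiles, max_len, pad_char=' '):
    """Right-pad a SMILES string to a fixed length with a dedicated pad symbol."""
    if len(smiles) > max_len:
        raise ValueError("molecule longer than the fixed length")
    return smiles + pad_char * (max_len - len(smiles))
```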
\paragraph{Validating grammar model accuracy} As a first test of the suitability
of our proposed validity model, we train it on augmented ZINC data and examine
the accuracy of its predictions on a withheld test partition of that same data
set as well as the two unseen molecule data sets. Accuracy is the ability of the
model to accurately recognise which perturbations make a certain SMILES string
invalid, and which leave it valid -- effectively how well the model has captured
the grammar of SMILES strings in the vicinity of the data manifold. Recalling
that a sequence is invalid if $\tilde{v}(x_{1:t}) = 0$ at any $t \leq T$, we
consider the model prediction for molecule $x_{1:T}$ to be
$\prod_{t=1}^T \mathbb{I}\left[\mathrm{y}(x_t|x_{<t},\weights) \geq 0.5\right]$, and compare this to its
true label as given by RDKit, a cheminformatics software package. The results are
encouraging, with the model achieving 0.998 accuracy on perturbed ZINC (test)
and 1.000 accuracy on both perturbed USPTO and perturbed Solubility withheld
data. Perturbation rate was selected such that approximately half of the
perturbed strings are valid.
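The accuracy computation described above can be sketched as follows (function names are ours; the per-step scores $\mathrm{y}(x_t|x_{<t},\weights)$ are assumed precomputed):

```python
def predict_valid(step_scores, threshold=0.5):
    """A sequence is predicted valid only if every per-step score clears the
    threshold, mirroring prod_t I[y(x_t | x_<t) >= 0.5]."""
    return all(s >= threshold for s in step_scores)

def accuracy(score_lists, true_labels):
    """Fraction of sequences whose predicted validity matches the true label."""
    preds = [predict_valid(scores) for scores in score_lists]
    return sum(p == t for p, t in zip(preds, true_labels)) / len(true_labels)
```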
\paragraph{Integrating with Character VAE} To demonstrate the model's capability
of improving preexisting generative models for discrete structures, we show how
it can be used to improve the results of previous work, a character variational
autoencoder (CVAE) applied to SMILES strings
\citep{gomez-bombarelli_automatic_2016, kingma2013auto}. Therein, an encoder
maps points in $\mathcal{X}_+$ to a continuous latent representation $\mathcal{Z}$
and a paired decoder maps points in $\mathcal{Z}$ back to $\mathcal{X}_+$. A
reconstruction based loss is minimised such that training points mapped to the
latent space decode back into the same SMILES strings. The fraction of test
points that do is termed reconstruction accuracy. The loss also features a term
that encourages the posterior over $\mathcal{Z}$ to be close to some prior,
typically a normal distribution. A key metric for the performance of variational
autoencoder models for discrete structures is the fraction of points sampled
from the prior over $\mathcal{Z}$ that decode into valid molecules. If many
points do not correspond to valid molecules, any sort of predictive modeling on
that space will likely also mostly output invalid SMILES strings.
The decoder functions by outputting a set of weights $f(x_t | z)$ for each
character $x_t$ in the reconstructed sequence conditioned on a latent point
$z\in\mathcal{Z}$; the sequence is recovered by sampling from a multinomial
according to these weights. To integrate our validity model into this framework,
we take the decoder output for each step $t$ and mask out choices that we
predict cannot give valid sequence continuations. We thus sample characters with
weights given by $f(x_t | z) \cdot \mathbb{I}\left[\mathrm{y}(x_t|x_{<t},\weights) \geq 0.5\right]$.
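The masked sampling step can be sketched as follows (names are ours; the fallback when every character is masked out is our assumption, as the text does not specify this case):

```python
import numpy as np

def masked_sample(decoder_weights, validity_scores, rng, threshold=0.5):
    """Sample the next character from the decoder weights f(x_t | z), zeroing
    out characters the validity model predicts cannot continue a valid
    sequence."""
    mask = (np.asarray(validity_scores) >= threshold).astype(float)
    w = np.asarray(decoder_weights, dtype=float) * mask
    if w.sum() == 0.0:
        # all characters masked: fall back to the unmasked decoder weights
        w = np.asarray(decoder_weights, dtype=float)
    p = w / w.sum()
    return rng.choice(len(p), p=p)
```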
\paragraph{Autoencoding benchmarks} Table \ref{tbl:vae} contains a comparison of
our work to a plain CVAE and to the Grammar VAE approach. We use a Kekul\'{e}
format of the ZINC data in our experiments, a specific representation of
aromatic bonds that our model handled particularly well. Note that the results
we quote for Grammar VAE are taken directly from \cite{kusner17} and on
non-Kekul\'{e} format data. The CVAE model is trained for 100 epochs, as
per previous work -- further training improves reconstruction accuracy.
We note that the binary nature of the proposed grammar model means that it does
not degrade the reconstruction accuracy; in fact, some modest gains are present.
The addition of our grammar model to the character VAE significantly improves
its ability to decode discrete structures, as seen by the order of magnitude
increase in latent sample validity. The action of our model is completely
post-hoc and thus can be applied to any pre-trained character-based VAE model
where elements of the latent space correspond to a structured discrete sequence.
\begin{table}[h]
\centering
\begin{tabular}[h]{c c c}
\toprule
Model & reconstruction accuracy & sample validity \\
\midrule
CVAE + Validity Model & 50.2\% & 22.3\% \\
Grammar VAE & 53.7\% & 7.2\% \\
Plain CVAE & 49.7\% & 0.5\% \\
\bottomrule
\end{tabular}
\caption{Performance metrics for VAE-based molecule model trained for 100 epochs
on ZINC (train) data, with and without the proposed validity model overlaid at
test time, and the Grammar VAE method. Sample validity is the fraction of
samples from the prior over $\mathcal{Z}$ that decode into valid molecules.}
\label{tbl:vae}
\end{table}
\subsection{Mathematical expressions}
We illustrate the utility of the proposed validity model and sequential Bayesian
active learning in the context of Python 3 mathematical expressions. Here,
$\mathcal{X}$ consists of all length 25 sequences that can be constructed from the
alphabet of numbers and symbols shown in table \ref{tab:py-alphabet}. The
validity of any given expression is determined using the Python 3 \texttt{eval}
function: a valid expression is one that does not raise an exception when
evaluated.
\begin{table}[hb] \centering
\caption{Python 3 expression alphabet}
\label{tab:py-alphabet}
\begin{tabular}{c c c c}
\toprule
digits & operators & comparisons & brackets \\
\midrule
\texttt{1234567890} & \texttt{+-*/\%!} & \texttt{=<>} & \texttt{()} \\
\bottomrule
\end{tabular}
\end{table}
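This validity oracle amounts to a try/except around \texttt{eval} (a minimal sketch; restricting inputs to the alphabet above avoids the usual safety concerns with evaluating untrusted strings):

```python
def is_valid_expression(expr):
    """An expression is valid iff Python 3's eval accepts it without raising
    an exception (syntax errors, division by zero, etc. all count as invalid)."""
    try:
        eval(expr)
        return True
    except Exception:
        return False
```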
\paragraph{Measuring model performance} Within this problem domain we do not
assume the existence of a data set of positive examples. Without a validation data
set to measure performance on, we compare the models in terms of their
capability to provide high entropy distributions over valid sequences. We define a
generative procedure to sample from the model and measure the validity and
entropy of the samples. To sample stochastically, we use a Boltzmann policy,
i.e.~a policy which samples next actions according to
\begin{align}
\pi(x_t = c | x_{<t}, w, \tau) = \frac{\exp(\mathrm{y}( c | x_{<t}, w)/\tau)}{\sum_{j \in \mathcal{C}} \exp(\mathrm{y}( j | x_{<t}, w)/\tau)}
\end{align}
where $\tau$ is a temperature constant that governs the trade-off between
exploration and exploitation. Note that this is not the same as the Boltzmann
distribution used as a proposal generation scheme during active learning, which
was defined not on $Q$-function values but rather on the estimated mutual
information.
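A numerically stable sketch of this sampling distribution (using the standard max-subtraction trick; function names are ours):

```python
import numpy as np

def boltzmann_probs(q_values, tau):
    """pi(x_t = c | x_<t, w, tau) proportional to exp(y(c | x_<t, w)/tau),
    computed with the maximum subtracted before exponentiation for stability."""
    z = np.asarray(q_values, dtype=float) / tau
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()
```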
We obtain samples $\set{x^{(1)},\dots,x^{(N)}}_{\tau_i}$ for a range of
temperatures $\tau_i$ and compute the validity fraction and entropy of each set
of samples. These points now plot a curve of the trade-off between validity and
entropy that a given model provides. Without a preferred level of sequence
validity, the area under this validity-entropy curve (V-H AUC) can be utilised
as a metric of model quality. To provide some context for the entropy values, we
estimate an information theoretic lower bound for the fraction of the set
$\mathcal{X}_{+}$ that our model is able to generate. This translates to upper
bounding the false negative rate for our model.
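The V-H AUC evaluation can be sketched as follows (a plug-in entropy estimator and the trapezoid rule; an illustrative sketch, not the authors' evaluation code):

```python
import numpy as np
from collections import Counter

def sample_entropy(samples):
    """Empirical (plug-in) entropy, in nats, of a list of sampled sequences."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def vh_auc(validities, entropies):
    """Area under the validity-entropy trade-off curve, one (validity, entropy)
    point per temperature, integrated with the trapezoid rule."""
    order = np.argsort(validities)
    v = np.asarray(validities, dtype=float)[order]
    h = np.asarray(entropies, dtype=float)[order]
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(v)))
```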
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\input{figs/auc_time_expr.pgf}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\input{figs/e25_tradeoff.pgf}
\end{subfigure}
\caption{Experiments with length 25 Python expressions. (Left) Area under
validity-entropy curve as training progresses, 10-90 percentiles shaded. Active learning converges
faster and reaches a higher maximum. (Right) Entropy versus validity for
median active and median passive model after $200$k training sequences. Both
models have learnt a high entropy distribution over valid sequences.}
\label{fig:expr-entropy}
\end{figure}
\paragraph{Experimental setup and results} We train two models using our
proposed Q-function method: \textit{passive}, where training sequences are
sampled from a uniform distribution over $\mathcal{X}$, and \textit{active}, where
we use the procedure described in section \ref{sec:active} to select training
sequences. The two models are otherwise identical.
Both trained models give a diverse output distribution over valid sequences
(figure \ref{fig:expr-entropy}). However, as expected, we find that the
\textit{active} method is able to learn a model of sequence validity much more
rapidly than sampling uniformly from $\mathcal{X}$, and the corresponding converged
model is capable of generating many more distinct valid sequences than that
trained using the \textit{passive} method. In table \ref{tab:expr-coverage} we
present lower bounds on the support of the two respective models. The details of
how this lower bound is computed can be found in appendix
\ref{appendix:coverage}. Note that the overhead of the active learning data
generating procedure is minimal: processing 10,000 sequences takes 31s with
\textit{passive} versus 37s with \textit{active}.
\begin{table}
\centering
\caption{Estimated lower bound of coverage $N$ for \textit{passive} and
\textit{active} models, defined as the size of the set of Python expressions
on which the respective model places positive probability mass. Evaluation
is on models trained until convergence ($800,000$ training points, beyond
the scope of figure \ref{fig:expr-entropy}). The lower bound estimation
method is detailed in Appendix~\ref{appendix:coverage}.}
\begin{tabular}[h]{c c c c c }
\toprule
\multirow{2}{*}{temperature $\tau$}& \multicolumn{2}{c }{passive model} & \multicolumn{2}{c}{active model} \\
& validity & $ N $ & validity & $ N $ \\
\midrule
0.100 & 0.850 & $9.7\times 10^{27}$ & 0.841 & $8.2\times 10^{28}$ \\
0.025 & 0.969 & $2.9\times 10^{25}$ & 0.995 & $4.3\times 10^{27}$ \\
0.005 & 1.000 & $1.1\times 10^{22}$ & 1.000 & $1.3\times 10^{27}$ \\
\bottomrule
\end{tabular}
\label{tab:expr-coverage}
\end{table}
\section{Introduction}
Turbulent mean field dynamos are thought to be at the
heart of magnetic field generation and maintenance in
most astrophysical bodies, like the sun or the galaxy.
A particularly important driver of the mean field dynamo (MFD)
is the $\alpha$-effect which, in the kinematic regime,
depends only on the helical properties of the turbulence.
It is crucial to understand how the $\alpha$-effect gets
modified due to the backreaction of
the generated mean and fluctuating fields.
Using closure schemes or the quasi-linear approximation
it has been argued that, due to Lorentz forces, the $\alpha$-effect gets
``renormalized'' by the addition of a term proportional to the
current helicity of the generated small scale magnetic fields
\citep{pouq,GD,KRR95,S99,KR99,BF,RKR,BS05}.
The presence of such an additional term is uncontroversial if a helical
small scale magnetic field is present even in the absence of a mean field.
However, it has been argued that, in the absence of
such a pre-existing small scale magnetic field, the $\alpha$-effect
can be expressed exclusively in terms of
the velocity field, albeit one which is a solution
of the full momentum equation including the Lorentz force
\citep{proctor,RR07}.
In the latter case, it is not obvious that the helicity of the
small scale magnetic field plays any explicit role in the backreaction to
$\alpha$.
It is important to clarify this issue, as it will decide
how one should understand the saturation of turbulent dynamos,
as well as the possibility of catastrophic quenching
of the $\alpha$-effect and ways to alleviate such quenching.
Here and below, ``catastrophic'' means that $\alpha$ is quenched down
to values on the order of the inverse magnetic Reynolds number.
In order to clarify these conflicting views,
we examine here an exactly solvable model of
the nonlinear backreaction to the $\alpha$-effect,
where we assume small magnetic and fluid Reynolds numbers.
Obviously, this approach does not allow us to address the question
of catastrophic quenching of astrophysical dynamos directly, but it
allows us to make novel and unambiguous statements that help clarify
the nature of magnetic saturation.
We will show that, at least
in this simple context, both the above viewpoints are
consistent, if interpreted properly.
\section{Mean field electrodynamics } \label{MFED}
In mean field electrodynamics \citep{KR80,M78},
any field ${\bm{F}}$ is split into a mean field $\overline {\bm F}$ and a `fluctuating'
small scale field $\bm f$, such that ${\bm{F}}=\overline {\bm F}+\bm f$.
The fluctuating velocity (or magnetic) field is assumed to possess
a correlation length $l$ small compared
to the length scale $L$ of the variation of the
mean field. The magnetic field obeys
the induction equation,
\begin{equation}
\label{indB}
{\partial{\bm{{{B}}}}\over\partial t}=
\eta\nabla^2\bm{{{B}}}+\nabla\times(\bm U\times\bm{{{B}}}), \quad
\nabla\cdot\bm{{{B}}}=0,
\end{equation}
where $\bm U$ represents the fluid velocity,
$\eta=(\mu_0\sigma)^{-1}$ is the magnetic
diffusivity (assumed constant), $\sigma$ is the electric conductivity,
and $\mu_0$ is the vacuum permeability.
Averaging Eq.~(\ref{indB}), we obtain
the standard mean-field dynamo equation
\begin{equation}
\label{indmeanB}
{\partial{\bm{\overline{{B}}}}\over\partial t}=
\eta\nabla^2\bm{\overline{{B}}}+\nabla\times(\bm{\overline{{U}}}\times\bm{\overline{{B}}}+\mbox{\boldmath ${\cal E}$}), \quad
\nabla\cdot\bm{\overline{{B}}}=0.
\end{equation}
This averaged equation now has a new term,
the mean electromotive force (emf)
$\mbox{\boldmath ${\cal E}$}={\overline {\bm{{{u}}}\times\bm{{{b}}}}}$, which crucially depends on
the statistical properties of the $\bm{{{u}}}$ and $\bm{{{b}}}$ fields. The central
closure problem in mean field theory is to find an expression for
the correlator $\mbox{\boldmath ${\cal E}$}$ in terms of the mean fields.
To find an expression for $\mbox{\boldmath ${\cal E}$}$, one needs the
evolution equations for both the fluctuating magnetic field
$\bm{{{b}}}$ and the fluctuating velocity field $\bm{{{u}}}$. The first follows from
subtracting Eq.~(\ref{indmeanB}) from Eq.~(\ref{indB}),
\begin{equation}
\label{flucb}
{\partial{\bm{{{b}}}}\over\partial t}=
{\eta\nabla^2{\bm{{{b}}}}}+\nabla\times(\bm{\overline{{U}}}\times\bm{{{b}}}+\bm{{{u}}}\times\bm{\overline{{B}}})+\bm G, \quad
\nabla\cdot{\bm{{{b}}}}=0.
\end{equation}
Here $\bm G=\nabla\times(\bm{{{u}}}\times\bm{{{b}}})'$ with
$(\bm{{{u}}}\times\bm{{{b}}})'=\bm{{{u}}}\times\bm{{{b}}}-{\overline {\bm{{{u}}}\times\bm{{{b}}}}}$.
In what follows, we will set the mean field velocity to zero, i.e.\
$\bm{\overline{{U}}} = 0$ and focus solely on the effect of the
fluctuating velocity.
The evolution equation for $\bm{{{u}}}$ can be derived in a similar manner
by subtracting the averaged momentum equation from the full momentum
equation.
We assume the flow to be incompressible with ${\nabla\cdot\bm{{{u}}}}=0$.
We get
\begin{eqnarray}
{\partial{\bm{{{u}}}}\over\partial t} &=&
-{1\over\rho}\nabla\left(p+{1\over\mu_0}{\bm{\overline{{B}}}\cdot\bm{{{b}}}}\right)+\nu\nabla^2\bm{{{u}}}
\nonumber \\
&&+{1\over\mu_0\rho}\left[{(\bm{\overline{{B}}}\cdot\nabla)\bm{{{b}}}}
+{(\bm{{{b}}}\cdot\nabla){\bm{\overline{{B}}}}}\right]+\bm f+\bm T.
\label{flucu}
\end{eqnarray}
Here $\rho$ is the mass density, $p$ is the perturbed
fluid pressure, $\nu$ is the kinematic viscosity taken to be constant,
$\bm f$ is the fluctuating force, and
\begin{equation}
{\bm T}=-({\bm{{{u}}}\cdot\nabla\bm{{{u}}}})'-{1\over\mu_0\rho}
\left[(\bm{{{b}}}\cdot\nabla\bm{{{b}}})'-{\textstyle{1\over2}}\nabla(\bm{{{b}}}^{2})'\right]
\end{equation}
contains the second order
terms in ${\bm{{{u}}}}$ and ${\bm{{{b}}}}$.
Here, primed quantities indicate deviations from the mean,
i.e.\ $X' = X - \overline{X}$.
We will also redefine $\bm{{{b}}}/\sqrt{\mu_0\rho} \to \bm{{{b}}}$,
by setting $\mu_0\rho=1$, so that
the magnetic field is measured in velocity units.
In order to find $\mbox{\boldmath ${\cal E}$}$ under the influence of the Lorentz force
one has to solve Eqs~(\ref{flucb}) and (\ref{flucu}) simultaneously
and compute ${\overline {\bm{{{u}}}\times\bm{{{b}}}}}$. In general this is a
difficult problem and one has to take recourse to closure
approximations or numerical simulations. To make progress
we assume here $R_\mathrm{m} = ul/\eta \ll 1$ and $\mbox{\rm Re}= ul/\nu \ll 1$; that is
both the magnetic and fluid Reynolds numbers are small
compared to unity.
In this case there is no small scale dynamo action and so the small scale
magnetic field is solely due to shredding the large scale magnetic field.
Here $u$ and $b$ (see below) are typical strengths of the
fluctuating velocity and magnetic fields respectively.
In the low magnetic Reynolds number limit the ratio of the
first nonlinear term in $\bm G$ to the resistive term in Eq.~(\ref{flucb})
is $\sim (ub/l)/(\eta b/l^2) \sim R_\mathrm{m} \ll 1$. So this part of $\bm G$ can be neglected
compared to the resistive term.
(Note that the second term in $\bm G$ vanishes automatically when
taking the averages to evaluate the mean emf.)
Neglecting the nonlinear term, the generation rate of
$b$ is $\sim u \overline{B}/l$, while
its destruction rate is $\sim \eta b/l^2$.
Equating these two rates, this also implies that $b \sim R_\mathrm{m} \overline{B}$
and the fluctuation field is only a small perturbation to mean fields.
Similarly the ratio of the nonlinear advection term
to the viscous term in Eq.~(\ref{flucu}),
is $\sim (u^2/l)/(\nu u/l^2) \sim \mbox{\rm Re}\ll 1$ and the ratio of the parts of the
Lorentz force nonlinear in $\bm{{{b}}}$ to that linear in $\bm{{{b}}}$ is
$\sim (b^2/l)/(b{\overline B}/l) \sim R_\mathrm{m} \ll 1$. So $\bm T$ can also be
neglected in Eq.~(\ref{flucu}).
In this limit, one can therefore apply the
well known first order smoothing approximation (FOSA).
It is sometimes also referred to as the second order correlation approximation
\citep[or SOCA; see, e.g.,][]{KR80}.
This approximation consists of neglecting the nonlinear terms $\bm G$ and $\bm T$,
to solve {\it both} the induction and momentum equation.
Since FOSA is applied to the momentum equation as well, we
will refer to this as ``double FOSA''.
In order to make the problem analytically tractable,
we will take ${\bm{\overline{{B}}}=\bm{{{B}}}_{0}}=\mbox{const}$.
This also allows us to isolate the $\alpha$-effect in a straightforward fashion.
In the next section we begin by considering for simplicity the case of steady
forcing.
It is then possible to also neglect the time derivatives in
Eqs~(\ref{flucb}) and (\ref{flucu}).
We return to consider time dependent
forcing in detail in Section~\ref{Section:timedep}.
\section{Computing $\mbox{\boldmath ${\cal E}$}$ for steady forcing}
\label{Section:meanemf}
Under the assumptions highlighted above, one can solve directly for
${\bm{{{u}}}}$ and ${\bm{{{b}}}}$ in terms of the forcing function ${\bm f}$.
This in turn allows the calculation of the mean emf in four ways.
\begin{itemize}
\item[A.] We use the induction equation to solve for $\bm{{{b}}}$ in terms of $\bm{{{u}}}$.
Then one can write the emf completely in terms of the velocity field,
as in normal FOSA and then substitute for ${\bm{{{u}}}}$ in terms of ${\bm f}$.
\item[B.] We compute $\mbox{\boldmath ${\cal E}$}={\overline {\bm{{{u}}}\times\bm{{{b}}}}}$ directly.
\item[C.] We use the momentum equation to solve for ${\bm{{{u}}}}$ in terms
of ${\bm{{{b}}}}$ and the forcing function ${\bm f}$, and then substitute
for ${\bm{{{b}}}}$ in terms of ${\bm f}$.
\item[D.] Compute $\mbox{\boldmath ${\cal E}$}$ from the
$\partial\mbox{\boldmath ${\cal E}$}/\partial t=0$ relation, as in $\tau$-approximation closures.
\end{itemize}
We will show that all four methods give the same answer for the mean emf
in terms of the forcing function $\bm f$. The first Method~A gives
the traditional FOSA result for the $\alpha$-effect being
dependent on the helical properties of the velocity field,
while Method~C can be interpreted to reflect the idea of
a renormalized $\alpha$ due to the helicity of small scale magnetic fields.
But we show that the final answer in terms of the forcing is identical.
Before going into the various methods as highlighted above,
we solve for ${\bm{{{u}}}}$ and ${\bm{{{b}}}}$ in terms of the forcing function
${\bm f}$.
In the low conductivity limit, neglecting the time variation of $\bm{{{b}}}$
in Eq.~(\ref{flucb}) we have,
\begin{equation}
-{\eta\nabla^2{\bm{{{b}}}}}=\bm{{{B}}}_0\cdot\nabla\bm{{{u}}}.
\label{loconflucb}
\end{equation}
Similarly, in the limit of low $\mbox{\rm Re}$ and $R_\mathrm{m}$, Eq.~(\ref{flucu}) becomes,
\begin{equation}
-{\nu\nabla^2\bm{{{u}}}}=
\bm{{{B}}}_0\cdot\nabla{\bm{{{b}}}}+\bm f-\nabla p_\mathrm{eff},
\label{loconflucu}
\end{equation}
where $p_\mathrm{eff}$ combines the hydrodynamic and the magnetic pressure.
Using the incompressibility condition, one can eliminate
$p_\mathrm{eff}$.
We will solve these equations in Fourier space.
Throughout this paper we will be using the convention
\begin{equation}
{\tilde{\bm{{{u}}}}}({\bm k})=
{1\over (2\pi)^3}{\int{\bm u}({\bm x})e^{-{\rm i}{\bm{k}}\cdot\bm{x}}\mathrm{d}{\bm x}},
\end{equation}
which satisfies the inverse relation
\begin{equation}
{\bm{{{u}}}}({\bm x})=
{\int{{\tilde{\bm{{{u}}}}}({\bm k})e^{{\rm i}{\bm{k}}\cdot\bm{x}}\mathrm{d}{\bm k}}}.
\end{equation}
In Fourier space, Eqs~(\ref{loconflucb}) and (\ref{loconflucu}) become
\begin{equation}
\label{bintu}
{\eta k^2\tilde{b}_{i}}({\bm k})=
({\rm i}{\bm{k}}\cdot\bm{{{B}}}_0)\,\tilde{u}_{i}({\bm{k}}),
\end{equation}
\begin{equation}
\label{uintbf1}
\nu k^2\tilde{u}_{i}({\bm{k}})=
({\rm i}{\bm{k}}\cdot\bm{{{B}}}_0)\,\tilde{b}_{i}({\bm{k}})+\tilde{f}_{i}({\bm{k}}),
\end{equation}
where we have chosen the forcing to be divergence free,
with ${\rm i}{\bm{k}}\cdot\tilde{\bm{f}}=0$.
We can therefore solve the above two equations simultaneously
to express $\tilde{\bm{{{u}}}}$ and $\tilde{\bm{{{b}}}}$ completely in terms of $\tilde{\bm f}$,
\begin{equation}
\label{uintf}
{\tilde{u}_{i}}({\bm k})=
\frac{\tilde{f}_i({\bm{k}})}{\nu k^2+({\bm{{{B}}}_0\cdot{\bm{k}}})^2/\eta k^2},
\end{equation}
\begin{equation}
\label{bintf}
{\tilde{b}_i}({\bm{k}})=
\frac{\tilde{f}_i({\bm{k}})}{\nu k^2+({\bm{{{B}}}_0\cdot{\bm{k}}})^2/\eta k^2}
\,\frac{{\rm i}{\bm{k}}\cdot\bm{{{B}}}_0}{\eta k^2}.
\end{equation}
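As a quick symbolic check (our sketch, not code from the paper), the pair of Fourier-space equations~(\ref{bintu}) and (\ref{uintbf1}) can be solved for a single component with a computer algebra system, reproducing Eqs~(\ref{uintf}) and (\ref{bintf}); here \texttt{kB} stands for the scalar $\bm{{{B}}}_0\cdot{\bm{k}}$:

```python
import sympy as sp

# Single Fourier component of Eqs (bintu) and (uintbf1); kB = B0·k
eta, nu, k, kB, f = sp.symbols('eta nu k kB f', positive=True)
u, b = sp.symbols('u b')

sol = sp.solve([sp.Eq(eta*k**2*b, sp.I*kB*u),
                sp.Eq(nu*k**2*u, sp.I*kB*b + f)],
               [u, b])

# Expected closed forms, Eqs (uintf) and (bintf)
u_expected = f / (nu*k**2 + kB**2/(eta*k**2))
b_expected = u_expected * sp.I*kB / (eta*k**2)

assert sp.simplify(sol[u] - u_expected) == 0
assert sp.simplify(sol[b] - b_expected) == 0
```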
We can use these solutions to calculate $\mbox{\boldmath ${\cal E}$}$. For getting
an explicit expression, we also need the equal time
force correlation function.
For isotropic and homogeneous forcing, this is given by
\begin{equation}
\label{ffcor}
\overline{{\tilde{f}_{j}}({{\bm{p}}},t)\;{\tilde{f}_{k}}({{\bm{q}}},t)} =
{\delta^3({{\bm{p}}+{\bm{q}}})}{{F_{jk}}({{\bm{q}}})}.
\end{equation}
Here, $F_{jk}$ is the force spectrum tensor which is given by
\begin{equation}
\label{fjk}
{F_{jk}}({\bm{k}})=
P_{jk}\frac{\Phi(k)}{4\pi k^{2}} +
\epsilon_{jkm}\frac{{{\rm i} k_{m}}{\chi(k)}}{8{\pi}k^{4}},
\end{equation}
where ${P_{jk}}={\delta_{jk}}-{{k_{j}}{k_{k}}/k^2}$ is the
projection operator,
and $\Phi(k)$ and $\chi(k)$ are spectra characterizing the
mean squared value and the helicity of the forcing function,
normalized such that
\begin{equation}
\int_0^\infty\Phi(k)\;\mathrm{d}{k}={\textstyle{1\over2}}\overline{\bm{{{f}}}^2}
\equiv{\textstyle{1\over2}} A_{\rm f}^2,
\end{equation}
and
\begin{equation}
\int_0^\infty\chi(k)\;\mathrm{d}{k}=\overline{\bm{{{f}}}\cdot(\nabla\times\bm{{{f}}})}
\equiv H_{\rm f}.
\end{equation}
The mean emf can be written as
\begin{equation}
\label{meanemf}
{\cal E}_{i}({\bm x})=
{\epsilon_{ijk}}{\overline {{\bm{{{u}}}}_j({\bm x}){\bm{{{b}}}}_k({\bm x})}}
= \int{{\tilde{{\cal E}_{i}}({\bm k})}\;e^{{\rm i}{\bm{k}}\cdot\bm{x}}
\mathrm{d}{\bm k}},
\end{equation}
where the Fourier transform $\tilde{\mbox{\boldmath ${\cal E}$}}$ is given by
\begin{equation}
\label{fsemf}
{\tilde{{\cal E}_{i}}({\bm k})}= \epsilon_{ijk} {\int{{\overline {\tilde{u}_{j}(\bm{k}-\bm{q})\tilde{b}_{k}(\bm{q})}}\;\mathrm{d}{\bm q}}}.
\end{equation}
We now turn to the calculation of the nonlinear mean emf and the resulting
nonlinear $\alpha$-effect in the four different methods outlined above.
\subsection{Method~A: express $\bm{{{b}}}$ in terms of $\bm{{{u}}}$ and then
solve for $\mbox{\boldmath ${\cal E}$}$}
\label{metha}
In this approach we use the induction equation to solve for
$\bm{{{b}}}$ in terms of $\bm{{{u}}}$.
Using \Eq{bintu} to express $\bm{{{b}}}$ in terms of $\bm{{{u}}}$ in \Eq{fsemf} gives
\begin{equation}
\label{emfbu1}
{\tilde{{\cal E}_{i}}({\bm k})}=
{{\rm i}}{\epsilon_{ijk}}\;{\int{\frac{{\bm{{{B}}}_0\cdot{\bm{q}}}}{\eta q^2}
\;{\overline {\tilde{u}_{j}(\bm{k}-\bm{q})\tilde{u}_{k}(\bm{q})}}\;\mathrm{d}{\bm q}}}.
\end{equation}
At this stage one can put the emf completely in terms of
the velocity field and recover the
usual FOSA expression that in the low conductivity and isotropic limit,
the ${\alpha}$-effect is related to
the helicity of the velocity potential
\citep{KR80,RR07}.
This can be shown in the following manner: since $\nabla\cdot\bm{{{u}}} =0$,
the velocity field can be expressed as
$\bm{{{u}}}=\nabla\times\bm{\psi}$, where $\bm{\psi}$ is the velocity vector
potential with the gauge condition
$\nabla\cdot\bm{\psi}=0$. We then have
$\tilde{u}_k({\bm{q}})=
{{\rm i}}q_p\epsilon_{kpl}\tilde{\psi}_l({\bm{q}})$.
Substituting this expression in Eq.~(\ref{emfbu1}),
and using the fact that the velocity field is divergence-less,
we get
\begin{eqnarray}
\label{emfbu2}
\nonumber
{\tilde{{\cal E}_{i}}({\bm k})}&=&
k_j{\int{\frac{{\bm{{{B}}}_0\cdot{\bm{q}}}}{\eta q^2}\;
{\overline{\tilde{u}_{j}(\bm{k}-\bm{q})\tilde{\psi}_{i}(\bm{q})}}\;\mathrm{d}{\bm q}}} \\
&&-{\int{\frac{{\bm{{{B}}}_0\cdot{\bm{q}}}}{\eta q^2}\;
q_i\;{\overline{\tilde{u}_{l}(\bm{k}-\bm{q})\tilde{\psi}_{l}(\bm{q})}}\;\mathrm{d}{\bm q}}}.
\end{eqnarray}
For homogeneous and isotropic turbulence, the ${\overline{\tilde{u}_{j}(\bm{k}-\bm{q})\tilde{\psi}_{i}(\bm{q})}}$
correlation is proportional to $\delta^3({\bm{k}})$. Since the first term in
Eq.~(\ref{emfbu2}) is $\propto k_j$, it
does not contribute to $\mbox{\boldmath ${\cal E}$}$. Therefore,
\begin{equation}
\label{emfbu3}
{\tilde{{\cal E}_{i}}({\bm k})}=
-{\int{\frac{{B_{0m}\;q_m}}{\eta q^2}\;
q_i\;{\overline{\tilde{u}_{l}(\bm{k}-\bm{q})\tilde{\psi}_{l}(\bm{q})}}\;\mathrm{d}{\bm q}}}.
\end{equation}
Again, for homogeneous and isotropic turbulence,
the ${\overline{\tilde{u}_{l}(\bm{k}-\bm{q})\tilde{\psi}_{l}(\bm{q})}}$ correlation is $\propto \delta^3({\bm{k}}) g(|{\bm{q}}|)$.
One can then carry out the angular integral in
Eq.~(\ref{emfbu3}) using
$\int (q_m q_i/q^2) (d\Omega/4\pi)
=(1/3) \delta_{mi}$ to get
$ {{\cal E}_{i}(\bm{x})}= \alpha B_{0i}$,
where $\alpha$ is given by
\begin{equation}
\label{uualpha}
\alpha=-\frac{1}{3\eta}\,\overline{\bm{\psi}\cdot\bm{{{u}}}},
\end{equation}
which is identical to the expressions obtained by \cite{KR80};
see also \cite{RR07}.
(Note that when the Lorentz force becomes important the
assumption of isotropy in the above derivation breaks down and
the $\alpha$-effect becomes anisotropic,
as calculated below and detailed in Section 3.5.)
Since we have already solved for the velocity field explicitly,
we can now derive an expression for the mean emf in this
approach. Substituting the velocity
in terms of the forcing function, the mean emf in coordinate
space is given by
\begin{equation}
\label{emfmeta}
{{\cal E}_{i}({\bm x})}=
{\rm i}{\epsilon_{ijk}}\int{{\bm{{{B}}}_{0}\cdot{\bm q}}\over{\eta q^2}}
\frac{F_{jk}({\bm{q}})}
{\left[\nu q^2 +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}/{\eta q^2}\right]^2}\;\mathrm{d}{{\bm{q}}}.
\end{equation}
Here $F_{jk}$ is the spectrum tensor for the force field given
by Eq.~(\ref{fjk}). Note that only the antisymmetric part of
$F_{jk}$ contributes to $\mbox{\boldmath ${\cal E}$}$, due to the presence of
$\epsilon_{ijk}$ on the RHS of the above equation.
We can also write $\mbox{\boldmath ${\cal E}$}$ as
\begin{equation}
\label{emfmetan}
{{\cal E}_{i}({\bm x})}=
{\rm i}{\epsilon_{ijk}}\int\frac{\bm{{{B}}}_0\cdot{\bm{q}}}{(\eta q^2)(\nu q^2)^2}
\frac{F_{jk}({\bm{q}})}{\left[1+N\right]^2}\;\mathrm{d}{{\bm{q}}},
\end{equation}
where $N=({\bm{{{B}}}_0\cdot{\bm{q}}})^2/(\eta\nu q^4)$ determines the importance of the
Lorentz forces on the mean emf. It is to be noted that the limit of
small Lorentz forces corresponds to taking $N \ll 1$ above.
\subsection{Method~B: compute $\mbox{\boldmath ${\cal E}$}$ directly}
\label{methb}
In this approach, we directly compute
$\mbox{\boldmath ${\cal E}$}={\overline {\bm{{{u}}}\times\bm{{{b}}}}}$ by
substituting $\bm{{{u}}}$ and $\bm{{{b}}}$
in terms of $\bm f$, using Eqs~(\ref{uintf}) and (\ref{bintf}).
We then get
\begin{equation}
\label{meanemfff}
\tilde{{\cal E}_i}({\bm{k}})=
{\rm i}\epsilon_{ijk}\; \int \frac{\bm{{{B}}}_{0}\cdot{\bm{q}}}{\eta q^2}
\; \frac{{\overline {\tilde{f}_{j}(\bm{k}-\bm{q})\tilde{f}_{k}(\bm{q})}}}{\gamma({\bm{k}}-{\bm{q}})\gamma({\bm{q}})} \;\mathrm{d}{\bm q},
\end{equation}
where we have defined
$\gamma({\bm{q}}) = \nu q^2+({\bm{{{B}}}_0\cdot{\bm{q}}})^2/\eta q^2$.
Substituting for the force correlation, the mean emf in
coordinate space is given by
\begin{equation}
\label{emfmetb}
{{\cal E}_{i}({\bm x})}=
{\rm i}{\epsilon_{ijk}}\int{{\bm{{{B}}}_{0}\cdot{\bm q}}\over{\eta q^2}}
\frac{F_{jk}({\bm{q}})}
{\left[\nu q^2 +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}/{\eta q^2}\right]^2}\;\mathrm{d}{{\bm{q}}},
\end{equation}
which is identical to Eq.~(\ref{emfmeta}) for $\mbox{\boldmath ${\cal E}$}$
obtained by Method~A.
\subsection{Method~C: express $\bm{{{u}}}$ in terms of $\bm{{{b}}}$ and
then solve for $\mbox{\boldmath ${\cal E}$}$}
\label{methc}
Note that one could also start from the momentum equation
to compute $\mbox{\boldmath ${\cal E}$}$. In this approach, we first solve for $\bm{{{u}}}$ in terms
of $\bm{{{b}}}$ and the forcing function $\bm f$,
and then substitute for $\bm{{{b}}}$ in terms of $\bm f$
using Eq.~(\ref{bintf}).
The difference from the earlier treatments
will be an additional term containing an
$\bm f\times\bm{{{b}}}$-like correlation, which turns out to be
essential for calculating $\mbox{\boldmath ${\cal E}$}$ correctly.
Using Eq.~(\ref{uintbf1}) one can write
\begin{equation}
\label{uintbf2}
{\tilde{u}_{i}}({\bm k})=
\frac{1}{\nu k^2}\left[({\rm i}{\bm{k}}\cdot\bm{{{B}}}_{0})\,{\tilde{b}_{i}}({\bm k})
+{\tilde{f}_{i}}({\bm k})\right].
\end{equation}
From Eq.~(\ref{fsemf}) the mean emf can then be written as
\begin{eqnarray}
\nonumber
{\tilde{{\cal E}_{i}}({\bm k})}&\!=\!&
{\epsilon_{ijk}}{\int{\frac{1}{\nu({\bm{k}}-{\bm{q}})^2}{\overline {\tilde{f}_{j}(\bm{k}-\bm{q})\tilde{b}_{k}(\bm{q})}}\;\mathrm{d}{\bm q}}} \\
&&\!\!+{{\rm i}}{\epsilon_{ijk}}{\int\frac{\bm{{{B}}}_0\cdot({\bm{k-q}})}{\nu({\bm{k}}-{\bm{q}})^2}
\,{\overline {\tilde{b}_{j}(\bm{k}-\bm{q})\tilde{b}_{k}(\bm{q})}}\;\mathrm{d}{\bm q}}.
\label{metC}
\end{eqnarray}
Here the first term involves the $\bm f\times\bm{{{b}}}$-like correlation.
To elucidate the meaning of the second term it is useful
to define the magnetic field as
$\bm{{{b}}} = \nabla\times\bm a$, where $\bm a$ is the small scale
magnetic vector potential in the Coulomb gauge ($\nabla\cdot\bm a =0$).
Then, for isotropic small scale fields, following the approach
in Method~A, the second term in Eq.~(\ref{metC})
gives a contribution to $\mbox{\boldmath ${\cal E}$}$ of the form $\hat{\alpha}_{\rm M}\bm{{{B}}}_{0}$, where
\begin{equation}
\hat{\alpha}_{\rm M}={1\over3\nu}\,\overline{\bm a\cdot\bm{{{b}}}}.
\end{equation}
So this contribution is proportional to the magnetic helicity
of the small scale magnetic field (analogous to the helicity of the vector
potential of the velocity field).
If we substitute $\bm{{{b}}}$ in terms of $\bm f$ from Eq.~(\ref{bintf})
and then integrate over the delta function, the mean emf in coordinate
space can be expressed as
\begin{eqnarray}
\label{emfmc}
{{\cal E}_{i}({\bm x})}&=&
{\rm i}{\epsilon_{ijk}}\int\frac{\bm{{{B}}}_0\cdot{\bm{q}}}{\eta q^2}
\frac{F_{jk}({\bm{q}})}{(\nu q^2)^2\;\left[1+N\right]}\mathrm{d}{{\bm{q}}} \\ \nonumber
&&-{\rm i}{\epsilon_{ijk}}\int\frac{\bm{{{B}}}_0\cdot{\bm{q}}}{\eta q^2}
\frac{F_{jk}({\bm{q}})}{(\nu q^2)^2\;\left[1+N\right]}
\,\frac{N}{1+N}\,\mathrm{d}{{\bm{q}}}.
\end{eqnarray}
The two terms on the RHS of the above equation have an
interesting interpretation. As mentioned above, the limit of small
Lorentz forces corresponds to taking
$N = ({\bm{{{B}}}_0\cdot{\bm{q}}})^2/\eta\nu q^4 \ll 1$.
In this limit the second integral vanishes while the first one
[i.e.\ the $\bm f\times\bm{{{b}}}$-like correlation in Eq.~(\ref{emfmc}),
which is really a $(\nabla^{-2}\bm f)\times\bm{{{b}}}$ correlation]
goes over to a kinematic $\alpha$-effect.
[One can see by comparing Eq.~(\ref{emfmetan}) and the first term in
Eq.~(\ref{emfmc}) that the two are identical in the
limit $N \ll 1$].
In fact, this part of the $\alpha$-effect can be obtained from
Eqs~(\ref{uintf}) and (\ref{bintf}) by neglecting the
Lorentz force in the expression for ${\tilde{u}_{i}}({\bm k})$.
In the following we refer to the field-aligned component of this term,
divided by $B_0$, as the $\hat{\alpha}_{\rm F}$ term,
because it comes from the $\bm f\times\bm{{{b}}}$-like correlation.
As $N$ is increased the contribution from the first term decreases.
At the same time, the second term,
which depends on the magnetic helicity, gains in
importance. Since it has the opposite sign, it partially cancels the
first term and further suppresses the total $\alpha$-effect.
This is reminiscent of the suppression of
the kinetic alpha due to the addition of a magnetic alpha
(proportional to the helical part of $\bm{{{b}}}$) found in several
closure models \citep{pouq,KR82,GD,BF02,BS05b}.
When adding the two terms in Eq.~(\ref{emfmc}),
the mean emf turns out to be
\begin{equation}
\label{emfmetc}
{{\cal E}_{i}({\bm x})}=
{\rm i}{\epsilon_{ijk}}\int{{\bm{{{B}}}_{0}\cdot{\bm q}}\over{\eta q^2}}
\frac{F_{jk}(q)}
{\left[\nu q^2 +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}/{\eta q^2}\right]^2}\;\mathrm{d}{{\bm{q}}},
\end{equation}
which is identical to the expressions obtained in Methods~A and B;
see Eqs~(\ref{emfmeta}) and (\ref{emfmetb}), respectively.
\subsection{Method~D: compute $\mbox{\boldmath ${\cal E}$}$ from the
$\partial\mbox{\boldmath ${\cal E}$}/\partial t=0$ relation}
In recent years the so-called $\tau$-approximation has received increased
attention \citep{KMR96,BF02,RKR,BS05,RR07}.
This involves invoking a closure whereby triple correlations
which arise during the evaluation of $\partial\mbox{\boldmath ${\cal E}$}/\partial t$,
are assumed to provide a damping term proportional to $\mbox{\boldmath ${\cal E}$}$ itself.
In the present context there is no need to invoke a closure for the
triple correlations, because
these terms are small for low fluid and magnetic Reynolds numbers.
It turns out that the correct expression for $\mbox{\boldmath ${\cal E}$}$ can still be
derived in the same framework, where one evaluates the
$\partial\mbox{\boldmath ${\cal E}$}/\partial t$ expression.
The expression for $\partial\mbox{\boldmath ${\cal E}$}/\partial t$
is governed by two terms, $\overline{\dot{\bm{{{u}}}}\times\bm{{{b}}}}$ and
$\overline{\bm{{{u}}}\times\dot{\bm{{{b}}}}}$, where dots denote partial time
differentiation.
Of course, both $\dot{\bm{{{u}}}}$ and $\dot{\bm{{{b}}}}$ vanish in the present case,
but this is the result of a cancellation of driving and dissipating terms.
In the present analysis both terms will be retained, because the
dissipating term, which is related to the desired $\mbox{\boldmath ${\cal E}$}$, can then
just be written as the negative of the driving term.
We perform the analysis in Fourier space and begin by defining
$\bm E({\bm{k}},{\bm{q}}) = \overline{\tilde{\bm{{{u}}}}({\bm{k}}-{\bm{q}})\times\tilde{\bm{{{b}}}}({\bm{q}})}$.
The required mean emf is then given by
\begin{equation}
\mbox{\boldmath ${\cal E}$} = \int {\bm E({\bm{k}},{\bm{q}})} e^{ i {{\bm{k}}\cdot\bm{x}}}
\mathrm{d}{\bm k}\, \mathrm{d}{\bm q}.
\label{EEdef}
\end{equation}
To calculate the time derivative $\partial\mbox{\boldmath ${\cal E}$}/\partial t$, one
needs to evaluate $\dot{\bm E} = \dot{\bm E}_{\rm K} + \dot{\bm E}_{\rm M}$, where
\begin{equation}
\dot{\bm E}_{\rm K}({\bm{k}},{\bm{q}}) =
\overline{\tilde{\bm{{{u}}}}({\bm{k}}-{\bm{q}})\times\dot{\tilde{\bm{{{b}}}}}({\bm{q}})},
\label{ubdottindep}
\end{equation}
\begin{equation}
\dot{\bm E}_{\rm M}({\bm{k}},{\bm{q}}) =
\overline{\dot{\tilde{\bm{{{u}}}}}({\bm{k}}-{\bm{q}})\times\tilde{\bm{{{b}}}}({\bm{q}})}.
\label{udotbtindep}
\end{equation}
For $\dot{\tilde{\bm{{{u}}}}}$ and $\dot{\tilde{\bm{{{b}}}}}$ we restore the
time derivatives in Eqs~(\ref{bintu}) and (\ref{uintbf1}),
and obtain
\begin{eqnarray}
\label{emfKdot}
\dot{\bm E}_{\rm K}&=&{\rm i}{{\bm{q}}\cdot\bm{{{B}}}_0}\,
[\overline{\tilde{\bm{{{u}}}}({\bm{k}}-{\bm{q}})\times\tilde{\bm{{{u}}}}({\bm{q}})}]
-\eta q^2 {\bm E},
\\
\label{emfMdot}
\dot{\bm E}_{\rm M} &=&{\rm i}({\bm{k}}-{\bm{q}})\cdot\bm{{{B}}}_0\,
[\overline{\tilde{\bm{{{b}}}}({\bm{k}}-{\bm{q}})\times\tilde{\bm{{{b}}}}({\bm{q}})}]
-\nu ({\bm{k}}-{\bm{q}})^2 \bm E
\nonumber \\
&&+\overline{\tilde{\bm{{{f}}}}({\bm{k}}-{\bm{q}})\times\tilde{\bm{{{b}}}}({\bm{q}})}.
\end{eqnarray}
Since all time derivatives are negligible,
we can simplify the RHS of the above equations
by using Eqs~(\ref{uintf}) and (\ref{bintf}) to express $\tilde{\bm{{{u}}}}$ and
$\tilde{\bm{{{b}}}}$ in terms of the forcing function.
Adding the two Eqs~(\ref{emfKdot}) and (\ref{emfMdot}) yields
\begin{eqnarray}
\left[
\eta q^2 + \nu ({{\bm{k}} -{\bm{q}}})^2\right] {{\bm E}_{i}} =
\delta^3({\bm{k}}) \frac{{\rm i}{\bm{q}}\cdot\bm{{{B}}}_0}
{\eta q^2}
\frac{\epsilon_{ijk}\;F_{jk}({\bm{q}})}{\gamma({\bm{q}})\gamma({\bm{k}}-{\bm{q}})}
\nonumber \\
\times \left[\eta q^2 + \gamma({\bm{k}} -{\bm{q}}) - \frac{(({\bm{k}}-{\bm{q}})\cdot\bm{{{B}}}_0)^2}
{\eta({\bm{k}}-{\bm{q}})^2} \right],
\label{Ekq}
\end{eqnarray}
where the function $\gamma$ was defined just below \Eq{meanemfff}.
The expression in square brackets on the right-hand side above
exactly reduces to
$\eta q^2 + \nu ({\bm{k}} -{\bm{q}})^2$, and so, in the steady state limit,
$\partial/\partial t=0$, we have
\begin{equation}
\label{tau1}
{{\bm E}_{i}({\bm{k}},{\bm{q}})} = \delta^3({\bm{k}}) \frac{{\rm i}{{\bm{q}}\cdot\bm{{{B}}}_0}}{\eta q^2}
\frac{\epsilon_{ijk}\; F_{jk}({\bm{q}})}{\gamma({\bm{q}})\gamma({\bm{k}}-{\bm{q}})}.
\end{equation}
Using this expression in Eq.~(\ref{EEdef}), and integrating
over ${\bm{k}}$, we again recover the form of $\mbox{\boldmath ${\cal E}$}$ identical
to Methods~A, B and C.
Thus, in this simple example where one can apply
FOSA to both the induction and
momentum equations, one obtains identical expressions for
$\mbox{\boldmath ${\cal E}$}$ in
terms of the correlation properties of the forcing
function $\bm f$ in all four methods.
\subsection{The nonlinear $\alpha$-effect}
We now compute the nonlinear $\alpha$-effect explicitly
from the expression of $\mbox{\boldmath ${\cal E}$}$ as obtained in
the four methods discussed above. As mentioned
earlier, only the antisymmetric part of $F_{jk}$ in Eq.~(\ref{fjk})
contributes to $\mbox{\boldmath ${\cal E}$}$, so Eq.~(\ref{emfmetc}) takes
the simple form
\begin{equation}
\label{alpha1}
{{\cal E}_{i}({\bm x})}=
{\rm i}{\epsilon_{ijk}}\!\int{\!{\bm{{{B}}}_{0}\cdot{\bm q}}\over{\eta q^2}}
\frac{{{\rm i}}q_m\epsilon_{kjm}\chi(q)}
{8\pi q^4\left[\nu q^2 +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}/{\eta q^2}\right]^2}
\;\mathrm{d}{{\bm{q}}}.
\end{equation}
Contracting the two $\epsilon$'s and solving for
$\alpha=\mbox{\boldmath ${\cal E}$}\cdot\bm{{{B}}}_0/B_0^2$ leads to
\begin{equation}
\label{alpha2}
\alpha=
-\int{\chi(q)\over\eta\nu^2q^6}\,
\frac{({\hat{\bm{{{B}}}}_0\cdot\hat{{\bm{q}}}})^2}
{\left[1+(\hat{\bm{{{B}}}}_ 0\cdot\hat{{\bm{q}}})^2\beta^2\right]^2}\,
{\mathrm{d}{{\bm{q}}}\over4\pi q^2},
\end{equation}
where we have introduced $\beta^2=B_0^2/(\eta\nu q^2)$ and
hats denote unit vectors.
The solution involves an angular integral with respect to the cosine
of the polar angle, $\mu=\hat{\bm{{{B}}}}_0\cdot\hat{{\bm{q}}}$,
\begin{equation}
F(\beta)=\int_{-1}^1{\mu^2\,\mathrm{d}{\mu}\over(1+\beta^2\mu^2)^2}
={1\over\beta^2}\left({\tan^{-1}\!\beta\over\beta}-{1\over1+\beta^2}\right),
\end{equation}
so that
\begin{equation}
\label{alpha3}
\alpha=-{1\over2\eta\nu^2}
\int_{0}^{\infty}\frac{\chi(q)}{q^6}\,F(\beta)\,\mathrm{d}{q}.
\end{equation}
Note that for small values of $\beta$ we have $F(\beta)\approx2/3-4\beta^2\!/5$.
In the limit of large values of $B_0$ and $\beta$ we have
$F(\beta)\to\pi/(2\beta^3)$, so the expression of $\alpha$ reduces to
\begin{equation}
\label{alpha5}
\alpha\to
-\frac{\pi}{4B_0^3}\sqrt{\frac{\eta}{\nu}}
\int_{0}^{\infty}\frac{\chi(q)}{q^3}\,\mathrm{d}{q}
\quad\mbox{(for $B_0\to\infty$)}.
\end{equation}
So, in the asymptotic limit of large $B_0$, we have
$\alpha \propto B_0^{-3}$.
This is a well-known result that goes back to the pioneering
works of \cite{Mof72} and \cite{Rue74}; see also \cite{RK93}.
Note that $\mbox{\boldmath ${\cal E}$}\times\bm{{{B}}}_0=\bm{0}$, because the corresponding
angular integral is over the product of a sine and a cosine term,
which vanishes.
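As an independent numerical check (not part of the derivation), the closed form of $F(\beta)$ and its quoted limits can be compared against direct quadrature of the defining integral; a minimal Python sketch, where the function names and tolerances are our own choices:

```python
import math

def F_closed(beta):
    """Closed form: F(beta) = (1/beta^2) [atan(beta)/beta - 1/(1+beta^2)]."""
    return (math.atan(beta) / beta - 1.0 / (1.0 + beta**2)) / beta**2

def F_quad(beta, n=20001):
    """Simpson quadrature of the defining integral
    int_{-1}^{1} mu^2 dmu / (1 + beta^2 mu^2)^2."""
    h = 2.0 / (n - 1)
    s = 0.0
    for i in range(n):
        mu = -1.0 + i * h
        w = 1.0 if i in (0, n - 1) else (4.0 if i % 2 == 1 else 2.0)
        s += w * mu**2 / (1.0 + (beta * mu)**2)**2
    return s * h / 3.0

for beta in (0.1, 1.0, 10.0):
    assert abs(F_closed(beta) - F_quad(beta)) < 1e-8

# Limits quoted in the text: F -> 2/3 for small beta, F -> pi/(2 beta^3) for large beta
assert abs(F_closed(1e-3) - 2.0 / 3.0) < 1e-5
assert abs(F_closed(1e3) / (math.pi / (2.0 * 1e3**3)) - 1.0) < 1e-2
```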
To illustrate further the dependence of
$\alpha$ on $B_0$ we need to adopt some form for
the spectrum $\chi(q)$. We assume that
the forcing is at a particular wavenumber, $q_0$, and choose
$\chi(q) = H_{\rm f}\delta(q - q_{0})$ where $H_{\rm f}$ is the helicity
of the forcing.
Then the integration of the delta function simply gives
\begin{equation}
\label{totalpha}
\frac{\alpha}{\alpha_0}=
{3\over2}\left({B_0\over B_{\rm cr}}\right)^{-2}
\left[{\tan^{-1}(B_0/B_{\rm cr})\over B_0/B_{\rm cr}}
-{1\over1+B_0^2/B_{\rm cr}^2}\right]\!,
\end{equation}
where we have defined
\begin{equation}
\alpha_0=-H_{\rm f}/(3\eta\nu^2\!q_0^6),\quad
B_{\rm cr}=\sqrt{\eta\nu}q_{0}.
\end{equation}
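The limiting behaviour of the quenching formula, Eq.~(\ref{totalpha}), can be spot-checked numerically: $\alpha\to\alpha_0$ for $B_0\ll B_{\rm cr}$, and $\alpha/\alpha_0\to(3\pi/4)(B_0/B_{\rm cr})^{-3}$ for $B_0\gg B_{\rm cr}$, which follows from $F(\beta)\to\pi/(2\beta^3)$. A short Python sketch (the function name \texttt{quench} is ours):

```python
import math

def quench(x):
    """alpha/alpha_0 as a function of x = B_0/B_cr, Eq. (totalpha)."""
    return 1.5 / x**2 * (math.atan(x) / x - 1.0 / (1.0 + x**2))

# Weak-field limit: alpha -> alpha_0
assert abs(quench(1e-3) - 1.0) < 1e-3

# Strong-field limit: alpha/alpha_0 -> (3 pi / 4) x^{-3}, i.e. alpha ~ B_0^{-3}
x = 1e3
assert abs(quench(x) / ((3.0 * math.pi / 4.0) / x**3) - 1.0) < 1e-2

# The quenching is monotonic in B_0
assert quench(0.5) > quench(1.0) > quench(2.0)
```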
If we express $\alpha = \hat{\alpha}_{\rm F} + \hat{\alpha}_{\rm M}$, where
$\hat{\alpha}_{\rm F}$ is computed from the $(\nabla^{-2}\bm f)\times\bm{{{b}}}$ term
and $\hat{\alpha}_{\rm M}$ from the $(\nabla^{-2}\bm{{{b}}})\times\bm{{{b}}}$ term
in Eq.~(\ref{emfmc}), we have
\begin{equation}
\label{alphaF}
\frac{\hat{\alpha}_{\rm F}}{\alpha_0}=
3\left({B_0\over B_{\rm cr}}\right)^{-2}
\left[1 - {\tan^{-1}(B_0/B_{\rm cr})\over B_0/B_{\rm cr}}\right],
\end{equation}
\begin{equation}
\frac{\hat{\alpha}_{\rm M}}{\alpha_0}=
3\left({B_0\over B_{\rm cr}}\right)^{-2}\!
\left[
{3\over 2}{\tan^{-1}(B_0/B_{\rm cr})\over B_0/B_{\rm cr}}
-\frac{3/2+B_0^2/B_{\rm cr}^2}{1+B_0^2/B_{\rm cr}^2}
\right]\!.
\label{alphaM}
\end{equation}
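The closed forms in Eqs~(\ref{totalpha}), (\ref{alphaF}) and (\ref{alphaM}) can be checked against each other numerically: the two contributions must add up to the total $\alpha$, and each must scale as $B_0^{-2}$ asymptotically. A minimal Python sketch (function names are ours):

```python
import math

def alpha_tot(x):
    """alpha/alpha_0, Eq. (totalpha), with x = B_0/B_cr."""
    return 1.5 / x**2 * (math.atan(x) / x - 1.0 / (1.0 + x**2))

def alpha_F(x):
    """hat-alpha_F/alpha_0, Eq. (alphaF)."""
    return 3.0 / x**2 * (1.0 - math.atan(x) / x)

def alpha_M(x):
    """hat-alpha_M/alpha_0, Eq. (alphaM)."""
    return 3.0 / x**2 * (1.5 * math.atan(x) / x - (1.5 + x**2) / (1.0 + x**2))

for x in (0.1, 0.5, 1.0, 3.0, 10.0, 100.0):
    # The two contributions add up to the total alpha ...
    assert abs(alpha_F(x) + alpha_M(x) - alpha_tot(x)) < 1e-12
    # ... and the magnetic contribution opposes the alpha-effect
    assert alpha_M(x) < 0.0

# Asymptotically both contributions scale as B_0^{-2} (their sum as B_0^{-3})
x = 1e3
assert abs(alpha_F(x) * x**2 - 3.0) < 1e-2
assert abs(alpha_M(x) * x**2 + 3.0) < 1e-2
```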
The hats on the $\alpha$s indicate that a special choice has been
made to divide $\alpha$ up into different contributions.
A different choice, written without hats and derived under the $\tau$
approximation, will be discussed in Section~\ref{tauUnsteady}.
We plot in Fig.~\ref{panalyt_vs_B0} the variation of $\alpha$ with $B_0$.
This shows that $\alpha \sim \alpha_0$ for $B_0 \la B_{\rm cr}$
and in the asymptotic limit $\alpha$ decreases $\propto B_0^{-3}$.
This figure also shows the variations of $\hat{\alpha}_{\rm F}$
and $\hat{\alpha}_{\rm M}$ with $B_0$ as predicted from Eqs~(\ref{alphaF})
and (\ref{alphaM}).
Both decrease asymptotically like $B_0^{-2}$ because here, unlike in
Eq.~(\ref{totalpha}), the term in square brackets tends to a constant.
Their sum decreases as $B_0^{-3}$.
One could also define a kinetic $\hat{\alpha}_{\rm K}$
from the $(\nabla^{-2}\bm{{{u}}})\times\bm{{{u}}}$ term
in Eq.~(\ref{emfmetan}). In this case, for steady forcing we have
$\alpha = \hat{\alpha}_{\rm K} = \hat{\alpha}_{\rm F}+\hat{\alpha}_{\rm M}$.
\begin{figure} \begin{center}
\includegraphics[width=\columnwidth]{panalyt_vs_B0}
\end{center}\caption[]{Variation of $\alpha$, $\hat{\alpha}_{\rm F}$, and
$-\hat{\alpha}_{\rm M}$ with $B_0$ from the analytical theory.
Note the mutual approach of $\hat{\alpha}_{\rm F}$ and $-\hat{\alpha}_{\rm M}$
(asymptotic slope $-2$) to produce a lower, quenched value
(asymptotic slope $-3$).}
\label{panalyt_vs_B0}
\end{figure}
\begin{figure} \begin{center}
\includegraphics[width=\columnwidth]{palpall_vs_B0}
\end{center}\caption[]{Variation of $\alpha$, $\hat{\alpha}_{\rm F}$, and
$-\hat{\alpha}_{\rm M}$ with $B_0$ for the ABC flow
at low fluid and magnetic Reynolds numbers ($\mbox{\rm Re}=R_{\rm m}=10^{-4}$),
compared with the analytic theory prediction for a fully isotropic flow.
Note that the numerically determined values of $\hat{\alpha}_{\rm F}$ are
smaller and those of $-\hat{\alpha}_{\rm M}$ larger than the corresponding
analytic values.
However they still add up to satisfy the relation
$\hat{\alpha}_{\rm F} + \hat{\alpha}_{\rm M} = \alpha$, predicted by analytic theory.
In all cases the numerically determined values of $\hat{\alpha}_{\rm K}$
agree with the numerically determined values of $\alpha$.
}\label{palpall_vs_B0}
\end{figure}
\subsection{Comparison with simulations}
\label{NumericalVerification}
Simulations allow us to alleviate some of the restrictions imposed by the
analytical approach, such as the limit of low fluid and magnetic Reynolds
numbers, but they also introduce additional restrictions related, for
example, to the degree of anisotropy.
We adopt here a simple and frequently used steady and monochromatic
forcing function that is related to an ABC flow, i.e.\
\begin{equation}
\bm{{{f}}}(\bm{x})={A_{\rm f}\over\sqrt{3}}\pmatrix{
\sin k_{\rm f}z+\cos k_{\rm f}y\cr
\sin k_{\rm f}x+\cos k_{\rm f}z\cr
\sin k_{\rm f}y+\cos k_{\rm f}x},
\end{equation}
where $A_{\rm f}$ denotes the amplitude and $k_{\rm f}$ the
wavenumber of the forcing function.
This forcing function is isotropic with respect to the three
coordinate directions, but not with respect to other directions.
The helicity of this forcing function is $H_{\rm f} = k_{\rm f} A_{\rm f}^2$.
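Since this forcing is a Beltrami field, $\nabla\times\bm{{{f}}}=k_{\rm f}\bm{{{f}}}$, the relation $H_{\rm f}=k_{\rm f}A_{\rm f}^2$ can be verified by averaging $\bm{{{f}}}\cdot(\nabla\times\bm{{{f}}})$ over a periodic box. A Python sketch with arbitrary illustrative values of $A_{\rm f}$, $k_{\rm f}$ and the grid size (not those of the runs reported below):

```python
import math

def abc_force(x, y, z, A, kf):
    """The ABC-type forcing function of the text."""
    c = A / math.sqrt(3.0)
    return (c * (math.sin(kf*z) + math.cos(kf*y)),
            c * (math.sin(kf*x) + math.cos(kf*z)),
            c * (math.sin(kf*y) + math.cos(kf*x)))

def abc_curl(x, y, z, A, kf):
    """Analytic curl of the forcing; component by component it equals kf*f."""
    c = A / math.sqrt(3.0)
    return (c * kf * (math.sin(kf*z) + math.cos(kf*y)),  # d_y f_z - d_z f_y
            c * kf * (math.sin(kf*x) + math.cos(kf*z)),  # d_z f_x - d_x f_z
            c * kf * (math.sin(kf*y) + math.cos(kf*x)))  # d_x f_y - d_y f_x

A, kf, n = 2.0, 3.0, 24            # illustrative values only
H = 0.0
for i in range(n):
    for j in range(n):
        for k in range(n):
            x, y, z = 2*math.pi*i/n, 2*math.pi*j/n, 2*math.pi*k/n
            f = abc_force(x, y, z, A, kf)
            w = abc_curl(x, y, z, A, kf)
            H += sum(fc * wc for fc, wc in zip(f, w))
H /= n**3

assert abs(H - kf * A**2) < 1e-8   # H_f = k_f A_f^2
```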
We use the \textsc{Pencil Code},\footnote{
\url{http://www.nordita.dk/software/pencil-code}}
which is a high-order finite-difference code (sixth order in space and
third order in time) for solving the compressible hydromagnetic equations.
We adopt a box size of $(2\pi)^3$
and take $A_{\rm f}=10^{-4}$, $k_{\rm f}=1$, and determine
$\alpha={\cal E}_z/B_{0z}$ as well as $\hat{\alpha}_{\rm K}$,
$\hat{\alpha}_{\rm F}$, and $\hat{\alpha}_{\rm M}$, which are given
respectively by the three integrals in Eqs~(\ref{emfbu1}) and (\ref{metC}).
The result is shown in Fig.~\ref{palpall_vs_B0}.
For these runs a resolution of just $32^3$ meshpoints is sufficient,
as demonstrated by comparing with runs with $64^3$ meshpoints.
For $B_0/B_{\rm cr}\leq1$ the resulting values of $\alpha$
agree perfectly in all cases with both $\hat{\alpha}_{\rm K}$ and
$\hat{\alpha}_{\rm F}+\hat{\alpha}_{\rm M}$.
However, for $B_0/B_{\rm cr}>1$ the numerically determined values
of $\alpha$ are smaller than those expected theoretically using $q_0=k_{\rm f}$.
As in the analytical theory, the quenching is explained by a
rise of $-\hat{\alpha}_{\rm M}$.
Note, however, that in the simulations this quantity attains
a maximum at somewhat weaker field strength than in the analytical theory;
cf.\ the dashed line and crosses in Fig.~\ref{palpall_vs_B0}.
We believe that this discrepancy is explained by an insufficient
degree of isotropy of the forcing function.
Finally, it is interesting to
address the question of the Reynolds number dependence
of the quenching behaviour.
In Fig.~\ref{palptau_vs_Rm_nu2_B2_corr2} we show the results for
$\alpha$, $\hat{\alpha}_{\rm K}$, $\hat{\alpha}_{\rm F}$, and $-\hat{\alpha}_{\rm M}$.
Since the velocity is of the order of
$u_{\rm ref}\equiv A_{\rm f}/(\nu k_{\rm f}^2)$,
we have defined the fluid and magnetic Reynolds numbers as
$\mbox{\rm Re}=u_{\rm ref}/(\nu k_{\rm f})$ and
$R_{\rm m}=u_{\rm ref}/(\eta k_{\rm f})$, respectively.
For all runs we have assumed $\mbox{\rm Re}=1$ and $B_0=u_{\rm ref}$.
Again, we see quite clearly the approach of $-\hat{\alpha}_{\rm M}$ toward
$\hat{\alpha}_{\rm F}$, so as to make their sum diminish toward $\alpha$
with increasing values of $R_{\rm m}$.
For $R_{\rm m}<1$ the numerical data agree well with the analytic ones,
whilst for $R_{\rm m}>1$ the numerical values for all alphas lie below the
analytic ones (not shown here).
In particular, for $R_{\rm m}>1$ the value of $\hat{\alpha}_{\rm K}$, based on
the integral in Eq.~(\ref{emfbu1}), begins to exceed the value of $\alpha$.
This apparently signifies the break-down of FOSA.
However, one may expect that the relevant inverse time scales or rates are no
longer governed by just the resistive rate, $\sim \eta k_{\rm f}^2$,
but also by a dynamical rate, $\sim u_{\rm ref} k_{\rm f}$.
This leads to a correction factor, $1/(1+aR_{\rm m})$, where $a\approx1$
is an empirically determined coefficient quantifying the importance of
this correction.
In Fig.~\ref{palptau_vs_Rm_nu2_B2_corr2} we show that both
$\hat{\alpha}_{\rm F}+\hat{\alpha}_{\rm M}$ as well as $\hat{\alpha}_{\rm K}/(1+aR_{\rm m})$
with $a=0.7$ are close to $\alpha$ for $R_{\rm m}\leq30$.
Note that no such correction is necessary for $\hat{\alpha}_{\rm F}$ or
$\hat{\alpha}_{\rm M}$, because these quantities are determined by the
momentum equation, and hence by the viscous time scale; since
$\mbox{\rm Re}$ is small here, no correction arises for them.
Again, a numerical resolution of $32^3$ meshpoints was used except
for $R_{\rm m}\geq10$, where we used $64^3$ meshpoints.
\begin{figure}\begin{center}
\includegraphics[width=\columnwidth]{palptau_vs_Rm_nu2_B2_corr2}
\end{center}\caption[]{
Dependence of the $\alpha$-effect on $R_{\rm m}$
for fixed field strength, $B_0=u_{\rm ref}$,
for ABC-flow forcing at $\mbox{\rm Re}=1$.
Note the agreement between $\alpha$ and $\hat{\alpha}_{\rm F}+\hat{\alpha}_{\rm M}$
as well as $\hat{\alpha}_{\rm K}/(1+aR_{\rm m})$ for $R_{\rm m}\leq30$.
}\label{palptau_vs_Rm_nu2_B2_corr2}\end{figure}
\section{Time-dependent forcing}
\label{Section:timedep}
We now consider the case when $\bm{{{f}}}$ depends on time, but is ne\-ver\-the\-less
statistically stationary.
In that case, both $\dot{\bm{{{b}}}}$ and $\dot{\bm{{{u}}}}$ are finite, and hence
also $\overline{\bm{{{u}}}\times\dot{\bm{{{b}}}}}$ and $\overline{\dot{\bm{{{u}}}}\times\bm{{{b}}}}$
can in general be finite, even though their sum might vanish in the
statistically steady state.
Later we specialize to one case of particular
interest, when the correlation time of the forcing is small.
This was the case, for example, in the
simulations of \cite{B01} and \cite{BS05b}.
As in Section~\ref{Section:meanemf},
we will assume here small magnetic and fluid Reynolds numbers
and neglect the nonlinear terms in the induction and
momentum equations, but retain the time dependence.
We will also now take a Fourier transform in time and define
\begin{equation}
{\tilde{\bm{{{u}}}}}({\bm k},\omega)=
{1\over (2\pi)^4}{\int{\bm u}({\bm x},t)e^{-{\rm i}{\bm{k}}\cdot\bm{x} + {\rm i} \omega t}\mathrm{d}{\bm x}}\,\mathrm{d} t,
\end{equation}
which satisfies the inverse relation
\begin{equation}
{\bm{{{u}}}}({\bm x},t)={\int{{\tilde{\bm{{{u}}}}}({\bm k},\omega)
e^{{\rm i}{\bm{k}}\cdot\bm{x} - {\rm i} \omega t}\mathrm{d}{\bm k} \,\mathrm{d} \omega}}.
\end{equation}
In Fourier space, Eqs~(\ref{bintu}) and (\ref{uintbf1}) become
\begin{equation}
\label{bintut}
(-{\rm i}\omega + \eta k^2)\tilde{b}_{i}({\bm k},\omega)=
({\rm i}{\bm{k}}\cdot\bm{{{B}}}_0)\,\tilde{u}_{i}({\bm{k}},\omega),
\end{equation}
\begin{equation}
\label{uintbf1t}
(-{\rm i}\omega + \nu k^2)\tilde{u}_{i}({\bm{k}},\omega)=
({\rm i}{\bm{k}}\cdot\bm{{{B}}}_0)\,\tilde{b}_{i}({\bm{k}},\omega)+\tilde{f}_{i}({\bm{k}},\omega).
\end{equation}
In order to simplify the writing of the equations below, it is convenient
to define complex frequencies
\begin{equation}
\Gamma_\eta = -{\rm i}\omega + \eta k^2,\quad
\Gamma_\nu =-{\rm i}\omega + \nu k^2.
\end{equation}
As before, we can solve equations \eq{bintut} and \eq{uintbf1t} simultaneously
to express $\tilde{\bm{{{u}}}}$ and $\tilde{\bm{{{b}}}}$ completely in terms of $\tilde{\bm f}$,
\begin{equation}
\label{uintft}
{\tilde{u}_{i}}({\bm k},\omega)=
\frac{\tilde{f}_i({\bm{k}},\omega)}{\Gamma_\nu + ({\bm{{{B}}}_0\cdot{\bm{k}}})^2/\Gamma_\eta},
\end{equation}
\begin{equation}
\label{bintft}
{\tilde{b}_i}({\bm{k}},\omega)=
\frac{\tilde{f}_i({\bm{k}},\omega)}{\Gamma_\nu +({\bm{{{B}}}_0\cdot{\bm{k}}})^2/\Gamma_\eta }
\,\frac{{\rm i}{\bm{k}}\cdot\bm{{{B}}}_0}{\Gamma_\eta }.
\end{equation}
We can use these solutions to calculate $\mbox{\boldmath ${\cal E}$}$. To obtain
an explicit expression, we also need the force correlation function in
Fourier space.
For isotropic, homogeneous and statistically stationary forcing, this is given by
\begin{equation}
\label{ffcort}
\overline{{\tilde{f}_{j}}({{\bm{p}}},\omega)\;{\tilde{f}_{k}}({{\bm{q}}},\omega')} =
\delta^3({{\bm{p}}+{\bm{q}}})\delta(\omega+\omega')F_{jk}({\bm{q}},\omega),
\end{equation}
where we can still take $F_{jk}$ to be of the form given by Eq.~(\ref{fjk}),
but with the spectral functions now replaced by frequency-dependent ones,
say $\bar\Phi(k,\omega)$ and $\bar\chi(k,\omega)$.
Note that in the limit where the correlation time of the forcing function
is short (delta-correlated in time) the
Fourier space spectral function $F_{jk}$ is nearly
independent of the frequency $\omega$.
However if one evaluates the helicity of the forcing, one gets
\begin{equation}
\int \bar \chi(k,\omega)\;\mathrm{d}{k}\,\mathrm{d}\omega
=\overline{\bm{{{f}}}\cdot(\nabla\times\bm{{{f}}})}
\equiv H_{\rm f},
\end{equation}
where the $k$ integral is from $0$ to $\infty$, while the
$\omega$ integration goes from $-\infty$ to $+\infty$.
For $\bar\chi$ independent of $\omega$ this would be infinite.
So we still keep a spectral dependence and write $\bar\chi(q,\omega)
=\chi(q) g(\omega)$, where $g$ is an even function of $\omega$, satisfying
$\int g(\omega) \mathrm{d}\omega = 1$.
[The property that $g$ is even
is a consequence of the forcing function being real; see
Eq.~(7.44) of \cite{M78}.]
For a forcing with, say, correlation time $\tau$,
$g(\omega)$ will be nearly constant for $\omega\tau \la 1$ and
decay at large $\omega$.
In the limit of small $\tau$ the range
for which $g(\omega)$ is nearly constant will be very large. We will need only
$g(0) \sim \tau$ in most of what follows.
Note that the other extreme limit of steady forcing corresponds to taking
$g(\omega) \to \delta(\omega)$.
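As a concrete illustration (the Lorentzian form is our own choice; the text does not commit to a specific $g$), one may take $g(\omega)=(\tau/\pi)/(1+\omega^2\tau^2)$, which is even, integrates to unity, and has $g(0)=\tau/\pi\sim\tau$:

```python
import math

def g(omega, tau):
    """A hypothetical even spectral function with correlation time tau
    (Lorentzian; our illustrative choice, not mandated by the text)."""
    return (tau / math.pi) / (1.0 + (omega * tau)**2)

tau = 1.0
# Normalisation: int g(omega) d omega = 1 (trapezoid on a wide symmetric grid)
L, n = 1.0e4, 200001
h = 2.0 * L / (n - 1)
total = sum(g(-L + i*h, tau) * (0.5 if i in (0, n - 1) else 1.0)
            for i in range(n)) * h
assert abs(total - 1.0) < 1e-3

# g is even, and g(0) is of order tau
assert g(1.23, tau) == g(-1.23, tau)
assert abs(g(0.0, tau) - tau / math.pi) < 1e-15
```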
The mean emf can be written as
\begin{equation}
\label{meanemft}
{\cal E}_{i}({\bm x},t)=
{\epsilon_{ijk}}{\overline {{\bm{{{u}}}}_j{\bm{{{b}}}}_k}}
= \int {{\tilde{{\cal E}_{i}}({\bm k},\Omega)}\;e^{{\rm i}{\bm{k}}\cdot\bm{x} - {\rm i}\Omega t}
\mathrm{d}{\bm k}}\,\mathrm{d}\Omega,
\end{equation}
where the Fourier transform $\tilde{\mbox{\boldmath ${\cal E}$}}$ is given by
\begin{equation}
\label{fsemft}
\tilde{{\cal E}_{i}}({\bm k},\Omega)= \epsilon_{ijk} {\int{\ubkqt
\;\mathrm{d}{\bm q}\,\mathrm{d}\omega}}.
\end{equation}
We now turn to the calculation of the nonlinear mean emf and the resulting
nonlinear $\alpha$-effect. We focus on Method~A, the direct Method~B and also
Method~C to illustrate the similarities and differences from
the case when the time evolution is neglected.
We also discuss in detail the result of applying a
$\tau$-approximation type method in the
subsequent section.
\subsection{Computing $\mbox{\boldmath ${\cal E}$}$ from the induction equation }
\label{methat}
As before, we first start from the induction equation to solve
for $\bm{{{b}}}$ in terms of $\bm{{{u}}}$ using Eq.~(\ref{bintut}), so
\begin{equation}
\label{emfbutdep}
{\tilde{{\cal E}_{i}}({\bm k})}=
{{\rm i}}{\epsilon_{ijk}}\;\int \bm{{{B}}}_0\cdot{\bm{q}}
\;\frac{\uukqt}{-{\rm i}\omega + \eta q^2}\;\mathrm{d}{\bm q} \mathrm{d}\omega.
\end{equation}
We can then express
$\bm{{{u}}}$ in terms of $\bm{{{f}}}$ using Eq.~(\ref{uintft}). Substituting
from Eq.~(\ref{ffcort}) for the force correlation
in a time dependent flow, the mean emf in coordinate space is then given
by
\begin{equation}
\label{emfmetbt}
{{\cal E}_{i}({\bm x})}=
{\rm i}{\epsilon_{ijk}}\int {{\bm{{{B}}}_{0}\cdot{\bm q}}\over{\Gamma_\eta}}
\frac{F_{jk}({\bm{q}}, \omega)}
{|\Gamma_\nu +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}/{\Gamma_\eta}|^2}\;
\mathrm{d}{{\bm{q}}}\,\mathrm{d}\omega.
\end{equation}
As usual, we define $\alpha=\mbox{\boldmath ${\cal E}$}\cdot\bm{{{B}}}_0/B_0^2$.
Since only the
antisymmetric part of $F_{jk}$ contributes to the above integral,
we have
\begin{equation}
\label{alpKhat}
\alpha=
-\int\!\frac{(\hat{\bm{{{B}}}}_0\cdot{\bm{q}})^2\;\chi(q)\;
g(\omega)\;({\rm i}\omega+\eta q^2)\;\mathrm{d}{\bm{q}}\,\mathrm{d}\omega}
{4\pi q^4\;\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2\;}.
\end{equation}
In the following we refer to this expression for $\alpha$ also as $\hat{\alpha}_{\rm K}$,
since it is seen to arise purely from the ($\nabla^{-2}\bm{{{u}}} \times \bm{{{u}}}$)-type velocity
correlation, generalized to the time dependent case.
Note that the denominator of Eq.~(\ref{alpKhat}) is even in $\omega$ and
so is the spectral function $g(\omega)$. Therefore
the term in the above integral, which has an ${\rm i}\omega$ in the numerator,
vanishes on integration over $\omega$ (by symmetry). So the mean emf
is a real quantity as it should be.
Before evaluating the above integral
explicitly, let us ask whether we recover the same expression,
Eq.~(\ref{emfmetbt}), using Methods~B and C, even in the time-dependent case.
\subsection{Computing $\mbox{\boldmath ${\cal E}$}$ directly }
\label{methbt}
Let us directly compute
$\mbox{\boldmath ${\cal E}$}={\overline {\bm{{{u}}}\times\bm{{{b}}}}}$ by
substituting $\bm{{{u}}}$ and $\bm{{{b}}}$
in terms of $\bm f$, using Eqs~(\ref{uintft}) and (\ref{bintft}).
We also substitute from Eq.~(\ref{ffcort}) for the force correlation
in a time dependent flow.
The mean emf in coordinate space is then given by Eq.~(\ref{emfmetbt}),
so we do not repeat it here.
\subsection{$\mbox{\boldmath ${\cal E}$}$ from the momentum equation}
As before we start from the momentum equation,
solve for $\bm{{{u}}}$ in terms of $\bm{{{b}}}$ and the forcing function $\bm f$,
and then substitute for $\bm{{{b}}}$ in terms of $\bm f$
using Eq.~(\ref{bintft}). We particularly wish to examine
if the $(\nabla^{-2}\bm f)\times\bm{{{b}}}$-like correlation is
essential for calculating $\mbox{\boldmath ${\cal E}$}$ correctly, even for
the time dependent case.
Using Eq.~(\ref{uintbf1t}) one can write
\begin{equation}
\label{uintbf2t}
{\tilde{u}_{i}}({\bm k},\omega)=
\frac{1}{\Gamma_\nu }\left[({\rm i}{\bm{k}}\cdot\bm{{{B}}}_{0})\,{\tilde{b}_{i}}({\bm k},\omega)
+{\tilde{f}_{i}}({\bm k},\omega)\right].
\end{equation}
From Eq.~(\ref{fsemft}) the mean emf can then be written as
\begin{eqnarray}
{\tilde{{\cal E}_{i}}({\bm k},\Omega)}&\!=\!&
{\epsilon_{ijk}}{\int {\frac{{\overline {\tilde{f}_{j}(\bm{k}-\bm{q},\Omega-\omega)\tilde{b}_{k}(\bm{q},\omega)}}}{-{\rm i}(\Omega -\omega) + \nu({\bm{k}}-{\bm{q}})^2}\;
\mathrm{d}{\bm q} \,\mathrm{d}\omega}}
\nonumber \\
&&\!\!+{{\rm i}}{\epsilon_{ijk}}
{\int \frac{\bm{{{B}}}_0\cdot({\bm{k-q}})}{-{\rm i} (\Omega-\omega) + \nu({\bm{k}}-{\bm{q}})^2}}
\nonumber \\
&& \qquad \times \; {{\overline {\tilde{b}_{j}(\bm{k}-\bm{q},\Omega-\omega)\tilde{b}_{k}(\bm{q},\omega)}}} \;\mathrm{d}{\bm q} \,\mathrm{d}\omega.
\label{metCt}
\end{eqnarray}
Here the first term involves the $(\nabla^{-2}\bm f)\times\bm{{{b}}}$-like
correlation, generalized to the time-dependent case.
Substituting $\tilde{\bm{{{b}}}}$ in terms of $\tilde{\bm f}$ from Eq.~(\ref{bintft})
and integrating over the delta functions in wavenumbers and frequencies,
the mean emf in coordinate space can be expressed as
\begin{eqnarray}
\label{emfmct}
{{\cal E}_{i}({\bm x})}&=&
{\rm i}{\epsilon_{ijk}}\int\!\frac{\bm{{{B}}}_0\cdot{\bm{q}}}{\Gamma_\eta }
\frac{F_{jk}({\bm{q}},\omega)}{\vert\Gamma_\nu\vert^2 \;
\left[1+\bar N\right]}\,\mathrm{d}{{\bm{q}}}\,\mathrm{d}\omega \\ \nonumber
&&-{\rm i}{\epsilon_{ijk}}\int\!\frac{\bm{{{B}}}_0\cdot{\bm{q}}}{\Gamma_\eta}
\frac{F_{jk}({\bm{q}},\omega)}{\vert\Gamma_\nu \vert^2\;\left[1+\bar N\right]}\,
\frac{\bar N^*}{1+\bar N^*}\,\mathrm{d}{{\bm{q}}}\,\mathrm{d}\omega.
\end{eqnarray}
Here we have defined $\bar N = ({\bm{{{B}}}_0\cdot{\bm{q}}})^2/\Gamma_\eta \Gamma_\nu $.
Now the limit of small Lorentz forces corresponds to taking
$\vert \bar N\vert \ll 1$. Again in this limit the second integral
vanishes while the first one [i.e.\ the generalized
$(\nabla^{-2}\bm f)\times\bm{{{b}}}$-like correlation in Eq.~(\ref{emfmct})]
goes over to a kinematic $\alpha$-effect.
In fact, this part of the $\alpha$-effect can be obtained from
Eqs~(\ref{uintft}) and (\ref{bintft}) by neglecting the
Lorentz force in the expression for ${\tilde{u}_{i}}({\bm k},\omega)$.
On adding the two terms in Eq.~(\ref{emfmct}),
the mean emf turns out to be
identical to Eq.~(\ref{emfmetbt}), obtained when starting from
the induction equation, or via the direct method (Method~B).
Therefore, one could either compute the mean emf starting from the
induction equation, or directly, or from the momentum equation,
by the addition of a generalized $(\nabla^{-2}\bm f)\times\bm{{{b}}}$-like
correlation and a purely magnetic correlation.
We can again define $\hat{\alpha}_{\rm F}$ and $\hat{\alpha}_{\rm M}$
for the time dependent forcing, from the first and the second terms on the
r.h.s.\ of Eq.~(\ref{emfmct}) respectively. We have
\begin{equation}
\label{alpFhat}
\hat{\alpha}_{\rm F} =
-\int\!\frac{(\hat{\bm{{{B}}}}_0\cdot{\bm{q}})^2\;\chi(q)\;
g(\omega)\left[\Gamma_{\nu}^*\Gamma_{\eta}^* + (\bm{{{B}}}_0\cdot{\bm{q}})^2\right]
\;\mathrm{d}{\bm{q}}\,\mathrm{d}\omega}
{4\pi q^4\;\Gamma_{\nu}^*\;\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2\;},
\end{equation}
\begin{equation}
\label{alpMhat}
\hat{\alpha}_{\rm M} =
\int\!\frac{B_0^2\;(\hat{\bm{{{B}}}}_0\cdot{\bm{q}})^4\;\chi(q)\;
g(\omega)\;\mathrm{d}{\bm{q}}\,\mathrm{d}\omega}
{4\pi q^4\;\Gamma_{\nu}^*\;\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2\;}.
\end{equation}
In terms of the total $\alpha$,
we have $\alpha=\hat{\alpha}_{\rm F} + \hat{\alpha}_{\rm M}$.
It is now explicitly apparent that in the time dependent case,
$\alpha=\hat{\alpha}_{\rm K}=\hat{\alpha}_{\rm F}+\hat{\alpha}_{\rm M}$
in agreement with what is obtained for steady forcing. Further, in the limit
of steady forcing where, $g(\omega) \to \delta(\omega)$, the above
generalized expressions reduce to the $\hat{\alpha}$'s obtained from
Eqs~(\ref{emfmetan}) and (\ref{emfmc}) respectively.
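This reduction can be spot-checked numerically: evaluating the integrand of Eq.~(\ref{alpFhat}) at $\omega=0$, i.e.\ for $g(\omega)\to\delta(\omega)$, with $\chi(q)=H_{\rm f}\delta(q-q_0)$ and the azimuthal integral done analytically, the remaining $\mu$-quadrature must reproduce the closed form of Eq.~(\ref{alphaF}). A Python sketch with arbitrary parameter values ($\eta=\nu=q_0=H_{\rm f}=1$, $B_0=2$), chosen only for illustration:

```python
import math

eta = nu = q0 = Hf = 1.0      # illustrative values only
B0 = 2.0
beta = B0 / (math.sqrt(eta * nu) * q0)
alpha0 = -Hf / (3.0 * eta * nu**2 * q0**6)

def alphaF_closed():
    """Steady-forcing closed form, Eq. (alphaF), times alpha_0."""
    return alpha0 * 3.0 / beta**2 * (1.0 - math.atan(beta) / beta)

def alphaF_steady_limit(n=20001):
    """Simpson mu-quadrature of the integrand of Eq. (alpFhat) at omega = 0,
    with chi(q) = H_f delta(q - q0) and the phi integral done analytically."""
    h = 2.0 / (n - 1)
    s = 0.0
    for i in range(n):
        mu = -1.0 + i * h
        b2 = (B0 * q0 * mu)**2                      # (B_0 . q)^2
        num = mu**2 * (nu * eta * q0**4 + b2)       # numerator at omega = 0
        den = nu * q0**2 * (nu * eta * q0**4 + b2)**2
        w = 1.0 if i in (0, n - 1) else (4.0 if i % 2 == 1 else 2.0)
        s += w * num / den
    return -0.5 * Hf * s * h / 3.0

assert abs(alphaF_steady_limit() - alphaF_closed()) < 1e-10
```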
\subsection{The nonlinear $\alpha$-effect}
\label{NonlinAlpTimedep}
Let us return to the explicit computation of the nonlinear $\alpha$-effect
for delta-correlated forcing.
Solving for
$\alpha=\mbox{\boldmath ${\cal E}$}\cdot\bm{{{B}}}_0/B_0^2$
from Eq.~(\ref{alpKhat}) leads to
\begin{equation}
\label{alpha2t}
\alpha=
-\int \chi(q)\, (\hat{\bm{{{B}}}}_0\cdot\hat{{\bm{q}}})^2 \; (\eta q^2) \; I({\bm{q}}) \;
{\mathrm{d}{{\bm{q}}}\over4\pi q^2},
\end{equation}
where $I$ is the integral over $\omega$ given by
\begin{equation}
I = \int \frac{g(\omega) \mathrm{d}\omega}{|\Gamma_\nu \Gamma_\eta +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}|^2}
= \tau \int \frac{\mathrm{d}\omega}{|\Gamma_\nu \Gamma_\eta +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}|^2}
,
\label{Ieval}
\end{equation}
where the latter expression for $I$ obtains in the limit of
delta-correlated forcing, for which $g(\omega)\approx\tau$ is almost
constant over the range where the rest of the integrand contributes
significantly.
We now focus on the special case of $\nu=\eta$ for the explicit
evaluation of the above integral. Note that calculating $I$
is then straightforward but tedious. We briefly
outline the steps and then quote the result. First the denominator
of Eq.~(\ref{Ieval}) can be expanded and then factorized to give
\begin{equation}
|\Gamma_\nu \Gamma_\eta +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}|^2
=(\omega \!+\! z)(\omega \!+\! z^*)(\omega \!-\! z)(\omega \!-\! z^*),
\label{factorize}
\end{equation}
where $z= (\bm{{{B}}}_ 0\cdot{\bm{q}}) + {\rm i} \nu q^2$.
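As a quick sanity check, the factorization in Eq.~(\ref{factorize}) can be
verified numerically at arbitrary test points. The following Python sketch
(with $x$ and $y$ standing for the real and imaginary parts of $z$, and the
test values being illustrative choices of our own) does this:

```python
import random

# Spot-check of the factorization Eq. (factorize) for nu = eta:
# |Gamma_nu Gamma_eta + (B0.q)^2|^2 = (w + z)(w + z*)(w - z)(w - z*),
# with Gamma = -i w + nu q^2 and z = B0.q + i nu q^2.
# Here x stands for B0.q and y for nu q^2; test points are arbitrary.
random.seed(0)
for _ in range(1000):
    w = random.uniform(-5.0, 5.0)
    x = random.uniform(0.1, 3.0)
    y = random.uniform(0.1, 3.0)
    gam = complex(y, -w)                      # Gamma_nu = Gamma_eta
    lhs = abs(gam * gam + x * x) ** 2
    z = complex(x, y)
    zc = z.conjugate()
    rhs = ((w + z) * (w + zc) * (w - z) * (w - zc)).real
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, lhs)
```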
Then the integral can be rewritten as
\begin{equation}
\int_{-\infty}^{\infty}\! \frac{\tau \mathrm{d}\omega}{2 (z^{*2} -z^2)\vert z\vert^2}
\left[\!\frac{z}{\omega -z^*}\!-\!\frac{z}{\omega +z^*}\!-\!
\frac{z^*}{\omega -z}\!+\!\frac{z^*}{\omega +z}\!\right]\!.
\end{equation}
Grouping the term having $\omega\!-\!z^*$ with the one having $\omega\!-\!z$ and
likewise the two terms with $\omega\!+\!z^*$ and $\omega\!+\!z$, we get
\begin{eqnarray}
I &=&\int_{-\infty}^{\infty} \frac{\tau \mathrm{d}\omega}{2 (z^* + z)\vert z\vert^2}\quad\times
\\ \nonumber
&& \left[\frac{-\omega + 2\bm{{{B}}}_ 0\cdot{\bm{q}}}{(\omega -\bm{{{B}}}_ 0\cdot{\bm{q}})^2 + \nu^2q^4}
+ \frac{\omega + 2\bm{{{B}}}_0\cdot{\bm{q}}}{(\omega +\bm{{{B}}}_0\cdot{\bm{q}})^2
+ \nu^2q^4} \; \right],
\end{eqnarray}
where the last expression is explicitly real. A change of variables
allows the above integral to be done easily to give%
\footnote{We thank K.-H.\ R\"adler for pointing out to us that this
result can be generalized for $\eta\neq\nu$, to give
$I = \pi\tau/\{(\eta+\nu)q^2[(\bm{{{B}}}_0\cdot{\bm{q}})^2+\eta\nu q^4]\}$.}
\begin{equation}
I= \frac{\pi}{2}\frac{\tau}{\nu q^2[(\bm{{{B}}}_ 0\cdot{\bm{q}})^2 +\nu^2q^4]}.
\end{equation}
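Both this closed form and the $\eta\neq\nu$ generalization quoted in the
footnote can be spot-checked by direct quadrature. The Python sketch below
uses a simple trapezoidal rule, with parameter values that are illustrative
choices of our own:

```python
import math

def I_numeric(nu, eta, q, x, tau=1.0, L=200.0, n=200001):
    # Trapezoidal quadrature of tau * int dw / |Gamma_nu Gamma_eta + x^2|^2,
    # with Gamma = -i w + (nu or eta) q^2 and x = B0.q.  The integrand
    # decays as w^-4, so a finite window [-L, L] suffices.
    h = 2.0 * L / (n - 1)
    s = 0.0
    for i in range(n):
        w = -L + i * h
        d = abs(complex(nu * q * q, -w) * complex(eta * q * q, -w) + x * x) ** 2
        s += (0.5 if i in (0, n - 1) else 1.0) / d
    return tau * s * h

def I_closed(nu, eta, q, x, tau=1.0):
    # I = pi tau / {(eta + nu) q^2 [(B0.q)^2 + eta nu q^4]};
    # for eta = nu this reduces to the expression in the text.
    return math.pi * tau / ((eta + nu) * q * q * (x * x + eta * nu * q ** 4))

# eta = nu case, and the eta != nu generalization from the footnote
for nu, eta in [(1.0, 1.0), (1.0, 0.5)]:
    assert abs(I_numeric(nu, eta, 1.0, 0.7) - I_closed(nu, eta, 1.0, 0.7)) < 1e-4
```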
Substituting $I$ into Eq.~(\ref{alpha2t}), and carrying out
the angular integral over $\mu=\hat{\bm{{{B}}}}_0\cdot\hat{{\bm{q}}}$ then gives
\begin{equation}
\label{alpha3t}
\alpha=-\frac{\pi}{2}
\tau \int_{0}^{\infty}\frac{\chi(q)}{\nu^2q^4}\,G(\beta)\,\mathrm{d}{q},
\end{equation}
where $\beta=B_0/\nu q$ for the $\nu=\eta$ case, and
\begin{equation}
G(\beta)= {1\over\beta^2}\left(1- {\tan^{-1}\!\beta\over\beta}\right).
\end{equation}
So for the time-dependent delta-correlated flow,
in the asymptotic limit of large $B_0$, we have
$\alpha \propto B_0^{-2}$.
If we further assume that
the forcing is at a particular wavenumber, $q_0$, and choose
$\chi(q) = H_{\rm f}\delta(q - q_{0})$, we have
\begin{equation}
\label{totalphat}
\frac{\alpha}{\alpha_0}=
3\left({B_0\over B_{\rm cr}}\right)^{-2}
\left[1- {\tan^{-1}(B_0/B_{\rm cr})\over B_0/B_{\rm cr}}
\right],
\end{equation}
where we have now defined
\begin{equation}
\alpha_0=-\frac{\pi}{6} \; \frac{\tau H_{\rm f}}{q_0^2B_{\rm cr}^2} ,\quad
B_{\rm cr}=\nu q_{0}.
\label{Defalp0Bcr}
\end{equation}
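The limiting behaviour of this quenching formula is easy to confirm
numerically; the short Python sketch below checks that
$\alpha/\alpha_0\to1$ for $B_0\ll B_{\rm cr}$, that
$\alpha/\alpha_0\to3(B_{\rm cr}/B_0)^2$ for $B_0\gg B_{\rm cr}$, and that
the quenching is monotonic:

```python
import math

def alpha_ratio(b):
    # Eq. (totalphat): alpha/alpha_0 as a function of b = B0/B_cr
    return 3.0 / b ** 2 * (1.0 - math.atan(b) / b)

# Kinematic limit: unquenched alpha
assert abs(alpha_ratio(1e-3) - 1.0) < 1e-5
# Strong-field limit: alpha/alpha_0 -> 3 (B_cr/B0)^2, i.e. alpha ~ B0^{-2}
b = 1e3
assert abs(alpha_ratio(b) * b ** 2 / 3.0 - 1.0) < 5e-3
# Quenching is monotonic in B0
vals = [alpha_ratio(10 ** (k / 10.0 - 2.0)) for k in range(41)]
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))
```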
\subsection{Comparison with $\tau$-approximation}
\label{tauUnsteady}
Note that in the case of time dependent forcing,
considered for example in the
simulations of \cite{B01} and \cite{BS05b}, both $\dot{\bm{{{b}}}}$ and
$\dot{\bm{{{u}}}}$ are finite, and hence also $\overline{\bm{{{u}}}\times\dot{\bm{{{b}}}}}$
and $\overline{\dot{\bm{{{u}}}}\times\bm{{{b}}}}$ can in general be finite, even
though their sum would vanish in the statistically steady state.
So taking a time derivative of $\mbox{\boldmath ${\cal E}$}$ and then examining the stationary
situation could break the degeneracy between $\hat{\alpha}_{\rm K}$ and
$\hat{\alpha}_{\rm F}$ in the kinematic limit, and also lead to novel
insights. Indeed, if one were not able to solve
for $\bm{{{u}}}$ and $\bm{{{b}}}$ explicitly in terms of the forcing,
this would be the practical route to follow.
We now examine the time-dependent case in a manner analogous
to the so-called $\tau$-approximation. As mentioned earlier,
the $\tau$-approximation involves invoking a closure whereby the triple
correlations which arise during the evaluation of $\partial\mbox{\boldmath ${\cal E}$}/\partial t$
are assumed to provide a damping term proportional to $\mbox{\boldmath ${\cal E}$}$ itself.
In the present context there is no need to invoke a closure for the
triple correlations, because
these terms are small for low fluid and magnetic Reynolds numbers.
It turns out that the correct expression for $\mbox{\boldmath ${\cal E}$}$ can still be
derived in the same framework, where one evaluates the
$\partial\mbox{\boldmath ${\cal E}$}/\partial t$ expression.
Further, it allows us to define a new set of $\alpha$'s
for the time dependent forcing, $\alpha_{\rm K}, \alpha_{\rm M}$ and
$\alpha_{\rm F}$, i.e.\ without hat,
which respectively incorporate the kinetic ($\bm{{{u}}},\bm{{{u}}}$), magnetic ($\bm{{{b}}},\bm{{{b}}}$)
and the force-field ($\bm{{{f}}},\bm{{{b}}}$) correlations (see below). These $\alpha$'s
have properties very similar to those which arise
in the $\tau$-approximation closure for large $R_\mathrm{m}$ systems.
For example, we showed above that for steady forcing,
the $\nabla^{-2}\bm{{{f}}} \times \bm{{{b}}}$ correlation is non-zero and essential to
calculate the $\mbox{\boldmath ${\cal E}$}$ correctly in Method~C.
We also showed above that such a correlation is important to include
even for a time-dependent forcing, if one starts
with the explicit solution of the momentum equation for $\tilde{\bm{{{u}}}}$.
On the other hand, in a $\tau$-approximation type approach one
first evaluates the $\partial\mbox{\boldmath ${\cal E}$}/\partial t$ expression,
which involves a $\overline{\dot{\bm{{{u}}}}\times\bm{{{b}}}}$ type correlation,
instead of solving first for $\bm{{{u}}}$.
It is then interesting to ask whether the corresponding $(\bm{{{f}}},\bm{{{b}}})$
correlation (the $\alpha_{\rm F}$ term defined below, which arises in the
evaluation of $\overline{\dot{\bm{{{u}}}}\times\bm{{{b}}}}$)
is still non-zero for the time-dependent, delta-correlated forcing, or
whether it vanishes in the $\tau$-approximation type approach, as assumed
in earlier work \citep{BF02,RKR,BS05}.
In particular, does one then recover
$\alpha=\alpha_{\rm K} + \alpha_{\rm M}$, a
relation which one obtains in the $\tau$-approximation at large $R_\mathrm{m}$?
We examine these issues in detail below.
We write the time derivative of the emf
as $\dot{\mbox{\boldmath ${\cal E}$}} = \dot{\mbox{\boldmath ${\cal E}$}}_{\rm K} + \dot{\mbox{\boldmath ${\cal E}$}}_{\rm M}$, where
$\dot{\mbox{\boldmath ${\cal E}$}}_{\rm K}= \overline{\bm{{{u}}}\times\dot{\bm{{{b}}}}}$ and $\dot{\mbox{\boldmath ${\cal E}$}}_{\rm M} =
\overline{\dot{\bm{{{u}}}}\times\bm{{{b}}}}$. From the induction equation for $\bm{{{b}}}$ and the
momentum equation for $\bm{{{u}}}$, we now have
\begin{equation}
\dot{\mbox{\boldmath ${\cal E}$}}_{\rm K} = \overline{\bm{{{u}}}\times\dot{\bm{{{b}}}}} = \overline{\bm{{{u}}} \times \bm{{{B}}}_0\cdot\nabla\bm{{{u}}}}
+\overline{\bm{{{u}}} \times \eta\nabla^2{\bm{{{b}}}}},
\label{udotb}
\end{equation}
\begin{equation}
\dot{\mbox{\boldmath ${\cal E}$}}_{\rm M} = \overline{\dot{\bm{{{u}}}}\times\bm{{{b}}}}
= \overline{\bm f \times \bm{{{b}}}} + \overline{
\bm{{{B}}}_0\cdot\nabla{\bm{{{b}}}} \times \bm{{{b}}}}
+\overline{\nu\nabla^2\bm{{{u}}} \times \bm{{{b}}}},
\label{dotub}
\end{equation}
where the perturbed pressure term vanishes for divergence-free forcing.
We can evaluate each of these terms in Fourier space,
since we have the Fourier space solutions for both
$\tilde{\bm{{{u}}}}$ and $\tilde{\bm{{{b}}}}$ completely in terms of $\tilde{\bm f}$.
In the time dependent case, we define by analogy to Eq.~(\ref{EEdef}),
\begin{equation}
\label{emftdep}
\bm E({\bm{k}},{\bm{q}}, \Omega, \omega) =
\overline{\tilde{\bm{{{u}}}}({\bm{k}}-{\bm{q}},\Omega-\omega)\times\tilde{\bm{{{b}}}}({\bm{q}},\omega)},
\end{equation}
so that the emf in coordinate space is,
\begin{equation}
\mbox{\boldmath ${\cal E}$} = \int {{{\bm E}({\bm{k}},{\bm{q}},\Omega,\omega)}\;
e^{{\rm i}{\bm{k}}\cdot\bm{x} - {\rm i}\Omega t}\mathrm{d}{\bm k}}\,\mathrm{d}{\bm q}\,\mathrm{d}\Omega\,
\mathrm{d}\omega .
\label{emftotal}
\end{equation}
We also define, analogous to Eqs~(\ref{ubdottindep}) and (\ref{udotbtindep}),
\begin{equation}
{\bm E}_{{\rm K}}^{(t)}({\bm{k}},{\bm{q}},\Omega,\omega)=
\overline{\tilde{\bm{{{u}}}}({\bm{k}}-{\bm{q}}, \Omega-\omega)\times[-{\rm i}\omega{\tilde{\bm{{{b}}}}}
({\bm{q}},\omega)]} ,
\label{ubdottidep}
\end{equation}
\begin{equation}
{\bm E}_{\rm M}^{(t)}({\bm{k}},{\bm{q}},\Omega,\omega) =
\overline{[-{\rm i}(\Omega-\omega){\tilde{\bm{{{u}}}}}
({\bm{k}}-{\bm{q}}, \Omega-\omega)]\times\tilde{\bm{{{b}}}}({\bm{q}},\omega)} .
\label{udotbtdep}
\end{equation}
Note that ${\bm E}_{{\rm K}}^{(t)} + {\bm E}_{\rm M}^{(t)} = -{\rm i}\Omega \bm E$.
Using the induction and momentum equations, we have explicitly,
\begin{eqnarray}
\label{emfKdottdep}
{\bm E}_{\rm K}^{(t)}&=&{\rm i}{{\bm{q}}\cdot\bm{{{B}}}_0}\,
[\overline{\tilde{\bm{{{u}}}}({\bm{k}}-{\bm{q}}, \Omega-\omega)\times\tilde{\bm{{{u}}}}({\bm{q}},\omega)}]
-\eta q^2 \bm E ,
\\
\label{emfMdottdep}
{\bm E}_{\rm M}^{(t)} &=&{\rm i}({\bm{k}}-{\bm{q}})\cdot\bm{{{B}}}_0\,
[\overline{\tilde{\bm{{{b}}}}({\bm{k}}-{\bm{q}},\Omega-\omega)\times\tilde{\bm{{{b}}}}({\bm{q}},\omega)}]
\nonumber \\
&+&\overline{\tilde{\bm{{{f}}}}({\bm{k}}-{\bm{q}},\Omega-\omega)\times\tilde{\bm{{{b}}}}({\bm{q}},\omega)}
-\nu ({\bm{k}}-{\bm{q}})^2 \bm E .
\end{eqnarray}
We add Eq.~(\ref{emfKdottdep}) and Eq.~(\ref{emfMdottdep}), to write
\begin{eqnarray}
-{\rm i}\Omega{\bm E} + \frac{\bm E}{\tau_{\rm eff}}
&=& {\rm i}{{\bm{q}}\cdot\bm{{{B}}}_0}\Phi(\tilde{\bm{{{u}}}},\tilde{\bm{{{u}}}})
+ {\rm i}({\bm{k}}-{\bm{q}})\cdot\bm{{{B}}}_0\Phi(\tilde{\bm{{{b}}}},\tilde{\bm{{{b}}}}) \nonumber \\
&& \quad + \Phi(\tilde{\bm{{{f}}}},\tilde{\bm{{{b}}}}),
\label{newMTA}
\end{eqnarray}
where we have defined, for any pair of vector fields $\bm{{{f}}}_1$ and $\bm{{{f}}}_2$,
\begin{equation}
\Phi(\tilde{\bm{{{f}}}}_1,\tilde{\bm{{{f}}}}_2) =
\overline{\tilde{\bm{{{f}}}_1}({\bm{k}}-{\bm{q}}, \Omega-\omega)\times\tilde{\bm{{{f}}}_2}({\bm{q}},\omega)},
\label{defPhi}
\end{equation}
and $\tau_{\rm eff}^{-1} = \eta q^2 + \nu ({\bm{k}}-{\bm{q}})^2$.
We note in passing that the above equation is similar to
the corresponding equation which obtains under $\tau$-approximation in the large
Reynolds number case, except that $\tau_{\rm eff}$ would then correspond
to a relaxation time for triple correlations \citep[cf.][]{BS05}.
So we have,
\begin{equation}
\bm E =
{\tau_{\rm eff}^*}
\left[{\rm i}{{\bm{q}}\cdot\bm{{{B}}}_0}\Phi(\tilde{\bm{{{u}}}},\tilde{\bm{{{u}}}})
+ {\rm i}({\bm{k}}-{\bm{q}})\cdot\bm{{{B}}}_0\Phi(\tilde{\bm{{{b}}}},\tilde{\bm{{{b}}}})
+ \Phi(\tilde{\bm{{{f}}}},\tilde{\bm{{{b}}}}) \right],
\label{newMTAstat}
\end{equation}
where we define
$\tau_{\rm eff}^*=\tau_{\rm eff}/[1-{\rm i}\Omega\;\tau_{\rm eff}]$.
Let us define $\alpha = (\mbox{\boldmath ${\cal E}$}\cdot\bm{{{B}}}_0)/B_0^2$ as before. Then
in coordinate space,
\begin{equation}
\alpha = \alpha_{\rm K} + \alpha_{\rm M} + \alpha_{\rm F},
\end{equation}
where
\begin{equation}
\alpha_{\rm K} =
\int {\rm i}{{\bm{q}}\cdot\hat{\bm{{{B}}}}_0} \; \Phi(\tilde{\bm{{{u}}}},\tilde{\bm{{{u}}}})
\cdot\hat{\bm{{{B}}}}_0 \; \tau_{\rm eff}^*\;
e^{{\rm i}{\bm{k}}\cdot\bm{x} - {\rm i}\Omega t}\mathrm{d}{\bm k}\,\mathrm{d}{\bm q}\,\mathrm{d}\Omega\,
\mathrm{d}\omega,
\label{alK1}
\end{equation}
\begin{equation}
\alpha_{\rm M} =
\int {\rm i}({\bm{k}}-{\bm{q}})\cdot\hat{\bm{{{B}}}}_0\; \Phi(\tilde{\bm{{{b}}}},\tilde{\bm{{{b}}}})
\cdot\hat{\bm{{{B}}}}_0\; \tau_{\rm eff}^*\;
e^{{\rm i}{\bm{k}}\cdot\bm{x} - {\rm i}\Omega t}\mathrm{d}{\bm k}\,\mathrm{d}{\bm q}\,\mathrm{d}\Omega\,
\mathrm{d}\omega,
\label{alM1}
\end{equation}
\begin{equation}
\alpha_{\rm F} =
\frac{1}{B_0}\int \Phi(\tilde{\bm{{{f}}}},\tilde{\bm{{{b}}}})
\cdot\hat{\bm{{{B}}}}_0\; \tau_{\rm eff}^*\;
e^{{\rm i}{\bm{k}}\cdot\bm{x} - {\rm i}\Omega t}\mathrm{d}{\bm k}\,\mathrm{d}{\bm q}\,\mathrm{d}\Omega\,
\mathrm{d}\omega
\label{alF1}
\end{equation}
correspond respectively to the terms containing the
$(\bm{{{u}}},\bm{{{u}}})$, $(\bm{{{b}}},\bm{{{b}}})$ and $(\bm{{{f}}},\bm{{{b}}})$ correlations
on the RHS of Eq.~(\ref{newMTAstat}).
Substituting $\tilde{\bm{{{u}}}}$ and $\tilde{\bm{{{b}}}}$ in
terms of $\tilde{\bm f}$ from Eq.~(\ref{uintft}) and (\ref{bintft}), and
integrating over the delta functions in wavenumbers
and frequencies which arise in taking the $(\tilde{\bm f},\tilde{\bm f})$
correlations, we then have in coordinate space,
\begin{equation}
\label{alK2}
\alpha_{\rm K} =
-\int\!\frac{(\hat{\bm{{{B}}}}_0\cdot{\bm{q}})^2\;\chi(q)\;
g(\omega)\;\vert\;\Gamma_{\eta}\;\vert^2\;\mathrm{d}{\bm{q}}\,\mathrm{d}\omega}
{4\pi q^4\;\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2\;(\eta+\nu)q^2},
\end{equation}
\begin{equation}
\label{alM2}
\alpha_{\rm M} =
\int\!\frac{B_0^2 (\hat{\bm{{{B}}}}_0\cdot{\bm{q}})^4\;\chi(q)\;
g(\omega)\;\mathrm{d}{\bm{q}}\,\mathrm{d}\omega}
{4\pi q^4\;\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2\;(\eta+\nu)q^2},
\end{equation}
\begin{equation}
\label{alF2}
\alpha_{\rm F} =
-\!\int\!\frac{(\hat{\bm{{{B}}}}_0\cdot{\bm{q}})^2\,\chi(q)\,
g(\omega)\left[\Gamma_{\nu}^*\Gamma_{\eta}^* + (\bm{{{B}}}_0\cdot{\bm{q}})^2\right]
\mathrm{d}{\bm{q}}\,\mathrm{d}\omega}
{4\pi q^4\;\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2\;(\eta+\nu)q^2}.
\label{alf2}
\end{equation}
On adding Eqs~(\ref{alK2}), (\ref{alM2}) \& (\ref{alF2}), the expression
for $\alpha$ is
\begin{equation}
\label{alcomb}
\alpha =
-\int\!\frac{(\hat{\bm{{{B}}}}_0\cdot{\bm{q}})^2\;\chi(q)\;
g(\omega)\;I_\alpha}
{4\pi q^4\;(\eta+\nu)q^2}\;\mathrm{d}{\bm{q}}\,\mathrm{d}\omega,
\end{equation}
where
\begin{equation}
\label{ieq}
I_\alpha =
\frac{\vert\;\Gamma_{\eta}\;\vert^2 - (\bm{{{B}}}_0\cdot{\bm{q}})^2
+ \Gamma_{\nu}^*\Gamma_{\eta}^* + (\bm{{{B}}}_0\cdot{\bm{q}})^2}
{\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2}.
\end{equation}
The numerator of this integrand can be simplified to give
$\Gamma_{\eta}^*[\eta q^2 + \nu q^2]$, so that
\begin{equation}
\label{altau}
\alpha =
-\frac{1}{B_0^2}\!\int\!\frac{(\bm{{{B}}}_0\cdot{\bm{q}})^2\;\chi(q)\;
g(\omega)\;\Gamma_{\eta}^*}{4\pi q^4\;\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2}\;\mathrm{d}{\bm{q}}\,\mathrm{d}\omega.
\end{equation}
It is apparent from Eq.~(\ref{altau}) that the expression
for $\alpha$ turns out to be the same as in Eq.~(\ref{alpKhat}).
Therefore, the $\tau$-approximation type treatment also gives the same $\alpha$
as the other methods.
It is interesting to consider what happens to
the various $\alpha$'s defined in Eqs~(\ref{alK2})--(\ref{alcomb}),
in the limit of steady forcing, where
$g(\omega) \to \delta(\omega)$.
It is straightforward to check that in this steady forcing limit
$\alpha$ defined in Eq.~(\ref{altau}) goes over exactly to
the total $\alpha = \alpha^{\rm(S)}$ given by the steady state expression in
Eq.~(\ref{alpha2}) of Section 3.5.
Also, in the steady forcing limit, we get
\begin{eqnarray}
\label{alpKrel}
\alpha_{\rm K}&\to&[\eta/(\eta+\nu)] \hat{\alpha}_{\rm K}
= [\eta/(\eta+\nu)] \alpha^{\rm(S)},\\
\alpha_{\rm M}&\to&[\nu/(\eta+\nu)] \hat{\alpha}_{\rm M},\\
\label{alpFrel}
\alpha_{\rm F}&\to&[\nu/(\eta+\nu)] \hat{\alpha}_{\rm F}.
\end{eqnarray}
In this limit one has therefore
\begin{equation}
\alpha_{\rm M} + \alpha_{\rm F} = [\nu/(\eta+\nu)](\hat{\alpha}_{\rm M}+
\hat{\alpha}_{\rm F}) = [\nu/(\eta+\nu)]\alpha^{\rm(S)}
\end{equation}
and so, once again,
\begin{equation}
\alpha_{\rm K} + \alpha_{\rm M} + \alpha_{\rm F} = \alpha^{\rm(S)},
\end{equation}
as expected.
It should be emphasized, however, that
for a general time dependent forcing,
there is no simple relation of the form given by
the expressions (\ref{alpKrel})--(\ref{alpFrel}).
Let us now consider the case of delta-correlated forcing in more detail.
It is of interest to check if the $\alpha_{\rm F}$ term
contributes in the $\tau$-approximation type closures,
as it does in the time-independent case.
We have from Eq.~(\ref{alf2}),
\begin{equation}
\alpha_{\rm F} =
-\int \frac{\chi(q)\, (\hat{\bm{{{B}}}}_0\cdot\hat{{\bm{q}}})^2}{(\eta+\nu)q^2}
\,I_{\rm F}({\bm{q}})\,{\mathrm{d}{{\bm{q}}}\over4\pi q^2},
\end{equation}
where $I_{\rm F}$ is the integral over $\omega$ given by
\begin{equation}
I_{\rm F} = \int \frac{\Gamma_\nu^* \Gamma_\eta^* +(\bm{{{B}}}_ 0\cdot{\bm{q}})^2 }
{|\Gamma_\nu \Gamma_\eta +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}|^2} \; g(\omega)\,\mathrm{d}\omega.
\label{IevalF}
\end{equation}
Here we can simplify
$\Gamma_\nu^* \Gamma_\eta^* +(\bm{{{B}}}_ 0\cdot{\bm{q}})^2
= -\omega^2 +\nu\eta q^4 + (\bm{{{B}}}_ 0\cdot{\bm{q}})^2 +{\rm i}\omega(\eta q^2 + \nu q^2)$.
The integral over the term odd in $\omega$ vanishes, again leaving
a real $I_{\rm F}$,
\begin{equation}
I_{\rm F} = \int \frac{ -\omega^2 +\nu\eta q^4 +(\bm{{{B}}}_ 0\cdot{\bm{q}})^2 }
{|\Gamma_\nu \Gamma_\eta +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}|^2} \; g(\omega)\,\mathrm{d}\omega.
\label{IevalF2}
\end{equation}
Let us focus on the case $\eta=\nu$ as before.
We rewrite the numerator using the identity
$-\omega^2 +\nu\eta q^4 +(\bm{{{B}}}_ 0\cdot{\bm{q}})^2
= -(\omega + z)(\omega-z^*) + \omega(z-z^*)$, so we have
\begin{equation}
I_{\rm F} = \int \frac{-(\omega + z)(\omega-z^*)+\omega(z-z^*)}
{|\Gamma_\nu \Gamma_\eta +{(\bm{{{B}}}_ 0\cdot{\bm{q}})^2}|^2} \; g(\omega)\,\mathrm{d}\omega.
\label{IevalF3}
\end{equation}
The second term in the numerator of Eq.~(\ref{IevalF3}) does not
contribute to the integral, since it is
odd in $\omega$, while the denominator is even. To simplify the
integral further we use the identity in Eq.~(\ref{factorize})
for its denominator, giving
\begin{eqnarray}
&I_{\rm F}&\!\! = -\int_{-\infty}^{\infty} \frac{ g(\omega)\,\mathrm{d}\omega}{(\omega + z^*)(\omega-z) } \\
&=& - \int_{-\infty}^{\infty} \frac{ g(\omega)\,\mathrm{d}\omega}{z+z^*}
\left[ \frac{1}{\omega-z} - \frac{1}{\omega + z^*}\right]
\nonumber \\
&=& - \int_{-\infty}^{\infty} \frac{ g(\omega)\,\mathrm{d}\omega}{z+z^*}
\left[ \frac{(\omega -x) +{\rm i} y}{(\omega -x)^2 + y^2}
- \frac{(\omega +x) +{\rm i} y}{(\omega +x)^2 + y^2} \right]\!,
\nonumber
\end{eqnarray}
where we have defined $x=\bm{{{B}}}_0\cdot{\bm{q}}$ and $y=\nu q^2$, which are the
real and imaginary parts respectively of $z$.
Now changing variables to $u=\omega - x$ in the first term
and $u=\omega + x$ in the second term we see that
\begin{equation}
I_{\rm F} = - \int_{-\infty}^{\infty} \frac{u +{\rm i} y}{z+z^*}\;
\frac{g(u + x) - g(u-x)}{u^2 + y^2} \; \mathrm{d} u = 0.
\label{if2}
\end{equation}
Note that $I_{\rm F} \to 0$ in the limit of a
delta-correlated forcing, where $g(\omega) = \tau$ is almost constant
through the range where the rest of the integrand contributes
significantly; that is $g(u+x) = g(u-x) = \tau$ where the
integrand contributes significantly, while $g(u+x) \to 0$ and $g(u-x) \to 0$
at large $u$. This can also be checked by doing the integral
for $I_{\rm F}$ numerically.
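Such a numerical check might look as follows (a Python sketch; the values of
$\bm{{{B}}}_0\cdot{\bm{q}}$, $\nu q^2$ and the band width are illustrative
choices of our own):

```python
import math

# x = B0.q, y = nu q^2 (nu = eta); illustrative values, not from the text.
x, y, tau = 0.7, 1.0, 1.0
L, n = 2000.0, 400001          # flat g(w) = tau over the broad band [-L, L]
h = 2.0 * L / (n - 1)

I, I_F = 0.0, 0.0
for i in range(n):
    w = -L + i * h
    gam = complex(y, -w)                 # Gamma_nu = Gamma_eta
    d = abs(gam * gam + x * x) ** 2      # |Gamma_nu Gamma_eta + (B0.q)^2|^2
    wt = 0.5 if i in (0, n - 1) else 1.0
    I += wt * tau / d                                # Eq. (Ieval)
    I_F += wt * tau * (-w * w + y * y + x * x) / d   # Eq. (IevalF2)
I *= h
I_F *= h

# I stays finite while I_F is consistent with zero; the small residual
# is the finite-band truncation error, of order tau/L.
assert I > 1.0
assert abs(I_F) < 0.01 * I
```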
So, interestingly, $\alpha_{\rm F}=0$. Thus, for a forcing which
is random and delta-correlated in time, there is no contribution
from the $\overline{\bm{{{f}}} \times \bm{{{b}}}}$ type correlation!
Hence $\alpha$ is the sum of just two terms, a kinetic and a magnetic
contribution, which can be shown explicitly as follows.
We note that Eq.~(\ref{alK2}) can be expressed as
\begin{equation}
\label{alK3}
\alpha_{\rm K} =
-\frac{1}{4\pi(\eta+\nu)B_0^2}\int
(\bm{{{B}}}_0\cdot{\bm{q}})^2\; \frac{\chi(q)}{q^6} \;I_{\rm K}({\bm{q}}) \;\mathrm{d}{\bm{q}},
\end{equation}
where
\begin{eqnarray}
\label{ik1}
I_{\rm K} &=& \tau\!\int\!\frac{\omega^2 + \eta^2 q^4}
{\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2}\;\mathrm{d}\omega \nonumber \\
&=& \tau\!\int\!\frac{\omega^2 + \eta^2 q^4 +
(\bm{{{B}}}_0\cdot{\bm{q}})^2 - (\bm{{{B}}}_0\cdot{\bm{q}})^2}
{\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2}\;\mathrm{d}\omega.
\end{eqnarray}
As before, we focus on the case $\eta=\nu$ when the numerator
can be simplified as
$\omega^2 + \eta^2 q^4 +
(\bm{{{B}}}_0\cdot{\bm{q}})^2 - (\bm{{{B}}}_0\cdot{\bm{q}})^2
= (\omega + z)(\omega + z^*)-\omega(z + z^*) - (\bm{{{B}}}_0\cdot{\bm{q}})^2$.
So we have
\begin{equation}
\label{ik2}
I_{\rm K} =
\tau\!\int\!\frac{(\omega + z)(\omega + z^*) - \omega(z + z^*)
- (\bm{{{B}}}_0\cdot{\bm{q}})^2}
{\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2}\;\mathrm{d}\omega.
\end{equation}
It is to be noted that the second term in the numerator
of Eq.~(\ref{ik2}), being odd in $\omega$, does not contribute
to the integral. Using the
identity in Eq.~(\ref{factorize}) for its denominator, we have,
\begin{eqnarray}
\label{ik3}
I_{\rm K} &=&
\tau\!\int\!\frac{\mathrm{d}\omega}{(\omega - z)(\omega - z^*)} \nonumber \\
&-& \tau\,(\bm{{{B}}}_0\cdot{\bm{q}})^2\!\int\!\frac{\mathrm{d}\omega}
{(\omega + z)(\omega + z^*)(\omega - z)(\omega - z^*)} \nonumber \\
&=& \frac{\pi\tau}{\nu q^2} - \frac{\pi\tau}{2\nu q^2}
\;\frac{(\bm{{{B}}}_0\cdot{\bm{q}})^2}{(\bm{{{B}}}_0\cdot{\bm{q}})^2 + \nu^2 q^4}.
\end{eqnarray}
Substituting $I_{\rm K}$ into Eq.~(\ref{alK3}) and carrying out the
angular integral over $\mu=\hat{\bm{{{B}}}}_0\cdot\hat{{\bm{q}}}$ then gives
\begin{equation}
\label{alK_ang}
\alpha_{\rm K} =
-\frac{\pi}{6}\tau\!\int_{0}^{\infty}\!\frac{\chi(q)}{\nu^2 q^4}\;\mathrm{d}{q}
+\frac{\pi}{4}\tau B_0^2\!\int_{0}^{\infty}\!\frac{\chi(q)}{\nu^2 q^4}\;H(\beta)\;\mathrm{d}{q},
\end{equation}
where $\beta=B_0/\nu q$ for the $\nu=\eta$ case, and
\begin{equation}
H(\beta)= {1\over\beta^2}\left[{1\over3} - {1\over\beta^2}
\left(1- {\tan^{-1}\!\beta\over\beta}\right)\right].
\end{equation}
A similar analysis for Eq.~(\ref{alM2}) yields,
\begin{equation}
\label{alM3}
\alpha_{\rm M} =
\frac{1}{8\pi \nu B_0^2}\int (\bm{{{B}}}_0\cdot{\bm{q}})^2\;
\frac{\chi(q)}{q^6} \;I_{\rm M}({\bm{q}}) \;\mathrm{d}{\bm{q}},
\end{equation}
where
\begin{eqnarray}
\label{im1}
I_{\rm M} &=& \tau\!\int\!\frac{(\bm{{{B}}}_0\cdot{\bm{q}})^2\;\mathrm{d}\omega}
{\vert\;\Gamma_{\nu}\Gamma_{\eta}
+ (\bm{{{B}}}_0\cdot{\bm{q}})^2\;\vert^2} \nonumber \\
&=& \frac{\pi\tau}{2\nu q^2}\,
\frac{(\bm{{{B}}}_0\cdot{\bm{q}})^2}{(\bm{{{B}}}_0\cdot{\bm{q}})^2 + \nu^2 q^4}.
\end{eqnarray}
Carrying out the angular integral as earlier gives,
\begin{equation}
\label{alM_ang}
\alpha_{\rm M} =
\frac{\pi}{4}\tau B_0^2\!\int_{0}^{\infty}\!\frac{\chi(q)}{\nu^2 q^4}\;H(\beta)\;\mathrm{d}{q}.
\end{equation}
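The closed forms for $I_{\rm K}$ and $I_{\rm M}$ can likewise be
spot-checked by direct quadrature; note that their sum, $\pi\tau/\nu q^2$,
is independent of $\bm{{{B}}}_0\cdot{\bm{q}}$. A Python sketch with
illustrative values of our own choosing:

```python
import math

def quad(f, L, n):
    # simple trapezoidal rule on [-L, L]
    h = 2.0 * L / (n - 1)
    s = 0.5 * (f(-L) + f(L))
    for i in range(1, n - 1):
        s += f(-L + i * h)
    return s * h

x, y, tau = 0.7, 1.0, 1.0      # x = B0.q, y = nu q^2 (nu = eta); ours

def D(w):
    gam = complex(y, -w)       # Gamma_nu = Gamma_eta = -i w + nu q^2
    return abs(gam * gam + x * x) ** 2

# numerators from the text: (w^2 + eta^2 q^4) for I_K, (B0.q)^2 for I_M
I_K = quad(lambda w: tau * (w * w + y * y) / D(w), 2000.0, 400001)
I_M = quad(lambda w: tau * x * x / D(w), 200.0, 200001)

I_K_closed = math.pi * tau / y - 0.5 * math.pi * tau / y * x * x / (x * x + y * y)
I_M_closed = 0.5 * math.pi * tau / y * x * x / (x * x + y * y)

assert abs(I_K - I_K_closed) < 5e-3   # I_K integrand has a ~1/L tail
assert abs(I_M - I_M_closed) < 1e-4
# Their sum is independent of B0.q:
assert abs(I_K_closed + I_M_closed - math.pi * tau / y) < 1e-12
```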
If we further assume that
the forcing is at a particular wavenumber, $q_0$, and choose
$\chi(q) = H_{\rm f}\delta(q - q_{0})$, we then have
\begin{equation}
\label{totalphaKt}
\frac{\alpha_{\rm K}}{\alpha_0}= {1\over2} +
{3\over2}\left({B_0\over B_{\rm cr}}\right)^{-2}
\left[1- {\tan^{-1}(B_0/B_{\rm cr})\over B_0/B_{\rm cr}}
\right],
\end{equation}
\begin{equation}
\label{totalphaMt}
\frac{\alpha_{\rm M}}{\alpha_0}= -{1\over2} +
{3\over2}\left({B_0\over B_{\rm cr}}\right)^{-2}
\left[1- {\tan^{-1}(B_0/B_{\rm cr})\over B_0/B_{\rm cr}}
\right],
\end{equation}
where $\alpha_0$ and $B_{\rm cr}$ were defined in Eq.~(\ref{Defalp0Bcr}).
It is explicitly apparent that $\alpha=\alpha_{\rm K}+\alpha_{\rm M}$,
in agreement with Eq.~(\ref{totalphat}).
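These relations can be cross-checked numerically; the Python sketch below
(with $b$ denoting $B_0/B_{\rm cr}$) verifies the decomposition
$\alpha=\alpha_{\rm K}+\alpha_{\rm M}$ and the limiting values:

```python
import math

def G(b):
    # common quenching factor (1/b^2)(1 - arctan(b)/b)
    return (1.0 - math.atan(b) / b) / b ** 2

def alpha(b):          # Eq. (totalphat):  alpha/alpha_0
    return 3.0 * G(b)

def alpha_K(b):        # Eq. (totalphaKt): alpha_K/alpha_0
    return 0.5 + 1.5 * G(b)

def alpha_M(b):        # Eq. (totalphaMt): alpha_M/alpha_0
    return -0.5 + 1.5 * G(b)

for b in (0.1, 0.5, 1.0, 2.0, 10.0, 100.0):
    # alpha = alpha_K + alpha_M, and alpha_K - alpha_M = alpha_0, for any B0
    assert abs(alpha_K(b) + alpha_M(b) - alpha(b)) < 1e-12
    assert abs(alpha_K(b) - alpha_M(b) - 1.0) < 1e-12

# Large-B0 limits: alpha_K -> +1/2, alpha_M -> -1/2 (in units of alpha_0)
assert abs(alpha_K(1e4) - 0.5) < 1e-4 and abs(alpha_M(1e4) + 0.5) < 1e-4
```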
The result is plotted in Fig.~\ref{analytdc_vs_B0}.
Note also that $\alpha_{\rm F}=0$, as was assumed in
the minimal $\tau$-approximation (MTA)
type calculations for large fluid and magnetic Reynolds numbers
\citep{BS05}.
It is interesting to note that in the limit $B_0/B_{\rm cr} \gg 1$,
$\alpha_{\rm K} \to +\alpha_0/2 + O(B_0^{-2})$ and $\alpha_{\rm M}
\to -\alpha_0/2 + O(B_0^{-2})$ and
so the total $\alpha = \alpha_{\rm K} + \alpha_{\rm M} \to 0$ as
$B_0^{-2}$. This is reminiscent of the kinetic and magnetic $\alpha$'s nearly
cancelling to leave a small residual $\alpha$-effect in EDQNM or MTA type
closures.
It is also interesting to consider the limit when $B_0/B_{\rm cr} \ll 1$.
In this limit $\alpha_{\rm K} \to \alpha_0$ and $\alpha_{\rm M} \to 0$, and
so the net $\alpha$-effect is just the kinetic contribution.
Finally, for any $B_0$ we note that $(\alpha_{\rm K} - \alpha_{\rm M})/\alpha_0 = 1$.
\begin{figure} \begin{center}
\includegraphics[width=\columnwidth]{analytdc_vs_B0}
\end{center}\caption[]{Variation of $\alpha$, $\alpha_{\rm K}$, and
$-\alpha_{\rm M}$ with $B_0$ from the analytical theory using a delta
correlated forcing.
Note that $\alpha=\alpha_{\rm K}+\alpha_{\rm M}$ and $\alpha_{\rm F}=0$.}
\label{analytdc_vs_B0}
\end{figure}
\subsection{Comparison with simulations}
It is appropriate to compare with the simulations of \cite{BS05b}.
We have produced additional results for low fluid and magnetic Reynolds
numbers ($\mbox{\rm Re}=R_{\rm m}\approx2\times10^{-2}$).
The forcing consists of helical waves with average wavenumber
$k_{\rm f}/k_1=1.5$.
The resulting values of $\alpha$ fluctuate strongly, so it is important
to average them in time.
Instead of calculating the full integral expressions, we estimate the
contributions to $\alpha$ from the formulae
$\alpha_{\rm K}=-2\tau\langle u_x u_{z,y}\rangle$,
$\alpha_{\rm M}=2\tau\langle b_x b_{z,y}\rangle$,
$\alpha_{\rm F}=\tau\langle\bm{{{f}}}\times\bm{{{b}}}\rangle\cdot\bm{{{B}}}_0/B_0^2$,
where $\bm{{{B}}}_0=(0,B_0,0)$ is the imposed field,
and $\tau^{-1}=(\nu+\eta)k_{\rm f}^2$.
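For concreteness, such estimates could be formed from gridded snapshots
along the following lines (a hypothetical Python/NumPy sketch; the array
layout, grid handling, and function name are our own assumptions rather
than the actual simulation setup):

```python
import numpy as np

def alpha_estimates(u, b, f, dy, tau, B0):
    """Estimate alpha_K, alpha_M, alpha_F from single snapshots.

    u, b, f: arrays of shape (3, nx, ny, nz) holding the fluctuating
    velocity, magnetic field, and forcing on a Cartesian grid; dy is the
    grid spacing along y, and the imposed field is B0 = (0, B0, 0).
    """
    duz_dy = np.gradient(u[2], dy, axis=1)          # u_{z,y}
    dbz_dy = np.gradient(b[2], dy, axis=1)          # b_{z,y}
    alpha_K = -2.0 * tau * np.mean(u[0] * duz_dy)   # -2 tau <u_x u_{z,y}>
    alpha_M = 2.0 * tau * np.mean(b[0] * dbz_dy)    # +2 tau <b_x b_{z,y}>
    fxb = np.cross(f, b, axis=0)                    # f x b pointwise
    alpha_F = tau * np.mean(fxb[1]) / B0            # tau <f x b>.B0/B0^2
    return alpha_K, alpha_M, alpha_F
```

In practice these instantaneous estimates fluctuate strongly, as noted
above, and would be averaged over many snapshots in time.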
\begin{figure}\begin{center}
\includegraphics[width=\columnwidth]{palpKM_vs_B0}
\end{center}\caption[]{
Variation of $\alpha$, $\alpha_{\rm K}$, and
$-\alpha_{\rm M}$ with $B_0$ for a random flow at low fluid and
magnetic Reynolds numbers ($\mbox{\rm Re}=R_{\rm m}\approx2\times10^{-2}$),
compared with the prediction of the analytic theory for a fully isotropic flow.
Note that the numerically determined values of $\alpha_{\rm K}$ are
smaller and those of $-\alpha_{\rm M}$ larger than the corresponding
analytic values, which is similar to the results for the ABC-flow forcing.
}\label{palpKM_vs_B0}\end{figure}
The result is shown in Fig.~\ref{palpKM_vs_B0} and compared with the
results of the previous section.
In all cases the resulting values of $\alpha_{\rm F}$ are negligibly
small and will not be considered further.
As in the case of the ABC flow, the numerically estimated values of
$\alpha$ are smaller than the analytic ones.
This might be explicable if, for some reason, the relevant normalization
in terms of $\alpha_0$ were to depend on $B_0$.
Alternatively, the discrepancy might be due to our use of the simplified
expressions above instead of the full integral expressions.
However, the important point is that the main contribution to the quenching
comes from the growing contribution of $-\alpha_{\rm M}$ such that
$\alpha_{\rm K}+\alpha_{\rm M}$ is quenched to values much smaller than
$\alpha_{\rm K}$.
The corresponding results in the case of larger $R_{\rm m}$ are given
by \cite{BS05b}.
\section{Discussion}
\label{discimp}
We have considered here the nonlinear $\alpha$-effect in the
limit of small magnetic and fluid Reynolds numbers,
for both steady and time-dependent (both for general and delta-correlated)
forcings.
In the limit of low $R_\mathrm{m}$ and $\mbox{\rm Re}$,
one can neglect terms nonlinear in the fluctuating fields and hence
explicitly solve for the
small scale magnetic and velocity fields using double FOSA.
We can then calculate the $\alpha$-effect in several different ways.
For both steady and time dependent forcings, one gets similar
results, provided one starts from the explicit solutions
to the induction and momentum equations. Let us begin
with a summary of the results for the steady forcing case:
In Method~A we first follow the traditional route of
solving the induction equation for $\bm{{{b}}}$ in terms of $\bm{{{u}}}$,
and then calculating the $\alpha$-effect.
For statistically isotropic velocity fields,
this gives an $\alpha$ that depends on the helicity of the velocity potential,
as already known from previous work. In addition, since we have
an explicit solution for $\bm{{{u}}}$ in terms of $\bm f$, one
can relate $\alpha$ directly to the helical part of the force correlation.
In Method~B we solved for $\bm{{{u}}}$ and $\bm{{{b}}}$ in terms
of the forcing function $\bm f$ and computed $\mbox{\boldmath ${\cal E}$}$ directly.
This would correspond to what is done when the $\alpha$-effect is
determined from simulations.
However, in general this cannot be done analytically unless one can solve
for the small scale velocity and magnetic field explicitly.
We get $\alpha$ identical to that obtained in Method~A.
More interesting is Method~C, where one takes the momentum equation as
the starting point, instead of the induction equation.
In the limit of small fluid Reynolds numbers one can solve for
$\bm{{{u}}}$ in terms of $\bm{{{b}}}$ and hence compute $\mbox{\boldmath ${\cal E}$}$.
This necessarily involves also the $(\nabla^{-2}\bm f)\times\bm{{{b}}}$ correlation,
between the forcing and the small-scale magnetic field,
in addition to the $(\nabla^{-2}{\bm{j}})\cdot\bm{{{b}}}$ (or $\bm a\cdot\bm{{{b}}}$)
correlation arising from the
Lorentz force. This second term depends on the helicity
of the small scale magnetic fields.
When the Lorentz force is small, the first term contributes
to $\alpha$ in a manner closely
related to the usual kinematic alpha-effect, while the
second term contributes negligibly.
Interestingly, as the Lorentz force gains in importance the first term
is suppressed, while the second term (which has an opposite sign)
gains in importance and cancels the first term,
to further suppress the total $\alpha$-effect (Fig.~\ref{panalyt_vs_B0}).
This is similar to the suppression of
the kinetic alpha due to the addition of a magnetic alpha
(proportional to the helical part of $\bm{{{b}}}$) found in several
closure models \citep{pouq,KR82,GD,BF02,BS05b}.
When one combines the two terms,
the resulting
$\alpha$-effect is identical to that obtained
from the induction equation (Method~A) using FOSA
or from the direct computation of Method~B.
However, it also highlights the fact that
in this steady case the $(\nabla^{-2}\bm f)\times\bm{{{b}}}$-type
correlation does not vanish, and that there is no tendency
for this term to balance the viscous term, as one might have expected.
Finally, the results of Method~D show that the formalism used in the
$\tau$-approximation leads to results that are equivalent to the usual
approach taken in the first order smoothing approximation.
However, this requires that the detailed spectral dependence of the
diffusion operator be retained until the point where the steady
state assumption is made.
The resulting equations are solved for the spectral electromotive force
of the form $\bm E({\bm{k}},{\bm{q}})$ in \Eq{tau1}.
Other\-wise, one would not recover the correct low conductivity limit,
as shown by \cite{RR07}.
We emphasize that throughout this paper we have understood the
term $\tau$-approximation only in this more generalized sense.
The above results are also obtained for statistically stationary
but time-dependent forcing. Specifically, we showed that even in the
time dependent case, one gets identical results
for the mean emf if $\alpha$ is computed directly (as in Method~B),
or from the momentum equation, by the addition of a generalized
$(\nabla^{-2}\bm f)\times\bm{{{b}}}$-like
correlation and a purely magnetic correlation. The explicit form of the
$\alpha$-effect differs between delta-correlated and steady forcing cases.
In particular, in the limit of large $B_0$, and when $\nu=\eta$,
we have $\alpha \propto B_0^{-2}$ for delta-correlated
forcing, in contrast to $\alpha \propto B_0^{-3}$, for the case of
a steady forcing.
The former result has already been obtained by \cite{FBC99} and \cite{RK00},
both of whom assumed the force--field correlation to vanish.
However, their result was derived under the assumption of large fluid
and magnetic Reynolds numbers.
The major difference between the time dependent and steady forcing cases
arises when one follows Method~D, the formalism used in the
$\tau$-approximation type approaches to computing $\alpha$
in large $R_\mathrm{m}$ systems.
We recall that in this approach one starts by evaluating
the time derivative of the emf, and then looks at the stationary limit.
We showed that in this limit the $\alpha$-effect can be naturally
written as the sum of three terms, $\alpha = \alpha_{\rm K} + \alpha_{\rm M}
+ \alpha_{\rm F}$, for a general time-dependent forcing.
Here $\alpha_{\rm K}$ and $\alpha_{\rm M}$ are the kinetic and
magnetic contributions to $\alpha$ corresponding respectively to the
terms containing the
$(\bm{u},\bm{u})$ and $(\bm{b},\bm{b})$ correlations [see Eqs~(\ref{alK1}) and (\ref{alM1})],
while $\alpha_{\rm F}$ incorporates the $(\bm{f},\bm{b})$ correlation;
see Eq.~(\ref{alF1}). Interestingly, we showed that $\alpha_{\rm F}=0$
in the approach of Method~D, for delta-correlated forcing, and therefore
$\alpha=\alpha_{\rm K} + \alpha_{\rm M}$ is just the sum of kinetic and
magnetic terms. We also computed $\alpha_{\rm K}$ and $\alpha_{\rm M}$
explicitly for the case $\eta=\nu$.
In the kinematic limit, $\alpha_{\rm M} \to 0$
and $\alpha \to \alpha_{\rm K}$.
In the limit $B_0/B_{\rm cr} \gg 1$,
$\alpha_{\rm K} \to +\alpha_0/2 + O(B_0^{-2})$ and $\alpha_{\rm M}
\to -\alpha_0/2 + O(B_0^{-2})$,
so that the total $\alpha = \alpha_{\rm K} + \alpha_{\rm M} \to 0$ as
$B_0^{-2}$. This is reminiscent of the kinetic and magnetic $\alpha$'s nearly
cancelling to leave a small residual $\alpha$-effect in EDQNM or
$\tau$-approximation type closures.
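This near-cancellation is easy to visualize with a toy numerical sketch. The model forms below are hypothetical, chosen only to reproduce the stated asymptotics; they are not the explicit expressions derived in this paper:

```python
# Hypothetical model forms, chosen only to reproduce the stated asymptotics
# alpha_K -> +alpha0/2 and alpha_M -> -alpha0/2 with O(b^-2) corrections,
# where b = B_0 / B_cr.  These are NOT the paper's explicit expressions.
alpha0 = 1.0

def alpha_K(b):
    return 0.5 * alpha0 * (1.0 + 1.0 / (1.0 + b**2))

def alpha_M(b):
    return -0.5 * alpha0 * (1.0 - 1.0 / (1.0 + b**2))

def alpha(b):
    # The O(1) parts cancel; the residual is alpha0 / (1 + b^2) ~ b^-2.
    return alpha_K(b) + alpha_M(b)

# Quadrupling b reduces the residual alpha by ~16 at large b, i.e.
# alpha ~ b^-2, even though alpha_K and alpha_M individually stay O(alpha0).
print(alpha(100.0) / alpha(400.0))
```

The point of the sketch is only that two terms tending to $\pm\alpha_0/2$ with $O(b^{-2})$ corrections leave an $O(b^{-2})$ residual, as stated above.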
The results from employing FOSA and $\tau$-approximation type analysis are
summarized in Table~\ref{results} both for steady and random forcings.
\begin{table} \begin{center}
\caption[]{\label{results}
Summary of the results obtained from FOSA and $\tau$-approximation type analysis for steady
and random forcings.}
\begin{tabular}{lll}
\hline
& steady forcing & delta-correlated forcing \\
\hline
FOSA &$\alpha=\hat{\alpha}_{\rm K}$
&$\alpha=\hat{\alpha}_{\rm K}$\\
&$\quad=\hat{\alpha}_{\rm F}+\hat{\alpha}_{\rm M}$
&$\quad=\hat{\alpha}_{\rm F}+\hat{\alpha}_{\rm M}$ \\
$\tau$-approximation &$\alpha=\alpha_{\rm K}+\alpha_{\rm F}+\alpha_{\rm M}$
&$\alpha=\alpha_{\rm K}+\alpha_{\rm F}+\alpha_{\rm M}$\\
& &$\quad=\alpha_{\rm K}+\alpha_{\rm M}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
As far as the low Reynolds number case is concerned, our analytic
solutions demonstrate quite clearly that one can look at the nonlinear
$\alpha$-effect in several equivalent ways.
On the one hand, one can express $\alpha$ completely
in terms of the helical properties of the velocity field
(Method~A) as advocated by \citet{proctor} and \citet{RR07}.
At the same time $\alpha$ can be naturally expressed as
a sum of a suppressed kinetic part (first term in Method~C)
and an oppositely signed magnetic part proportional
to the helical part of $\bm{b}$ (second term in Method~C).
Method~D applied to the delta-correlated forcing is particularly revealing,
since here one can explicitly write
$\alpha = \alpha_{\rm K} + \alpha_{\rm M}$ as the sum of a kinetic
$\alpha_{\rm K}$, which dominates in the linear regime,
and a magnetic $\alpha_{\rm M}$, which gains in importance
as the field becomes stronger, and cancels $\alpha_{\rm K}$
to suppress the net $\alpha$-effect.
This is similar to the approach that arises from closure models
like EDQNM \citep{pouq} and the $\tau$-approximation
\citep{KMR96,BF,BS05} or the quasilinear models \citep{GD}.
In all these cases the nonlinear $\alpha$-effect, for large $R_{\rm m}$,
is the sum of a kinetic part and an oppositely signed magnetic part.
As noted above, the kinetic $\alpha$-effect is itself suppressed, as
seen in Fig.~\ref{analytdc_vs_B0}, but this happens only for
$B_0>B_{\rm cr}$, and the suppression is milder than
the strong suppression of the total $\alpha$-effect.
This is also borne out in the simulations of \cite{BS05b},
where the kinetic part of the $\alpha$-effect is suppressed in a manner
that is independent of the magnetic Reynolds number,
even though the total $\alpha$ is catastrophically suppressed.
Finally, we also have shown that $\alpha_{\rm F}$ defined naturally
in the approach of Method~D, vanishes for delta-correlated forcing,
as was assumed in derivations of the $\alpha$-effect
in $\tau$-approximation type closures.
In the special case of periodic domains it is now clear that for forced
turbulence at large $R_{\rm m}$ the steady state $\alpha$-effect is
catastrophically quenched \citep{CH96,B01}.
However, the physical cause of this phenomenon was long controversial.
Is it because Lorentz forces cause a suppression of Lagrangian chaos
\citep{CHK96} or is it due to a nonlinear addition to $\alpha$ due to helical
parts of the small scale magnetic field, as is argued here?
The latter alternative is also supported by the excellent
agreement between model calculations and simulations \citep{FB02,BB02}.
Furthermore, the simulations of \cite{BS05b} demonstrate that this
quenching is accompanied by an increase of $-\alpha_{\rm M}$ toward
$\alpha_{\rm K}$, and that this $\alpha_{\rm K}$ itself is unquenched.
Subsequent analysis of their data shows that $\alpha_{\rm K}$ remains
unquenched regardless of whether or not one uses the proper anisotropic
expression (Brandenburg \& Subramanian, unpublished).
Our present results are of course restricted to the case
of small magnetic and fluid Reynolds numbers.
This means that we have not tested any of the actual closure assumptions,
like the $\tau$-approximation.
Such tests have so far only been done numerically \citep{BKM04,BS05}.
Clearly, at large magnetic and fluid Reynolds numbers the
$\tau$-approximation can no longer yield exact results.
Nevertheless, it provides a very practical tool to estimate the
mean field transport coefficients in a way that captures
correctly some of the effects that enter in the case of
large magnetic and fluid Reynolds numbers.
In that respect, it has been quite successful in reproducing the
catastrophic quenching result for periodic domains, as well as
suggesting ways to alleviate such quenching.
In Section~\ref{MFED} we outlined the conditions under which double FOSA
is valid as being basically the requirement that
$R_{\rm m} \ll 1$ and $\mbox{\rm Re} \ll 1$. A subtle point concerns the
validity of retaining the linear Lorentz force term, $\bm{B}_0\cdot\nabla\bm{b}$,
while neglecting the nonlinear advection, $\bm{u}\cdot\nabla\bm{u}$, even though
nonlinear advection is small compared to the viscous dissipation for $\mbox{\rm Re}\ll1$.
This assumption is valid provided $B_0 b/l > u^2/l$,
or, using $b \sim R_{\rm m} B_0$, $B_0^2 R_{\rm m} > u^2$.
(We thank Eric Blackman for pointing this out to us.)
Note that in terms of the critical field $B_{\rm cr} = \sqrt{\nu\eta} q_0$,
which divides the regimes where the Lorentz force is important ($B_0 > B_{\rm cr})$
and where it is not ($B_0 < B_{\rm cr}$), this requirement becomes
$B_0/B_{\rm cr}> \mbox{\rm Re}^{1/2}$.
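Spelling this out: with the assumed definitions $R_{\rm m}=u/(\eta q_0)$ and $\mbox{\rm Re}=u/(\nu q_0)$, based on the forcing wavenumber $q_0$, we have
\[
B_0^2 R_{\rm m} > u^2
\quad\Longleftrightarrow\quad
B_0^2 > \frac{u^2}{R_{\rm m}} = u\eta q_0
= \nu\eta q_0^2\,\frac{u}{\nu q_0}
= B_{\rm cr}^2\, \mbox{\rm Re}
\quad\Longleftrightarrow\quad
\frac{B_0}{B_{\rm cr}} > \mbox{\rm Re}^{1/2} .
\]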
Therefore, for small fluid Reynolds number our assumption of
retaining the linear Lorentz force term
while dropping nonlinear advection is indeed valid for most
regimes of interest for $\alpha$ suppression.
For smaller mean fields, where $B_0/B_{\rm cr} < \mbox{\rm Re}^{1/2}$,
the Lorentz force has no impact in any case. The interesting point seems
to be that for low Reynolds number systems, the typical reference mean field
is not the equipartition field $B_0 = u$, but rather
$B_0 = B_{\rm cr} \sim u/(R_{\rm m}\mbox{\rm Re})^{1/2} > u$.
Throughout this work we have adopted an externally imposed body force
to drive the flow.
This is commonly done in many simulations in order to achieve homogeneous
isotropic conditions that are amenable to analytic treatment.
Clearly, this is not the case for many astrophysical flows that are driven
by convection (e.g.\ in stars) or the magneto-rotational instability
(e.g.\ in accretion discs).
Such flows tend to show long-range spatial correlations, which means
that the alpha tensor should really be treated as an integral kernel
\citep[see, e.g.,][]{BS02}.
It is at present unclear whether such more natural forcings are closer
to steady or to random forcing, and how large the resulting
$(\bm{f},\bm{b})$ correlation is.
Given that this correlation represents already a linear effect,
it is likely that the $\alpha_{\rm F}$ term can simply be subsumed
into an expression for a modified kinetic $\alpha_{\rm K}$.
If this is the case, we can continue to write
$\alpha \approx \alpha_{\rm K}+\alpha_{\rm M}$ as the sum of a
mildly suppressed kinetic part depending on the velocity field and
a magnetic part, so that their sum accounts for the tendency toward
catastrophic $\alpha$-quenching in the absence of helicity fluxes.
We recall that such a split is exact in the limit of delta-correlated forcing.
\section{Conclusions}
Our work was motivated in part by the detailed
criticism expressed by \cite{RR07}.
In view of our new results we can now make the following statements
for small magnetic and fluid Reynolds numbers.
Firstly, it is true that the $\hat{\alpha}_{\rm K}$ that is calculated
under FOSA does indeed capture the full nonlinear $\alpha$-effect ---
provided it is based on the actual velocity field.
Secondly, the $\alpha_{\rm K}$ that is calculated in the
$\tau$-approximation is not simply related to $\hat{\alpha}_{\rm K}$,
except in the limit of steady forcing.
Thirdly, in the limit of small magnetic and fluid Reynolds numbers,
both FOSA and the $\tau$-approximation give identical results.
Indeed, all methods of calculating the $\alpha$-effect agree,
as they should, given that the starting equations were the same.
However, the force--field correlation cannot be ignored in general.
The exception is when one analyzes the case of delta-correlated forcing
in a manner akin to the $\tau$-approximation, where
the force--field correlation does vanish and hence
$\alpha_{\rm F} =0$ explicitly.
In this case one can indeed write $\alpha=\alpha_{\rm K} + \alpha_{\rm M}$,
i.e. as the sum of kinetic and magnetic $\alpha$-effects.
Furthermore, due to the spatial non-locality of the Green's function
for small magnetic and fluid Reynolds numbers, the $\tau$-approximation
should be carried out at the level of spectral correlation tensors,
as is done here.
Somewhat surprisingly, the delta-correlated forcing case yields an
asymptotic $\alpha \propto B_0^{-2}$ scaling as opposed to the well-known
$\alpha \propto B_0^{-3}$ behaviour for steady forcing.
Although our work is limited to small magnetic and fluid Reynolds numbers,
the calculations of Method~C and Method~D, and their agreement with the results of
Method~A (for both steady and time-dependent forcings),
do suggest one way of thinking about the effect of
Lorentz forces: they lead to a decrease of $\alpha$ predominantly by
addition of terms proportional to the helical parts of the small
scale magnetic field. Hence getting rid of such small scale magnetic helicity
by corresponding helicity fluxes may indeed be the way astrophysical
dynamos avoid catastrophic quenching of $\alpha$ to make their dynamos
work efficiently.
\section*{Acknowledgments}
We thank Eric Blackman, Nathan Kleeorin, Karl-Heinz R\"adler,
Igor Rogachevskii, and Anvar Shukurov for valuable comments on the manuscript.
SS would like to thank the Council of Scientific and Industrial Research,
India for providing financial support.
We acknowledge the Danish Center for Scientific Computing
for granting time on the AMD Opteron Linux cluster in Copenhagen.
\newcommand{\ybook}[3]{ #1, {#2} (#3)}
\newcommand{\yjfm}[3]{ #1, {J.\ Fluid Mech.,} {#2}, #3}
\newcommand{\yprl}[3]{ #1, {Phys.\ Rev.\ Lett.,} {#2}, #3}
\newcommand{\ypre}[3]{ #1, {Phys.\ Rev.\ E,} {#2}, #3}
\newcommand{\yapj}[3]{ #1, {ApJ,} {#2}, #3}
\newcommand{\yan}[3]{ #1, {AN,} {#2}, #3}
\newcommand{\yana}[3]{ #1, {A\&A,} {#2}, #3}
\newcommand{\ygafd}[3]{ #1, {Geophys.\ Astrophys.\ Fluid Dyn.,} {#2}, #3}
\newcommand{\ypf}[3]{ #1, {Phys.\ Fluids,} {#2}, #3}
\newcommand{\yproc}[5]{ #1, in {#3}, ed.\ #4 (#5), #2}
\newcommand{\yjour}[6]{, #6, {#2} {#3} (#1) #4--#5.}
\section{Problems and motivation}\label{se:intro}
It is known that in Euclidean space every continuous global optimization problem on a compact set can be reformulated as a d.c. optimization problem, i.e.
a nonconvex problem which can be described in terms of {\em d.c. functions} (difference of convex functions) and {\em d.c. sets} (difference of convex sets) \cite{tuy95}. By the fact that any constraint set can be equivalently relaxed by a nonsmooth indicator function,
general nonconvex optimization problems can be written in the following standard d.c. programming form
\begin{equation}\label{dc}
\min \{ f(x)= g(x)-h(x) \; | \;\; \forall {x\in {\cal X}} \},
\end{equation}
where ${\cal X} = \mathbb{R}^n$, $g(x), h(x) $ are convex proper lower-semicontinuous functions on $\mathbb{R}^n$, and the d.c. function
$f(x)$ to be optimized is usually called the ``objective function'' in mathematical optimization.
In a more general model, $g(x)$ can be an arbitrary function \cite{tuy95}.
Clearly, this d.c. programming problem is artificial. Although it can be used to ``model'' a very wide range of mathematical problems \cite{hiri} and has been studied extensively during the last thirty years (cf. \cite{hor-tho,tao-le,s-s}),
it comes at a price: it is impossible to have an elegant theory and powerful algorithms for solving this
problem without detailed structures on these arbitrarily given functions.
As a result, even some very simple d.c. programming problems are considered NP-hard \cite{tuy95}.
This dilemma is mainly due to the existing gap between mathematical optimization and mathematical physics.
\subsection{Objectivity and multi-scale modeling}
Generally speaking, the concept of {\em objectivity} used in our daily life means the state or quality of being true even outside of a subject's individual biases, interpretations, feelings, and imaginings (see Wikipedia at \url{https://en.wikipedia.org/wiki/Objectivity\_(philosophy)}). In science, the objectivity is often attributed to the property of scientific measurement, as the accuracy of a measurement can be tested independent from the individual scientist who first reports it, i.e. an objective function does not depend on observers.
In Lagrange mechanics and continuum physics, a real-valued function $W:\mathcal{X} \rightarrow \mathbb{R} $
is said to be objective
if and only if (see \cite{Gao00duality}, Chapter 6)
\[
W(x) = W(Rx) \;\; \forall x \in \mathcal{X}, \;\; \forall R \in {\cal R},
\]
where ${\cal R}$ is a special rotation group such that
$R^{-1} = R^T, \;\; \textrm{det} R = 1, \;\; \forall R \in {\cal R}$.
Geometrically, an objective function does not depend on the rotation, but only on certain measure
of its variable. The simplest measure in $\mathbb{R}^n$ is the $\ell_2$ norm $\| x \|$, which is an objective function since
$\| R x \|^2 = (R x)^T (Rx) = x^T R^T R x= \|x\|^2$ for every special orthogonal matrix $R \in SO(n)$. By Cholesky factorization, any symmetric positive definite matrix $C$
has a unique decomposition $C = D^* D$. Thus, any convex quadratic function is objective.
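Both observations are easy to verify numerically; the following is a small numpy sketch, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random special orthogonal matrix R in SO(3), built via QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q      # flip sign so that det R = +1

# Objectivity of the Euclidean norm: ||R x|| = ||x|| for every R in SO(n).
x = rng.standard_normal(3)
print(np.linalg.norm(R @ x) - np.linalg.norm(x))   # ~ 0

# Cholesky: a symmetric positive definite C factors as C = D^T D, so the
# quadratic form x^T C x = ||D x||^2 is an objective function of D x.
M = rng.standard_normal((3, 3))
C = M @ M.T + 3.0 * np.eye(3)              # symmetric positive definite
D = np.linalg.cholesky(C).T                # upper-triangular factor
print(np.allclose(D.T @ D, C))             # True
```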
It was emphasized by P.G. Ciarlet in his recent nonlinear analysis book \cite{ciarlet} that the objectivity
is not an assumption, but an axiom. Indeed, the objectivity is also known as
the {\em axiom of frame-invariance} in continuum physics (see page 8 \cite{marsd-hugh} and page 42 \cite{tru-noll}).
Although the objectivity has been well-defined in mathematical physics,
it is still subject to serious study due to its importance in mathematical modeling (see \cite{liu,murd,murd05}).
Based on the original concept of objectivity, a multi-scale mathematical model for general nonconvex systems
was proposed by Gao in \cite{Gao00duality,gao-opl16}:
\begin{equation}\label{p:gao}
(\mathcal{P}):~~~~\inf \{ \Pi(x)= W(Dx) - F(x) \; | \;\; \forall {x \in {\cal X} } \},
\end{equation}
where $\mathcal{X} $ is a feasible space;
$F: {\cal X} \rightarrow \mathbb{R} \cup \{ -\infty\}$ is a so-called {\em subjective function},
which is linear on its effective domain $\mathcal{X}_a \subset \mathcal{X}$, wherein
certain ``geometrical constraints'' (such as boundary/initial conditions, etc) are given;
correspondingly, $W: {\cal Y} \rightarrow \mathbb{R}\cup \{ \infty\} $ is an
{\em objective function} on its effective domain $\mathcal{Y}_a \subset \mathcal{Y}$,
in which certain physical constraints (such as constitutive laws, etc) are given;
$D: {\cal X} \rightarrow {\cal Y} $ is a linear operator which assigns each decision variable
in the configuration space ${\cal X}$ to an internal variable $y \in {\cal Y} $ at a different scale.
By Riesz representation theorem, the subjective function
can be written as
$F(x) = \langle x, \bar{x}^* \rangle \;\; \forall x \in \mathcal{X}_a$,
where $\bar{x}^* \in \mathcal{X}^*$ is a given input (or source),
the bilinear form $\langle x, x^* \rangle :\mathcal{X} \times \mathcal{X}^* \rightarrow \mathbb{R}$ puts $\mathcal{X} $ and $\mathcal{X}^*$ in duality.
Additionally, the positivity conditions $W(y) \ge 0 \;\;\forall y \in \mathcal{Y}_a $, $F(x) \ge 0 \;\;\forall x \in \mathcal{X}_a$
and coercivity condition $\lim_{\|y\| \rightarrow \infty} W(y) = \infty$
are needed for the target function $\Pi(x)$ to be bounded below on
its effective domain $\mathcal{X}_c = \{ x \in \mathcal{X}_a | \;\; Dx \in \mathcal{Y}_a\}$ \cite{gao-opl16}.
Therefore, the extremality condition $ 0 \in \partial \Pi(x) $ leads to the equilibrium equation \cite{Gao00duality}
\[
0 \in D^* \partial W(Dx) - \partial F (x) \;\; \Leftrightarrow \;\; D^* y^* - x^* = 0 \;\; \forall x^* \in \partial F(x), \;\; y^* \in \partial W(y).
\]
In this model, the objective duality relation $y^* \in \partial W(y)$ is governed by the constitutive law, which depends only
on mathematical modeling of the system;
the subjective duality relation $x^* \in \partial F(x)$ leads to the input $\bar{x}^* $ of the system,
which depends only on each given problem.
Thus, $(\mathcal{P})$ can be used to model general problems in multi-scale complex systems.
\subsection{Real-world problems}
In management science the variable $x \in \mathcal{X}_a \subset \mathbb{R}^n$ could represent the products of a manufacture company.
Its dual variable $\bar{x}^* \in \mathbb{R}^n$ can be considered as market price (or demands). Therefore, the subjective function
$ F(x) = x^T \bar{x}^* $ in this example is the total income of the company.
The products are produced by workers $y \in \mathbb{R}^m$. Due to the cooperation, we have $y = D x$ and
$ D \in \mathbb{R}^{m\times n}$ is a matrix. Workers are paid by salary $y^* \in \partial W(y)$,
therefore, the objective function $ W(y)$ in this example is the cost.
Thus, $ \Pi(x) = W( D x) - F(x)$ is the {\em total loss or target} and the minimization problem
$({\cal P})$ leads to
the equilibrium equation
$ D^T \partial_{y} W( Dx) = \bar{x}^*$.
The cost function $ W( y)$ could be convex for a very small company, but
usually nonconvex for big companies.
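To make the equilibrium equation concrete, here is a minimal numerical sketch with a convex quadratic cost $W(y)=\frac{1}{2}y^Ty$ (the small-company case), for which the salary rule is $y^*=\nabla W(y)=y$ and the equilibrium reduces to a linear system; all numbers are invented purely for illustration:

```python
import numpy as np

# Toy instance of the company model with a convex quadratic cost
# W(y) = y^T y / 2, so the salary rule is y* = dW/dy = y and the equilibrium
# D^T dW(Dx) = xbar* reduces to the linear system  D^T D x = xbar*.
rng = np.random.default_rng(1)
m, n = 5, 3                              # 5 workers, 3 products (invented)
D = rng.standard_normal((m, n))          # cooperation matrix, y = D x
xbar = rng.standard_normal(n)            # market price / demand

x = np.linalg.solve(D.T @ D, xbar)       # equilibrium production plan
y = D @ x                                # workers' effort; salary y* = y
print(np.allclose(D.T @ y, xbar))        # equilibrium equation holds: True
```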
In Lagrange mechanics, the variable $x \in \mathcal{X} = {\cal C}^1[I; \mathbb{R}^n]$ is a continuous vector-valued function of time $t \in I \subset \mathbb{R}$, its components $\{x_i (t) \} (i=1, \dots, n) $
are known as the Lagrange coordinates.
The subjective function in this case
is a linear functional $ F(x) = \int_I x(t)^T \bar{x}^*(t) dt$, where $ \bar{x}^*(t) $ is a given external force field.
While $ W( D x)$ is the so-called action:
\[
W( D x) = \int_I L(x, \dot{ x} ) dt , \;\; L= T(\dot{x} ) - V(x),
\]
where $ T $ is the kinetic energy density, $ V $ is the potential density, and $L= T - V$ is the standard
{\em Lagrangian density} \cite{l-l}.
The linear operator $ D x = \{ \partial_t, 1 \} x = \{ \dot{x}, \; x\}$ is a vector-valued mapping.
The kinetic energy $T $ must be an objective function of the velocity (quadratic for Newton's mechanics and convex for Einstein's relativistic theory) \cite{Gao00duality},
while the potential density $ V$ could be either convex or nonconvex,
depending on each problem.
Together, $ \Pi(x) = W( D x) - F(x)$ is called {\em total action}.
The extremality condition $\partial \Pi(x) = 0$ leads to the well-known Euler-Lagrange equation
\[
D^* \partial W(D x) = \partial^*_t \frac{d T(\dot{x}) }{d \dot{x} }- \frac{d V(x) }{d x} = \bar{x}^* ,
\]
where $\partial^*_t $ is an adjoint operator of $\partial_t$.
For convex Hamiltonian systems, both $T$ and $V$
are convex, thus, the least action principle leads to a typical d.c. minimization problem
\[
\inf \{\Pi(x) = K(\partial_t x ) - P(x) \} , \;\; K(y) = \int_I T(y) dt , \;\; P(x) = \int_I [ V(x) + x^T \bar{x}^* ] dt ,
\]
where $K(y)$ is the kinetic energy and $P(x)$ is the total potential energy.
The duality theory for this d.c. minimization problem was first studied by J. Toland \cite{toland}, with a successful application to the nonlinear
heavy rotating chain, where
\[
K(y) = \int_0^1 \frac{1}{2 \lambda} y^2 dt , \;\; P(x) = \int_0^1 [ x(t) ^2 + t^2 ]^{{1}/{2}} d t,
\]
and the parameter $\lambda > 0$ depends on angular speed. Clearly, in this application, $K(y)$ is quadratic while $P(x)$ is asymptotically linear when $\|x (t) \|$ is sufficiently large. Therefore, the total action $\Pi(x)$ is bounded below and coercive
on $\mathcal{X} = \{ x(t) \in W^{1,2}[0,1] | \;\; x(0) = 0\}$, the problem $({\cal P})$
has a unique stable global minimizer.
However, if both $K(y)$ and $P(x)$ are quadratic functions (for example, the classical linear mass-spring system),
the d.c. minimization problem $({\cal P})$ will have no stable global minimizer.
It was proved in \cite{Gao00duality} that, in addition to the double-min duality
\[
\inf_{x \in \mathcal{X}} \{ \Pi(x) = K(\partial_t x) - P(x) \} = \inf_{y^* \in {\cal Y}^*} \{ \Pi^*(y^*)
= P^*(\partial^*_t y^*) - K^*(y^*) \} ,
\]
the double-max duality
\[
\sup_{x \in \mathcal{X}} \{ \Pi(x) = K(\partial_t x) - P(x) \} = \sup_{y^* \in {\cal Y}^*} \{ \Pi^*(y^*) = P^*(\partial^*_t y^*) - K^*(y^*) \}
\]
holds alternatively, i.e. the system is in stable periodic vibration on its time domain $I$.
Therefore, this double-min duality reveals an important truth in convex Hamiltonian systems:
the least action principle is a misnomer for periodic vibration (see Chapter 2 \cite{Gao00duality}).
Now let us consider another example in buckling analysis of Euler beam:
\[
\inf \{ \Pi(u) = K(\partial_{xx} u ) - P(\partial_x u) \},
\]
where both the bending energy $K$ and the axial strain energy $P$ are quadratic
\[
K(\partial_{xx} u ) = \int_I \frac{1}{2} \alpha u_{xx}^2 d x , \;\; P(\partial_x u) = \int_I \frac{1}{2} \lambda u_{x}^2 d x
\]
and $\alpha > 0$ is a constant, $\lambda > 0$ is a given axial load at the end of the beam.
Clearly, if $\lambda < \lambda_c$, the eigenvalue of the Euler beam defined by
\[
\lambda_c = \inf \frac{\int_I \alpha u_{xx}^2 d x }{\int_I u_x^2 dx},
\]
the d.c. functional $\Pi(u)$ is convex and the problem $({\cal P})$ has a unique solution.
In this case, the Euler beam is in pre-buckling state.
However, $\Pi(u)$ is concave if the axial load $\lambda > \lambda_c$ and in this case, we have $\inf \Pi(u) = - \infty$, which means that the Euler beam is collapsed.
This example shows that the linear Euler beam can be used only for pre-buckling problems.
Generally speaking, an unconstrained quadratic d.c. programming problem does not make any physical sense unless it is convex.
In order to study the post-bifurcation problems,
a nonlinear beam model was proposed by Gao \cite{gao-mrc96}. Instead of a quadratic function, the stored energy
$K$ in this nonlinear model is a fourth-order polynomial
\begin{equation}
K(\partial_{xx} u ) = \int_I\left[ \frac{1}{2} \alpha u_{xx}^2 + \frac{1}{12} \beta u_{x}^4 \right] d x ,
\end{equation}
where $\beta > 0$ is a material constant. Clearly, if $\lambda_p = \lambda - \lambda_c < 0$,
$\Pi(u) = K(\partial_{xx} u) - P(\partial_x u) $
is strictly convex. In this case, the problem $(\mathcal{P})$ has a unique minimizer and
the beam is in pre-buckling state.
If $\lambda_p > 0$, the total potential $\Pi(u)$ is nonconvex which has two equally valued local minimizers and one local maximizer.
Therefore, this nonlinear beam can be used to model
post-buckling phenomena, which have been the subject of serious study in recent years
(cf. \cite{ahn,cai-gao-qin,kuttler-etal}).
If the beam is subjected to a lateral distributed load $q(x)$,
then we have the following d.c. variational problem
\begin{equation}
\inf \{ \Pi(u) = W(Du) - F(u)\} ,
\end{equation}
where
\[
W(Du) = \int_I \left[ \frac{1}{12} \beta u_{x}^4 - \frac{1}{2} \lambda_p u_{x}^2 \right]d x ,\;\;
F(u) = \int_I q u \; d x .
\]
The objective function $W(Du) $ in this problem is the so-called {\em double-well potential}, which
appears extensively in real-world problems, such as phase transitions, shape-memory alloys, chaotic dynamics and theoretical physics \cite{gao-amma03}.
The subjective function $F(u)$ breaks the symmetry of this nonlinear buckling beam model
and leads to one global minimizer, corresponding to a stable buckled state,
one local minimizer, corresponding to one unstable buckled state,
and one local maximizer, corresponding to one unbuckled state \cite{cai-gao-qin,santos-gao}.
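A one-dimensional caricature reproduces this picture. Collapsing the slope field $u_x$ to a single scalar $t$ (purely for illustration, with invented parameter values), the reduced energy $\pi(t)=\frac{\beta}{12}t^4-\frac{\lambda_p}{2}t^2-qt$ has one global minimizer, one local minimizer, and one local maximizer for small $q>0$:

```python
import numpy as np

# One-dimensional caricature: collapse the slope u_x to a single scalar t,
# giving  pi(t) = (beta/12) t^4 - (lam_p/2) t^2 - q t  (illustration only).
beta, lam_p, q = 3.0, 1.0, 0.1          # invented post-buckling values

def pi_t(t):
    return beta / 12.0 * t**4 - lam_p / 2.0 * t**2 - q * t

# Critical points solve  pi'(t) = (beta/3) t^3 - lam_p t - q = 0.
roots = np.roots([beta / 3.0, 0.0, -lam_p, -q])
roots = np.sort(roots.real[np.abs(roots.imag) < 1e-9])

curv = beta * roots**2 - lam_p          # pi''(t) classifies each point
print(roots)                            # three real critical points
print(curv)                             # pattern [+, -, +]: min, max, min
print(roots[np.argmin(pi_t(roots))])    # the global (stable buckled) minimizer
```

The symmetry-breaking load $q$ tilts the double well, so the minimizer aligned with the load becomes the global one, in line with the buckled states described above.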
By the finite element method the domain $I$ is discretized into $m$ elements $\{ I_ e\}$ such that
the unknown function can be
piecewise approximated as $ u (x) \simeq N_e(x) p_e $ in each element $ I_e $ with $p_e$ as nodal variables.
Then, the nonconvex variational problem above
can be numerically reformulated as the
d.c. programming problem (\ref{dc}) with $g(p)$ as a fourth-order polynomial and $h(p)$ a quadratic function so that
$\Pi(p) = g(p) - h(p)$ is bounded below to have a global minimum solution \cite{cai-gao-qin,santos-gao}.
All the real-world applications discussed in this section show a simple fact, i.e.
the functions $g(x)$ and $h(x)$ in the
standard d.c. programming problem (\ref{dc}) can't be arbitrarily given, they
must obey certain fundamental laws in physics in order to model real-world systems.
Given that the subjective function $F(x) = \langle x , \bar{x}^* \rangle $
is necessary for any given real-world system in order to have non-trivial solutions (states or outputs) and the
function $g(x)$ in the standard d.c. programming (\ref{dc}) can be generalized to a nonconvex function (see Equation (36)
in \cite{tuy95}), it is reasonable
to assume that $g(x)$ in (\ref{dc}) is a general nonconvex function $W(Dx)$ and
$h(x)$ is a quadratic function
\[
Q(x) = \frac{1}{2} \langle x , C x \rangle + \langle x, f \rangle ,
\]
where $C: \mathcal{X} \rightarrow \mathcal{X}^*$ is a given symmetrical positive definite operator (or matrix)
and $f \in \mathcal{X}^*$
is a given input. Then, the standard d.c. programming (\ref{dc}) can be generalized to the following form
\[
(\mathcal{P}_{dc}): \;\; \min \{W(Dx) - Q(x) \;| \;\; \forall x \in\mathcal{X} \}.
\]
\subsection{Canonical duality theory and goal}
Canonical duality-triality is a breakthrough theory which can be used not only for modeling complex systems within a unified framework, but also for solving real-world problems with a unified methodology \cite{gao-opl16}.
This theory was developed originally from Gao and Strang's work in nonconvex mechanics
\cite{gao-strang1989} and has been applied successfully for solving a large class of challenging problems
in both nonconvex analysis/mechanics and global optimization, such as phase transitions in solids \cite{gao-yu}, post-buckling of large deformed beam \cite{santos-gao},
nonconvex polynomial minimization problems with box and integer constraints \cite{gao2005,gao2007,gao-ruan2009}, Boolean and multiple integer programming \cite{fang-gao2007,wang-fang-gao2008}, fractional programming \cite{fang-gao2009}, mixed integer programming \cite{gao-ruan2010}, polynomial optimization \cite{gao2006}, and high-order polynomials with log-sum-exp terms \cite{gao-chen2014}.
A comprehensive review of this theory and of recent breakthroughs
is given in \cite{bridgeMMS}.
The goal of this paper is to apply the canonical duality theory for solving the challenging d.c. programming problem
(\ref{dc}).
The rest of this paper is arranged as follows.
Based on the concept of objectivity, a canonical d.c. optimization problem and its canonical dual
are formulated in the next section.
Analytical solutions and triality theory for a general d.c. minimization problem with
a sum of nonconvex polynomial and exponential functions are discussed in Sections 3 and 4.
Five special examples are illustrated in Section 5. Some conclusions and future work are given in Section 6.
\section{Canonical d.c. problem and its canonical dual}
It is known that the linear operator $D:\mathcal{X} \rightarrow \mathcal{Y}$ can't change the nonconvex $W(Dx)$ to a convex function.
According to the definition of the objectivity, a nonconvex function $W:\mathcal{Y} \rightarrow \mathbb{R} $ is objective if and only if
there exists a function $V:\mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}$ such that $W(y) = V(y^T y)$ \cite{ciarlet,gao-opl16}.
Based on this fact, a reasonable assumption can be made for the general problem $(\mathcal{P}_{dc})$.
\begin{Assumption}[Canonical Transformation and Canonical Measure] $\;$ \newline
For a given nonconvex function $W:\mathcal{Y} \rightarrow \mathbb{R} \cup \{\infty\}$, there exists a nonlinear mapping
$\Lambda:\mathcal{X} \rightarrow \mathcal{E}$ and a convex, l.s.c function $V:\mathcal{E} \rightarrow \mathbb{R} \cup \{ \infty\} $ such that
\begin{equation}
W(Dx) = V( \Lambda(x)). \label{eq-ct}
\end{equation}
\end{Assumption}
The nonlinear transformation (\ref{eq-ct}) is actually the {\em canonical transformation}, first
introduced by Gao in 2000 \cite{gao-jogo00},
and $\xi = \Lambda(x) $ is called a {\em canonical measure}.
The canonical measure $\xi = \Lambda(x)$ is also called the {\em
geometrically admissible measure} in
the canonical duality theory \cite{gao-jogo00}, which is not necessarily objective.
The simplest canonical measure in $\mathbb{R}^n$ is the quadratic function $\xi = x^T x$,
which is clearly objective. Therefore, the canonical function can be viewed as a generalized objective function.
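For instance, the double-well strain energy density of the nonlinear beam model in Section~\ref{se:intro}, $w(u_x) = \frac{1}{12}\beta u_x^4 - \frac{1}{2}\lambda_p u_x^2$, admits the canonical transformation
\[
\xi = \Lambda(u) = \frac{1}{2}\, u_x^2 , \qquad
V(\xi) = \frac{1}{3}\beta\, \xi^2 - \lambda_p\, \xi ,
\]
so that $w(u_x) = V(\Lambda(u))$, where $V$ is convex (indeed quadratic) in the canonical measure $\xi \ge 0$, even though $w$ is nonconvex in $u_x$.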
Thus, based on Assumption 1, the generalized d.c. programming problem $(\mathcal{P}_{dc})$ can be
written in a canonical d.c. minimization problem form ($(\mathcal{P})$ for short):
\begin{equation}
(\mathcal{P}): \;\; \min \left\{ \Pi(x) = V(\Lambda (x) ) - Q (x) | \;\; x \in \mathcal{X} \right\}.
\end{equation}
Since the canonical measure $\xi = \Lambda(x) \in \mathcal{E}$ is nonlinear
and $V(\xi)$ is convex on $\mathcal{E}$, the composition $V(\Lambda(x))$ has a higher order nonlinearity than $Q(x)$. Therefore, the coercivity for the target function $\Pi(x)$ should be naturally satisfied, i.e.
\begin{equation}
\lim_{\| x \| \rightarrow \infty } \{\Pi (x) = V(\Lambda(x) ) - Q(x) \} = \infty , \label{eq-coe}
\end{equation}
which is a sufficient condition for the existence of a global minimal solution to $(\mathcal{P})$ (otherwise, the set $\mathcal{X}$ should be bounded).
Clearly, this generalized d.c. minimization problem can be used to model a reasonably large class of real-world problems
in mathematical physics \cite{Gao00duality,gao-amma03}, global optimization \cite{gao-cace09},
and computational sciences \cite{bridgeMMS}.
By the fact that $V(\xi)$ is convex, l.s.c. on $\mathcal{E}$, its conjugate can be uniquely defined by the Fenchel transformation
\[
V^*(\xi^*) = \sup \{ \langle \xi ; \xi^* \rangle - V(\xi) | \;\; \xi \in \mathcal{E} \}.
\]
The bilinear form $\langle \xi ; \xi^* \rangle$ puts $\mathcal{E}$ and $\mathcal{E}^*$ in duality.
According to convex analysis (cf. \cite{eke-tem}), $V^*:\mathcal{E}^* \rightarrow \mathbb{R} \cup \{ + \infty\}$ is also convex, l.s.c. on its domain $\mathcal{E}^*$ and the following generalized canonical duality relations
\cite{gao-jogo00} hold on $\mathcal{E} \times \mathcal{E}^*$
\[
\xi^* \in \partial V(\xi) \;\; \Leftrightarrow \;\; \xi \in \partial V^*(\xi^*) \;\; \Leftrightarrow \;\;
V(\xi) + V^*(\xi^*) = \langle \xi ; \xi^* \rangle .
\]
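These relations are easy to check numerically. The following sketch (ours, purely illustrative and not part of the paper's formalism) uses the one-dimensional canonical function $V(\xi)=e^{\xi}$, whose Fenchel conjugate is $V^*(\xi^*)=\xi^*(\ln\xi^*-1)$ for $\xi^*>0$, and verifies the Fenchel--Young equality together with the inverse duality relation:

```python
import math

# Hypothetical 1-D instance: V(xi) = exp(xi), with Fenchel conjugate
# V*(xs) = xs*(ln(xs) - 1) for xs > 0.
def V(xi):
    return math.exp(xi)

def V_star(xs):
    return xs * (math.log(xs) - 1.0)

xi = 0.7
xs = math.exp(xi)        # xs in dV(xi): for smooth V this is just V'(xi)
xi_back = math.log(xs)   # inverse relation xi in dV*(xs): (V*)'(xs) = ln(xs)

# Fenchel-Young equality: V(xi) + V*(xs) = <xi ; xs>
gap = V(xi) + V_star(xs) - xi * xs
```

For this smooth strictly convex $V$, all three equivalent statements of the duality relations reduce to $\xi^*=V'(\xi)$, and the computed `gap` vanishes up to floating-point error.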
Replacing $ V(\Lambda(x)) $ in the target function $\Pi(x)$ by the Fenchel-Young equality
$V(\xi) = \langle \xi ; \xi^* \rangle - V^*(\xi^*)$,
Gao and Strang's total complementary function (see \cite{gao-jogo00})
$\Xi: \mathcal{X} \times \mathcal{E}^* \rightarrow \mathbb{R} \cup \{ - \infty\}$ for this
(CDC) can be obtained as
\begin{equation}
\Xi(x, \xi^* ) = \langle \Lambda(x) ; \xi^* \rangle - V^*(\xi^*) - Q(x) .
\end{equation}
By this total complementary function, the canonical dual of $\Pi(x)$ can be obtained as
\begin{equation}
\Pi^d(\xi^*) = \inf \{ \Xi(x, \xi^*) | \;\; x \in \mathcal{X} \} = Q^{\Lambda}(\xi^*) - V^*(\xi^*),
\end{equation}
where $Q^\Lambda:\mathcal{E}^* \rightarrow \mathbb{R} \cup\{ - \infty\}$ is the so-called $\Lambda$-conjugate of $Q(x)$ defined by
(see \cite{gao-jogo00})
\[
Q^\Lambda(\xi^*) = \inf \{ \langle \Lambda(x) ; \xi^* \rangle - Q(x) \; | \;\; x \in \mathcal{X} \}.
\]
If this $\Lambda$-conjugate has a non-empty effective domain, the following
canonical duality
\begin{equation}
\inf_{x \in \mathcal{X}} \Pi(x) = \sup_{\xi^* \in \mathcal{E}^*} \Pi^d(\xi^*)
\end{equation}
holds under certain conditions, which will be illustrated in the next section.
\section{ Application and analytical solution}
Let us consider a special application in $\mathbb{R}^n$ such that
\begin{equation}
g(x)= W(Dx) = \sum_{i=1}^p\textrm{exp}\left(\frac{1}{2}x^T A_i
x-\alpha_i\right) + \sum_{j=1}^r\frac{1}{2}\left(\frac{1}{2}x^T B_jx-\beta_j\right)^2,
\end{equation}
where $ \{ A_i \}_{i=1}^p \in\mathbb{R}^{n\times n}$ are symmetric matrices
and $\{B_j \}_{j=1}^r \in\mathbb{R}^{n\times n}$ are symmetric positive definite matrices,
$\alpha_i$ and $\beta_j$ are real numbers.
Clearly, $g:\mathbb{R}^n \rightarrow \mathbb{R}$ is nonconvex and highly nonlinear. This type of nonconvex function arises in many real-world applications.
The canonical measure in this application can be given as
$$
\xi=\begin{pmatrix} \theta\\ \eta \end{pmatrix}=\Lambda(x)=\begin{pmatrix}
\left\{\frac{1}{2}x^T A_ix\right\}_{i=1}^p\vspace{.2cm}\\
\left\{\frac{1}{2}x^T B_jx\right\}_{j=1}^r
\end{pmatrix}
~:~\mathbb{R}^n\rightarrow\mathcal{E}_a\subseteq\mathbb{R}^m
$$
where $m=p+r$. Therefore, a canonical function can be defined on $\mathcal{E}_a$:
$$
V(\xi)=V_1(\theta)+V_2(\eta)
$$
where
\begin{eqnarray}
&&V_1(\theta)= \sum_{i=1}^p\textrm{exp}\left(\theta_i-\alpha_i\right),\nonumber\\
&&V_2(\eta)=\sum_{j=1}^r\frac{1}{2}(\eta_j-\beta_j)^2.\nonumber
\end{eqnarray}
Here $\theta_i$ and $\eta_j$ denote the $i$th component of $\theta$ and the $j$th component of $\eta$, respectively. Since $V_1(\theta)$ and $V_2(\eta)$ are convex,
$V(\xi)$ is a convex function. By Legendre transformation, we have the following equation
\begin{equation}\label{eq:legendre}
V(\xi)+V^*(\zeta)=\xi^T\zeta,
\end{equation}
where
$$
\zeta=\begin{pmatrix} \tau\\\sigma \end{pmatrix}
=\begin{pmatrix} \nabla V_1(\theta)\\\nabla V_2(\eta) \end{pmatrix}
=\begin{pmatrix} \left\{
\textrm{exp}\left(\theta_i-\alpha_i\right)
\right\}_{i=1}^p\vspace{0.1cm}\\
\left\{\eta_j-\beta_j\right\}_{j=1}^r\end{pmatrix} ~:~\mathcal{E}_a\rightarrow \mathcal{E}_a^* \subset \mathbb{R}^m
$$
and $V^*(\zeta)$ is the conjugate function of $V(\xi)$, defined as
\[
V^*(\zeta)=V_1^*(\tau)+V_2^*(\sigma)
\]
with
\begin{eqnarray}
&&V_1^*(\tau)=\sum_{i=1}^p\left(\alpha_i+\ln(\tau_i)-1\right)\tau_i,\label{eq:funV1star}\nonumber\\
&&V_2^*(\sigma)=\frac{1}{2}\sigma^T\sigma+\beta^T\sigma,\label{eq:funV2star}\nonumber
\end{eqnarray}
where $\beta=\{\beta_j\}$.
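As a quick numerical sanity check (ours, not part of the paper), the Legendre equality (\ref{eq:legendre}) can be verified for these particular $V_1$, $V_2$ and their conjugates at randomly drawn points, with $\zeta=(\tau,\sigma)=\nabla V(\xi)$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 3, 2                                   # arbitrary illustrative sizes
alpha, beta = rng.normal(size=p), rng.normal(size=r)
theta, eta = rng.normal(size=p), rng.normal(size=r)

# dual variables zeta = (tau, sigma) = grad V(xi)
tau = np.exp(theta - alpha)
sigma = eta - beta

V_val = np.sum(np.exp(theta - alpha)) + 0.5 * np.sum((eta - beta) ** 2)
V_star_val = (np.sum((alpha + np.log(tau) - 1.0) * tau)
              + 0.5 * sigma @ sigma + beta @ sigma)

# Legendre equality: V(xi) + V*(zeta) = xi^T zeta
err = abs(V_val + V_star_val - (theta @ tau + eta @ sigma))
```

Since $\ln\tau_i=\theta_i-\alpha_i$ and $\sigma_j=\eta_j-\beta_j$, the identity holds exactly, and `err` is zero up to rounding.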
Since the canonical measure in this application is a quadratic operator, the total complementary function
$\varXi : \mathbb{R}^n\times \mathcal{E}_a^*\rightarrow\mathbb{R}$ has the following form
\[\label{eq:totalcompf}
\varXi(x,\zeta)
=\frac{1}{2}x^T G(\zeta)x-f^Tx-V_1^*(\tau)-V_2^*(\sigma),
\]
where
$$
G(\zeta)= \sum_{i=1}^p\tau_i A_i+\sum_{j=1}^r\sigma_j B_j-C.
$$
Notice that for any given $\zeta$, the total complementary function $\varXi(x,\zeta)$ is a quadratic function of $x$ and its stationary points are the solutions of the following equation
\begin{equation}\label{eq:partialXi}
\nabla_{x}\varXi(x,\zeta)= G(\zeta)x-f=0.
\end{equation}
If $\textrm{det}(G(\zeta))\neq 0$ for a given $\zeta$, then (\ref{eq:partialXi}) can be solved analytically to have a unique solution
$x= G(\zeta)^{{-1}}f$.
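A minimal numerical illustration of this stationarity solve (ours, with synthetic data standing in for $G(\zeta)$ and $f$), confirming that the stationary value of the quadratic part of $\varXi(\cdot,\zeta)$ equals $-\frac{1}{2}f^T G(\zeta)^{-1}f$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))
G = M + M.T + 5.0 * np.eye(n)   # symmetric stand-in for a nonsingular G(zeta)
f = rng.normal(size=n)

x = np.linalg.solve(G, f)       # unique stationary point of Xi(., zeta)
residual = float(np.linalg.norm(G @ x - f))

# at the stationary point, 0.5 x^T G x - f^T x collapses to -0.5 f^T G^{-1} f
value = float(0.5 * x @ G @ x - f @ x)
closed_form = float(-0.5 * f @ x)
```

The agreement of `value` with `closed_form` is exact (modulo rounding), which is the quadratic term appearing in $\varPi^d(\zeta)$ below.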
Let
\[
\mathcal{S}_a=\left\{ \zeta\in\mathcal{E}_a^*| \; ~ \textrm{det}(G(\zeta))\neq0 \right\} .
\]
Thus, on $\mathcal{S}_a$ the canonical dual function $\varPi^d(\zeta)$ can then be written explicitly as
\[\label{eq:dualfun}
\varPi^d(\zeta)=-\frac{1}{2}f^T G(\zeta)^{{-1}}f - V_1^*(\tau)-V_2^*(\sigma).
\]
Clearly, both $\varPi^d(\zeta)$ and its domain $\mathcal{S}_a$ are nonconvex.
The canonical dual problem is to find all stationary points of $\varPi^d(\zeta)$ on its domain, i.e.
\begin{equation}\label{p:dual}
(\mathcal{P}^d):~~~~\textrm{sta}\left\{\varPi^d(\zeta)~|~\zeta\in\mathcal{S}_a\right\}.
\end{equation}
\begin{theorem}[Analytic Solution and Complementary-Dual Principle]\label{th:AnalSolu} $\;$\newline
Problem ($\mathcal{P}^d$) is canonical dual to the problem ($\mathcal{P}$) in the sense that if $\bar{\zeta}\in\mathcal{S}_a$ is a stationary point of $\varPi^d(\zeta)$, then
\begin{equation}\label{eq:solvedx}
\bar{x}=G(\bar\zeta)^{{-1}}f
\end{equation}
is a stationary point of $\varPi(x)$, the pair $(\bar x,\bar\zeta)$ is a stationary point of $\varXi(x,\zeta)$, and we have
\begin{equation}\label{eq:nogap}
\varPi(\bar{x})=\varXi(\bar x,\bar\zeta)=\varPi^d(\bar{\zeta}).
\end{equation}
\end{theorem}
The proof of this theorem is analogous to that in \cite{Gao00duality}.
Theorem \ref{th:AnalSolu} shows that there is no duality gap between the primal problem ($\mathcal{P}$) and the canonical dual problem ($\mathcal{P}^d$).
\section{Triality theory}\label{se:triality}
In this section we study global optimality conditions for the critical solutions of the primal and dual problems. In order to identify both global and local extrema of the two problems, we let
\begin{eqnarray*}
&&\mathcal{S}_a^+= \left\{\zeta\in\mathcal{S}_a~|~G (\zeta) \succ 0\right\},\label{eq:Saplus}\\
&&\mathcal{S}_a^-= \left\{\zeta\in\mathcal{S}_a~|~G (\zeta) \prec 0\right\}.\label{eq:Saminus}
\end{eqnarray*}
where $G\succ 0$ means that $G$ is a positive definite matrix and $G\prec 0$ means that $G$ is a negative definite matrix. It is easy to prove that both $\mathcal{S}_a^+$ and $\mathcal{S}_a^-$ are convex sets and
\[
Q^\Lambda(\zeta) = \inf \{ \langle \Lambda(x) ; \zeta \rangle - Q(x) | \;\; x \in \mathbb{R}^n \} = \left\{
\begin{array}{ll}
-\frac{1}{2}f^T G(\zeta)^{{-1}}f \;\; & \mbox{ if } \zeta \in \mathcal{S}_a^+\\
-\infty & \mbox{ otherwise }
\end{array} \right.
\]
This shows that $\mathcal{S}_a^+$ is the effective domain of $Q^\Lambda(\zeta)$.
For convenience, we first give the first and second derivatives of the functions $\varPi(x)$ and $\varPi^d(\zeta)$ (in (\ref{eq:der1Pi})--(\ref{eq:der2Pi}), $\zeta$ is evaluated at $\zeta(x)=\nabla V(\Lambda(x))$):
\begin{eqnarray}
&& \nabla \varPi(x)= G (\zeta) x - f,\label{eq:der1Pi}\\
&& \nabla^2\varPi(x)= G+Z_0 H Z_0^T,\label{eq:der2Pi}\\
&& \nabla \varPi^d(\zeta)=
\left(\begin{array}{l}
\left\{\frac{1}{2}f^T G^{-1} A_i G^{{-1}}f-\alpha_i-\ln(\tau_i)\right\}_{i=1}^p\vspace{.2cm}\\
\left\{\frac{1}{2}f^T G^{-1} B_j G^{{-1}}f-\sigma_j-\beta_j\right\}_{j=1}^r
\end{array}\right),\label{eq:der1Pid}\\
&& \nabla^2\varPi^d(\zeta)=- Z^T G^{-1} Z-H^{-1},\label{eq:der2Pid}
\end{eqnarray}
where $Z_0,Z\in\mathbb{R}^{n\times m}$ and $H\in\mathbb{R}^{m\times m}$ are defined as
\begin{eqnarray*}
&& Z_0=
\begin{bmatrix}
A_1x,\ldots, A_px, B_1x,\ldots, B_rx
\end{bmatrix},\label{eq:matrixF}\\
&&Z=
\begin{bmatrix}
A_1G^{{-1}}f,\ldots, A_pG^{{-1}}f, B_1G^{{-1}}f,\ldots, B_rG^{{-1}}f
\end{bmatrix},\label{eq:matrixF}\\
&& H=
\begin{bmatrix}
\textrm{diag}(\tau)&0\\
0 &E_r
\end{bmatrix} , \label{eq:matrixH}
\end{eqnarray*}
where $E_r$ is the $r\times r$ identity matrix (in general, $E_k$ denotes the $k\times k$ identity matrix). By the fact that
$ \tau > 0$,
the matrix $H^{-1}$ is positive definite.
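The Hessian formula (\ref{eq:der2Pi}) can be checked against finite differences. The following sketch (ours, with $p=r=1$ and randomly generated synthetic data) compares $G+Z_0HZ_0^T$ with a central-difference Hessian of $\varPi$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
sym = lambda M: 0.5 * (M + M.T)
A = sym(rng.normal(size=(n, n)))                    # symmetric
Mb = rng.normal(size=(n, n)); B = Mb @ Mb.T + np.eye(n)   # positive definite
Mc = rng.normal(size=(n, n)); C = Mc @ Mc.T + np.eye(n)   # positive definite
f = rng.normal(size=n)
alpha, beta = 0.3, 0.8

def Pi(x):
    return (np.exp(0.5 * x @ A @ x - alpha)
            + 0.5 * (0.5 * x @ B @ x - beta) ** 2
            - 0.5 * x @ C @ x - f @ x)

x = 0.5 * rng.normal(size=n)
tau = np.exp(0.5 * x @ A @ x - alpha)
sigma = 0.5 * x @ B @ x - beta
G = tau * A + sigma * B - C
Z0 = np.column_stack([A @ x, B @ x])
H = np.diag([tau, 1.0])
hess_formula = G + Z0 @ H @ Z0.T

# central finite differences of Pi
h = 1e-4
hess_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        ei = np.zeros(n); ei[i] = h
        ej = np.zeros(n); ej[j] = h
        hess_fd[i, j] = (Pi(x + ei + ej) - Pi(x + ei - ej)
                         - Pi(x - ei + ej) + Pi(x - ei - ej)) / (4 * h * h)

err = float(np.max(np.abs(hess_fd - hess_formula)))
```

The maximum entrywise discrepancy `err` is at the level of the finite-difference truncation error.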
We first state the following lemma, whose proof is trivial.
\begin{lemma}
If $M_1,M_2,\ldots,M_N\in\mathbb{R}^{n\times n}$ are symmetric positive semi-definite matrices, then $M=M_1+M_2+\ldots+M_N$ is also a positive semi-definite matrix.
\end{lemma}
\begin{lemma}
If $\lambda_{G}$ is an arbitrary eigenvalue of $G$, it follows that
$$\lambda_{G}\geq \sum_{i=1}^p\tau_i \lambda^{A_i}_{min}+\sum_{j=1}^r\sigma_j\bar\lambda^{B_j}-\lambda^{C}_{max},$$
in which $\lambda^{A_i}_{min}$ is the smallest eigenvalue of $A_i$, $\lambda^{C}_{max}$ is the largest eigenvalue of $C$, and
\begin{equation}\label{eq:111}
\bar\lambda^{B_j}=
\left\{\begin{array}{ll}
\lambda^{B_j}_{min},&~~\sigma_j> 0\vspace{.3cm}\\
\lambda^{B_j}_{max},&~~\sigma_j\leq 0,
\end{array}\right.
\end{equation}
where $\lambda^{B_j}_{min}$ and $\lambda^{B_j}_{max}$ are the smallest eigenvalue and the largest eigenvalue of $B_j$ respectively.
\end{lemma}
\begin{proof} First, we prove that $\tau_i(A_i-\lambda^{A_i}_{min}E_n)$, $\lambda^{C}_{max}E_n-C$ and $\sigma_j(B_j-\bar\lambda^{B_j}E_n)$ are all symmetric positive semi-definite matrices.
\begin{enumerate}[(a)]
\item As $\lambda^{A_i}_{min}$ is the smallest eigenvalue of $A_i$, then $A_i-\lambda^{A_i}_{min}E_n$ is symmetric positive semi-definite, so $\tau_i(A_i-\lambda^{A_i}_{min}E_n)$ is symmetric positive semi-definite with $\tau_i=\textrm{exp}\left(\theta_i-\alpha_i\right)>0$.
\item As $\lambda^{C}_{max}$ is the largest eigenvalue of $C$, then $\lambda^{C}_{max}E_n-C$ is a symmetric positive semi-definite matrix.
\item
\begin{enumerate}[(c.1)]
\item As $\lambda^{B_j}_{min}$ is the smallest eigenvalue of $B_j$, then $B_j-\lambda^{B_j}_{min}E_n$ is symmetric positive semi-definite, so when $\sigma_j> 0$ it holds that $\sigma_j(B_j-\lambda^{B_j}_{min}E_n)$ is symmetric positive semi-definite.
\item As $\lambda^{B_j}_{max}$ is the largest eigenvalue of $B_j$, then $B_j-\lambda^{B_j}_{max}E_n$ is symmetric negative semi-definite, so when $\sigma_j\leq 0$ it holds that $\sigma_j(B_j-\lambda^{B_j}_{max}E_n)$ is symmetric positive semi-definite.
\end{enumerate}
From (c.1) and (c.2), we know $\sigma_j(B_j-\bar\lambda^{B_j}E_n)$ is always symmetric positive semi-definite.
\end{enumerate}
Then by (a), (b), (c) and Lemma 1, we have $$\sum_{i=1}^p\tau_i(A_i-\lambda^{A_i}_{min}E_n)+\sum_{j=1}^r\sigma_j(B_j-\bar\lambda^{B_j}E_n)+\lambda^{C}_{max}E_n-C$$ is a positive semi-definite matrix, which is equivalent to
$$G-\left(\sum_{i=1}^p\tau_i\lambda^{A_i}_{min}+\sum_{j=1}^r\sigma_j\bar\lambda^{B_j}-\lambda^{C}_{max}\right) E_n$$
is a positive semi-definite matrix, which implies that for every eigenvalue of $G$, it is greater than or equal to $\sum_{i=1}^p\tau_i \lambda^{A_i}_{min}+\sum_{j=1}^r\sigma_j\bar\lambda^{B_j}-\lambda^{C}_{max}$.\hfill\hspace*{\fill} $\Box$ \vspace{2ex}
\end{proof}
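The bound of Lemma 2 can be probed numerically. The sketch below (ours; random synthetic matrices of the required types) checks over many trials that the smallest eigenvalue of $G$ never falls below $\Delta$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r = 5, 2, 3
sym = lambda M: 0.5 * (M + M.T)

worst = np.inf
for _ in range(50):
    A = [sym(rng.normal(size=(n, n))) for _ in range(p)]      # symmetric
    B = []
    for _ in range(r):
        M = rng.normal(size=(n, n))
        B.append(M @ M.T + np.eye(n))                          # symmetric pos. def.
    C = sym(rng.normal(size=(n, n)))
    tau = np.exp(rng.normal(size=p))                           # tau_i > 0
    sigma = rng.normal(size=r)                                 # sigma_j of either sign

    G = (sum(t * Ai for t, Ai in zip(tau, A))
         + sum(s * Bj for s, Bj in zip(sigma, B)) - C)

    # Delta with lambda-bar chosen by the sign of sigma_j, as in the lemma
    Delta = (sum(t * np.linalg.eigvalsh(Ai)[0] for t, Ai in zip(tau, A))
             + sum(s * (np.linalg.eigvalsh(Bj)[0] if s > 0
                        else np.linalg.eigvalsh(Bj)[-1])
                   for s, Bj in zip(sigma, B))
             - np.linalg.eigvalsh(C)[-1])

    worst = min(worst, np.linalg.eigvalsh(G)[0] - Delta)
```

In every trial the smallest eigenvalue of $G$ stays above $\Delta$, as the lemma asserts (`worst` remains nonnegative up to rounding).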
Based on the above lemma, the following assumption is introduced for establishing the solution method.
\begin{Assumption} \label{assmp2}
There is a critical point $\zeta=(\tau,\sigma)$ of $\varPi^d(\zeta)$, satisfying $\Delta>0$ where $$\Delta=\sum_{i=1}^p\tau_i\lambda^{A_i}_{min}+\sum_{j=1}^r\sigma_j\bar\lambda^{B_j}-\lambda^{C}_{max}.$$
\end{Assumption}
\begin{lemma}
If $\bar\zeta$ is a stationary point of $\varPi^d(\zeta)$ satisfying Assumption 1, then $\bar{\zeta}\in\mathcal{S}_a^+$.
\end{lemma}
\begin{proof}
From Lemma 2, we know that if $\lambda_{G}$ is an arbitrary eigenvalue of $G$, then $\lambda_{G}\geq \Delta$. If $\bar\zeta$ is a critical point satisfying Assumption 1, then $\Delta>0$, so every eigenvalue of $G$ satisfies $\lambda_{G}\geq \Delta>0$; hence $G$ is a positive definite matrix, i.e., $\bar{\zeta}\in\mathcal{S}_a^+$.\hfill\hspace*{\fill} $\Box$ \vspace{2ex}
\end{proof}
We also need the following lemma, whose proof is similar to that of Lemma 6 in \cite{Gao-Wu-triality12} and is therefore omitted.
\begin{lemma}\label{lm:PplusDUD}
Suppose that $ P\in\mathbb{R}^{n\times n}$, $ U\in\mathbb{R}^{m\times m}$ and $ W\in\mathbb{R}^{n\times m}$ are given symmetric matrices with
$$
P=\begin{bmatrix} P_{11} & P_{12}\\ P_{21} & P_{22} \end{bmatrix}\prec 0, ~~
U=\begin{bmatrix} U_{11} & 0\\0 & U_{22} \end{bmatrix}\succ 0, \textrm{ and }
W=\begin{bmatrix} W_{11} & 0\\0 & 0 \end{bmatrix},
$$
where $ P_{11}$, $ U_{11}$ and $ W_{11}$ are $r\times r$-dimensional matrices, and $ W_{11}$ is nonsingular. Then,
\begin{equation}\label{eq:apl m}
- W^T P^{-1} W- U^{-1}\preceq0\Leftrightarrow P+ W U W^T\preceq0.
\end{equation}
\end{lemma}
Now we give the main result of this paper, the triality theorem, which describes the relationships between the primal and canonical dual problems concerning global and local solutions under Assumption 1.
\begin{theorem}\label{th:triality}
{\rm(\textbf{Triality Theorem})} Suppose that $\bar{\zeta}$ is a critical point of $\varPi^d(\zeta)$, and $\bar{x}=G(\bar\zeta)^{{-1}}f$.
\begin{enumerate}
\item Min-max duality: If $\bar{\zeta}$ is the critical point satisfying Assumption 1, then the canonical min-max duality holds in the form of
\begin{equation}\label{eq:th2minmax}
\varPi(\bar{x}) =\min_{x\in\mathbb{R}^n} \varPi(x)=\max_{\zeta\in\mathcal{S}_a^+} \varPi^d(\zeta)=\varPi^d(\bar{\zeta}).
\end{equation}
\item Double-max duality: If $\bar{\zeta}\in\mathcal{S}_a^-$, the double-max duality holds in the form that if $\bar x$ is a local maximizer of $\varPi(x)$ or $\bar\zeta$ is a local maximizer of $\varPi^d(\zeta)$,
we have
\begin{equation}\label{eq:th2maxmax}
\varPi(\bar{x}) =\max_{x\in\mathcal{X}_0} \varPi(x)=\max_{\zeta\in\mathcal{S}_0} \varPi^d(\zeta)=\varPi^d(\bar{\zeta})
\end{equation}
where $\bar x\in \mathcal{X}_0 \subset \mathbb{R}^n$ and $\bar \zeta\in \mathcal{S}_0 \subset \mathcal{S}_a^-$.
\item Double-min duality: If $\bar\zeta\in\mathcal{S}_a^-$, then the double-min duality holds in the form that when $m=n$, if $\bar x$ is a local minimizer of $\varPi(x)$ or $\bar\zeta$ is a local minimizer of $\varPi^d(\zeta)$, we have
\begin{equation}\label{eq:th2minmin}
\varPi(\bar{x}) =\min_{x\in\mathcal{X}_0} \varPi(x)=\min_{\zeta\in\mathcal{S}_0} \varPi^d(\zeta)=\varPi^d(\bar{\zeta})
\end{equation}
where $\bar x\in \mathcal{X}_0 \subset \mathbb{R}^n$ and $\bar \zeta\in \mathcal{S}_0 \subset \mathcal{S}_a^-$.
\end{enumerate}
\end{theorem}
\par \noindent \bf Proof: \hspace{0mm} \rm
\begin{enumerate}
\item Because $\bar\zeta$ is a critical point satisfying Assumption 1, by Lemma 3 it holds that $\bar{\zeta}\in\mathcal{S}_a^+$, i.e., $G(\bar{\zeta})\succ0$. As $G(\bar{\zeta})\succ0$ and $ H\succ0$, by (\ref{eq:der2Pid}) we know the Hessian of the dual function is negative definite, i.e. $\nabla^2\varPi^d(\zeta)\prec0$, which implies that $\varPi^d(\zeta)$ is strictly concave over $\mathcal{S}_a^+$. Hence, we get
\[\label{eq:th2Pidmax}
\varPi^d(\bar\zeta)=\max_{\zeta\in\mathcal{S}_a^+} \varPi^d(\zeta).
\]
By the convexity of $V(\xi)$, we have
$V(\xi)-V(\bar \xi)\geq (\xi-\bar \xi)^T\nabla V(\bar \xi)=(\xi-\bar \xi)^T\bar{\zeta}$
(see \cite{gao-strang1989}),
so $$V(\Lambda(x))-V(\Lambda(\bar x))\geq (\Lambda(x)-\Lambda(\bar x))^T\bar{\zeta},$$
which implies (see page 480 \cite{gao-opt03})
\begin{eqnarray}
\varPi(x)-\varPi(\bar{x}) & \geq & (\Lambda(x)-\Lambda(\bar x))^T\bar{\zeta}-\frac{1}{2}x^T C x+\frac{1}{2}\bar x^T C \bar x -
f^T(x-\bar x) \nonumber\\
&=& \frac{1}{2}(x -\bar x )^T G(\bar \zeta) (x -\bar x )
+ [G(\bar \zeta) \bar x - f]^T ( x- \bar x) . \label{1003}
\end{eqnarray}
By the facts that
$G(\bar \zeta) \bar x = f$ and
$G(\bar \zeta)\succ 0$, we have
$\varPi(x)\geq\varPi(\bar{x})$ for any $x\in\mathbb{R}^n$, which shows that $\bar x$ is a global minimizer and
the equation (\ref{eq:th2minmax}) is true by Theorem \ref{th:AnalSolu}
and (\ref{eq:th2Pidmax}).
\item If $\bar{\zeta}$ is a local maximizer of $\varPi^d(\zeta)$ over $\mathcal{S}_a^-$, it is true that $\nabla^2\varPi^d(\bar{\zeta})=- Z^T G^{-1} Z- H^{-1}\preceq0$ and there exists a neighborhood $\mathcal{S}_0\subset\mathcal{S}_a^-$ such that for all $\zeta\in\mathcal{S}_0$, $\nabla^2\varPi^d(\zeta)\preceq0$. Since the map $\zeta\mapsto x= G(\zeta)^{{-1}}f$ is continuous over $\mathcal{S}_a$, the image of the map over $\mathcal{S}_0$ is a neighborhood of $\bar x$, which is denoted by $\mathcal{X}_0$. Now we prove that for any $x\in\mathcal{X}_0$, $\nabla^2\varPi(x)\preceq0$, which plus the fact that $\bar{x}$ is a critical point of $\varPi(x)$ implies $\bar{x}$ is a maximizer of $\varPi(x)$ over $\mathcal{X}_0$. By singular value decomposition, there exist orthogonal matrices $ J\in\mathbb{R}^{n\times n}$, $ K\in\mathbb{R}^{m\times m}$ and a matrix $ R\in\mathbb{R}^{n\times m}$ with
\begin{equation}\label{eq:th2R}
R_{ij}=
\left\{\begin{array}{ll}
\delta_i, &~~ i=j \textrm{ and } i=1,\ldots,r,\\
0,&~~\textrm{otherwise},
\end{array}\right.
\end{equation}
where $\delta_i>0$ for $i=1,\ldots,r$ and $r=\textrm{rank}( Z)$, such that $Z H^{\frac{1}{2}}= J R K$, then
\begin{equation}\label{eq:th2FDERK}
Z = J R K H^{-\frac{1}{2}}.
\end{equation}
For any $x\in\mathcal{X}_0$, let $\zeta$ be a point satisfying $x= G^{{-1}}f$. Therefore, $\nabla^2\varPi^d(\zeta)=- Z^T G^{-1} Z- H^{-1}\preceq0$, then it holds that
\begin{equation}\label{eq:th2hessPid}
- H^{-\frac{1}{2}} K^T R^T J^T G^{-1} J R K H^{-\frac{1}{2}}-H^{-1}\preceq0.
\end{equation}
Multiplying above inequality by $ K H^{\frac{1}{2}}$ from the left and $ H^{\frac{1}{2}} K^T$ from the right, it can be obtained that
\begin{equation}\label{eq:th2hessPidequ}
- R^T J^T G^{-1} J R-E_m\preceq0,
\end{equation}
which, by Lemma \ref{lm:PplusDUD}, is further equivalent to
\begin{equation}\label{eq:th2hessPidequfur}
J^T G J+ R R^T\preceq0,
\end{equation}
then it follows that
\begin{equation}\label{eq:th2hessPi}
-G\succeq J R R^T J^T=J R K H^{-\frac{1}{2}} H H^{-\frac{1}{2}} K^T R^T J^T= Z H Z^T.
\end{equation}
Thus, $\nabla^2\varPi(x)= G+ Z H Z^T \preceq 0$, then $\bar{x}$ is a maximizer of $\varPi(x)$ over $\mathcal{X}_0$.
Similarly, we can prove that if $\bar{x}$ is a maximizer of $\varPi(x)$ over $\mathcal{X}_0$, then $\bar{\zeta}$ is a maximizer of $\varPi^d(\zeta)$ over $\mathcal{S}_0$. By the Theorem \ref{th:AnalSolu}, the equation (\ref{eq:th2maxmax}) is proved.
\item Now we prove the double-min duality.
Suppose that $\bar{\zeta}$ is a local minimizer of $\varPi^d(\zeta)$ in $\mathcal{S}_a^-$, then there exists a neighborhood $\mathcal{S}_0\subset\mathcal{S}_a^-$ of $\bar{\zeta}$ such that for any $\zeta\in\mathcal{S}_0$, $\nabla^2\varPi^d(\zeta)\succeq0$. Let $\mathcal{X}_0$ denote the image of the map $x= G^{{-1}}f$ over $\mathcal{S}_0$, which is a neighborhood of $\bar x$.
For any $x\in\mathcal{X}_0$, let $\zeta$ be a point that satisfies $x= G^{{-1}}f$. It follows from $\nabla^2\varPi^d(\zeta)=- Z^T G^{-1} Z- H^{-1}\succeq0$ that $- Z^T G^{-1} Z\succeq H^{-1}\succ0$, which implies that the matrix $ Z$ is invertible. Then it is true that
\begin{equation}\label{eq:th2iii1hessPid}
- G^{-1}\succeq( Z^T)^{-1} H^{-1} Z^{-1},
\end{equation}
which is further equivalent to
\begin{equation}\label{eq:th2iii1hessPi}
- G\preceq Z H Z^T.
\end{equation}
Thus, $\nabla^2\varPi(x)= G+ Z H Z^T\succeq0$ and $x$ is a local minimizer of $\varPi(x)$. The converse can be proved similarly. By Theorem \ref{th:AnalSolu}, the equation (\ref{eq:th2minmin}) is then true.
\end{enumerate}
The theorem is proved. \hfill\hspace*{\fill} $\Box$ \vspace{2ex}
This theorem shows that, by the canonical min-max duality, the nonconvex d.c. programming problem (CDC) is equivalent to a concave maximization problem
\begin{equation}
({\cal P}^d) : \;\;\; \max \{ \varPi^d(\zeta) | \;\; \zeta\in\mathcal{S}_a^+ \} ,
\end{equation}
which can be solved by well-developed deterministic methods and algorithms, say \cite{ks,pskz,s-s}.
\section{Examples}
In this section, let $p=r=1$. From the definition of problem (CDC), $A_1$ is a symmetric matrix, while $B_1$ and $C_1$ are positive definite matrices. According to the different cases of $A_1$, the following four examples are provided to illustrate the proposed canonical duality method. By examining the critical points of the dual function, we show how the dualities in the triality theory are verified by these examples.
\subsection*{Example 1}
We consider the case that $A_1$ is positive definite. Let $\alpha_1=\beta_1=1$ and
$$
A_1=\begin{bmatrix} 1.5 & 0\\ 0 & 2 \end{bmatrix}, ~~
B_1=\begin{bmatrix} 0.5 & 0\\ 0 & 3 \end{bmatrix}, ~~
C_1=\begin{bmatrix} 1.5 & 0\\ 0 & 1 \end{bmatrix},\textrm{ and }
f=\begin{bmatrix} 2 \\ 1 \end{bmatrix},
$$
then the primal problem is
$$\min_{(x,y)\in\mathbb{R}^2}\varPi(x,y)=\textrm{exp}\left( 0.75x^2+y^2-1\right)+0.5\left( 0.25x^2+1.5y^2-1\right)^2-0.75x^2-0.5y^2-2x-y.$$
The corresponding canonical dual function is
$$
\varPi^d(\tau,\sigma)=-0.5\left(\frac{4}{1.5\tau+0.5\sigma-1.5}+\frac{1}{2\tau+3\sigma-1}\right)-\tau\ln(\tau)-0.5\sigma^2-\sigma.
$$
In this problem, $\lambda^{A_1}_{min}=1.5$, $\lambda^{B_1}_{min}=0.5$, $\lambda^{B_1}_{max}=3$, and $\lambda^{C_1}_{max}=1.5$. It is noticed that $(\bar\tau_1,\bar\sigma_1)=(2.01147,-0.223104)$ is a critical point of the dual function $\varPi^d(\tau,\sigma)$ (see Figure \ref{fig:ex1pid1}). As $\bar\sigma_1<0$, we have $\bar\lambda^{B_1}=\lambda^{B_1}_{max}$ and $$\Delta=\bar\tau_1 \lambda^{A_1}_{min}+\bar\sigma_1\lambda^{B_1}_{max}-\lambda^{C_1}_{max}=0.8479>0,$$ so Assumption 1 is satisfied and $(\bar\tau_1,\bar\sigma_1)\in\mathcal{S}_a^+$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_1,\bar y_1)=(1.42283, 0.424878)$. Moreover, we have
\[
\varPi(\bar x_1,\bar y_1)=\varPi^d(\bar\tau_1,\bar\sigma_1)=-2.8428,
\]
so there is no duality gap, and $(\bar x_1,\bar y_1)$ is the global solution of the primal problem, which demonstrates the min-max duality (see Figure \ref{fig:ex1.1}).
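The reported values are easy to reproduce. The short check below (ours, not part of the paper) evaluates the primal and dual functions of Example 1 at the reported points and confirms the zero duality gap:

```python
import numpy as np

def Pi(x, y):
    # primal target function of Example 1
    return (np.exp(0.75 * x**2 + y**2 - 1)
            + 0.5 * (0.25 * x**2 + 1.5 * y**2 - 1) ** 2
            - 0.75 * x**2 - 0.5 * y**2 - 2 * x - y)

def Pid(tau, sigma):
    # canonical dual function of Example 1
    return (-0.5 * (4.0 / (1.5 * tau + 0.5 * sigma - 1.5)
                    + 1.0 / (2.0 * tau + 3.0 * sigma - 1.0))
            - tau * np.log(tau) - 0.5 * sigma**2 - sigma)

primal = float(Pi(1.42283, 0.424878))    # ~ -2.8428
dual = float(Pid(2.01147, -0.223104))    # ~ -2.8428
```

Both values agree with the reported optimum $-2.8428$ to within rounding of the reported digits.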
\subsection*{Example 2}
We consider the case that $A_1$ is positive semi-definite. Let $\alpha_1=\beta_1=2$ and
$$
A_1=\begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}, ~~
B_1=\begin{bmatrix} 2 & 0\\ 0 & 1 \end{bmatrix}, ~~
C_1=\begin{bmatrix} 1 & 0\\ 0 & 3 \end{bmatrix},\textrm{ and }
f=\begin{bmatrix} 2 \\ 2 \end{bmatrix},
$$
then the primal problem is
$$\min_{(x,y)\in\mathbb{R}^2}\varPi(x,y)=\textrm{exp}\left( 0.5x^2-2\right)+0.5\left( x^2+0.5y^2-2\right)^2-0.5x^2-1.5y^2-2x-2y.$$
The corresponding canonical dual function is
$$
\varPi^d(\tau,\sigma)=-0.5\left(\frac{4}{\tau+2\sigma-1}+\frac{4}{\sigma-3}\right)-\tau\ln(\tau)-\tau-0.5\sigma^2-2\sigma.
$$
In this problem, $\lambda^{A_1}_{min}=0$, $\lambda^{B_1}_{min}=1$, $\lambda^{B_1}_{max}=2$, and $\lambda^{C_1}_{max}=3$. It is noticed that $(\bar\tau_1,\bar\sigma_1)=(0.142222,3.60283)$ is a critical point of the dual function $\varPi^d(\tau,\sigma)$ (see Figure \ref{fig:ex2pid1}). As $\bar\sigma_1>0$, we have $\bar\lambda^{B_1}=\lambda^{B_1}_{min}$ and $$\Delta=\bar\tau_1 \lambda^{A_1}_{min}+\bar\sigma_1\lambda^{B_1}_{min}-\lambda^{C_1}_{max}=0.60283>0,$$ so Assumption 1 is satisfied and $(\bar\tau_1,\bar\sigma_1)\in\mathcal{S}_a^+$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_1,\bar y_1)=(0.315066, 3.3177)$. Moreover, we have
\[
\varPi(\bar x_1,\bar y_1)=\varPi^d(\bar\tau_1,\bar\sigma_1)=-17.1934,
\]
so there is no duality gap, and $(\bar x_1,\bar y_1)$ is the global solution of the primal problem, which demonstrates the min-max duality (see Figure \ref{fig:ex1.2}).
To demonstrate the double-max duality in Example 2, we find a local maximum point of $\varPi^d(\tau,\sigma)$ in $\mathcal{S}_a^-$: $(\bar\tau_2,\bar\sigma_2)=(0.151452,-1.68381)$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_2,\bar y_2)=(-0.474364, -0.427002)$. Moreover, we have
\[
\varPi(\bar x_2,\bar y_2)=\varPi^d(\bar\tau_2,\bar\sigma_2)=2.98579,
\]
and $(\bar x_2,\bar y_2)$ is also a local maximum point of $\varPi(x,y)$, which demonstrates the double-max duality (see Figure \ref{fig:ex1.2.max}).
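Both reported pairs of Example 2 can be verified numerically (our check, not part of the paper):

```python
import numpy as np

def Pi(x, y):
    # primal target function of Example 2
    return (np.exp(0.5 * x**2 - 2)
            + 0.5 * (x**2 + 0.5 * y**2 - 2) ** 2
            - 0.5 * x**2 - 1.5 * y**2 - 2 * x - 2 * y)

def Pid(tau, sigma):
    # canonical dual function of Example 2
    return (-0.5 * (4.0 / (tau + 2.0 * sigma - 1.0) + 4.0 / (sigma - 3.0))
            - tau * np.log(tau) - tau - 0.5 * sigma**2 - 2.0 * sigma)

# global (min-max) pair, ~ -17.1934
p1, d1 = float(Pi(0.315066, 3.3177)), float(Pid(0.142222, 3.60283))
# local double-max pair, ~ 2.98579
p2, d2 = float(Pi(-0.474364, -0.427002)), float(Pid(0.151452, -1.68381))
```

In both cases primal and dual values coincide to the precision of the reported digits.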
\subsection*{Example 3}
We consider the case that $A_1$ is negative definite. Let $\alpha_1=-4$, $\beta_1=0.5$ and
$$
A_1=\begin{bmatrix} -1 & 0\\ 0 & -1.5 \end{bmatrix}, ~~
B_1=\begin{bmatrix} 2 & 0\\ 0 & 1 \end{bmatrix}, ~~
C_1=\begin{bmatrix} 2 & 0\\ 0 & 3 \end{bmatrix},\textrm{ and }
f=\begin{bmatrix} 5 \\ 2 \end{bmatrix},
$$
then the primal problem is
$$\min_{(x,y)\in\mathbb{R}^2}\varPi(x,y)=\textrm{exp}\left(-0.5x^2-0.75y^2+4\right)+0.5\left( x^2+0.5y^2-0.5\right)^2-x^2-1.5y^2-5x-2y.$$
The corresponding canonical dual function is
$$
\varPi^d(\tau,\sigma)=-0.5\left(\frac{25}{-\tau+2\sigma-2}+\frac{4}{-1.5\tau+\sigma-3}\right)-\tau\ln(\tau)+5\tau-0.5\sigma^2-0.5\sigma.
$$
In this problem, $\lambda^{A_1}_{min}=-1.5$, $\lambda^{B_1}_{min}=1$, $\lambda^{B_1}_{max}=2$, and $\lambda^{C_1}_{max}=3$. It is noticed that $(\bar\tau_1,\bar\sigma_1)=(0.145563,3.95352)$ is a critical point of the dual function $\varPi^d(\tau,\sigma)$ (see Figure \ref{fig:ex3pid1}). As $\bar\sigma_1>0$, we have $\bar\lambda^{B_1}=\lambda^{B_1}_{min}$ and $$\Delta=\bar\tau_1 \lambda^{A_1}_{min}+\bar\sigma_1\lambda^{B_1}_{min}-\lambda^{C_1}_{max}=0.7352>0,$$ so Assumption 1 is satisfied and $(\bar\tau_1,\bar\sigma_1)\in\mathcal{S}_a^+$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_1,\bar y_1)=(0.867833, 2.72044)$. Moreover, we have
\[
\varPi(\bar x_1,\bar y_1)=\varPi^d(\bar\tau_1,\bar\sigma_1)=-13.6736,
\]
so there is no duality gap, and $(\bar x_1,\bar y_1)$ is the global solution of the primal problem, which demonstrates the min-max duality (see Figure \ref{fig:ex1.3}).
To demonstrate the double-max duality in Example 3, we find a local maximum point of $\varPi^d(\tau,\sigma)$ in $\mathcal{S}_a^-$: $(\bar\tau_2,\bar\sigma_2)=(54.3685,-0.492123)$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_2,\bar y_2)=(-0.0871798, -0.023517)$. Moreover, we have
\[
\varPi(\bar x_2,\bar y_2)=\varPi^d(\bar\tau_2,\bar\sigma_2)=54.9641,
\]
and $(\bar x_2,\bar y_2)$ is also a local maximum point of $\varPi(x,y)$, which demonstrates the double-max duality (see Figure \ref{fig:ex1.3.max}).
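The two reported pairs of Example 3 can also be verified numerically (our check, not part of the paper):

```python
import numpy as np

def Pi(x, y):
    # primal target function of Example 3
    return (np.exp(-0.5 * x**2 - 0.75 * y**2 + 4)
            + 0.5 * (x**2 + 0.5 * y**2 - 0.5) ** 2
            - x**2 - 1.5 * y**2 - 5 * x - 2 * y)

def Pid(tau, sigma):
    # canonical dual function of Example 3
    return (-0.5 * (25.0 / (-tau + 2.0 * sigma - 2.0)
                    + 4.0 / (-1.5 * tau + sigma - 3.0))
            - tau * np.log(tau) + 5.0 * tau - 0.5 * sigma**2 - 0.5 * sigma)

# global (min-max) pair, ~ -13.6736
p1, d1 = float(Pi(0.867833, 2.72044)), float(Pid(0.145563, 3.95352))
# local double-max pair, ~ 54.9641
p2, d2 = float(Pi(-0.0871798, -0.023517)), float(Pid(54.3685, -0.492123))
```

Again, the primal and dual values coincide at both critical points, as Theorem \ref{th:AnalSolu} predicts.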
\subsection*{Example 4}
We now consider the case that $A_1$ is indefinite. Let $\alpha_1=1$, $\beta_1=2$ and
$$
A_1=\begin{bmatrix} -3 & 0\\ 0 & 1 \end{bmatrix}, ~~
B_1=\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, ~~
C_1=\begin{bmatrix} 4 & 0\\ 0 & 4.4 \end{bmatrix},\textrm{ and }
f=\begin{bmatrix} 1 \\ 1 \end{bmatrix},
$$
then the primal problem is
$$\min_{(x,y)\in\mathbb{R}^2}\varPi(x,y)=\textrm{exp}\left(-1.5x^2+0.5y^2-1\right)+0.5\left( 0.5x^2+0.5y^2-2\right)^2-2x^2-2.2y^2-x-y.$$
The corresponding canonical dual function is
$$
\varPi^d(\tau,\sigma)=-0.5\left(\frac{1}{-3\tau+\sigma-4}+\frac{1}{\tau+\sigma-4.4}\right)-\tau\ln(\tau)-0.5\sigma^2-2\sigma.
$$
In this problem, $\lambda^{A_1}_{min}=-3$, $\lambda^{B_1}_{min}=\lambda^{B_1}_{max}=1$, and $\lambda^{C_1}_{max}=4.4$. It is noticed that $(\bar\tau_1,\bar\sigma_1)=(0.0612941,4.67004)$ is a critical point of the dual function $\varPi^d(\tau,\sigma)$ (see Figure \ref{fig:ex5pid1}). As $\bar\sigma_1>0$, we have $\bar\lambda^{B_1}=\lambda^{B_1}_{min}$ and $$\Delta=\bar\tau_1 \lambda^{A_1}_{min}+\bar\sigma_1\lambda^{B_1}_{min}-\lambda^{C_1}_{max}=0.0862>0,$$ so Assumption 1 is satisfied and $(\bar\tau_1,\bar\sigma_1)\in\mathcal{S}_a^+$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_1,\bar y_1)=(2.05695, 3.01812)$. Moreover, we have
\[
\varPi(\bar x_1,\bar y_1)=\varPi^d(\bar\tau_1,\bar\sigma_1)=-22.6111,
\]
so there is no duality gap, and $(\bar x_1,\bar y_1)$ is the global solution of the primal problem, which demonstrates the min-max duality (see Figure \ref{fig:ex1.5}).
To demonstrate the double-max duality in Example 4, we find a local maximum point of $\varPi^d(\tau,\sigma)$ in $\mathcal{S}_a^-$: $(\bar\tau_2,\bar\sigma_2)=(0.361948,-1.97615)$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_2,\bar y_2)=(-0.141603, -0.166273)$. Moreover, we have
\[
\varPi(\bar x_2,\bar y_2)=\varPi^d(\bar\tau_2,\bar\sigma_2)=2.52149,
\]
and $(\bar x_2,\bar y_2)$ is also a local maximum point of $\varPi(x,y)$, which demonstrates the double-max duality (see Figure \ref{fig:ex1.5.max}).
To demonstrate the double-min duality in Example 4, we find a local minimum point of $\varPi^d(\tau,\sigma)$ in $\mathcal{S}_a^-$: $(\bar\tau_3,\bar\sigma_3)=(0.149286,3.90584)$. By Theorem \ref{th:AnalSolu}, we get $(\bar x_3,\bar y_3)=(-1.84496, -2.89962)$. Moreover, we have
\[
\varPi(\bar x_3,\bar y_3)=\varPi^d(\bar\tau_3,\bar\sigma_3)=-12.7833,
\]
and $(\bar x_3,\bar y_3)$ is also a local minimum point of $\varPi(x,y)$, which demonstrates the double-min duality (see Figure \ref{fig:ex1.5.min}).
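All three dualities of Example 4 can be verified at once by evaluating the primal and dual functions at the reported points (our numerical check, not part of the paper):

```python
import numpy as np

def Pi(x, y):
    # primal target function of Example 4
    return (np.exp(-1.5 * x**2 + 0.5 * y**2 - 1)
            + 0.5 * (0.5 * x**2 + 0.5 * y**2 - 2) ** 2
            - 2 * x**2 - 2.2 * y**2 - x - y)

def Pid(tau, sigma):
    # canonical dual function of Example 4
    return (-0.5 * (1.0 / (-3.0 * tau + sigma - 4.0)
                    + 1.0 / (tau + sigma - 4.4))
            - tau * np.log(tau) - 0.5 * sigma**2 - 2.0 * sigma)

pairs = {
    "min-max":    (Pi(2.05695, 3.01812),     Pid(0.0612941, 4.67004)),  # ~ -22.6111
    "double-max": (Pi(-0.141603, -0.166273), Pid(0.361948, -1.97615)),  # ~ 2.52149
    "double-min": (Pi(-1.84496, -2.89962),   Pid(0.149286, 3.90584)),   # ~ -12.7833
}
gaps = {k: abs(float(a) - float(b)) for k, (a, b) in pairs.items()}
```

Each primal-dual gap vanishes to the precision of the reported digits, consistent with the complementary-dual principle.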
From the double-min duality in Example 4, we see that the proposed canonical dual method avoids the local minimum point $(\bar x_3,\bar y_3)$ of the primal problem. In fact, since the canonical dual method obtains the global solution, every local minimum point is avoided. For instance, the point $(0.534285,-2.83131)$ is a local minimum point of the primal problem in Example 2 (see Figure \ref{fig:3Dex2localmin}) with local minimum value $-4.78671$, while the canonical dual method obtains the global minimum value $-17.1934$; the point $(1.29672,-2.09209)$ is a local minimum point of the primal problem in Example 3 (see Figure \ref{fig:3Dex3localmin}) with local minimum value $-3.98411$, while the canonical dual method obtains the global minimum value $-13.6736$.
\section{Conclusions and further work}\label{se:concl}
Based on the original definition of objectivity in continuum physics,
a canonical d.c. optimization problem is proposed, which can be used to model general nonconvex optimization problems in
complex systems.
Detailed application is provided by solving a challenging problem in $\mathbb{R}^n$.
By the canonical duality theory, this nonconvex problem can be reformulated as a concave maximization dual problem over a convex domain.
A detailed proof for the triality theory is provided under a reasonable assumption. This theory can be used to identify both global and local extrema, and to develop a powerful algorithm for solving this general d.c. optimization problem.
Several examples are given to illustrate detailed situations.
All these examples support Assumption \ref{assmp2}. However, we should emphasize that this assumption is only a sufficient condition for
the existence of a canonical dual solution in $\mathcal{S}_a^+$.
How to relax this assumption and to obtain a necessary condition for $\mathcal{S}_a^+ \neq \emptyset$ are still open questions.
We believe that this condition should be directly related to the coercivity condition (\ref{eq-coe}) of the target function $\Pi(x)$
and deserves detailed study in the future. \\
\noindent{\bf Acknowledgement}:
We are grateful to anonymous referees and associate editor for their valuable comments and suggestions.
The research was supported by
US Air Force Office of Scientific Research under the grant AFOSR FA9550-10-1-0487. Dr. Jin Zhong was
supported by National Natural Science Foundation
of China (no. 11401372), Innovation Program of Shanghai Municipal Education Commission (no. 14YZ114) and Science
\& Technology Commission of Shanghai Municipality (no. 12510501700).
Female moths attract male moths by emitting into the atmosphere a series of pheromone filaments that propagate downwind. The filaments appear to aggregate into clusters (patches) while moving. In order to find the female moth, the male moth senses the concentration of the pheromone (e.g. odour) dispersed within the patches. Using the odour as a cue, the male moth may reach the vicinity of the female moth. The spatial and temporal variations of the odour advected by turbulence are random. This work addresses the problem of a male moth searching for a female moth by tracing the randomly varying odour. We suggest an algorithm for locating the odour source. The algorithm uses a single instantaneous parameter that a navigator can measure: the time during which the moth crossed the odour patch in the last search. We describe mathematically a turbulent meandering patchy plume and the odour inside the plume. Using the proposed navigation algorithm, we simulate the path of the male moth searching for the female moth. Numerical simulations illustrate a strong similarity between the simulated paths and those reported in the literature. Using the suggested algorithm, we estimate the probability that a male moth reaches the vicinity of a female moth. This simulated probability is in satisfactory agreement with reported experimental results.
\section{Introduction \label{sec:introduction}}
Female moths release discrete pulses of pheromone to attract male moths
\citep{Baker1985,Conner1980,Quero2001,Vetter2006}. The pheromone
filaments form a plume spreading downwind, which contains information
about the female moth \citep{Harari2011}. The ability of a male moth to
find a female moth by following a pheromone trail is innate: it does
not require learning \citep{Willis1991}. Moths sense the concentration
of the pheromone (odour) by chemo-receptors on their antennae \citep{Vickers2000}.
If the chemical content of the pheromone matches the olfactory
receptor neurons in the moth antennae \citep{Vickers2006}, the male moth will
navigate towards the female moth. The mating success depends on the ability
of the male moth to find the female moth using odour as a cue. Male moths
can reach conspecific females from a long distance, overcoming obstacles
such as forests or canopies \citep{Elkinton1987}. The male moth is capable
of visually estimating the flight direction relative to the ground (about
optomotor anemotaxis see e.g. \citealt{Kennedy1939,Wright1958,Belanger1998})
yet it uses additional sensors to find its path. Along the path, the
male moth has to find its way in a windy environment with changing turbulent
conditions. The presence of turbulence causes the dispersion of the
pheromone and random variations of its concentration \citep{Murlis1992,Seinfeld2006}.
Because of this randomness and of the discrete release of the pheromone,
its plume may contain filaments alternated with gaps where the concentration
is extremely low (Fig. \ref{fig:trajectories_Willis_Baker}a). A moth
is insensitive to the odour if it is below or exceeds certain lower
and upper thresholds \citep{Baker1985,Baker1988,Baker1996}. The gaps
where the concentration is below the detectable limit are free of
odour from the point of view of a moth (\citealp{Kaissling1997,Carde2008}
and Fig.~\ref{fig:trajectories_Willis_Baker}a). Thus, one can assume that, for a moth,
patches of detectable odour alternate with patches of clean air. When a
male moth moves away from a patch with a finite concentration of the
pheromone and enters a clean area, the odour changes abruptly for it.
Under such circumstances, the upwind flight of the moth is governed
not by the constant gradient of pheromone concentration, but rather
by its random fluctuations in patches, mixed with areas without concentration
\citep{Kaissling1989,Kramer1992}.
In the gaps of clean air a male moth can lose the trail leading to
the female moth. To regain the lost contact with the odour, the male moth performs
a zigzag motion transverse to the wind direction until it finds a
new patch of pheromone \citep{Kennedy1974,David1983,Mafra-Neto1994}.
It has been suggested that zigzagging relative to the wind and the mean
angles associated with this motion are internally programmed: during this casting,
the mean angles between the direction of flight and the wind direction
increase, whereas the velocities of the moth relative to the wind
decrease \citep{Kennedy1974,Kennedy1983,Carde1984,Baker1984,Willis1991}.
\begin{figure*}[t]
\begin{centering}
\subfloat[]{\protect\protect\includegraphics[height=5cm]{figures/concentration_signal}
}\subfloat[]{\protect\protect\includegraphics[height=5cm]{figures/moth_trajectories_Willis_Baker_1994}
}\subfloat[]{\protect\protect\includegraphics[height=5cm]{figures/trajectories_examples_Kaissling1997}
}
\par\end{centering}
\protect\caption{(a) Concentration of a pheromone, reproduced from \citet{Carde2008}
(b) Moth flight trajectories as observed in wind tunnel experiments,
reproduced from \citet{Willis1994}. (c) switching of a searching
trajectory from upwind anemotaxis to casting after the pheromone stimulus
is removed, reproduced from \citet{Kaissling1997}. \label{fig:trajectories_Willis_Baker}}
\end{figure*}
There exists a large body of work in which search algorithms are
applied to continuous laminar and turbulent plumes \citep{Kennedy1974,Kaissling1997,Balkovsky2002,Carde2008}.
The search for an odour source in a continuous plume is beyond the scope
of this work. Here we are concerned with the search algorithm
in a patchy plume, as observed by \citet{Li2001} in a wind tunnel,
where the centre points of the pheromone parcels were distant from
one another, due to the female moth's discrete pulse frequency. \citet{Li2001}
suggested that in a patchy plume, the zigzagging behaviour depends
on the ``last pheromone detection time'', which is defined as the
moment of time when the concentration above a certain threshold can
be detected by a single chemical sensor \citep{Li2001,Li2010}. The
concept of \citet{Li2001,Li2010} implies that the decision to change
the motion direction does not depend on the previous history of
the search, and the only information the moth carries is the
presence of odour at the male moth's location.
Different navigation strategies were proposed to mimic moth searching
behaviour \citep[see reviews of][and references therein]{Kennedy1983,Murlis1992,Carde2008}.
It is generally accepted that to parametrize a mathematical model
of a moth's navigation, it is sufficient to use the odour concentration
and the general wind direction as the most important parameters \citep{Carde1984,Baker1984,Baker1996,Vickers1996,Murlis1977,Witzgall1997}.
Herein, we use the concept of \citet{Li2001,Li2010} and one instantaneous
parameter which a navigator can measure: the time during which it crossed
the last patch of odour.
\section{Model formulation}
\subsection{Discrete puffs model\label{sub:Discrete-puffs-model}}
A female moth is located on the ground at the origin $O$ of the Earth-fixed
Cartesian coordinate system $Oxyz$, with the axis $Oz$ directed
upward and the axis $Ox$ pointing downwind. At the times $t=t_{i}\;(i=1,2,\ldots)$
the female moth releases pheromone at equal time intervals $T$ of
the order of several seconds. Each release lasts about
several microseconds. Therefore, it can be assumed that each release
represents an instantaneous puff, which emits into the atmosphere
the mass of the pheromone $m$. The concentration $C({\bf r},t)$
of the pheromone in the atmosphere depends on the coordinates ${\bf r}=(x,\,y,\,z)$
and time. The male moth attempts to reach the female moth within a finite time
$t_{max}\gg T$ using odour as a cue. It is assumed that because of
the turbulent nature of the wind, the odour dispersion is random.
Similarly to \citet{Zannetti1981}, we introduce the time increment
$\triangle t$ and consider the wind velocity ${\bf U}({\bf r},t)$,
the centre of the pheromone cloud ${\bf r}_{p}(t)=[x_{p}(t),\,y_{p}(t),\,z_{p}(t)]$,
and the concentration of the pheromone $C({\bf r},t)$ as functions
of time. Denote each of these as $f(t)$ and represent $f(t)$ as a discrete
series of values $\bar{f}(t)$, each averaged over
the corresponding time interval $(t,t+\triangle t)$. Assuming $f(t)$ is
sufficiently smooth over the interval of averaging, the discrete values
$\bar{f}(t)$ are equal to the continuous values $f(t)$. Following these
assumptions, the advection of the emitted odour can be described approximately
by the motion of the puff centre:
\[
{\bf r}_{p}(t+\triangle t)={\bf r}_{p}(t)+{\bf U}[{\bf r}_{p},t]\triangle t
\]
Correspondingly, the average odour concentration due to a series of puffs starting
to act at $t=t_{i}\;(i=1,2,\ldots)$ can be described in the interval
$(t,t+\triangle t)$ by the coordinates of the centre of each puff
and the Gaussian horizontal $\sigma_{x,y}$ and vertical $\sigma_{z}$
variances. For the intended purposes, it is sufficient to consider
the two-dimensional propagation of patches with equal horizontal variances
$\sigma_{x}=\sigma_{y}=\sigma_{h}$ and constant $z_{0}$:
\begin{equation}
\bar{C}(x,y,t)=\frac{m}{(2\pi)^{3/2}\sigma_{h}^{2}\sigma_{z}}\sum_{i}H(t-t_{i})\exp\left[-\frac{|{\bf r}-{\bf r}_{p}(t-t_{i})|^{2}}{2\sigma_{h}^{2}}-\frac{z_{0}^{2}}{2\sigma_{z}^{2}}\right]\label{eq:series}
\end{equation}
where $H(t-t_{i})$ is the Heaviside step function. The variances $\sigma_{h,z}$
in (\ref{eq:series}) depend on the puff's age as $\sigma_{h,z}=A_{h,z}(t-t_{i})^{B_{h,z}}$, where $A_{h,z}$ and $B_{h,z}$ are constants depending on the atmospheric
conditions and the Pasquill-Gifford atmospheric stability classes. It
can be assumed for many practical purposes that $B_{h,z}\approx1$
\citep{Boeker:2011,Turner1971}. Note that the mathematical form of
(\ref{eq:series}) is close to the solution of the diffusion equation with a constant
coefficient \citep{Boeker:2011}.
Equation (\ref{eq:series}) is used here to simulate qualitatively the navigation
of a moth within clouds of pheromone created by a series of puffs \citep[see also][]{Farrell2002}.
The discrete puff model of Eq. (\ref{eq:series}) is presented
schematically by a series of patches in Fig. \ref{fig:sketch}.
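As an illustration, the discrete puff model can be sketched in a few lines of Python. This is a minimal sketch only: the parameter values ($m$, $A_{h}$, $A_{z}$, $z_0$, $T$) and the uniform wind vector are illustrative assumptions, not the values used in our simulations, and $B_{h,z}=1$ is assumed.

```python
import numpy as np

# Illustrative parameters (not the values used in the simulations)
m = 1.0                    # pheromone mass per puff
A_h, A_z = 0.3, 0.2        # dispersion coefficients, with B_h = B_z = 1
z0 = 1.0                   # constant release/sensing height
T = 2.0                    # puff release period [s]
U = np.array([1.0, 0.0])   # uniform downwind velocity for this sketch [m/s]

def puff_centres(t, dt=0.05):
    """Advect the centre of every puff released before time t:
    r_p(t + dt) = r_p(t) + U dt, starting from the origin."""
    centres = []
    for ti in np.arange(0.0, t, T):
        r_p = np.zeros(2)
        for _ in np.arange(ti, t, dt):
            r_p = r_p + U * dt
        centres.append((r_p, t - ti))   # puff centre and puff age
    return centres

def concentration(x, y, t):
    """Mean concentration of Eq. (1): a sum of Gaussian puffs whose
    variances grow linearly with puff age."""
    r = np.array([x, y])
    c = 0.0
    for r_p, age in puff_centres(t):
        s_h, s_z = A_h * age, A_z * age
        c += (m / ((2 * np.pi) ** 1.5 * s_h ** 2 * s_z)
              * np.exp(-np.dot(r - r_p, r - r_p) / (2 * s_h ** 2)
                       - z0 ** 2 / (2 * s_z ** 2)))
    return c
```

The concentration is largest near the advected puff centres and decays rapidly in the gaps between them, which produces the patchy structure sketched in Fig. \ref{fig:sketch}.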
\begin{figure*}
\centering{}\includegraphics[width=1\textwidth]{figures/discrete_puff_plume_model}
\protect\protect\caption{A sketch of a series of discrete puffs and the trajectories
of a male moth (view from above). A female moth is denoted as a small circle
in the origin. The lowest detectable concentrations of each puff is
marked by a dashed contour line. A dash-dotted envelope denotes the
limits of an average long term concentration distribution of a virtual
plume. This figure manifests the major feature of the patchy plume
- the size of patches of pheromone, the distance between them and
the width of the virtual plume grow proportionally in any given turbulent
flow. $\times$ denotes schematically the turning point where the
new casting search starts. The lateral spread of the search is equivalent
to the size of the last patch, marked by two small circles on the moth
path line. \label{fig:sketch}}
\end{figure*}
The wind vector ${\bf U}$ is oriented randomly around its primary
wind direction, with the streamwise and cross-wind turbulent velocity
fluctuations modelled as weakly Gaussian processes. A typical simulated
random wind velocity field and the advected puffs are plotted in Fig.
\ref{fig:simulation_result}. The turbulent intensity (the ratio of the
fluctuating part to the wind speed) defines the amplitude
of the turbulent fluctuations.
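A minimal sketch of such a fluctuating wind sample is shown below. It assumes uncorrelated Gaussian fluctuations at each time step; the mean speed and the intensity values are illustrative, and the function name is our own.

```python
import numpy as np

def wind_velocity(u_mean=1.0, turb_intensity=0.05, rng=None):
    """One sample of the wind vector: the mean downwind speed plus
    Gaussian streamwise and cross-wind fluctuations whose standard
    deviation is set by the turbulent intensity."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = turb_intensity * u_mean
    du, dv = rng.normal(0.0, sigma, size=2)
    return np.array([u_mean + du, dv])
```

At 5\% intensity the sampled vectors stay close to the mean wind, while at 30\% the direction changes appreciably from step to step, as in Fig. \ref{fig:simulation_result}.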
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{figures/simulation_snapshot}
\end{centering}
\caption{Velocity and concentration fields. Arrows -- instantaneous two dimensional
velocity field $\mathbf{U}(x,y)$; circles -- puff locations. Velocity
field is a sum of the wind velocity and random, turbulent fluctuations,
defined by turbulent intensity. \label{fig:simulation_result}}
\end{figure}
\subsection{Navigation model\label{sub:Navigation-model}}
A moth is modelled as a self-propelled navigator flying at a velocity
${\bf \mathbf{V}}$ sufficiently large to overpower the wind speed.
The goal of the navigator is to approach the origin $O$ within a distance
$R$ at which the female moth can be seen. To locate the female moth, the navigator
is capable of manoeuvring its flight direction (the upwind angle)
defined with respect to the wind direction. The ratio of the number
of times the navigator reaches the circle of radius $R$ to the total
number of attempts is defined here as the probability of successful
events. The mathematical model of moth navigation suggested here
is based on the following assumptions:
\begin{enumerate}
\item A moth is insensitive to odour that is below a certain lower
threshold or exceeds a certain upper threshold.
\item Inside the odour patch the navigator moves upwind.
\item There exists a finite time $t_{c}$ of motion of the moth within the
cloud of pheromone during which it still senses the odour, the so-called time
of the last contact with the odour.
\item The navigator can estimate the direction of its motion relative to
the ground visually \citep[e.g.][among others]{Kaissling1997,Carde2008}.
\item After losing the trail, a male moth makes a turn to continue the search,
changing the direction of motion by a certain angle $\alpha_{s}$
with respect to the wind direction; for the sake of simplicity we
consider such angles as random numbers distributed uniformly over a
certain range $(\alpha_{smin},\,\alpha_{smax})$.
\end{enumerate}
It was suggested in previous studies that there exists an internal
timer triggering the casting/zigzag motion of a moth \citep[e.g.][among others]{Carde2012}.
We suggest that the decision of a moth to change its direction of
motion is triggered by the level of the odour measured by
the flyer during the time interval $t_{c}$: one if the odour is within
the detectable limits, zero otherwise. We consider a moth as a mechanistic
flyer whose direction of motion is governed by a control system, shown
in Fig. \ref{fig:control_system}. The input parameter of the control
system is the odour (1/0), the output parameter of the control system
is the direction of flight. To make a decision to change the direction
of motion, the control system uses the additional information about
the environment: the general wind direction and $t_{c}$. The block
diagram of the control system is shown in Fig. \ref{fig:control_system}.
\begin{figure}
\begin{centering}
\includegraphics[width=1\textwidth]{figures/MothMotion_FlowChart}
\end{centering}
\caption{Block diagram of a control system \label{fig:control_system}}
\end{figure}
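The decision rule of the control system can be summarised in a short Python sketch. This is an illustrative reading of the block diagram only: the function name, the binary odour sensor, and the exact rule for using $t_{c}$ as a memory time are our assumptions, not a definitive implementation.

```python
import random

def next_heading(odour_detected, wind_angle, t_since_contact, t_c,
                 a_min=30.0, a_max=150.0):
    """Return the flight heading in degrees, relative to the ground.

    odour_detected  : binary sensor output (1 inside a patch, 0 outside)
    wind_angle      : general wind direction, estimated visually
    t_since_contact : time elapsed since the last odour contact
    t_c             : time of crossing the last odour patch
    """
    if odour_detected:
        # Assumption 2: inside an odour patch the navigator moves upwind.
        return wind_angle + 180.0
    if t_since_contact < t_c:
        # Within the memory of the last patch: keep flying upwind.
        return wind_angle + 180.0
    # Trail lost (assumption 5): turn by a random angle alpha_s drawn
    # uniformly from (a_min, a_max) relative to the wind direction,
    # casting to either side of it.
    alpha_s = random.uniform(a_min, a_max)
    side = random.choice([-1.0, 1.0])
    return wind_angle + 180.0 + side * alpha_s
```

Calling this rule at every time step, with the position advanced along the returned heading, reproduces the alternation between upwind surges and transverse casting seen in Fig. \ref{fig:turbulence_intensity}.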
\section{Numerical results and discussion}\label{sec:simulation}
The trajectories of a male moth performing a search were simulated using
Matlab\texttrademark{} (MathWorks Inc.) according to the flow chart
in Fig. \ref{fig:Block-diagram}.
\begin{figure}
\centering\includegraphics[width=0.8\columnwidth]{figures/algorithm}
\caption{Flow chart of the search navigation algorithm.\label{fig:Block-diagram}}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=1\textwidth]{figures/low_high_turbulence}
\end{centering}
\caption{Typical paths of a navigator for two turbulence intensities and
representative sets of moth coordinates $(x_{m},y_{m})$, with $\alpha_{s}\in$
(30$^{\mbox{o}}$--150$^{\mbox{o}}$): (left) low, 5\%, turbulence intensity;
(right) high, 30\%, turbulence intensity. The initial coordinates of
the flyer are marked as $\times$. \label{fig:turbulence_intensity} }
\end{figure}
\begin{table}[ht]
\begin{centering}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Parameter & \multicolumn{3}{c|}{5\% turbulent intensity} & \multicolumn{3}{c|}{30\% turbulent intensity}\tabularnewline
\hline
\hline
& Median & Mean & STD & Median & Mean & STD\tabularnewline
\hline
Trajectory length {[}m{]} & 104.2 & 105.0 & 3.5 & 114.6 & 115.5 & 6.8\tabularnewline
\hline
Flying time {[}s{]} & 392.2 & 402.6 & 30.8 & 430.9 & 438.8 & 42.9\tabularnewline
\hline
r.m.s. of deviations {[}m{]} & 0.43 & 0.57 & 0.44 & 0.49 & 0.60 & 0.55\tabularnewline
\hline
\end{tabular}
\end{centering}
\caption{Simulation results for the two levels of turbulent intensity: trajectory
length, time of flight and root-mean-square (r.m.s) of the lateral
deviations from the general flight direction. The 5\% (low) turbulence
intensity pertains to field or wind tunnel conditions. The 30\% (high)
turbulent intensity pertains to the air conditions in dense canopy
layers, in forests for example. \label{tab:Success-rates}}
\end{table}
The following conclusions can be drawn from our numerical simulations,
illustrated in Fig.~\ref{fig:turbulence_intensity} and in Table~\ref{tab:Success-rates}.
Comparison of the simulation results for the two levels of turbulent
intensity suggests that:
\begin{enumerate}
\item The variance of the transverse deviations of the moth from the straight
path grows with the turbulence intensity.
\item The number of search cycles is reduced in the high turbulence intensity
case because of the larger spread of the pheromone patches and,
therefore, the higher rate of interception of patches of detectable
odour by the male moth.
\item The number of times a male moth crosses filaments of odour is significantly
larger because of the more frequently changing wind direction.
\item The range of the search is wider but the number of zigzags is lower.
\item The flight time along the search trajectory increases and can therefore
become larger than the time permitted for the navigator to find
the source of the odour.
\end{enumerate}
The results of systematic numerical simulations suggest that the probability
of successful events does not depend significantly on either $\alpha_{smin}$
or $\alpha_{smax}$. The probability of a successful attempt of
the flyer to locate the source of the odour is about 80\%. This quantity
does not depend on the turbulent intensity adopted in our simulations
and is close to the probability of successful events observed
in relevant field experiments, e.g. \citet{Carde2008}.
\section{Summary}\label{sec:summary}
A mechanistic approach to a male moth searching for a female moth for mating purposes is considered. A female moth attracts a male moth by emitting into the
atmosphere discrete periodic pulses of a sex pheromone. The patches
of the pheromone disperse in the atmosphere and propagate downwind.
The moth is considered as a self-propelled flyer, which senses the
odour of the pheromone in the patches. The sensitivity of the flyer
to the odour is limited. The corresponding lower detectability limit
defines the contours of the odour patches and their representative geometry.
From the point of view of the flyer, the atmosphere consists of patches
of constant odour and of clean air between the patches. The navigator
makes a decision to change the direction of its motion by sensing
the abrupt change of the odour from constant to zero (or vice versa).
After the decision is made, the flyer uses additional
information, that is, the wind direction and the time of crossing
the last patch of odour. An algorithm for navigation in a patchy
plume, which incorporates such a minimal number of control parameters,
is proposed.
A patchy turbulent plume of odour was simulated using a mathematical
model of Gaussian puffs. The trajectories of the flyer were simulated
numerically using the navigation algorithm suggested here.
According to our numerical experiments, the algorithm is robust. However,
the problem considered involves a large number of biotic and abiotic
variables influencing the success of the navigator. Among such variables
are: the vector of wind speed, the level of turbulent intensity, the
frequency of the pheromone emission, the random initial coordinates
of the flyer, the thresholds of detectability of the odour by the
moth, the initial distance between the male and female, and the direction
of motion immediately after the flyer decided to change it. The aim
of our work was to suggest a novel biologically inspired algorithm
to search the source of an odour in a turbulent atmosphere, when the
chemical substance creating the odour is emitted as periodical pulses.
Additional study is necessary to apply the suggested algorithm
of search to control systems of autonomous vehicles designed to detect
and to locate the source of an odour.
\section*{Acknowledgements}
This study is partially supported by the U.S.-Israel Binational Science
Foundation (BSF) under grant 2013399. The authors are thankful to Nimrod Daniel for the simulation results.
\bibliographystyle{apalike2}
\section{Introduction}
\label{intro}
The electromagnetic emissivity of strongly interacting matter is a
subject of longstanding interest \cite{FeinbShur,ChSym}
and is explored in particular in
relativistic nucleus-nucleus collisions, where the photons (and
dileptons) measured experimentally provide a time-integrated picture
of the collision dynamics. The recent observation by the PHENIX
Collaboration~\cite{PHENIX2010} that the elliptic flow $v_2(p_T)$ of
'direct photons' produced in minimal bias Au+Au collisions at
$\sqrt{s_{NN}}=200$~GeV is comparable to that of the produced pions
was a surprise and in contrast to the theoretical expectations and
predictions. We will analyse this photon $v_2$ "puzzle" within
scope of the parton-hadron string dynamics (PHSD) transport approach \cite{PHSD}
with a focus on the centrality dependence of the different production sources.
Furthermore, the PHSD approach will be used to study dilepton production
in nucleus-nucleus collisions from SIS to LHC energies in comparison to available data
in order to extract information about
the modification of hadron properties in the dense and hot hadronic
medium which might shed some light on chiral symmetry restoration
(cf. \cite{ChSym} and references therein). On the other hand we intend to
identify those spectral regimes where we see a clear dominance of partonic channels,
which might allow us to determine their transport properties via their
electromagnetic emissivity.
\section{Photon/dilepton emission rates}
\noindent In hydrodynamical calculations for the time evolution of
the bulk matter the equilibrium emission rate of electromagnetic
probes enters which in thermal field theory can be expressed as \cite{Rate1,Rate2}: \\
\begin{eqnarray}
q_0 {d^3R\over d^3q}= -{\dfrac{g_{\mu\nu}}{(2\pi)^3}}
Im \Pi^{\mu\nu} (q_0=|\vec{q}|) f(q_0,T);
\label{RatePh}
\end{eqnarray}
for photons with 4-momentum $q=(q_0,\vec q)$ and
\begin{eqnarray}
E_+E_- {d^3R \over {d^3p_+d^3p_-}}= \dfrac{2e^2}{(2\pi)^6} \dfrac{1}{q^4}
L_{\mu\nu} Im \Pi^{\mu\nu} (q_0,|\vec{q}|)f(q_0,T).
\label{RateDil}
\end{eqnarray} for dilepton pairs
with 4-momentum $q=(q_0,\vec q)$, where $q=p_+ + p_-$ and
$p_+=(E_+,\vec p_+), p_-=(E_-,\vec p_-)$. Here the Bose distribution
function is $f(q_0,T) =1/(e^{q_0/T}-1)$; $L_{\mu\nu}$ is the
electromagnetic leptonic tensor, $\Pi^{\mu\nu}$ is the retarded
photon self-energy at finite temperature $T$ related to the
electromagnetic current correlator $\Pi^{\mu\nu} \sim i\int d^4x
e^{ipx}\left\langle[J_\mu(x),J_\nu(0)]\right\rangle|_T $. Using the
Vector-Dominance-Model (VDM) $Im\Pi^{\mu\nu}$ can be related to the
in-medium $\rho$-meson spectral function from many-body approaches
\cite{RappWam97} which, thus, can be probed by dilepton measurements
directly. The photon rates for $q_0\to 0$ are related to the
electric conductivity $\sigma_0$, which allows one to probe the electric
properties of the QGP \cite{Cass13PRL}. We point out that
Eqs.~(\ref{RatePh}),(\ref{RateDil}) are strictly applicable only for
systems in thermal equilibrium whereas the dynamics of heavy-ion
collisions is generally of non-equilibrium nature.
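The structure of the equilibrium rate (\ref{RatePh}) can be illustrated numerically with a toy spectral function. The quadratic form of ${\rm Im}\,\Pi$ below is purely an illustrative assumption (no realistic spectral function is implied); the sketch only shows how the Bose factor suppresses the rate at photon energies $q_0 \gg T$.

```python
import math

def bose(q0, T):
    """Bose distribution f(q0, T) = 1 / (exp(q0/T) - 1)."""
    return 1.0 / math.expm1(q0 / T)

def photon_rate(q0, T, im_pi_traced):
    """Thermal photon rate of Eq. (1):
    q0 dR/d^3q = -g_{mu nu}/(2 pi)^3 Im Pi^{mu nu}(q0=|q|) f(q0, T).
    `im_pi_traced(q0)` must return the contracted -g_{mu nu} Im Pi^{mu nu}."""
    return im_pi_traced(q0) * bose(q0, T) / (2.0 * math.pi) ** 3

# Toy spectral function (illustrative assumption only): Im Pi ~ q0^2
toy = lambda q0: q0 ** 2

rate_low = photon_rate(0.5, 0.2, toy)   # q0 = 0.5 GeV at T = 0.2 GeV
rate_high = photon_rate(3.0, 0.2, toy)  # q0 = 3.0 GeV at T = 0.2 GeV
```

Even with a spectral function growing as $q_0^2$, the thermal occupation factor makes the emission at $q_0=3$ GeV orders of magnitude smaller than at $q_0=0.5$ GeV for $T=0.2$ GeV.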
\noindent The non-equilibrium emission rate from relativistic
kinetic theory \cite{Rate2,RateKT2}, e.g. for the process $1+2\to
\gamma +3$, is
\begin{equation}
q_0 {d^3R\over d^3q}= \int \dfrac{d^3p_1}{2(2\pi)^3 E_1}
\dfrac{d^3p_2}{2(2\pi)^3 E_2} \dfrac{d^3p_3}{2(2\pi)^3 E_3} \
(2\pi)^4 \ \delta^4(p_1+p_2-p_3-q) \ |M_{if}|^2 \
\dfrac{f(E_1) f(E_2) (1\pm f(E_3))}{2(2\pi)^3},
\label{RateKT}
\end{equation}
where $f(E_i)$ is the distribution function of particle $i=1,2,3$,
which can be hadrons (mesons and baryons) or partons. In Eq.
(\ref{RateKT}) $M_{if}$ is the matrix element of the reaction which
has to be evaluated on a microscopic level. In the case of
hadronic reactions, One-Boson-Exchange models or chiral models are
used to evaluate $M_{if}$ at the level of Born-type diagrams.
However, for a consistent consideration of such elementary processes
in the dense and hot hadronic environment, it is important to
account for the in-medium modification of hadronic properties, i.e.
many-body approaches such as self-consistent $G$-matrix calculations
have to be applied (e.g. \cite{Gmatrix} for anti-kaons or
\cite{RappWam97} for $\rho$ mesons).
\section{Photons}
\subsection{Production sources}
There are different production sources of photons in $p+p$ and $A+A$ collisions:\\
1) {\it Decay photons } - most of the photons seen in $p+p$ and
$A+A$ collisions stem from the hadronic decays:
$m \to \gamma + X, m = \pi^0, \eta, \omega, \eta^\prime, a_1, ...$. \\
2) {\it Direct photons} - obtained by subtraction of the decay photon contributions from
the inclusive (total) spectra measured experimentally.\\
(i) There are a few sources of direct photons at large transverse
momentum $p_T$, denoted as {\it 'hard'} photons: the 'prompt'
production from the initial hard $N+N$ collisions and the photons
from jet fragmentation reactions, which are standard pQCD
types of processes. The latter, however, might be modified in $A+A$
collisions, contrary to $p+p$, due to the parton energy loss in the medium. \\
(ii)
At low $p_T$ the photons come from the thermalized QGP, so called {\it 'thermal'} photons,
as well as from {\it hadronic} interactions: \\
$\bullet$
The {\it 'thermal'} photons from the QGP arise mainly from $q\bar q$ annihilation
($q+\bar q \to g +\gamma$) and Compton scattering ($q(\bar q) + g \to q(\bar q) + \gamma$)
which can be calculated in leading order pQCD \cite{AMY01}.
However, next-to-leading order corrections also turn out to be important \cite{JacopoQM14}.\\
$\bullet$
{\it Hadronic} sources of photons are related to \\
1) secondary mesonic interactions as $\pi + \pi \to \rho + \gamma, \
\rho + \pi \to \pi + \gamma, \ \pi + K \to \rho + \gamma, ....$ The
binary channels with $\pi, \rho$ have been evaluated in effective
field theory \cite{Kapusta91} and are used in transport model
calculations \cite{HSD08,Linnyk:Photon} within the extension for the
off-shellness of $\rho$-mesons due to the broad spectral function.
Alternatively, the binary hadron rates (\ref{RateKT}) have been
derived in the massive Yang-Mills approach in Ref. \cite{TRG04} and
have often been used in hydro calculations. \\
2) hadronic bremsstrahlung, such as meson-meson ($mm$) and
meson-baryon ($mB$) bremsstrahlung $m_1+m_2\to m_1+m_2+\gamma, \ \
m+B\to m+B+\gamma$, where $m=\pi,\eta,\rho,\omega,K,K^*,...$ and
$B=p,\Delta, ...$. Here the leading contribution corresponds to the
radiation from one charged hadron. The importance of bremsstrahlung
contributions to the photon production will be discussed below.
\subsection{Direct photons and the $v_2$ 'puzzle'}
Photon production was measured early on in relativistic
heavy-ion collisions by the WA98 Collaboration in S+Au and Pb+Pb
collisions at SPS energies \cite{WA98}. The model comparisons with
experimental data show that the high $p_T$ spectra are dominated by
the hard 'prompt' photon production whereas the 'soft' low $p_T$
spectra stem from hadronic sources since the thermal QGP radiation
at SPS energies is not large. Moreover, the role of hadronic
bremsstrahlung turns out to be very important for a consistent
description of the low $p_T$ data, as was found several years ago
in expanding fireball model calculations \cite{LiuRapp07}
and in the HSD (Hadron-String-Dynamics) transport approach
\cite{HSD08}. Unfortunately, the accuracy of the experimental data
at low $p_T$ did not allow to draw further solid conclusions.
The measurement of photon spectra by the PHENIX Collaboration
\cite{PHENIX2010} has stimulated a new wave of interest for direct
photons from the theoretical side since at RHIC energies the thermal
QGP photons have been expected to dominate the spectra. A variety of
model calculations based on fireball, Bjorken hydrodynamics, ideal
hydrodynamics with different initial conditions and
Equations-of-State (EoS) turned out to show substantial differences
in the slope and magnitude of the photon spectra (for a model
comparison see Fig. 47 of \cite{PHENIX2010} and corresponding
references therein). Furthermore, the recent observation by the
PHENIX Collaboration \cite{PHENIX1} that the elliptic flow $v_2(p_T)
$ of 'direct photons' produced in minimal bias Au+Au collisions at
$\sqrt{s_{NN}}=200$~GeV is comparable to that of the produced pions
was a surprise and in contrast to the theoretical expectations and
predictions. Indeed, the photons produced by partonic interactions
in the quark-gluon plasma phase have not been expected to show a
considerable flow because - in a hydrodynamical picture - they are
dominated by the emission at high temperatures, i.e. in the initial
phase before the elliptic flow fully develops. Since the direct
photon $v_2(\gamma^{dir})$ is a 'weighted average' ($w_i$) of the
elliptic flow of individual contributions $i$
\begin{eqnarray}
\label{v2dir}
v_2 (\gamma^{dir}) = \sum _i v_2 (\gamma^{i})
w_i = \frac{\sum _i v_2 (\gamma^{i}) N_i }{\sum_i N_i },
\end{eqnarray}
a large QGP contribution gives a smaller $v_2(\gamma^{dir})$. A
sizable photon $v_2$ has been observed also by the ALICE
Collaboration in Pb+Pb collisions at the LHC \cite{ALICE_v2}. None
of the theoretical models could simultaneously describe the photon
spectra and $v_2$, which may be noted as a 'puzzle' for theory.
Moreover, the PHENIX and ALICE Collaborations have reported recently
the observation of non-zero triangular flow $v_3$ (see
\cite{RuanQM14,BockQM14}). Thus, the consistent description of the
photon experimental data remains a challenge for theory.
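The dilution of the direct-photon elliptic flow by the QGP channel in Eq. (\ref{v2dir}) can be made explicit with a short numerical example; the yields $N_i$ and flow coefficients $v_2^i$ below are purely illustrative numbers, not model results.

```python
def v2_direct(contributions):
    """Weighted average of Eq. (4):
    v2(dir) = sum_i v2_i N_i / sum_i N_i,
    where `contributions` is a list of (v2_i, N_i) pairs."""
    total = sum(n for _, n in contributions)
    return sum(v2 * n for v2, n in contributions) / total

# Illustrative channels: a QGP source with small v2 = 0.02 and a
# hadronic source with large v2 = 0.10, for two yield splittings.
small_qgp = v2_direct([(0.02, 1.0), (0.10, 3.0)])  # QGP yield small
large_qgp = v2_direct([(0.02, 3.0), (0.10, 1.0)])  # QGP yield large
```

Increasing the relative QGP yield pulls the total direct-photon $v_2$ down towards the small partonic value, which is why a large QGP contribution is hard to reconcile with the large measured flow.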
\subsection{Transport analysis of the photon $v_2$ 'puzzle'}
It is important to stress that state-of-the art hydro models
reproduce well the hadronic 'bulk' observables (e.g. rapidity
distributions, $p_T$ spectra and $v_2, v_3$ of hadrons). However, in
spite of definite improvements of the general dynamics by including
the fluctuating initial conditions (IP-Glasma or MC-Glauber type)
and viscous effects, the hydro models underestimate the spectra and
$v_2$ of photons at RHIC and LHC energies. For a recent overview we
refer the reader to Ref. \cite{UHeinz}.
As a 'laboratory' for a detailed theoretical analysis we use the
microscopic Parton-Hadron-String Dynamics (PHSD) transport approach
\cite{PHSD}, which is based on the generalized off-shell transport
equations derived in first order gradient expansion of the
Kadanoff-Baym equations, and applicable for strongly interacting
systems. The approach consistently describes the full evolution of a
relativistic heavy-ion collision from the initial hard scatterings
and string formation through the dynamical deconfinement phase
transition to the strongly-interacting quark-gluon plasma as well
as dynamical hadronization and the subsequent interactions in the
expanding hadronic phase as in the HSD transport approach
\cite{CBRep98}. The partonic dynamics is based on the Dynamical
Quasi-Particle Model (DQPM), that is constructed to reproduce
lattice QCD (lQCD) results for a quark-gluon plasma in thermodynamic
equilibrium. The DQPM provides the mean fields for gluons/quarks and
their effective 2-body interactions that are implemented in the PHSD
(for the details see Ref.~\cite{Cassing:2008nn} and
\cite{Linnyk:Photon,PHSD}). The PHSD model reproduces a large
variety of observables from SPS to LHC energies, e.g. transverse
mass and rapidity spectra of charged hadrons, dilepton spectra,
collective flow coefficients etc. \cite{PHSD,Linnyk:Photon}. Since
the QGP radiation in PHSD occurs from the massive off-shell
quasi-particles with spectral functions, the corresponding QGP rate
has been extended beyond the standard pQCD rate \cite{AMY01} - see
Ref. \cite{Linnyk11}.
\begin{figure}
\begin{center}
\includegraphics*[width=0.85\textwidth]{photonspectra}
\caption{Direct photon $p_T$-spectrum from the PHSD approach in
comparison to the PHENIX data \cite{PHENIX2010} at midrapidity for
different centralities in Au+Au collisions at $\sqrt{s_{NN}}=200$
GeV. The channel description is given in the legend. The figure is
taken from Ref. \cite{Linnyk:Photon}. } \label{fig1}
\end{center}
\end{figure}
\begin{figure}
\hspace{0.2cm}
\includegraphics*[width=0.4\textwidth]{scaling}\includegraphics*[width=0.5 \textwidth]{Photonv2}
\caption{(l.h.s.) Integrated spectra of thermal photons produced in
Au + Au collisions at $\sqrt{s_{NN}}=200$ GeV versus the number of
participants $N_{part}$. The scaling with $N_{part}$ of the QGP
contributions (full dots) and of the bremsstrahlung channels (full
triangles) is shown separately. (r.h.s.) The elliptic flow
$v_2(p_T)$ of direct photons produced by binary processes in Au +
Au collisions at $\sqrt{s_{NN}}=200$ GeV for different
centralities versus the photon transverse momentum $p_T$. The
hatched area displays the statistical uncertainty (for the most
central bin). } \label{fig2}
\end{figure}
The result of the PHSD approach \cite{Linnyk:Photon} for the direct
photon $p_T$-spectrum at midrapidity for Au+Au collisions at
$\sqrt{s_{NN}}=200$ GeV is shown in Fig. \ref{fig1} for different
centralities in comparison to the PHENIX data \cite{PHENIX2010}. The
upper solid lines give the total direct photon spectra whereas the
various lines show the contributions from individual channels (see
legend). While the 'hard' $p_T$ spectra are dominated by the
'prompt' (pQCD) photons, the 'soft' spectra are filled by the
'thermal' sources: the QGP gives up to $\sim 50\%$ of the direct photon
yield between 1 and 2 GeV/$c$ for the most central bin (0-20 \%), a
sizable contribution stems from hadronic sources such as meson-meson
($mm$) and meson-baryon ($mB$) bremsstrahlung; the contributions
from binary $mm$ reactions are of subleading order. Thus, according
to the present PHSD results the $mm$ and $mB$ bremsstrahlung turn
out to be an important source of direct photons. We note that the
bremsstrahlung channels are not included in the $mm$ binary 'HG'
rate \cite{TRG04} used in the hydro calculations mentioned above. We
stress that $mm$ and $mB$ bremsstrahlung cannot be subtracted
experimentally from the photon spectra and has to be included in
theoretical considerations. As has been pointed out earlier, its
importance for 'soft' photons follows also from the WA98 data at
$\sqrt{s_{NN}}=17.3$ GeV \cite{HSD08,LiuRapp07}.
However, some words of caution are in order here concerning the
uncertainties in the bremsstrahlung channels in the present PHSD
results. The implementation of photon bremsstrahlung from hadronic
reactions in transport approaches \cite{HSD08,Linnyk:Photon} is
based on the 'soft-photon' approximation (SPA) \cite{Rate2} which
implies the factorization of the amplitude for the $a+b\to
a+b+\gamma$ processes to the strong and electromagnetic parts
assuming that the radiation from internal lines is negligible and
the strong interaction vertex is on-shell. In this case the strong
interaction part can be approximated by the on-shell elastic cross
section for the reaction $a+b \to a+b$. Thus, the resulting yield of
the bremsstrahlung photons depends both on the validity of the SPA
itself at large $p_T$ and on the assumptions made for the meson-meson
and meson-baryon elastic cross sections, which are poorly (or not at
all) constrained experimentally. For a more detailed discussion of
uncertainties we refer the reader to Ref. \cite{Linnyk:Photon}. In
this respect we consider the PHSD results on bremsstrahlung as an
'upper estimate'.
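For orientation, the factorization underlying the SPA can be written schematically. To leading order in the photon momentum $q$ (Low's theorem), the radiation amplitude factorizes into the on-shell elastic amplitude times a pointlike current factor,
\begin{equation}
\mathcal{M}(a+b\to a+b+\gamma) \simeq e\, \mathcal{M}_{el}(a+b\to a+b)\, \sum_{i}\eta_i\,\frac{p_i\cdot \epsilon^*}{p_i\cdot q},
\end{equation}
where the sum runs over the charged external lines with four-momenta $p_i$, the factors $\eta_i$ encode the charges and the incoming/outgoing signs, and $\epsilon$ denotes the photon polarization. Squaring and integrating over phase space then ties the bremsstrahlung rate directly to the on-shell elastic cross section, which is why the poorly known $mm$ and $mB$ elastic cross sections propagate directly into the photon yield.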
The question ``what dominates the photon spectra - {\it QGP
radiation or hadronic contributions}?'' can be addressed
experimentally by investigating the centrality dependence of the
photon yield: the QGP contribution is expected to decrease when
going from central to peripheral collisions where the hadronic
channels should be dominant. The centrality dependence of the direct
photon yield, integrated over different $p_T$ ranges, has been
measured by the PHENIX Collaboration, too
\cite{PHENIXcd14,MizunoQM14}. It has been found that the midrapidity
'thermal' photon yield scales with the number of participants as
$dN/dy \sim N_{part}^\alpha$ with $\alpha =1.48\pm 0.08$ and only
very slightly depends on the selected $p_T$ range (which is still in
the 'soft' sector, i.e. $< 1.4$ GeV/$c$). Note that the 'prompt'
photon contribution (which scales as the $pp$ 'prompt' yield times
the number of binary collisions in $A+A$) has been subtracted from
the data. The PHSD predictions \cite{Linnyk:Photon} for Au+Au
collisions at different centralities give $\alpha (total) \approx
1.5$, which is dominated by hadronic contributions, while the QGP
channels scale with $\alpha (QGP) \sim 1.7$ (see Fig.~\ref{fig2} (l.h.s.)). A
similar finding has been obtained by the viscous (2+1)D VISH2+1 and
(3+1)D MUSIC hydro models \cite{VISH_McG}: $\alpha(HG) \sim 1.46, \
\ \alpha(QGP) \sim 2, \ \ \alpha(total) \sim 1.7$. Thus, the QGP
photons show a centrality dependence significantly stronger than
that of hadron-gas (HG) photons.
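The quality of such a power-law scaling $dN/dy \sim N_{part}^\alpha$ is conveniently checked by a linear fit in log-log space. The short Python sketch below illustrates the procedure; the $N_{part}$ values and yields are purely illustrative synthetic numbers, not the actual PHENIX data.

```python
import numpy as np

def fit_npart_scaling(n_part, dn_dy):
    """Fit dN/dy = A * N_part**alpha by linear regression in log-log space."""
    # np.polyfit with deg=1 returns (slope, intercept); the slope is alpha.
    slope, intercept = np.polyfit(np.log(n_part), np.log(dn_dy), 1)
    return np.exp(intercept), slope   # (A, alpha)

# Illustrative centrality classes (synthetic numbers, not measured data):
n_part = np.array([22.6, 60.0, 166.6, 280.5, 350.9])
dn_dy = 0.01 * n_part**1.5            # synthetic yields with alpha = 1.5
A, alpha = fit_npart_scaling(n_part, dn_dy)
print(round(alpha, 3))                # -> 1.5
```

For data with statistical errors one would weight the regression accordingly; on an exact power law the fit recovers $\alpha$ to machine precision.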
In Fig.~\ref{fig2} (r.h.s.) we provide predictions for the
centrality dependence of the direct photon elliptic flow $v_2(p_T)$
within the PHSD approach. The direct photon $v_2$ is seen to be
larger in the peripheral collisions compared to the most central
ones. The predicted centrality dependence of the direct photon flow
results from the interplay of two independent factors: First, the
channel decomposition of the direct photon yield changes (cf.
Fig.~\ref{fig1}): the admixture of photons from the hadronic phase increases for
more peripheral collisions. Since the PHSD approach predicts a very
small $v_2$ of photons produced in the initial hot deconfined phase
by partonic channels, of the order of 2\%, the photon flow $v_2$
shows about the same signal for the most central bin. On the other
hand, the photons from the hadronic sources show a strong elliptic
flow (up to 10\%), on the level of the $v_2$ of final hadrons.
Accordingly, since the channel decomposition of the direct photons
changes with centrality, the elliptic flow of the direct photons
increases with decreasing centrality and becomes roughly comparable
with the elliptic flow of pions in peripheral collisions. The
elliptic flow in the most peripheral bin is also low in
Fig.~\ref{fig2} (r.h.s.) because all the colliding particles have
little flow at this high impact parameter $b$.
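The interplay described above can be captured by a minimal two-component mixing model (a sketch only, not the PHSD calculation itself): the direct-photon $v_2$ is the yield-weighted average of the channel flows, so shifting weight from QGP photons (small $v_2$) to hadronic photons (hadron-like $v_2$) raises the total flow. The channel $v_2$ values below are the $\sim$2\% and $\sim$10\% quoted in the text; the yield fractions are made up for illustration.

```python
def direct_photon_v2(yields, v2s):
    """Yield-weighted elliptic flow: v2_dir = sum(N_i * v2_i) / sum(N_i)."""
    return sum(n * v2 for n, v2 in zip(yields, v2s)) / sum(yields)

# QGP-dominated (central-like) vs hadron-dominated (peripheral-like) mixes:
central = direct_photon_v2([0.7, 0.3], [0.02, 0.10])
peripheral = direct_photon_v2([0.2, 0.8], [0.02, 0.10])
print(round(central, 3), round(peripheral, 3))   # -> 0.044 0.084
```

Even this crude average reproduces the qualitative trend: the direct-photon $v_2$ grows as the hadronic admixture grows towards peripheral collisions.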
\section{Dileptons}
\subsection{Production sources}
Dileptons ($e^+e^-$ or $\mu^+\mu^-$ pairs) can be emitted from all
stages of the reactions, just as photons are. One of the advantages
of dileptons compared to photons is an additional 'degree of
freedom' - the invariant mass $M$ - which allows one to disentangle
the various sources. The leading production sources of dileptons in
$p+p, p+A$ and $A+A$ collisions are:\\
1) Hadronic sources:\\
(i) at low invariant masses ($M < 1$ GeV/$c^2$) -- the Dalitz decays of mesons and
baryons $(\pi^0,\eta,\Delta, ...)$ and the direct decay of
vector mesons $(\rho, \omega, \phi)$ as well as hadronic bremsstrahlung; \\
(ii) at intermediate masses ($1< M < 3$ GeV/$c^2$) --
leptons from correlated $D+\bar D$ pairs, radiation
from multi-meson reactions
($\pi+\pi, \ \pi+\rho, \ \pi+\omega, \ \rho+\rho, \ \pi+a_1, ... $) -
so called $'4\pi'$ contributions; \\
(iii) at high invariant masses ($M > 3$ GeV/$c^2$) -- the direct decay of
vector mesons $(J/\Psi, \Psi^\prime)$ and
initial 'hard' Drell-Yan annihilation to dileptons
($q+\bar q \to l^+ +l^-$, where $l=e,\mu$).\\
2) 'thermal' QGP dileptons radiated from the partonic interactions
in heavy-ion ($A+A$) collisions that contribute dominantly to
the intermediate masses. The leading processes
are the 'thermal' $q\bar q$ annihilation ($q+\bar q \to l^+ +l^-$, \ \
$q+\bar q \to g+ l^+ +l^-$) and Compton scattering
($q(\bar q) + g \to q(\bar q) + l^+ +l^-$).
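The extra 'degree of freedom' is simply the pair invariant mass reconstructed from the two lepton four-momenta, $M^2=(p_+ + p_-)^2$; a minimal sketch (with $c=1$ and illustrative momenta):

```python
import math

def invariant_mass(p1, p2):
    """Pair invariant mass from four-momenta (E, px, py, pz), units with c = 1."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(e * e - px * px - py * py - pz * pz)

# Two back-to-back 1 GeV leptons (lepton masses neglected): M = 2 GeV,
# i.e. in the intermediate-mass region dominated by QGP radiation.
print(invariant_mass((1.0, 1.0, 0.0, 0.0), (1.0, -1.0, 0.0, 0.0)))  # -> 2.0
```

Binning measured pairs in $M$ is what separates the Dalitz, vector-meson, open-charm and Drell-Yan regions listed above.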
\subsection{Transport results from SIS to LHC energies}
At energies around 1 $A$GeV dileptons have been measured in
heavy-ion collisions at the BEVALAC in Berkeley by the DLS
Collaboration more than two decades ago. These data led to the so
called 'DLS puzzle' because the DLS dilepton yield in C+C and
Ca+Ca collisions at 1 $A$GeV in the invariant mass range from 0.2 to
0.5 GeV was about five times higher than the results from different
transport models at that time using the 'conventional' dilepton
sources such as bremsstrahlung, $\pi^0, \eta, \omega$ and $\Delta$
Dalitz decays and direct decay of vector mesons ($\rho, \omega,
\phi$) \cite{Bratkovskaya:1996bv}. To solve this puzzle was one of
the main motivations to build the HADES (High Acceptance Dilepton
Spectrometer) detector at GSI \cite{Agakishiev:2011vf}.
Indeed the HADES Collaboration could confirm the DLS measurements at
1 $A$GeV when passing their C+C events through the DLS filter
\cite{xxhades}. From the theory side it was argued that the pn
bremsstrahlung channel should be sizeably enhanced as compared to
the early soft photon calculations \cite{Bratkovskaya:1996bv}.
Indeed, a good reproduction of various spectra at different energies
could be achieved within the HSD calculations in Ref.
\cite{xxbraca}. Note, however, that even the bremsstrahlung from pn
reactions at these low energies is discussed controversially in the
community and is not accessible experimentally. Here we report on the
current status of the transport calculations in comparison to the
HADES data \cite{BratAich}.
\begin{figure}[th!]
\includegraphics*[width=6.8cm]{M-CC20.eps}\includegraphics*[width=6.8cm]{M-ArKac.eps}
\vspace*{5mm}
\caption{The mass differential dilepton spectra - normalized to the
$\pi^0$ multiplicity
- from HSD calculations for C+C at 2 $A$GeV (l.h.s.) and Ar+KCl
at 1.76 $A$GeV (r.h.s.) in
comparison to the HADES data
\cite{Agakishiev:2009yf,Agakishiev:2011vf}. The upper parts (a)
show the case of 'free' vector-meson spectral functions while the
lower parts (b) give the result for the 'collisional broadening'
scenario. The different colour lines display individual channels in
the transport calculation (see legend). The theoretical
calculations passed through the corresponding HADES acceptance
filter and mass/momentum resolutions. } \label{Fig_CC20}
\end{figure}
Fig. \ref{Fig_CC20} (l.h.s.) shows the mass differential dilepton
spectra - normalized to the $\pi^0$ multiplicity - from HSD
calculations for C+C at 2 $A$GeV in comparison to the HADES data
\cite{Agakishiev:2009yf}. The theoretical calculations passed
through the corresponding HADES acceptance filters and mass/momentum
resolutions which leads to a smearing of the spectra at high
invariant mass and particularly in the $\omega$ peak region. The
upper part shows the case of 'free' vector-meson spectral functions
while the lower part presents the result for the 'collisional $\rho$
broadening' scenario. Here the difference between in-medium
scenarios is of minor importance and partly due to the limited mass
resolution which smears out the spectra. Fig. \ref{Fig_CC20}
(r.h.s.) displays the mass differential dilepton spectra -
normalized to the $\pi^0$ multiplicity - from HSD calculations for
the heavier system Ar+KCl at 1.76 $A$GeV in comparison to the HADES
data \cite{Agakishiev:2011vf}. The upper part shows again the case
of 'free' vector-meson spectral functions while the lower part gives
the result for the 'collisional broadening' scenario. Also in this
data set the enhancement around the $\rho$ mass is clearly visible.
For the heavier system the 'collisional broadening' scenario shows a
slightly better agreement with experiment than the 'free' result and
we expect that for larger systems the difference between the two
approaches increases. We note that with increasing mass of the
colliding system A+A the dilepton yield in the low mass regime from
roughly 0.15 to 0.5 GeV increases due to multiple
$\Delta$-resonance production and Dalitz
decay. The dileptons from intermediate $\Delta$'s, which are part of
the reaction cycles $\Delta \to \pi N ; \pi N \to \Delta$ and $NN\to
N\Delta; N\Delta \to NN$, escape from the system while the decay
pions do not \cite{BratAich}. With increasing system size more generations of
intermediate $\Delta$'s are created and the dilepton yield is enhanced
accordingly. In inclusive C+C collisions there is only a moderate
enhancement relative to scaled p+p and p+n collisions due to the
small size of the system while in Ar+KCl reactions already several
(3-4) reaction cycles become visible. On the other hand the effects
from a broadened vector-meson spectral function are barely visible
for both systems, which calls for Au+Au collisions. Indeed, the
respective data have been taken and are currently analyzed. For
detailed predictions we refer the reader to Ref. \cite{BratAich}.
Dileptons from heavy-ion collisions at SPS energies have been
measured in the last decades by the CERES \cite{CERES} and NA60
\cite{NA60} Collaborations. The high accuracy dimuon NA60 data
provide a unique possibility to subtract the hadronic cocktail from
the spectra and to distinguish different in-medium scenarios for
the $\rho$-meson spectral function such as a collisional broadening
and dropping mass \cite{ChSym,Li:1995qm}. The main messages obtained
by a comparison of the variety of model calculations (see e.g.
\cite{ChSym,dilHSD,dilSPStheor}) with experimental data can be
summarized as follows: \\ (i) the low mass spectra \cite{CERES,NA60} provide
a clear evidence for the collisional broadening of the $\rho$-meson
spectral function in the hot and dense medium; \\ (ii) the
intermediate mass spectra above $M>1$ GeV/$c^2$ \cite{NA60} are
dominated by partonic radiation; \\ (iii) the rise and fall of the
inverse slope parameter of the dilepton $p_T$-spectra (effective
temperature) $T_{eff}$ \cite{NA60} provide evidence for the thermal
QGP radiation; \\ (iv) isotropic angular distributions \cite{NA60}
are an indication for a thermal origin of dimuons. \\
An increase in energy from SPS to RHIC has opened new possibilities
to probe with dileptons a possibly different kind of matter at very
high temperature, i.e. dominantly in the QGP stage, created in central
heavy-ion collisions. The dileptons ($e^+e^-$ pairs) have been
measured first by the PHENIX Collaboration for $pp$ and $Au+Au$
collisions at $\sqrt{s_{NN}}=200$ GeV \cite{PHENIXdil}. A large
enhancement of the dilepton yield relative to the scaled $pp$
collisions in the invariant mass regime from 0.15 to 0.6 GeV/$c^2$
has been reported for central Au+Au reactions. This observation has
stimulated a lot of theoretical activity (see the model comparison
with the data in Ref. \cite{PHENIXdil}). The main messages - which hold
up to now - can be condensed as follows: the theoretical models, which
provide a good description of $pp$ dilepton data and peripheral
$Au+Au$ data, fail in describing the excess in central collisions
even with in-medium scenarios for the vector-meson spectral
function \cite{dilHSD}. The missing strength might be attributed to low $p_T$
sources \cite{dilPHSDRHIC}. On the other hand the intermediate mass
spectra are dominated by the QGP radiation as well as leptons from
correlated charm pairs ($D+\bar D$) \cite{dilHSD,dilPHSDRHIC,Rapp13}.
\begin{figure}[t]
\hspace{1cm}
\includegraphics*[width=0.9\textwidth]{star}
\caption{Centrality dependence of the midrapidity dilepton yields
(left) and their ratios (right) to the 'cocktail' for 0-10\%, 10-40\%,
40-80\%, 0-80\% central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV: a
comparison of STAR data with theoretical predictions from the PHSD
('PHSD' - dashed lines) and the expanding fireball model ('Rapp' -
solid lines). The figure is taken from Ref. \cite{HuckQM14}.}
\label{fig:dilSTAR200cd}
\end{figure}
In this respect it is very important to have independent
measurements which have been carried out by the STAR Collaboration
\cite{dilSTAR}. Fig. \ref{fig:dilSTAR200cd} shows the comparison of
STAR data of midrapidity dilepton yields (l.h.s.) and their ratios
(r.h.s.) to the 'cocktail' for 0-10\%, 10-40\%, 40-80\%, 0-80\%
central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV in comparison
to the theoretical model predictions from the PHSD approach and the
expanding fireball model of Rapp and collaborators. As seen from
Fig. \ref{fig:dilSTAR200cd} the excess of the dilepton yield over
the expected cocktail is larger for very central collisions and
consistent with the model predictions including the collisional
broadening of the $\rho$-meson spectral function at low invariant
mass and QGP-dominated radiation at intermediate masses. Moreover,
the recent STAR dilepton data for Au+Au collisions from the Beam
Energy Scan (BES) program for $\sqrt{s_{NN}}=19.6, 27, 39$ and 62.4
GeV \cite{RuanQM14,dilSTAR2,HuckQM14} are also in line with the
expanding fireball model (as well as PHSD) predictions with a $\rho$
collisional broadening \cite{HuckQM14}. According to the PHSD
calculations the excess is increasing with decreasing energy due to
a longer $\rho$-propagation in the high baryon density phase (see
Fig. 3 in \cite{RuanQM14}).
The upcoming PHENIX data for central
Au+Au collisions - obtained after an upgrade of the detector -
together with the BES-II RHIC data should finally provide a
consistent picture of the low mass dilepton excess in relativistic
heavy-ion collisions.
On the other hand, the upcoming ALICE data \cite{dilALICE} for
heavy-ion dileptons for Pb+Pb at $\sqrt{s_{NN}}=2.76$ TeV will give a
clean access to the dileptons emitted from the QGP
\cite{Rapp13,dilPHSDLHC}. In Fig. \ref{fig5} (l.h.s.) we present
the PHSD predictions for central Pb+Pb collisions \cite{dilPHSDLHC}
in the low mass sector for a realistic lepton $p_T$ cut of 1 GeV/c. It is
clearly seen that the QGP sources and contribution from correlated
$D{\bar D}$ pairs are subleading in the low mass regime where we find
the conventional hadronic sources. For a lepton $p_T$ cut of 1 GeV/c (l.h.s.)
one practically cannot identify an effect of the $\rho$ collisional
broadening in the dilepton spectra in the PHSD calculations. Only
when applying a low $p_T$ cut of 0.15 GeV/c a small enhancement of
the dilepton yield from 0.3 to 0.7 GeV becomes visible (r.h.s. of
Fig. \ref{fig5}). This low sensitivity to hadronic in-medium effects
at LHC energies is due to the fact that the hadrons come out late in central Pb+Pb
collisions and are boosted to high velocities due to the high pressure in
the early partonic phase.
\begin{figure}[t]
\hspace{0.1cm}
\includegraphics[width=0.45\textwidth]{LHCdil_1}\includegraphics[width=0.48\textwidth]{LCHdil_2}
\vspace{4mm}
\caption{Midrapidity dilepton yields for Pb+Pb at $\sqrt{s_{NN}}$ =
2.76 TeV (l.h.s.) for a lepton $p_T$ cut of 1 GeV/c. The channel
decomposition is explained in the legend. (r.h.s.) Same as for the
l.h.s. but for a lepton $p_T$ cut of 0.15 GeV/c for a 'free' $\rho$
spectral function (dashed line) and the collisional broadening
scenario (solid line). The figures are taken from Ref.
\cite{dilPHSDLHC}.} \label{fig5}
\end{figure}
In the end, we mention that promising perspectives with dileptons
have been suggested in Ref. \cite{v3dil} to measure the anisotropy
coefficients $v_n, \ n=2,3$ similar to photons. The calculations
with the viscous (3+1)d MUSIC hydro for central Au+Au collisions at
RHIC energies show that $v_2, v_3$ are sensitive to the dilepton
sources and to the EoS and $\eta/s$ ratio. The main advantage of
measuring flow coefficients $v_n$ with dileptons compared to photons
is the fact that the extra degree of freedom $M$ might allow one to
disentangle the sources.
\section{Conclusions}
In conclusion, our calculations show that the photon production in
the QGP is dominated by the early phase (similar to hydrodynamic
models) and is localized in the center of the fireball, where the
collective flow is still rather low, i.e. only on the 2-3\% level.
Thus, the strong $v_2$ of direct photons - which is comparable to
the hadronic $v_2$ - in PHSD is attributed to hadronic channels,
i.e. to meson binary reactions, meson-meson and meson-baryon bremsstrahlung
which are not subtracted in the data. On the other hand, the strong $v_2$ of the
'parent' hadrons, in turn, stems from the interactions in the QGP.
We have argued that a precise measurement of the centrality dependence of the
elliptic flow of direct photons together with their differential spectra should
help in clarifying the photon $v_2(p_T)$ 'puzzle'. Note, however, that the
hadronic bremsstrahlung channels are not well under control and our present
results should be taken as 'upper limits'. Some more work will have to be done
in this direction.
The main messages from our dilepton campaign may be formulated as follows: (i) the
low mass ($M=0.2-0.6$ GeV/$c^2$) dilepton spectra show sizable changes due to
hadronic in-medium effects, i.e. multiple hadronic resonance formation (at SIS energies)
or a modification of the properties of
vector mesons (such as collisional broadening) in the hot and dense
hadronic medium (partially related to chiral symmetry
restoration); these effects can be observed at all energies up to LHC
(preferentially in heavy systems) but are most pronounced in the FAIR/NICA energy regime; (ii) at
intermediate masses the QGP ($q\bar q$ thermal radiation) dominates
for $M>1.2$ GeV/$c^2$, it grows with
increasing energy and becomes dominant at the LHC energies.
The dilepton measurements within the future experimental energy and
system size scan ($pp, pA, AA$) from low to top RHIC energies as well as
new ALICE data at LHC energies will extend our knowledge on the
properties of hadronic and partonic matter via its electromagnetic
radiation.
\vspace*{2mm}
The authors acknowledge financial support through the 'HIC for FAIR'
framework of the 'LOEWE' program and would like to thank all their
coauthors for their help and valuable contributions.
\section{Introduction}
Soliton formation is an important and rich nonlinear phenomenon in various branches of physics. In many exactly solvable models, both classical and quantum mechanical, solitons play a unique role. It is well known that interacting bosons in one dimension (the Lieb-Liniger model) show an unexpected branch in their excitation spectrum, usually referred to as the type-II excitations \cite{PhysRev.130.1605,PhysRev.130.1616}. Later it was found that interacting fermions in one dimension (the Yang-Gaudin model) exhibit a similar phenomenon \cite{RevModPhys.85.1633,1367-2630-18-7-075004}. The fact that these excitations originate from solitons can be clearly seen in a semiclassical analysis, where solitons serve as an alternative solution to the semiclassical equation of motion apart from the spatially homogeneous solution \cite{Kulish:1976ek,PhysRevA.91.023616}.
It is even more interesting, as we will show, that these soliton-like solutions can further affect the spin excitations in a striking way: they fix the minimum energy of the spin excitations exactly at momentum $k_F=\pi n/2$, where $nm_F$ ($m_F$ is the mass of the fermionic atom) is the conserved total mass density of the system, which remains unchanged along the whole crossover.
This is in sharp contrast to the situation in higher dimensions, where, by tuning the interaction along the BCS-BEC crossover, we can move this momentum from $k_F$ on the deep BCS side to zero on the deep BEC side \cite{2015qgee.book..179P}. In this paper, we present a comprehensive semiclassical theory of solitons in one-dimensional systems at the BCS-BEC crossover, where we explain the soliton interpretation of the type-II excitations and the fixing of the momentum at which the spin excitations attain their minimum energy. Our theory explains the semiclassical origin of the excitation spectrum of the Yang-Gaudin model, where existing semiclassical proposals fail to reconcile with the exact solutions \cite{PhysRevA.91.023616,1367-2630-18-7-075004}. Our theory also serves as yet another example of the dramatic effect solitons can have on low-dimensional physics.
In the next section, we review the exact solutions of the Lieb-Liniger model, the Yang-Gaudin model and the model of the BCS-BEC crossover in one dimension. From there, we raise the questions mentioned above and analyze them in the sections to follow. We first outline the general formalism of the semiclassical analysis in the presence of solitons across the BCS-BEC crossover. We then apply it to the $S=1/2$ and $S=0$ excitations respectively, presenting analytic results on both the deep BCS and deep BEC sides and a qualitative analysis for the crossover. Finally, we summarize the main results and conclude.
\section{Review of Exact Solutions and their Relation to Solitons}
The models of interacting bosons and fermions in one dimension can both be solved exactly via the technique of the Bethe ansatz \cite{Korepin_1993}; the former is known as the Lieb-Liniger model \cite{PhysRev.130.1605,PhysRev.130.1616}, and the latter is known as the Yang-Gaudin model \cite{GAUDIN196755,PhysRevLett.19.1312}. An exactly solvable model connecting them, describing the BCS-BEC crossover in one dimension, can also be constructed \cite{PhysRevLett.93.090408,PhysRevLett.93.090405,Ren}. In this section, we present the excitation spectra of these exactly solvable models. In the $S=0$ excitations (where $S$ is the total spin) of all these models, there is an extra soliton-like branch apart from the usual Bogoliubov quasiparticle branch. In the $S=1/2$ excitations, one finds the minimum of the energy lying exactly at the Fermi momentum $k_F=\pi n/2$. These are the key features we would like to explain when later developing the corresponding semiclassical theory.
We start with the Lieb-Liniger model, described by the Hamiltonian
\begin{equation}
\label{eq:HLL}
\hat{\mathcal{H}}=\int dx\left[\partial_x\hat{\varphi}^{\dagger}(x)\partial_x\hat{\varphi}(x)+c_B\hat{\varphi}^{\dagger}(x)\hat{\varphi}^{\dagger}(x)\hat{\varphi}(x)\hat{\varphi}(x) \right],
\end{equation}
where $\hat{\varphi}$ represents the spinless bosons with mass $m_B=1/2$, and $c_B>0$ corresponds to the repulsion between bosons. Also we adopt the convention that $\hbar=1$ in this paper.
A typical excitation spectrum of the Lieb-Liniger model is shown in Fig. \ref{fig:LL}.
\begin{figure}
\includegraphics[scale=0.5]{LL.pdf}
\caption{\footnotesize The typical excitation spectrum of the Lieb-Liniger model, calculated for coupling strength $\gamma=c_B/n_s=0.43$. There are two branches, type-I for Bogoliubov quasiparticles and type-II for soliton-like excitations. Also shown in the figure is the sound velocity $v_c$, which scales as $\sqrt{c_Bn}$.}
\label{fig:LL}
\end{figure}
It is composed of two branches, the usual Bogoliubov quasiparticle (Lieb-Liniger type-I) branch and the Lieb-Liniger type-II branch. At long wavelength, both branches reduce to a linear phonon-like dispersion with the same sound velocity $v_c=\sqrt{c_Bn}$, which grows with the coupling strength. The key features of the type-II excitations are that $\epsilon(2\pi n_s) \to 0$ as the system size goes to infinity, $L\to\infty$, and that the maximum energy is achieved at momentum $k=\pi n_s$. This periodicity of the type-II branch is a consequence of translational invariance: shifting the momentum of each boson by the amount $2\pi/L$ costs $(n_sL)(2\pi/L)^2\to 0$ in energy but changes the total momentum by $(n_sL)(2\pi/L)=2\pi n_s$ \cite{Ren}. Similarly, the total energy also remains invariant under the momentum reflection $k\to 2\pi/L-k$ for each boson, which means the spectrum has an additional symmetry of reflection about the total momentum $\pi n_s$. As a result, the maximum of the spectrum is fixed at momentum $\pi n_s$. It is known that this point corresponds to a motionless (dark) soliton, and the whole Lieb-Liniger type-II branch has the physical interpretation of the dispersion relation $E(P)$ of a moving (grey) soliton with velocity $v_s=\partial E(P)/\partial P$ \cite{Kulish:1976ek,PhysRevA.78.053630}.
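The dark-soliton interpretation can be checked directly at the classical (Gross-Pitaevskii) level of the Hamiltonian (\ref{eq:HLL}): with $\hbar=1$ and $m_B=1/2$, the stationary equation of motion is $\mu\varphi=-\varphi''+2c_B|\varphi|^2\varphi$ with $\mu=2c_Bn_0$, and the motionless dark soliton is $\varphi(x)=\sqrt{n_0}\tanh(x/\xi)$ with healing length $\xi=1/\sqrt{c_Bn_0}$. A short numerical sketch verifying this (grid parameters are arbitrary):

```python
import numpy as np

# Stationary Gross-Pitaevskii equation from the Lieb-Liniger Hamiltonian:
#   mu * phi = -phi'' + 2*cB*|phi|^2 * phi,   mu = 2*cB*n0,
# in units hbar = 1, m_B = 1/2. Dark soliton: phi = sqrt(n0)*tanh(x/xi).
cB, n0 = 0.43, 1.0                   # gamma = cB/n = 0.43, as in Fig. 1
xi = 1.0 / np.sqrt(cB * n0)          # healing length
mu = 2.0 * cB * n0

x = np.linspace(-10 * xi, 10 * xi, 4001)
h = x[1] - x[0]
phi = np.sqrt(n0) * np.tanh(x / xi)

# second derivative by central differences on the interior points
phi_xx = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
residual = -phi_xx + 2 * cB * phi[1:-1] ** 3 - mu * phi[1:-1]
print(np.max(np.abs(residual)) < 1e-4)   # -> True: the profile solves the GPE
```

The residual vanishes up to the $O(h^2)$ discretization error, confirming that the type-II endpoint at $k=\pi n_s$ corresponds to this density-notch solution.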
Now we move on to the attractive Yang-Gaudin model, which is defined by the following Hamiltonian:
\begin{equation}
\label{eq:HYG}
\hat{\mathcal{H}}=\int dx\left[\partial_x\hat{\psi}^{\dagger}(x)\partial_x\hat{\psi}(x)-c_F\hat{\psi}^{\dagger}(x)\hat{\psi}^{\dagger}(x)\hat{\psi}(x)\hat{\psi}(x) \right],
\end{equation}
where $\hat{\psi}=\begin{pmatrix}\hat{\psi}_{\uparrow}\\ \hat{\psi}_{\downarrow}\end{pmatrix}$ represents the $S=1/2$ fermions with mass $m_F=1/2$, and $c_F>0$ corresponds to the attraction between fermions. This attraction, however weak, produces bound pairs in one dimension. A typical spectrum of $S=0$ excitations of the Yang-Gaudin model is shown in Fig. \ref{fig:YG1},
\begin{figure}
\includegraphics[scale=0.5]{YG1.pdf}
\caption{\footnotesize The typical $S=0$ excitation spectrum of the Yang-Gaudin model, calculated for coupling strength $\gamma=c_F/n=0.15$. There are also two branches, type-I for Bogoliubov quasiparticles and type-II for soliton-like excitations. Also shown in the figure are the sound velocity and the Fermi energy $\epsilon_F$; we can see that in the weak coupling limit the dark soliton has an energy on the scale of $\epsilon_F$ and the sound velocity is on the scale of $v_F$.}
\label{fig:YG1}
\end{figure}
which is quite similar to the one obtained in the Lieb-Liniger model. The notable differences here are the scale of the maximum energy of the type-II excitations and the sound velocity. In the weak coupling limit $c_F/n \ll 1$, the maximum energy is on the scale of the Fermi energy $\epsilon_F=\pi^2n^2/4$ and the sound velocity is on the scale of the Fermi velocity $v_F=\pi n$. Since the velocity is large when $k\to 0$, there is no semiclassical description for the dispersion relation, but near the maximum of the spectrum, where the velocity is small, a semiclassical description is still possible. The recent attempt by~\citet{PhysRevA.91.023616} to develop such a description led to an incorrect energy scale and curvature near the maximum of the spectrum \cite{1367-2630-18-7-075004}. We reconcile this discrepancy in this paper.
In the strong coupling limit $c_F/n \gg 1$, where the fermions are tightly bound, instead of behaving like a system of weakly coupled bosons, the Yang-Gaudin model produces a system of hardcore bosons known as the fermionic super Tonks-Girardeau gas \cite{RevModPhys.85.1633}. As a result, the sound velocity is still on the scale of the Fermi velocity, and the spectrum of Fig. \ref{fig:YG1} preserves its qualitative shape for any value of $c_F$.
A typical spectrum of $S=1/2$ excitations of the Yang-Gaudin model is shown in Fig. \ref{fig:YG2},
\begin{figure}
\includegraphics[scale=0.5]{YG2.pdf}
\caption{\footnotesize The typical $S=1/2$ excitation spectrum of the Yang-Gaudin model, calculated for coupling strength $\gamma=c_F/n=1.13$. The minimum energy is obtained at the Fermi momentum $k_F=\pi n/2$, with a small region of quadratic spectrum around it. Also shown in the figure is the binding energy $\epsilon_b$ for the singlet pairs, which is bigger than the spin gap.}
\label{fig:YG2}
\end{figure}
where the minimum energy is achieved exactly at the Fermi momentum $k_F=\pi n/2$, irrespective of the coupling strength. This exactness is unusual, since it comes without the correction on the scale of $\delta k\sim \Delta_0/v_F$ that would be introduced by the conventional BCS theory in the weak coupling limit (where $\Delta_0$ is the gap width), and it is contrary to the usual conclusion that the minimum energy should be achieved at zero momentum in the deep BEC regime in higher dimensions \cite{2015qgee.book..179P}. At first sight, this could be caused by the fact that the strong coupling limit $c_F/n\gg 1$ of the Yang-Gaudin model is not a system of weakly coupled bosons, which invalidates it as a proper model for the BCS-BEC crossover. To test this idea, we recently proposed a new model of the BCS-BEC crossover that is exactly solvable by Bethe ansatz \cite{Ren}. The fermionic version of this model is described by the Hamiltonian:
\begin{equation}
\label{eq:fermionicmodel}
\begin{split}
\hat{\mathcal{H}}= & \int dx \Big{\{} \partial_x\hat{\psi}^{\dagger}\partial_x \hat{\psi}+\frac{1}{2}\partial_x\hat{\bm{a}}^{\dagger}\cdot\partial_x\hat{\bm{a}} +\frac{1}{2}\partial_x\hat{b}^{\dagger}\partial_x\hat{b} \\
& -\epsilon_a\hat{\bm{a}}^{\dagger}\cdot\hat{\bm{a}}-\epsilon_b\hat{b}^{\dagger}\hat{b}+\lambda_{\psi}\hat{\psi}^{\dagger}\hat{\psi}^{\dagger}\hat{\psi}\hat{\psi}\\
&+\left[\frac{t_a}{2}\left(i\partial_x\hat{\psi}^T\bm{\sigma}\sigma_y\hat{\psi}\right)\cdot\hat{\bm{a}}^{\dagger}+h.c.\right]\\
&+\left[\frac{t_b}{2}\left(i\hat{\psi}^T\sigma_y\hat{\psi}\right)\cdot\hat{b}^{\dagger}+h.c.\right] \Big{\}},
\end{split}
\end{equation}
where $\bm{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ is the vector of Pauli matrices, $\hat{\psi}=\begin{pmatrix}\hat{\psi}_{\uparrow}\\ \hat{\psi}_{\downarrow}\end{pmatrix}$ represents the fermions with mass $m_F=1/2$ and $\lambda_{\psi}$ is the repulsive coupling between them. $\hat{\bm{a}}$ represents the vector resonance at energy $-\epsilon_a$ and $\hat{b}$ represents the scalar resonance at energy $-\epsilon_b$, both of which have mass $m_a=m_b=1$. Both resonances are needed for exact solvability, which can be achieved by fine tuning the positions of the resonant levels. The behavior of this model is then controlled by two parameters:
\begin{equation}
c_1=|t_a|^2/4, ~~~ c_2=c_1+|t_b|^2/(2\epsilon_b).
\end{equation}
This model reduces to the Lieb-Liniger model and the Yang-Gaudin model in the parameter ranges $c_1\sim c_2$ and $c_1\gg c_2$ respectively, thus providing an exactly solvable model of the BCS-BEC crossover in one dimension. On the side where it reduces to the Yang-Gaudin model with $c_F=c_2$, the excitation spectrum is essentially the same as shown in Fig. \ref{fig:YG1} and Fig. \ref{fig:YG2}; on the side where it reduces to the Lieb-Liniger model with $c_B=c_1-c_2$, the $S=0$ spectrum is essentially the same as shown in Fig. \ref{fig:LL}. In addition, $S=1/2$ excitations now appear, whose typical behavior is shown in Fig. \ref{fig:BEC}.
\begin{figure}
\includegraphics[scale=0.5]{BEC.pdf}
\caption{\footnotesize The typical $S=1/2$ excitation spectrum on the BEC side, calculated for coupling strengths $\gamma_1=c_1/n=0.34$ and $\gamma_2=c_2/n=0.27$. In the plotting scale we have used $n_s=n/2$. The minimum energy is again obtained at the Fermi momentum $k_F=\pi n/2$. Also shown in the figure is the binding energy $\epsilon_b$ of the singlet pair, which is larger than the spin gap.}
\label{fig:BEC}
\end{figure}
We can see that the spectrum has the same feature as that on the BCS side, with the minimum energy still obtained exactly at the Fermi momentum $k_F=\pi n/2$, even though the $S=0$ sector corresponds to weakly interacting bosons with $v_c\ll v_F$.
In all the exactly solvable models presented above, the pinning of the minimum spin excitation energy at $k_F$ is robust against variations of the coupling constants across the whole range, in sharp contrast to the situation in higher dimensions \cite{2015qgee.book..179P}. This leads us to the conclusion that it is most probably a general feature not limited to exactly solvable models. One may suspect that the pinning is a consequence of the Luttinger theorem, but this is not the case: the system here is gapped, and the number of fermions is not conserved (atoms tunnel into molecules and back). On the other hand, the maximum of the $S=0$ excitations can be interpreted as a dark soliton, with the spectrum near it as a moving grey soliton. We propose that the minimum of the $S=1/2$ excitations is also a dark soliton with one extra fermion bound to it: $k_F$ is just the momentum of this dark soliton, while the fermion bound on top of it carries no additional momentum. We substantiate this proposal in the following sections.
\section{General Formalism}
For the purpose of semiclassical analysis, let us consider the following simplified model of the BCS-BEC crossover at the mean field level:
\begin{equation}
\label{eq:BCS-BEC}
\begin{split}
\hat{\mathcal{H}}=& \int dx~\left\{\partial_x\hat{\psi}^{\dagger}\partial_x\hat{\psi}+\frac{1}{2}\partial_x\hat{b}^{\dagger}\partial_x\hat{b}-\epsilon_b\hat{b}^{\dagger}\hat{b} \right.\\
&\left.+\left[\frac{t_b}{2}\left(i\hat{\psi}^T\sigma_y\hat{\psi} \right)\hat{b}^{\dagger}+h.c. \right]\right\}-\mu\hat{\mathcal{N}},\\
\hat{\mathcal{N}}=&\int dx \left(\hat{\psi}^{\dagger}\hat{\psi}+2\hat{b}^{\dagger}\hat{b}\right),\\
\hat{\mathcal{P}}=&~\frac{1}{2i}\int dx\left( \hat{\psi}^{\dagger}\partial_x\hat{\psi}+\hat{b}^{\dagger}\partial_x\hat{b}-h.c. \right),
\end{split}
\end{equation}
where $\hat{\psi}=\begin{pmatrix}\hat{\psi}_{\uparrow}\\ \hat{\psi}_{\downarrow}\end{pmatrix}$ represents the $S=1/2$ fermions with mass $m_F=1/2$, and $\hat{b}$ with mass $m_b=1$ represents a scalar resonance with resonant energy $-\epsilon_b$ when $\epsilon_b<0$, or a molecule with binding energy $\epsilon_b$ when $\epsilon_b>0$. The coupling constant $t_b$ is chosen to be real. The operator $\hat{\mathcal{N}}$ is a conserved quantity, and the expectation value of $m_F\hat{\mathcal{N}}=\hat{\mathcal{N}}/2$ gives the total mass of the system. Although not exactly solvable, this model captures the essence of the BCS-BEC crossover and is more amenable to semiclassical analysis.
A conventional way to analyze the semiclassical origin of the excitations is to treat the operators as classical fields and to solve the semiclassical equations of motion for them. Its validity can be justified via the saddle point approximation in the path integral formalism. The symmetry-broken ground state of the system is then represented by the expectation value $\lrangle{\hat{b}}=b_0$, where $b_0$ is a constant, and the excitations are represented by a space-time varying expectation value $b(x,t)\equiv \lrangle{\hat{b}}$, where we use the periodic boundary condition such that $b(x,t)=b(x+L,t)$. As we treat the operator $\hat{b}$ as a classical field $b(x,t)$, the part of $\hat{\mathcal{H}}$ that involves fermionic operators can be diagonalized via the Bogoliubov-Valatin transformation
\begin{equation}
\begin{pmatrix}\hat{\psi}_{\uparrow}\\ \hat{\psi}^{\dagger}_{\downarrow}\end{pmatrix}=\sum_n\begin{pmatrix}u_n(x,t) & -v^*_n(x,t)\\ v_n(x,t) & u^*_n(x,t)\end{pmatrix}\begin{pmatrix}\hat{\gamma}_{n\uparrow} \\ \hat{\gamma}^{\dagger}_{n\downarrow}\end{pmatrix}
\end{equation}
into the following Hamiltonian
\begin{equation}
\label{eq:meanpsi}
\hat{\mathcal{H}}_{\psi}=\kern-0.5em\sum_{\epsilon_n>0}\left[-\epsilon_n(\Delta,\Delta^*)+\epsilon_n\Big{|}_{t_b=0} \right]+\kern-0.5em\sum_{\epsilon_n>0,\sigma}\kern-0.5em\epsilon_n\hat{\gamma}^{\dagger}_{n\sigma}\hat{\gamma}_{n\sigma},
\end{equation}
where we have defined $\Delta(x,t)\equiv t_bb(x,t)$ and the classical fields $u_n(x,t), v_n(x,t)$ satisfy the Bogoliubov-de Gennes equation \cite{Schrieffer_1983} with periodic boundary conditions:
\begin{equation}
\label{eq:eqforuv}
\begin{split}
& \begin{pmatrix}
-\partial^2_x-\mu & \Delta \\ \Delta^* & \partial^2_x+\mu
\end{pmatrix}\begin{pmatrix}
u_n \\ v_n
\end{pmatrix}=\epsilon_n\begin{pmatrix}
u_n \\ v_n
\end{pmatrix}, \\
& ~~\begin{cases}u_n(x+L,t)=u_n(x,t)\\v_n(x+L,t)=v_n(x,t)\end{cases}.
\end{split}
\end{equation}
Using these classical fields $b(x,t)$, $u_n(x,t)$ and $v_n(x,t)$, the energy and momentum of the system under a particular filling configuration of \req{eq:meanpsi} can then be expressed as
\begin{equation}
\label{eq:exforEP1}
\begin{split}
& E=\int dx\left( \frac{1}{2}|\partial_xb|^2-(2\mu+\epsilon_b)|b|^2 \right)+E_{\psi},\\
&E_{\psi}=\kern-0.5em\sum_{\epsilon_n>0}\left[-\epsilon_n(\Delta,\Delta^*)+\epsilon_n\Big{|}_{t_b=0} \right]+\kern-0.5em\sum_{\epsilon_n>0,\sigma}\kern-0.5em\epsilon_n\lrangle{\hat{\gamma}^{\dagger}_{n\sigma}\hat{\gamma}_{n\sigma}},\\
&P= \int dx\left(\sum_{\epsilon_n>0}\frac{u^*_n\overleftrightarrow{\partial_x}u_n+v^*_n\overleftrightarrow{\partial_x}v_n}{2i}\sum_{\sigma}\lrangle{\hat{\gamma}^{\dagger}_{n\sigma}\hat{\gamma}_{n\sigma}} \right)\\
&~~~+\int dx\left(\sum_{\epsilon_n>0}(-i)v_n\overleftrightarrow{\partial_x}v^*_n+\frac{b^*\overleftrightarrow{\partial_x}b}{2i}\right),
\end{split}
\end{equation}
where $E_{\psi}$ is the eigenvalue of the mean field Hamiltonian $\hat{\mathcal{H}}_{\psi}$ in \req{eq:meanpsi} under this particular filling configuration, and the double arrow derivative is defined as
\begin{equation}
f\overleftrightarrow{\partial_x}g\equiv f(\partial_xg)-(\partial_xf)g.
\end{equation}
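As a simple sanity check of the momentum expression, for a plane-wave mode $f=e^{ikx}/\sqrt{L}$ one has
\begin{equation*}
\frac{f^*\overleftrightarrow{\partial_x}f}{2i}=\frac{2ik|f|^2}{2i}=\frac{k}{L},
\end{equation*}
so upon integration over the system each mode contributes its wave number $k$ to $P$, as it should.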
The solutions to \req{eq:eqforuv} have a special particle-hole symmetry: if $(u_n,v_n)^\text{T}$ is a solution with eigenvalue $\epsilon_n$, then $(-v^*_n,u^*_n)^\text{T}$ is a solution with eigenvalue $-\epsilon_n$. As a result, nonzero eigenvalues appear in pairs. Moreover, if \req{eq:eqforuv} possesses a zero eigenvalue, it must be degenerate, otherwise we would have
\begin{equation}
\label{eq:fanzheng}
\begin{pmatrix}
u_0 \\ v_0
\end{pmatrix}=c\begin{pmatrix}
-v^*_0 \\ u^*_0
\end{pmatrix},
\end{equation}
where $(u_0,v_0)^{\text{T}}$ is the solution to \req{eq:eqforuv} for $\epsilon=0$ and $c$ is a constant complex number of modulus one, $|c|=1$. Equation (\ref{eq:fanzheng}) would then lead to $|c|^2u_0=-u_0$, which cannot hold unless $u_0$ vanishes identically (the argument is analogous to that for Kramers degeneracy). In later sections, where $\Delta(x,t)$ is identified as a soliton, we find that degenerate zero modes appear only in the deep BCS limit, where the spectrum is linearized around the Fermi points. This turns out to be an artifact of the linearization: no zero mode survives once the nonlinearity of the spectrum is taken into account.
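The particle-hole symmetry invoked above can be verified directly. Complex conjugation of the two rows of \req{eq:eqforuv} gives $\Delta u^*_n=\epsilon_n v^*_n-(\partial^2_x+\mu)v^*_n$ and $\Delta^* v^*_n=\epsilon_n u^*_n+(\partial^2_x+\mu)u^*_n$, so that acting with the Bogoliubov-de Gennes operator on the pair $(-v^*_n,u^*_n)^{\text{T}}$ yields
\begin{equation*}
\begin{split}
(-\partial^2_x-\mu)(-v^*_n)+\Delta u^*_n&=-\epsilon_n(-v^*_n),\\
\Delta^*(-v^*_n)+(\partial^2_x+\mu)u^*_n&=-\epsilon_n u^*_n,
\end{split}
\end{equation*}
i.e. $(-v^*_n,u^*_n)^{\text{T}}$ is indeed an eigenstate with eigenvalue $-\epsilon_n$.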
It is clear from the above analysis that the solutions to \req{eq:eqforuv} always appear in pairs; an $S=0$ state then corresponds to zero (or, more generally, even) occupation of the Bogoliubov fermions $\hat{\gamma}_{n\sigma}$, while an $S=1/2$ state is built from odd occupation. As we will see in later sections, the $S=0$ soliton state corresponding to the exact solution is not necessarily a ground state of $\hat{\mathcal{H}}_{\psi}$.
\subsection{Dark Soliton}
The dark soliton is characterized by a twist in the configuration of $b(x)$, whose value changes sign rapidly between $x<0$ and $x>0$. Taking into account the periodic boundary condition, $b(x)$ then has the following asymptotic behavior at the spatial boundaries:
\begin{equation}
b(x\to \pm L/2) \sim e^{ i\pi x/L},
\end{equation}
where we take the infinite-system limit $L\to \infty$. It is helpful to perform the following gauge transformation:
\begin{equation}
\label{eq:pigauge}
b(x) = e^{ i \pi x/L}\tilde{b}(x),
\end{equation}
then the dark soliton can be presented as
\begin{equation}
\tilde{b}(x)=-ib_0f\left(\frac{x}{l_s}\right),
\end{equation}
where $l_s\ll L$ is the size of the soliton sitting at $x=0$, the constant $b_0$ is chosen to be real, and the shape function $f(x)$ satisfies $f(x\to\pm \infty)=\pm 1$. Under this gauge transformation, the periodic boundary condition of $b(x)$ becomes $\tilde{b}(x+L)=-\tilde{b}(x)$. As a result, $\tilde{b}(x)$ can be chosen purely imaginary, or equivalently, $f(x)$ can be chosen purely real.
To get rid of the phase in \req{eq:pigauge}, we perform the following gauge transformation on the classical fields $u_n(x), v_n(x)$:
\begin{equation}
\begin{cases}
u_n(x)=e^{ i\pi x/L}\tilde{u}(x) \\
v_n(x)=\tilde{v}_n(x)
\end{cases},
\end{equation}
then \req{eq:eqforuv} is transformed into
\begin{equation}
\label{eq:gaugeequv}
\begin{split}
& \begin{pmatrix}
-\partial^2_x-\mu & t_b\tilde{b} \\ (t_b\tilde{b})^* & \partial^2_x+\mu
\end{pmatrix}\begin{pmatrix}
\tilde{u}_n \\ \tilde{v}_n
\end{pmatrix}=\epsilon_n\begin{pmatrix}
\tilde{u}_n \\ \tilde{v}_n
\end{pmatrix}, \\
& ~~\begin{cases}\tilde{u}_n(x+L,t)=-\tilde{u}_n(x,t)\\ \tilde{v}_n(x+L,t)=\tilde{v}_n(x,t)\end{cases},
\end{split}
\end{equation}
where we have neglected both the $L^{-1}$ and the $L^{-2}$ corrections to the eigenenergy $\epsilon_n$. The former can be neglected because it contributes to the total energy in \req{eq:exforEP1} a term proportional to $P/L$, which vanishes in the limit $L\to \infty$ for finite momentum $P$. The latter can be neglected because it contributes to the total energy a term proportional to $NL^{-2}$, which also vanishes in the limit $L\to \infty$. Using these gauge transformed classical fields, the energy $E$, the momentum $P$ and the conserved quantity $N$ of the system can be expressed as:
\begin{equation}
\label{eq:exforEP2}
\begin{split}
E=&\int dx\left( \frac{1}{2}|\partial_x\tilde{b}|^2-(2\mu+\epsilon_b)|\tilde{b}|^2 \right)+E_{\psi},\\
P=& \int dx\left(\sum_{\epsilon_n>0}\frac{\tilde{u}^*_n\overleftrightarrow{\partial_x}\tilde{u}_n+\tilde{v}^*_n\overleftrightarrow{\partial_x}\tilde{v}_n}{2i}\sum_{\sigma}\lrangle{\hat{\gamma}^{\dagger}_{n\sigma}\hat{\gamma}_{n\sigma}} \right)\\
&+\int dx\left(\sum_{\epsilon_n>0}(-i)\tilde{v}_n\overleftrightarrow{\partial_x}\tilde{v}^*_n+\frac{\tilde{b}^*\overleftrightarrow{\partial_x}\tilde{b}}{2i}\right)+\frac{N}{2L}\pi,\\
N=&\int dx\sum_{\epsilon_n>0}\left[(\tilde{u}^*_n\tilde{u}_n-\tilde{v}^*_n\tilde{v}_n)\sum_{\sigma}\lrangle{\hat{\gamma}^{\dagger}_{n\sigma}\hat{\gamma}_{n\sigma}}\right]\\
&+\int dx\left(
\sum_{\epsilon_n>0}2\tilde{v}^*_n\tilde{v}_n+2\tilde{b}^*\tilde{b}\right),
\end{split}
\end{equation}
where the effect of the gauge transformation is taken into account in the limit $L\to \infty$: its contribution to the energy is vanishingly small ($\sim NL^{-2}$), while its contribution to the momentum remains finite and appears as the last term in the expression for $P$, proportional to $n=N/L$.
To be consistent with the choice that $\tilde{b}(x)$ is purely imaginary, $\tilde{u}_n(x)$ and $\tilde{v}_n(x)$ can be chosen purely real and purely imaginary respectively. Since the classical fields $\tilde{b}(x)$, $\tilde{u}_n(x)$ and $\tilde{v}_n(x)$ are each either purely real or purely imaginary, the integral contributions to the momentum $P$ in \req{eq:exforEP2} vanish, and we arrive at the result that the momentum of the dark soliton is exactly the Fermi momentum:
\begin{equation}
P=k_F=\pi n/2,
\end{equation}
whether for an $S=0$ state or an $S=1/2$ state.
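The vanishing of the integral contributions to $P$ can be seen in one line: for any field $g(x)$ that is purely real or purely imaginary, one has
\begin{equation*}
g^*\overleftrightarrow{\partial_x}g=\pm\left[g(\partial_xg)-(\partial_xg)g\right]=0,
\end{equation*}
and the same holds for $g\overleftrightarrow{\partial_x}g^*$. Every integrand in the expression for $P$ in \req{eq:exforEP2} is of this form, so only the gauge term survives, $P=(N/2L)\pi=\pi n/2=k_F$.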
Now we have to determine the actual form of the dark soliton profile $f(x)$, which is obtained by solving the equation of motion for the classical field $\tilde{b}(x)$. Because the dark soliton corresponds to a local maximum or a local minimum of the energy for the $S=0$ or $S=1/2$ spectrum respectively, the desired equation of motion for $\tilde{b}(x)$ can be derived by extremizing the energy $E$ in \req{eq:exforEP2}:
\begin{equation}
\label{eq:semi0}
-\frac{1}{2}\partial^2_x \tilde{b}-\left( 2\mu+\epsilon_b \right)\tilde{b}+\frac{\delta E_{\psi}}{\delta \tilde{b}^*}=0.
\end{equation}
Together with \req{eq:gaugeequv}, we now have a complete set of equations to determine all the relevant classical fields.
As mentioned at the end of section II, our proposal for the $S=1/2$ excitations rests on the assumption that one extra fermion can be bound to the dark soliton, which is equivalent to assuming that \req{eq:gaugeequv} admits at least one localized solution. We present below a simple one-parameter variational argument to verify this assumption.
The Hamiltonian operator corresponding to \req{eq:gaugeequv} is as follows:
\begin{equation}
\hat{\mathcal{H}}_b=\begin{pmatrix}
-\partial^2_x-\mu & t_b\tilde{b} \\ (t_b\tilde{b})^* & \partial^2_x+\mu
\end{pmatrix},
\end{equation}
and it has a positive as well as a negative sector, due to the particle-hole symmetry discussed after \req{eq:eqforuv}. Accordingly, the existence of a localized state can be established by showing that the expectation value $I(\kappa)$ of $\hat{\mathcal{H}}_b^2$ in a normalized trial wave function $\psi_{\kappa}(x)$ lies below the boundary of the continuous spectrum of $\hat{\mathcal{H}}^2_b$, where $\kappa$ is the variational parameter:
\begin{equation}
\begin{split}
& I(\kappa)=\int dx\left(\hat{\mathcal{H}}_b\psi_{\kappa}(x) \right)^*\hat{\mathcal{H}}_b\psi_{\kappa}(x), \\
& \int dx ~\psi^*_{\kappa}(x)\psi_{\kappa}(x)=1.
\end{split}
\end{equation}
Here the trial function is constructed such that $I(0)$ sits at the boundary of the continuous spectrum, while $\kappa>0$ probes a localized state. The existence of a localized state then corresponds to $I'(0)<0$.
For $\mu>0$, the boundary of the continuous spectrum for $\hat{\mathcal{H}}_b^2$ is $\Delta_0^2=(t_bb_0)^2$, and the normalized trial wave function can be chosen as
\begin{equation}
\psi_{\kappa}=\sqrt{\kappa}e^{-\kappa|x|}\begin{pmatrix}
\cos k_Fx \\ \sin k_Fx
\end{pmatrix},
\end{equation}
where $k_F^2=\mu$. Then we have
\begin{equation}
\!\!I(\kappa)=\Delta^2_0+\kappa^4+4\kappa^2k^2_F-\Delta^2_0\kappa l_s\kern-0.5em\int e^{-2\kappa l_s|y|}f^2(y)dy,
\end{equation}
which has the required property that
\begin{equation}
I(0)=\Delta^2_0, ~~~ I'(0)<0.
\end{equation}
For $\mu\leqslant 0$, the boundary of the continuous spectrum for $\hat{\mathcal{H}}^2_b$ is $\Delta^2_0+\mu^2$, and the following normalized trial wave function is chosen:
\begin{equation}
\psi_{\kappa}=\sqrt{\kappa}e^{-\kappa|x|}\begin{pmatrix}
1 \\ 0
\end{pmatrix}.
\end{equation}
Then we have
\begin{equation}
\kern-0.5emI(\kappa)=(-\kappa^2\!+|\mu|)^2+\Delta^2_0-\Delta^2_0\kappa l_s\kern-0.5em\int e^{-2\kappa l_s|y|}f^2(y)dy,
\end{equation}
which again has the required property that
\begin{equation}
I(0)=\Delta^2_0+\mu^2,~~~ I'(0)<0.
\end{equation}
Taking into account also that the solutions to \req{eq:gaugeequv} always appear in pairs belonging to the negative and positive sectors respectively, we have thus shown that there is at least one localized state in each sector over the whole range of $\mu$ across the BCS-BEC crossover.
In later sections, we will show that the number of localized states is exactly one per sector in both the deep BCS and the deep BEC limits, and we found no evidence for a second localized state at intermediate coupling (the appearance of such a state would not, in any case, invalidate the considerations below).
\subsection{Grey Soliton}
In order to transform the dark soliton into a moving grey soliton, we need to generalize the above construction to the following asymptotic behavior at spatial boundaries:
\begin{equation}
\label{eq:boundarycon}
b(x\to \pm L/2,t) \sim e^{ i\theta_s x/L},
\end{equation}
where the phase parameter $\theta_s\in [0,2\pi)$ and we take the limit $L\to \infty$. We will show in later sections that the moving grey soliton can be presented in the following form:
\begin{equation}
\label{eq:solsolu}
b(x,t)=\left[ \cos\frac{\theta_s}{2}-i\sin\frac{\theta_s}{2}f\left(\frac{x-v_st}{l_s}\right)\right]e^{ i\theta_s x/L},
\end{equation}
where $v_s$ is the velocity of the grey soliton. The velocity $v_s$ and the phase parameter $\theta_s$ are not independent variational variables: as we now show, they are related via the semiclassical velocity formula $v_s=\partial E(\theta_s)/\partial P(\theta_s)$.
Considering the transformation of the variables from $(x,t)$ to $(z,t)$ such that $z=x-v_st$, we will have
\begin{equation}
\label{eq:veltrans}
\begin{split}
& \hat{\mathcal{H}} \to \hat{\Omega}=\hat{\mathcal{H}}+\frac{iv_s}{2}\int dz\left(\hat{\psi}^{\dagger}\overleftrightarrow{\partial_z}\hat{\psi}+\hat{b}^{\dagger}\overleftrightarrow{\partial_z}\hat{b} \right),\\
& \hat{\mathcal{P}} \to \hat{\mathcal{P}}=\frac{(-i)}{2}\int dz\left(\hat{\psi}^{\dagger}\overleftrightarrow{\partial_z}\hat{\psi}+\hat{b}^{\dagger}\overleftrightarrow{\partial_z}\hat{b} \right),
\end{split}
\end{equation}
where we have the variable $x$ on the left-hand side and the variable $z$ on the right-hand side. We can see that in \req{eq:veltrans} new terms are added to the Hamiltonian operator, while the momentum operator remains unchanged. This implies that the variable transformation introduced here is not a Galilean transformation, for which the momentum would have changed by an amount proportional to $v_sN$, which diverges in the thermodynamic limit. From \req{eq:veltrans} we obtain the following operator relations:
\begin{equation}
\frac{\partial \hat{\Omega}}{\partial v_s}=-\hat{\mathcal{P}}, ~~~ \hat{\Omega}=\hat{\mathcal{H}}-v_s\hat{\mathcal{P}}.
\end{equation}
By taking the expectation values of both sides on the soliton with phase parameter $\theta_s$, we can see that the change of variables from $x$ to $z$ is equivalent to a Legendre transformation:
\begin{equation}
\label{eq:legleg}
\frac{\partial \Omega(\theta_s)}{\partial v_s}=-P(\theta_s), ~~~ \Omega(\theta_s)=E(\theta_s)-v_sP(\theta_s).
\end{equation}
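The first relation in (\ref{eq:legleg}) deserves a comment: since the soliton state extremizes $\Omega$ at fixed $\theta_s$, the implicit $v_s$-dependence of the state does not contribute to the derivative, and the Hellmann-Feynman theorem gives
\begin{equation*}
\frac{\partial \Omega(\theta_s)}{\partial v_s}=\lrangle{\frac{\partial \hat{\Omega}}{\partial v_s}}=-\lrangle{\hat{\mathcal{P}}}=-P(\theta_s).
\end{equation*}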
Taking the derivative with respect to $\theta_s$ on both sides of the second equation in (\ref{eq:legleg}), we obtain
\begin{equation}
\label{eq:leg1}
\frac{\partial \Omega}{\partial \theta_s}=\frac{\partial E}{\partial\theta_s}-\frac{\partial v_s}{\partial \theta_s}P-v_s\frac{\partial P}{\partial\theta_s}.
\end{equation}
Then using the first equation in (\ref{eq:legleg}) we also obtain
\begin{equation}
\label{eq:leg2}
\frac{\partial \Omega}{\partial \theta_s}=\frac{\partial \Omega}{\partial v_s}\frac{\partial v_s}{\partial \theta_s}=-\frac{\partial v_s}{\partial \theta_s}P.
\end{equation}
Combining \req{eq:leg1} and \req{eq:leg2}, we arrive at the following equation
\begin{equation}
\label{eq:vel}
v_s=\frac{\partial E/\partial \theta_s}{\partial P/\partial \theta_s}=\frac{\partial E}{\partial P}.
\end{equation}
When the soliton is interpreted as a proper excitation, \req{eq:vel} is just the semiclassical velocity formula mentioned above, which determines the soliton velocity $v_s$ as a function of $\theta_s$. The fact that the dark soliton corresponds to either the maximum ($S=0$) or the minimum ($S=1/2$) of the energy then follows from the condition $v_s(\theta_s=\pi)=0$.
As in the derivation for the dark soliton, it is helpful to perform the following gauge transformation of the classical fields:
\begin{equation}
\label{eq:gaugetrans}
\begin{split}
& b(x,t)=e^{ i\frac{\theta_s x}{L}}\tilde{b}(z),\\
& u_n(x,t)=e^{ i\frac{\theta_s x}{L}}\tilde{u}_n(z), \\
& v_n(x,t)=\tilde{v}_n(z),
\end{split}
\end{equation}
where $z=x-v_st$. This leaves us with the analysis of the classical fields $\tilde{b}(z)$, or equivalently $\tilde{\Delta}(z)=t_b\tilde{b}(z)$, together with $\tilde{u}_n(z)$ and $\tilde{v}_n(z)$; we will omit the tilde in the following whenever there is no confusion. The gauge transformation also modifies the boundary conditions of the classical fields:
\begin{equation}
\label{eq:boundCon}
\begin{split}
& b(z+L)=e^{-i\theta_s}b(z), \\
& u_n(z+L)=e^{-i\theta_s}u_n(z), \\
& v_n(z+L)=v_n(z).
\end{split}
\end{equation}
Using these classical fields, again we can write down the expressions for the energy $E$, momentum $P$ and the conserved quantity $N$:
\begin{equation}
\label{eq:EPN}
\begin{split}
E=&\int dz\left( \frac{1}{2}|\partial_zb|^2-(2\mu+\epsilon_b)|b|^2 \right)+E_{\psi},\\
P=& \int dz\left(\sum_{\epsilon_n>0}\frac{u^*_n\overleftrightarrow{\partial_z}u_n+v^*_n\overleftrightarrow{\partial_z}v_n}{2i}\sum_{\sigma}\lrangle{\hat{\gamma}^{\dagger}_{n\sigma}\hat{\gamma}_{n\sigma}} \right)\\
&+\int dz\left(\sum_{\epsilon_n>0}(-i)v_n\overleftrightarrow{\partial_z}v^*_n+\frac{b^*\overleftrightarrow{\partial_z}b}{2i}\right)+\frac{N}{2L}\theta_s,\\
N=&\int dz\sum_{\epsilon_n>0}\left[(u^*_nu_n-v^*_nv_n)\sum_{\sigma}\lrangle{\hat{\gamma}^{\dagger}_{n\sigma}\hat{\gamma}_{n\sigma}}\right]\\
&+\int dz\left(
\sum_{\epsilon_n>0}2v^*_nv_n+2b^*b\right),
\end{split}
\end{equation}
where the energy and momentum are measured from the reference point $E(\theta_s=0)=0$ and $P(\theta_s=0)=0$. The chemical potential is determined by the usual thermodynamic relation $\mu=\partial E/\partial N$.
For a particular filling configuration of the mean field Hamiltonian $\hat{\mathcal{H}}_{\psi}$ in \req{eq:meanpsi}, we now derive the semiclassical equations of motion for the classical fields $b(z)$, $u_n(z)$ and $v_n(z)$. Unlike the dark soliton, the grey soliton only extremizes the energy $E$ under certain constraints. Usually one would extremize the energy $E$ at fixed momentum $P$, but this approach may not respect the desired boundary condition in \req{eq:boundCon}. To overcome this difficulty, we use a modified extremization process. First, we partition the momentum $P$ in \req{eq:EPN} into two parts: the contribution $P_{\psi}$ from the fermion fields and the contribution $P_b$ from the $b$ field:
\begin{equation}
P_b=\int dz\left(\frac{(-i)}{2}b^*\overleftrightarrow{\partial_z}b+\frac{b^*b}{L}\theta_s\right), ~~~ P_{\psi}=P-P_b.
\end{equation}
Then, instead of keeping $P$ fixed, we keep both $P_{\psi}$ and $P_{b}$ fixed, which introduces two Lagrange multipliers $v_{\psi}$ and $v_b$ into the free energy $F$ that we extremize:
\begin{equation}
\label{eq:freeenergyF}
E \to F=E-v_{\psi}P_{\psi}-v_bP_b.
\end{equation}
We can visualize this modified extremization in the functional space spanned by $P_{\psi}$ and $P_{b}$ (see Fig. \ref{fig:extremize}). Each point on the hyperline $P_{\psi}+P_b=P$ corresponds to an extremum of the free energy $F$, and one point among them (the starred point in Fig. \ref{fig:extremize}) is picked out by adjusting the Lagrange multiplier pair $(v_{\psi},v_b)$ to satisfy the boundary condition in \req{eq:boundCon}. This modified extremization process is in spirit equivalent to the method of constrained instantons used in field theory \cite{AFFLECK1981429}. Following the derivation from \req{eq:legleg} to \req{eq:vel}, we also obtain
\begin{equation}
\label{eq:vel1}
dE=v_{\psi}dP_{\psi}+v_bdP_b=v_s(dP_{\psi}+dP_b).
\end{equation}
This admits either the trivial solution $v_{\psi}=v_b=v_s$ or a nontrivial solution such that
\begin{equation}
\label{eq:vel2}
\frac{v_s-v_{\psi}}{v_b-v_s}=\frac{\partial P_b}{\partial P_{\psi}}.
\end{equation}
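The nontrivial solution follows from \req{eq:vel1} by collecting the differentials on each side:
\begin{equation*}
(v_{\psi}-v_s)\,dP_{\psi}=(v_s-v_b)\,dP_b
~~\Longrightarrow~~
\frac{v_s-v_{\psi}}{v_b-v_s}=\frac{\partial P_b}{\partial P_{\psi}},
\end{equation*}
which is just \req{eq:vel2}.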
We will see in later sections that the nontrivial solution is crucial on the deep BCS side.
\begin{figure}[htp!]
\includegraphics[scale=0.3]{Minimization.pdf}
\caption{\footnotesize The functional space for the extremization, spanned by $P_{\psi}$ and $P_{b}$. The thick line is the collection of extremal points, and the starred point is the one that satisfies the required boundary condition in \req{eq:boundCon}.}
\label{fig:extremize}
\end{figure}
Applying the modified extremization process, we obtain the following equations of motion for the classical fields in the limit $L\to \infty$:
\begin{equation}
\label{eq:semi1}
-\frac{1}{2}\partial^2_z b+iv_b\partial_zb-\left( 2\mu+\epsilon_b \right)b+\frac{\delta E_{\psi}}{\delta b^*}=0,
\end{equation}
\begin{equation}
\label{eq:semi2}
\begin{pmatrix}-\partial^2_z-\mu+iv_{\psi}\partial_z & \Delta(z) \\ \Delta^*(z) & \partial^2_z+\mu+iv_{\psi}\partial_z\end{pmatrix}\begin{pmatrix}u_n \\ v_n\end{pmatrix}=\bar{\epsilon}_n \begin{pmatrix}u_n \\ v_n\end{pmatrix},
\end{equation}
where the $\Delta(z)$ field is related to the $b(z)$ field through the definition $\Delta(z)=t_bb(z)$, and the eigenvalue $\bar{\epsilon}_n$ differs from $\epsilon_n$ in \req{eq:meanpsi} and \req{eq:eqforuv} in that $\bar{\epsilon}_n$ contributes to the free energy $F$ in \req{eq:freeenergyF} while $\epsilon_n$ contributes to the energy $E$ in \req{eq:exforEP1}. We should keep this in mind when calculating the energy $E$ later. Also, the proof of the existence of the localized state for a dark soliton generalizes straightforwardly to \req{eq:semi2} for a grey soliton.
\section{Theory of $S=1/2$ Soliton}
In this section, we apply the general formalism outlined above to the $S=1/2$ soliton, which turns out to be simpler than the $S=0$ soliton. The two weak coupling limits - the deep BCS side and the deep BEC side - permit analytical treatment, because on each side one degree of freedom lies high in energy compared to the other, leaving a weakly interacting decoupled theory.
\subsection{Deep BCS Side}
On the deep BCS side, we tune the resonant level of the $b$ field far above the Fermi sea, such that $\epsilon_b<0, |\epsilon_b|\gg \mu$. Since the $b$ field now only acts as a virtual state affecting the low energy physics, we can ignore its dynamics, and its equation of motion reduces to a self-consistency equation:
\begin{equation}
\label{eq:selfcon}
\Delta=\lambda\sum_{\epsilon_n>0}u_{n}v_{n}^*\left(1-\sum_{\sigma}\lrangle{\hat{\gamma}_{n\sigma}^{\dagger}\hat{\gamma}_{n\sigma}}\right)+\tau(i\partial_z\Delta),
\end{equation}
where $\lambda=\frac{|t_b|^2}{-(2\mu+\epsilon_b)}>0$ serves as the effective coupling constant and $\tau=v_b/(2\mu+\epsilon_b)$. For the dark soliton, we should bear in mind that we need to set $v_{\psi}=v_b=0$, and hence $\tau=0$. Combined with the equations of motion for the fermion fields, we can reconstruct the Hamiltonian as
\begin{equation}
\label{eq:BCS1}
\begin{split}
\hat{\mathcal{H}}=&\int dz\left( \sum_{\sigma}\hat{\psi}^{\dagger}_{\sigma}\left( -\partial^2_z-\mu \right)\hat{\psi}_{\sigma}\right)\\
&+\int dz\left(\Delta^*\hat{\psi}_{\downarrow}\hat{\psi}_{\uparrow}+\Delta\hat{\psi}^{\dagger}_{\uparrow}\hat{\psi}^{\dagger}_{\downarrow}+\frac{|\Delta|^2}{\lambda}\right).
\end{split}
\end{equation}
This is just the BCS mean field Hamiltonian of conventional superconductivity, with the $\Delta$ field playing the role of the gap parameter. The system is made up of loosely bound Cooper pairs, and we have a large chemical potential $\mu=k^2_F$, where $k_F=\pi n/2$ and $n=N/L$. Since the low energy physics happens only near the two Fermi points, we can linearize the spectrum around them:
\begin{equation}
\label{eq:LRD}
\begin{pmatrix}u_n\\ v_n\end{pmatrix}=\sum_{\alpha}\begin{pmatrix}u^{\alpha}_n\\ v^{\alpha}_n\end{pmatrix}e^{i\alpha k_Fz},
\end{equation}
where $\alpha=-1$ and $\alpha=1$ denote the left and right moving modes respectively. Correspondingly, \req{eq:semi2} can be linearized to the following form:
\begin{equation}
\label{eq:LinearEq}
\!\!\begin{pmatrix}-i\alpha v_F\partial_z-\alpha v_{\psi}k_F & \kern-1em \Delta(z)\\ \Delta^*(z) & \kern-1em i\alpha v_F\partial_z-\alpha v_{\psi}k_F \end{pmatrix}\begin{pmatrix}u^{\alpha}_n\\ v^{\alpha}_n\end{pmatrix}=\bar{\epsilon}^{\alpha}_n\begin{pmatrix}u^{\alpha}_n\\ v^{\alpha}_n\end{pmatrix},
\end{equation}
where the bar notation of the eigenvalue again reminds us that $\bar{\epsilon}^{\alpha}_n$ contributes to the free energy $F$ instead of the energy $E$. Moreover, due to the linearization made here, we can further determine the eigenvalue $\epsilon^{\alpha}_n$ that contributes to energy $E$ as $\epsilon^{\alpha}_n=\bar{\epsilon}^{\alpha}_n+\alpha v_{\psi}k_F$.
The solution of this linearized Bogoliubov-de Gennes equation with a soliton profile is well established in the literature on polyacetylene and charge density waves \cite{PhysRevB.22.2099,PhysRevB.21.2388,Brazovskii_1989,1980JETP...51..342B}. Essentially, the solvability comes from the fact that \req{eq:LinearEq} has the form of the Dirac equation in one dimension, which can be associated with a nonlinear Schr\"{o}dinger equation for the $\Delta(z)$ field via the inverse scattering method \cite{Faddeev_ISM}. There then exists the following soliton solution:
\begin{equation}
\label{eq:FSoliton}
\Delta(z)=\Delta_0\left[ \cos\frac{\theta_s}{2}-i\sin\frac{\theta_s}{2}\tanh\left( \frac{z}{l_s} \right) \right],
\end{equation}
where the size of the soliton is $l^{-1}_s=\left(\Delta_0\sin\frac{\theta_s}{2}\right)/v_F$. The eigenmodes of \req{eq:LinearEq} fall into two categories. The first category comprises the delocalized states, labelled by the left-right moving index $\alpha=\pm$, the band index $\iota=\pm$ and the momentum $k$:
\begin{equation}
\label{eq:deEigen}
\begin{split}
& \begin{cases}
u^{\alpha}_{\iota k}=\frac{1}{2}\frac{1}{\sqrt{N^{\alpha}_{\iota k}L}}\left[ 1+\alpha\frac{v_Fk+i\Delta_2\tanh\left(\frac{\Delta_2}{v_F}z \right)}{\epsilon_{\iota k}-\alpha \Delta_1} \right]e^{ikz} \\
v^{\alpha}_{\iota k}=\frac{1}{2}\frac{1}{\sqrt{N^{\alpha}_{\iota k}L}}\left[ -\alpha+\frac{v_Fk+i\Delta_2\tanh\left(\frac{\Delta_2}{v_F}z \right)}{\epsilon_{\iota k}-\alpha \Delta_1} \right]e^{ikz}
\end{cases}, \\
& \Delta_1=\Delta_0\cos\frac{\theta_s}{2}, \Delta_2=\Delta_0\sin\frac{\theta_s}{2}, N^{\alpha}_{\iota k}=\frac{\epsilon_{\iota k}}{\epsilon_{\iota k}-\alpha \Delta_1}.
\end{split}
\end{equation}
The corresponding eigenvalues are
\begin{equation}
\label{eq:deEigenE}
\bar{\epsilon}^{\alpha}_{\iota k}=\epsilon_{\iota k}-\alpha v_{\psi}k_F, ~ \epsilon_{\iota k}=\iota\epsilon_k,~\epsilon_k=\sqrt{\Delta^2_0+v^2_Fk^2},
\end{equation}
so that the band $\iota=+$ corresponds to the excitations defined in \req{eq:meanpsi}. The second category comprises the localized states bound to the soliton core, labelled only by the left-right moving index $\alpha$:
\begin{equation}
\label{eq:local1}
\begin{pmatrix}u^{\alpha}_0\\ v^{\alpha}_0\end{pmatrix}=\frac{1}{2}\sqrt{\frac{\Delta_2}{v_F}}\text{sech}\left( \frac{\Delta_2z}{v_F} \right)\begin{pmatrix}1 \\ \alpha\end{pmatrix},
\end{equation}
and the corresponding eigenvalues are:
\begin{equation}
\label{eq:local2}
\bar{\epsilon}^{\alpha}_0=\epsilon^{\alpha}_0-\alpha v_{\psi}k_F, ~~~ \epsilon^{\alpha}_0=\alpha\Delta_0\cos\frac{\theta_s}{2}.
\end{equation}
According to the above expressions for the eigenvalues, the localized states corresponding to the dark soliton ($\theta_s=\pi, v_{\psi}=0$) are degenerate zero modes. This degeneracy, however, is an artifact of the linearization in \req{eq:LRD}; the leading-order correction from the quadratic spectrum lifts it:
\begin{equation}
\label{eq:localeigen}
\epsilon^{\alpha}_0=\alpha \sqrt{\Delta^2_0-\left[\Delta_0\sin\frac{\theta_s}{2}-\frac{\pi v_Fk_F}{2}\text{csch}\left(\frac{\pi l_sk_F}{2} \right)\right]^2}.
\end{equation}
The actual localized states are linear combinations of the left and right moving localized states, so the superscript $\alpha=\pm$ in \req{eq:localeigen} labels positive and negative modes instead of left and right moving modes.
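As a quick numerical illustration of the lifted degeneracy, one can evaluate \req{eq:localeigen} at the dark-soliton point $\theta_s=\pi$, where $l_s=v_F/\Delta_0$: the splitting is finite but exponentially small in $x=v_Fk_F/\Delta_0$. The sketch below (in units of $\Delta_0$, with illustrative values of $x$) assumes nothing beyond the formula above:

```python
import math

def eps0(x):
    """|epsilon_0|/Delta_0 from the corrected spectrum at theta_s = pi,
    as a function of x = v_F k_F / Delta_0 (illustrative check)."""
    # at theta_s = pi: l_s = v_F/Delta_0 and sin(theta_s/2) = 1
    corr = (math.pi * x / 2) / math.sinh(math.pi * x / 2)
    return math.sqrt(1.0 - (1.0 - corr) ** 2)

print(eps0(5.0))    # finite splitting
print(eps0(10.0))   # exponentially smaller for larger v_F k_F / Delta_0
```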
To complete the construction of the soliton, we still need to satisfy the self-consistent requirement in \req{eq:selfcon}. In the present classification of the eigenmodes, it is expressed as
\begin{equation}
\label{eq:selfcon22}
\begin{split}
\Delta=\lambda\sum_{\alpha,k}u^{\alpha}_{+,k}{v^{\alpha}_{+,k}}^*\left(1-\sum_{\sigma}\lrangle{\hat{\gamma}^{\alpha\dagger}_{+,k,\sigma}\hat{\gamma}^{\alpha}_{+,k,\sigma}}\right)\\+\lambda u^{+}_0{v^{+}_0}^*\left(1-\sum_{\sigma}\lrangle{\hat{\gamma}^{+\dagger}_{0,\sigma}\hat{\gamma}^{+}_{0,\sigma}} \right)+\tau (i\partial_z\Delta).
\end{split}
\end{equation}
The $S=1/2$ soliton is obtained by setting $\sum_{\sigma}\lrangle{\hat{\gamma}^{\alpha\dagger}_{+,k,\sigma}\hat{\gamma}^{\alpha}_{+,k,\sigma}}=0$ and $\sum_{\sigma}\lrangle{\hat{\gamma}^{+\dagger}_{0,\sigma}\hat{\gamma}^{+}_{0,\sigma}}=1$; the above equation then reduces to
\begin{equation}
\label{eq:selfcon2}
\Delta=\lambda \int\frac{dk}{2\pi}\frac{\Delta}{\epsilon_k}+\frac{\lambda}{4}\frac{\Delta_0}{v_F}\frac{\theta_s-\pi}{\pi}\frac{\sin\frac{\theta_s}{2}}{\cosh^2\left(\frac{\Delta_2}{v_F}z \right)}+\tau(i\partial_z\Delta).
\end{equation}
For the dark soliton, the second term on the right-hand side vanishes and we need to set $\tau=0$; the resulting equation is then exactly the one of conventional BCS theory with a homogeneous gap parameter:
\begin{equation}
\label{eq:normalself}
1=\lambda \int\frac{dk}{2\pi}\frac{1}{\epsilon_k} \Rightarrow \Delta_0\propto \exp\left(-\frac{1}{\lambda\nu(\epsilon_F)} \right),
\end{equation}
where $\nu(\epsilon_F)$ is the density of states on the Fermi level.
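A numerical sanity check of this scaling: with the conventions of \req{eq:normalself} and a hard momentum cutoff $\Lambda$, the integral evaluates to $(1/\pi v_F)\,\mathrm{arcsinh}(v_F\Lambda/\Delta_0)$, so $\ln\Delta_0$ should be linear in $1/\lambda$ with slope $-\pi v_F$, i.e. $\nu(\epsilon_F)=1/\pi v_F$ in the linearized theory. The units, cutoff and coupling values below are illustrative assumptions:

```python
import numpy as np

v_F, Lam = 1.0, 50.0                      # illustrative units and UV cutoff
k = np.linspace(-Lam, Lam, 100001)
dk = k[1] - k[0]

def gap_integral(Delta):
    # Int dk/(2 pi) 1/sqrt(Delta^2 + v_F^2 k^2), trapezoidal rule
    y = 1.0 / (2 * np.pi * np.sqrt(Delta**2 + v_F**2 * k**2))
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dk

def solve_gap(lam):
    lo, hi = 1e-10, 10.0                  # bisection on a log scale
    for _ in range(80):
        mid = np.sqrt(lo * hi)
        if lam * gap_integral(mid) > 1.0:
            lo = mid                      # Delta too small: integral too big
        else:
            hi = mid
    return np.sqrt(lo * hi)

lams = np.array([0.40, 0.45, 0.50, 0.55, 0.60])
slope = np.polyfit(1.0 / lams, np.log([solve_gap(l) for l in lams]), 1)[0]
print(slope)   # ≈ -pi*v_F, i.e. Delta_0 ∝ exp(-1/(lambda*nu(eps_F)))
```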
For the moving grey soliton, the second term on the right-hand side of \req{eq:selfcon2} is finite, but it can be canceled by the third term under the choice
\begin{equation}
\label{eq:tautau}
\tau=\frac{\lambda}{4\Delta_0}\frac{\pi-\theta_s}{\pi}\sin^{-1}\frac{\theta_s}{2},
\end{equation}
which then determines $v_b$ as
\begin{equation}
\label{eq:vbbb}
v_b=\frac{|t_b|^2}{4\Delta_0}\frac{\theta_s-\pi}{\pi}\sin^{-1}\frac{\theta_s}{2}.
\end{equation}
Note that this determination of the parameters $\tau, v_b$ is consistent with $\tau=0, v_b=0$ at $\theta_s=\pi$ in the case of the dark soliton.
Having specified the $S=1/2$ soliton, we can proceed to calculate its energy and momentum near the dark soliton up to leading order in $\xi=\theta_s-\pi$, using the formulas in \req{eq:EPN} and \req{eq:meanpsi}. The calculation consists of first determining the phase shift $\delta(k)$ of the continuous spectrum from the boundary conditions in \req{eq:boundCon}, and then converting the summations over $k$ into integrations while taking into account the correction due to the phase shift $\delta(k)$ in the limit $L\to \infty$ \cite{PhysRevB.21.2388,PhysRevA.91.023616}. Note that $\epsilon^{\alpha}_n$, rather than $\bar{\epsilon}^{\alpha}_n$, should be used in the calculation of the energy $E$. The final result is:
\begin{equation}
\label{eq:Dispersion1}
\begin{split}
& E^{\text{BCS}}_{1/2}(\theta_s)=\frac{2\Delta_0}{\pi}\left(1+\frac{1}{8}\xi^2\right),\\
& P^{\text{BCS}}_{1/2}(\theta_s)=k_F-\frac{\Delta_0}{2v_F}\xi.
\end{split}
\end{equation}
This translates into the following dispersion relation and soliton velocity up to leading order in $\xi$:
\begin{equation}
\label{eq:velocity}
\begin{split}
& E_{1/2}=\frac{2\Delta_0}{\pi}\left(1+\frac{v^2_F(P_{1/2}-k_F)^2}{2\Delta^2_0} \right),\\
& v^\text{BCS}_s=\frac{\partial E_{1/2}}{\partial P_{1/2}}=-\frac{\xi}{\pi}v_F.
\end{split}
\end{equation}
It is clear that the minimum energy is achieved exactly at the Fermi momentum $k_F=\pi n/2$, as observed in the exact solutions. Also, the soliton velocity now is characterized by the Fermi velocity $v_F$, which is also consistent with the exact solutions. A comparison of the current semiclassical result with the exact solution is shown in Fig. \ref{fig:fermion2}, where the agreement is good in the vicinity of the dark soliton.
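The dispersion \req{eq:Dispersion1} and the velocity in \req{eq:velocity} can be cross-checked by finite differences; the parameter values below are arbitrary illustrative choices:

```python
import math

Delta0, v_F, k_F = 0.2, 1.0, 1.0                 # illustrative values

E = lambda xi: 2 * Delta0 / math.pi * (1 + xi**2 / 8)   # eq. (Dispersion1)
P = lambda xi: k_F - Delta0 * xi / (2 * v_F)

xi, h = 0.3, 1e-6
v_s = (E(xi + h) - E(xi - h)) / (P(xi + h) - P(xi - h))
print(v_s, -xi * v_F / math.pi)                  # the two agree

# eq. (velocity) written as E(P) reproduces E(xi) identically
E_of_P = 2 * Delta0 / math.pi * (1 + v_F**2 * (P(xi) - k_F)**2 / (2 * Delta0**2))
print(abs(E_of_P - E(xi)))
```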
\begin{figure}
\includegraphics[scale=0.5]{fermion2.pdf}
\caption{\footnotesize The typical $S=1/2$ excitation spectrum in the semiclassical result and exact solution. The latter is plotted for $\gamma=c_F/n=1.13$, and correspondingly the former is plotted taking the spin gap at the same coupling strength as the input parameter.}
\label{fig:fermion2}
\end{figure}
To complete the analysis, we still need to determine $v_{\psi}$ and $v_b$ from \req{eq:vel2}. In order to do that, we need the expressions for $P_{\psi}$ and $P_b$ respectively:
\begin{equation}
P_b=\frac{\Delta^2_0}{|t_b|^2}(\pi+2\xi), ~~~ P_{\psi}=P^{\text{BCS}}_{1/2}(\theta_s)-P_b.
\end{equation}
Substituting them into \req{eq:vel2} and using \req{eq:vbbb}, we obtain up to leading order:
\begin{equation}
\label{eq:vbbb2}
v_b=\frac{|t_b|^2}{4\pi\Delta_0}\xi, ~~~ v_{\psi}=v^{\text{BCS}}_s-\frac{v^{\text{BCS}}_s-v_b}{1+|t_b|^2/(4\Delta_0v_F)}\approx v^{\text{BCS}}_s,
\end{equation}
where the expression for $v_{\psi}$ will be of use in a later section when we analyze the $S=0$ soliton. This closes our analysis of the $S=1/2$ soliton on the deep BCS side.
\subsection{Deep BEC Side}
On the BEC side, we tune the resonant level to a tightly bound molecule with binding energy $\epsilon_b>0$. We then have a negative chemical potential $\mu<0$, characterizing the absence of a Fermi sea, and we need to consider the quadratic Bogoliubov-de Gennes equation in \req{eq:semi2}. For delocalized states characterized by momentum $k$, we formally obtain the spectrum of Bogoliubov quasiparticles as:
\begin{equation}
\epsilon_k=\sqrt{(k^2-\mu)^2+|\Delta|^2}.
\end{equation}
For large negative chemical potential $\mu$, we can expand the spectrum as
\begin{equation}
\epsilon_k=(k^2-\mu)+\frac{|\Delta|^2}{2(k^2-\mu)}-\frac{|\Delta|^4}{8(k^2-\mu)^3}+\cdots.
\end{equation}
Substituting this into \req{eq:semi1}, we can bring the equation of motion for $b=\Delta/t_b$ to the following form known as the Gross-Pitaevskii equation:
\begin{equation}
\label{eq:GP}
-\frac{1}{2}\partial^2_zb+iv_b\partial_zb+2g(|b|^2-n_s)b=0,
\end{equation}
where the parameters are defined via
\begin{equation}
g=\frac{3|t_b|^4}{128|\mu|^{5/2}}, ~~~ n_s=\frac{\frac{|t_b|^2}{4|\mu|^{1/2}}+2\mu+\epsilon_b}{\frac{3|t_b|^4}{64|\mu|^{5/2}}}.
\end{equation}
The Gross-Pitaevskii equation, a nonlinear Schr\"{o}dinger equation, has been extensively studied in the literature \cite{Tsuzuki1971,1972JETP...34...62Z,1973JETP...37..823Z,Kulish:1976ek,PhysRevA.78.053630}. It also supports a soliton solution:
\begin{equation}
\label{eq:Soliton}
b(z)=\sqrt{n_s}\left( \cos\frac{\theta_s}{2}-i\sin\frac{\theta_s}{2}\tanh\frac{z}{l_s} \right),
\end{equation}
where the size of the soliton is $l_s=[v_c\sin(\theta_s/2)]^{-1}$, the Lagrange multiplier is $v_b=v_c\cos(\theta_s/2)$, and $v_c=\sqrt{gn}$ is the sound velocity. By calculating the total mass of the system we can also determine $n_s=n/2$.
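That this profile indeed solves \req{eq:GP} with $l_s=[v_c\sin(\theta_s/2)]^{-1}$ and $v_b=v_c\cos(\theta_s/2)$ can be verified directly by evaluating the finite-difference residual of the equation on a grid; the values of $g$, $n_s$ and $\theta_s$ below are illustrative, and the residual vanishes to discretization accuracy:

```python
import numpy as np

g, n_s, theta = 0.1, 1.0, 2.0             # illustrative parameters
v_c = np.sqrt(g * 2 * n_s)                # v_c = sqrt(g n), with n = 2 n_s
l_s = 1.0 / (v_c * np.sin(theta / 2))     # soliton size
v_b = v_c * np.cos(theta / 2)             # Lagrange multiplier

z = np.linspace(-20, 20, 4001)
dz = z[1] - z[0]
b = np.sqrt(n_s) * (np.cos(theta / 2) - 1j * np.sin(theta / 2) * np.tanh(z / l_s))

db  = (b[2:] - b[:-2]) / (2 * dz)                 # central first derivative
d2b = (b[2:] - 2 * b[1:-1] + b[:-2]) / dz**2      # central second derivative
bi  = b[1:-1]
# residual of  -(1/2) b'' + i v_b b' + 2 g (|b|^2 - n_s) b = 0
res = -0.5 * d2b + 1j * v_b * db + 2 * g * (np.abs(bi)**2 - n_s) * bi
print(np.max(np.abs(res)))                # O(dz^2): numerically zero
```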
The $S=1/2$ soliton is constructed by adding an extra fermion to the system, which we can then effectively describe as follows. There is a weakly interacting background (the bound pairs) with effective coupling constant $g$. The extra fermion interacts locally with the background via an effective coupling constant $g'$, which can be calculated perturbatively from \req{eq:BCS-BEC} in the narrow resonance limit. For this purpose, we consider the scattering process $\psi b\to \psi b$, whose Feynman diagrams are shown in Fig. \ref{fig:Feynman}.
\begin{figure}
\includegraphics[scale=0.4]{Feynman.pdf}
\caption{\footnotesize The Feynman diagrams (left) for leading contribution to the scattering process $\psi b\to \psi b$ (right), where the solid line denotes the fermion propagator, the wiggled line denotes the boson propagator, the fermion-boson vertex denotes the resonant coupling $t_b$, and the dotted vertex on the right denotes the effective coupling $g'$.}
\label{fig:Feynman}
\end{figure}
The scattering amplitude up to leading order is then
\begin{equation}
g'(\omega,k)=-\frac{|t_b|^2}{\omega-\epsilon_k}\approx \frac{|t_b|^2}{-2\mu}>0.
\end{equation}
As a result, the added fermion $\psi$ can be described as a quantum particle moving in the potential created by the background:
\begin{equation}
\label{eq:GP2}
(-\partial^2_z-\mu+iv_{\psi}\partial_z)\psi+g'(|b|^2-n_s)\psi=\bar{\epsilon}\psi,
\end{equation}
where in the second term on the left-hand side we have subtracted the interaction of the fermion with the uniform background (the constant term $g'n_s$), since it can be incorporated into the chemical potential. Performing the gauge transformation $\psi\to\psi e^{iv_{\psi}z/2}$, which shifts the momentum by $v_{\psi}/2$, and substituting \req{eq:Soliton} into \req{eq:GP2}, we end up with a Schr\"{o}dinger equation for a particle moving in the P\"{o}schl-Teller potential \cite{Poschl_Teller}:
\begin{equation}
-\partial^2_z\psi-\alpha^2\frac{\zeta(\zeta-1)}{\cosh^2\alpha z}\psi=\left(\bar{\epsilon}+\mu+\frac{v_{\psi}^2}{4}\right)\psi,
\end{equation}
where the two parameters $\alpha$ and $\zeta>1$ are determined by
\begin{equation}
\alpha=v_c\sin(\theta_s/2), ~~~ \alpha^2\zeta(\zeta-1)=g'n_s\sin^2(\theta_s/2).
\end{equation}
The P\"{o}schl-Teller potential produces a bound state with the following energy:
\begin{equation}
\begin{split}
\bar{\epsilon}_0& =-\alpha^2(\zeta-1)^2-\frac{v_{\psi}^2}{4}-\mu\\
&=-v_c^2\sin^2\frac{\theta_s}{2}\left( \frac{\sqrt{1+2g'/g}-1}{2} \right)^2-\frac{v_{\psi}^2}{4}+|\mu|.
\end{split}
\end{equation}
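The P\"{o}schl-Teller binding energy $-\alpha^2(\zeta-1)^2$ entering the first term can be confirmed by direct diagonalization of $-\partial_z^2-\alpha^2\zeta(\zeta-1)\,\mathrm{sech}^2(\alpha z)$ on a grid; the values of $\alpha$ and $\zeta$ below are illustrative assumptions, not the physical parameters above:

```python
import numpy as np

alpha, zeta = 1.0, 2.5                    # illustrative parameters
L, N = 15.0, 1501
z = np.linspace(-L, L, N)
dz = z[1] - z[0]

# H = -d^2/dz^2 - alpha^2 zeta(zeta-1) sech^2(alpha z), central differences
V = -alpha**2 * zeta * (zeta - 1) / np.cosh(alpha * z)**2
H = (np.diag(2.0 / dz**2 + V)
     - np.diag(np.ones(N - 1) / dz**2, 1)
     - np.diag(np.ones(N - 1) / dz**2, -1))

E_bound = np.linalg.eigvalsh(H)[0]        # lowest eigenvalue
print(E_bound, -alpha**2 * (zeta - 1)**2) # ≈ -2.25 in both cases
```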
Also the momentum of this bound state is simply $k_0=v_{\psi}/2$, and we can determine the eigenvalue $\epsilon_0$ that contributes to the energy $E$ as
\begin{equation}
\epsilon_0=\bar{\epsilon}_0+v_{\psi}k_0.
\end{equation}
Then the total energy $E_{1/2}(\theta_s)$ and momentum $P_{1/2}(\theta_s)$ of the system can be determined according to \req{eq:EPN}:
\begin{equation}
\label{eq:S2}
\begin{split}
E^{\text{BEC}}_{1/2}(\theta_s) &=\int dz~\left(\frac{1}{2}|\partial_zb|^2+g(|b|^2-n_s)^2\right)+\epsilon_0\\
& =n_sv_c\left[\frac{4}{3}\sin^3\frac{\theta_s}{2}-2u\sin^2\frac{\theta_s}{2} \right]+\frac{1}{4}v^2_{\psi}+|\mu|,\\
P^{\text{BEC}}_{1/2}(\theta_s) & =\int dz~\frac{1}{2i}(b^*\partial_zb-b\partial_zb^*)+n_s\theta_s+k_0\\
& = n_s(\theta_s-\sin\theta_s)+\frac{1}{2}v_{\psi},
\end{split}
\end{equation}
where $u=\sqrt{\frac{g}{n}}\left( \frac{\sqrt{1+2g'/g}-1}{2} \right)^2\gg 1$ in the narrow resonance limit. Thus the minimum of $E^{\text{BEC}}_{1/2}(\theta_s)$ is at $\theta_s=\pi$ with the following minimum energy:
\begin{equation}
E^\text{BEC}_{1/2}(\theta_s=\pi)-|\mu|=n_sv_c\left(\frac{4}{3}-2u\right)<0,
\end{equation}
so $E^{\text{BEC}}_{1/2}(\theta_s=\pi)$ is lower than the energy of adding one particle with zero momentum to the uniform background of bound pairs. Again, we arrive at the conclusion that the minimum energy is achieved exactly at the Fermi momentum $k_F=\pi n_s=\pi n/2$.
We are then left with the determination of the velocities $v_{\psi},v_s$ in addition to $v_b=v_c\cos(\theta_s/2)$, which should be obtained by solving \req{eq:vel1}. Here the trivial solution suffices:
\begin{equation}
\label{eq:velspin}
v_{\psi}=v_b=v^\text{BEC}_s=v_c\cos\frac{\theta_s}{2}.
\end{equation}
A comparison of the current semiclassical result and the exact solution is shown in Fig. \ref{fig:fermion1}; they agree well in the vicinity of $P=k_F$.
\begin{figure}
\includegraphics[scale=0.5]{fermion1.pdf}
\caption{\footnotesize The typical $S=1/2$ excitation spectrum in the semiclassical result and the exact solution. The former is plotted for $\gamma=g/n=0.07$, and correspondingly the latter is plotted for $\delta \gamma=\gamma_1-\gamma_2=0.07$.}
\label{fig:fermion1}
\end{figure}
In between the deep BCS and BEC sides, the physical picture of the $S=1/2$ excitations remains the same -- they are moving solitons with one extra fermion bound to the soliton core. This explains what we observed in the exact solutions: instead of adding one particle to the uniform background, the energetically more favorable excitation is the addition of one particle to the dark soliton. The energy cost of creating the dark soliton is offset by the energy gained by trapping the particle inside the dip of the density profile. The fact that the minimum energy is achieved exactly at the Fermi momentum is then a consequence of the soliton formation.
\section{Theory of $S=0$ Soliton}
In this section, we apply the general formalism to the $S=0$ soliton, where we will find a crossover between the two weak coupling limits of the soliton structure.
\subsection{Deep BEC Side}
The analysis on the deep BEC side is simpler, since we have only the Gross-Pitaevskii equation for the classical field $b(z)$ given in \req{eq:GP}, and we do not need to worry about the self-consistency requirement of \req{eq:selfcon}. In fact, from \req{eq:GP} we can reconstruct the low energy effective Hamiltonian as
\begin{equation}
\hat{\mathcal{H}}=\int dz\left[\frac{1}{2}\partial_z\hat{b}^{\dagger}\partial_z\hat{b}+g\hat{b}^{\dagger}\hat{b}^{\dagger}\hat{b}\hat{b} \right],
\end{equation}
which is just the Lieb-Liniger model defined in \req{eq:HLL} but with the mass $m_b=1$. As mentioned previously, the fact that the $S=0$ (type-II) excitations of the Lieb-Liniger model have the physical interpretation of moving solitons is well understood \cite{Kulish:1976ek,PhysRevA.78.053630}. The energy and momentum can be calculated directly using the soliton profile in \req{eq:Soliton}:
\begin{equation}
\begin{split}
E^\text{BEC}_0(\theta_s)=&\int dz~\left(\frac{1}{2}|\partial_zb|^2+g(|b|^2-n_s)^2\right)\\
=&\frac{4}{3}n_sv_c\sin^3\frac{\theta_s}{2}\\
P^\text{BEC}_0(\theta_s)=&\int dz~\frac{1}{2i}(b^*\partial_zb-b\partial_zb^*)+n_s\theta_s\\
=&n_s(\theta_s-\sin\theta_s),
\end{split}
\end{equation}
Then the soliton velocity is determined as $v^\text{BEC}_s=\partial E_0/\partial P_0=v_c\cos(\theta_s/2)$, consistent with the result in \req{eq:velspin}. A comparison of the semiclassical result with the exact solution is shown in Fig. \ref{fig:soliton1}; in the weak coupling limit, the match becomes nearly perfect \cite{Kulish:1976ek}.
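A finite-difference check of $v^\text{BEC}_s=v_c\cos(\theta_s/2)$ from the expressions for $E_0(\theta_s)$ and $P_0(\theta_s)$ above (the values of $n_s$, $v_c$ and $\theta_s$ are illustrative):

```python
import math

n_s, v_c = 1.0, 0.5                       # illustrative values

E0 = lambda th: (4.0 / 3.0) * n_s * v_c * math.sin(th / 2)**3
P0 = lambda th: n_s * (th - math.sin(th))

th, h = 2.0, 1e-5
v_s = (E0(th + h) - E0(th - h)) / (P0(th + h) - P0(th - h))
print(v_s, v_c * math.cos(th / 2))        # the two agree
```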
\begin{figure}
\includegraphics[scale=0.5]{soliton1.pdf}
\caption{\footnotesize The typical $S=0$ excitation spectrum in the semiclassical result and the exact solution. The former is plotted for $\gamma=g/n=0.06$, and correspondingly the latter is plotted for $\delta\gamma=\gamma_1-\gamma_2=0.06$.}
\label{fig:soliton1}
\end{figure}
\subsection{Deep BCS Side}
Now we move on to the deep BCS side, where the situation is complicated by the self-consistency condition in \req{eq:selfcon22}. The $S=0$ soliton is obtained by setting $\sum_{\sigma}\lrangle{\hat{\gamma}^{\alpha\dagger}_{+,k,\sigma}\hat{\gamma}^{\alpha}_{+,k,\sigma}}=0$ and $\sum_{\sigma}\lrangle{\hat{\gamma}^{+\dagger}_{0,\sigma}\hat{\gamma}^{+}_{0,\sigma}}=0$; then \req{eq:selfcon22} reduces to
\begin{equation}
\Delta=\lambda \int\frac{dk}{2\pi}\frac{\Delta}{\epsilon_k}+\frac{\lambda}{4}\frac{\Delta_0}{v_F}\frac{\theta_s}{\pi}\frac{\sin\frac{\theta_s}{2}}{\cosh^2\left(\frac{\Delta_2}{v_F}z \right)}+\tau(i\partial_z\Delta).
\end{equation}
Compared with \req{eq:selfcon2} for the $S=1/2$ soliton, the self-consistency equation here differs in the second term on the right-hand side: it is now proportional to $\theta_s$ instead of $(\theta_s-\pi)$. Since the dark soliton corresponds to the parameterization $\theta_s=\pi$, $\tau=0$, we cannot fulfill the self-consistency equation for the $S=0$ soliton thus constructed, and the ground state of $\hat{\mathcal{H}}_{\psi}$ in \req{eq:meanpsi} does not correspond to a proper $S=0$ excitation, as mentioned in section III.
A solution of the above problem was conjectured by \citet{PhysRevA.91.023616} and rested on the assumption that both negative- and positive-energy localized states are occupied with fractional occupation numbers. We find this solution to be incorrect for the following reasons: (1) only positive-energy states of the BCS Hamiltonian are meaningful, and including the negative-energy ones in fact describes the same states by different variables; (2) even if this mistake is rectified, fractional occupation of the localized state is forbidden at the mean field level, as this state is not connected to the continuum (unlike a Fano resonance); (3) it gives values of the energy and of the curvature at $P=k_F$ that are inconsistent with the exact solution \cite{1367-2630-18-7-075004}.
Here, inspired by the fact that the maximum energy is on the scale of the Fermi energy, we propose the following construction of the $S=0$ soliton. We break the weakly bound pair at the bottom of the Fermi sea, which leaves us with two fermions. We then put one of them on the localized level to produce an $S=1/2$ soliton; this is possible because breaking the bound pair at the bottom of the Fermi sea has no effect on the linearized spectrum. After that, we can form a singlet from the other fermion and the $S=1/2$ soliton, which gives us the desired $S=0$ soliton. To carry out this construction, we need to go beyond the present mean field analysis and include the Fock potential produced by the spin density on fermions of the opposite spin (see Fig. \ref{fig:Feynman2}). The Hartree potential is not considered here, since it is not sensitive to spin.
\begin{figure}[htp!]
\includegraphics[scale=0.37]{Feynman2.pdf}
\caption{\footnotesize The diagrammatic representation of the mean field potential (left) and the Fock potential (right) experienced by fermions. The solid line denotes the fermion propagator, the wiggled line denotes the boson propagator, and the fermion-boson vertex denotes the resonant coupling $t_b$. The thin arrow on the fermion propagator denotes the spin direction of the fermion.}
\label{fig:Feynman2}
\end{figure}
By including the Fock potential, the equation of motion for the fermionic fields is modified as:
\begin{equation}
\label{eq:modeequv}
\begin{split}
&\begin{pmatrix}-i\alpha v_F\partial_z-\alpha v_{\psi}k_F & \Delta(z)\\ \Delta^*(z) & i\alpha v_F\partial_z-\alpha v_{\psi}k_F \end{pmatrix}\begin{pmatrix}u^{\alpha}_n\\ v^{\alpha}_n\end{pmatrix}\\
&+\begin{pmatrix}-V_{\text{F}} & 0\\ 0 & V_{\text{F}}\end{pmatrix}\begin{pmatrix}u^{\alpha}_n\\ v^{\alpha}_n\end{pmatrix}=\bar{\epsilon}^{\alpha}_n\begin{pmatrix}u^{\alpha}_n\\ v^{\alpha}_n\end{pmatrix}.
\end{split}
\end{equation}
The Fock potential is $V_{\text{F}}=\frac{\lambda}{2}\frac{\Delta_2}{v_F}\text{sech}^2\left( \frac{\Delta_2}{v_F}z \right)$, where we have incorporated the constant part $\lambda n/2$ of $V_{\text{F}}$ into the chemical potential. Also, from \req{eq:vbbb2} we have $v_{\psi}\approx v^{\text{BCS}}_s$. The first term in $V_{\text{F}}$ comes from the fermions in the continuous spectrum and the second term comes from the fermion in the localized state. For states with momentum near $k_F$, the Fock potential $V_{\text{F}}$ acts only as a small correction to the chemical potential, while for states near zero momentum, $V_{\text{F}}$ has the more dramatic effect of producing an extra localized state. In the latter case, we can ignore the small off-diagonal components in \req{eq:modeequv}, and the hole excitation near zero momentum is described by the Schr\"{o}dinger equation without linearization:
\begin{equation}
\label{eq:linearconfine}
\left(\partial^2_z+\mu+iv_{\psi}\partial_z\right)\psi(z)+V_{\text{F}}\psi(z)=\bar{\epsilon}\psi(z).
\end{equation}
As usual, we perform the gauge transformation $\psi(z)\to \psi(z)e^{-iv_{\psi}z/2}$, which shifts the momentum by $-v_{\psi}/2$, and again we arrive at the Schr\"{o}dinger equation for a particle moving in a P\"{o}schl-Teller potential:
\begin{equation}
\label{eq:ExtraBound}
-\partial^2_z\psi-\alpha^2\zeta(\zeta-1)\text{sech}^2\left( \alpha z \right)\psi=\left(-\bar{\epsilon}+\mu +\frac{v_{\psi}^2}{4}\right)\psi,
\end{equation}
where $\alpha=\frac{\Delta_2}{v_F}$, $\alpha^2\zeta(\zeta-1)=\frac{\lambda\Delta_2}{2v_F}$. This produces a bound hole state with energy
\begin{equation}
\label{eq:ExtraBound3}
\bar{\epsilon}_1=\frac{\Delta_0^3}{32\lambda v^3_F}\left(1-\frac{3}{8}\xi^2\right)+\mu+\frac{v^2_{\psi}}{4}.
\end{equation}
Also the momentum of this bound hole state is simply $k_1=-v_{\psi}/2$, and we can determine the eigenvalue $\epsilon_1$ that contributes to the energy $E$ as
\begin{equation}
\epsilon_1=\bar{\epsilon}_1+v_{\psi}k_1.
\end{equation}
This localized state then combines with the $S=1/2$ soliton to form a singlet, which is the desired $S=0$ soliton (see Fig. \ref{fig:filling}).
\begin{figure}[htp!]
\includegraphics[scale=0.24]{filling.pdf}
\caption{\footnotesize a) The spectrum in the Fock approximation, where $\epsilon^{\pm}_0$ represents the two localized states in \req{eq:localeigen}. They are linear combinations of the left and right moving localized states in \req{eq:local1} and \req{eq:local2} once nonlinear effects are taken into consideration. $\epsilon_1$ represents the extra localized state produced by the Fock potential. b) The configuration of the $S=0$ soliton, which is formed as a singlet of the two localized states with energies $\epsilon_1$ and $\epsilon^{+}_0$.}
\label{fig:filling}
\end{figure}
We can now determine the energy and the momentum of the $S=0$ soliton as
\begin{equation}
\begin{split}
E^\text{BCS}_0(\theta_s)& =E^\text{BCS}_{1/2}(\theta_s)+\epsilon_1\\
&= E_0+\left(\frac{\Delta_0}{4\pi}-\frac{3}{8}\frac{\Delta_0^3}{32\lambda v^3_F}-\frac{v_F^2}{4\pi^2}\right)\xi^2,\\
& E_0=\frac{2\Delta_0}{\pi}+\mu+\frac{\Delta^3_0}{32\lambda v_F^3},\\
P^\text{BCS}_0(\theta_s)&=P^\text{BCS}_{1/2}(\theta_s)+k_1 \\
& =k_F+\left(\frac{v_F}{2\pi}-\frac{\Delta_0}{2v_F}\right)\xi
\end{split}
\end{equation}
where we have used the expression $v_{\psi}\approx v^{\text{BCS}}_{s}$ from \req{eq:velocity}. In the weak coupling limit, we have $v^2_F\gg \Delta_0$; the energy is then indeed on the scale of the Fermi energy $\mu=\epsilon_F$, in agreement with the exact solutions, and the dispersion of the $S=0$ soliton can be approximated as
\begin{equation}
\begin{split}
E_0(P_0)\approx E_0-(P_0-k_F)^2.
\end{split}
\end{equation}
This agrees with the exact solutions and reduces to the noninteracting-fermion result. A comparison of the current semiclassical result with the exact solution is shown in Fig. \ref{fig:soliton2}, where the former captures the basic features of the latter.
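The curvature can be checked numerically: writing $E^\text{BCS}_0-E_0=A\xi^2$ and $P^\text{BCS}_0-k_F=B\xi$ with the coefficients above, the ratio $A/B^2$ approaches $-1$ for weak coupling, where the $-v_F^2/4\pi^2$ term dominates. The parameter values below are illustrative weak-coupling assumptions:

```python
import math

Delta0, v_F, lam = 0.01, 10.0, 1.0        # weak coupling: v_F^2 >> Delta0

A = (Delta0 / (4 * math.pi)
     - (3.0 / 8.0) * Delta0**3 / (32 * lam * v_F**3)
     - v_F**2 / (4 * math.pi**2))         # coefficient of xi^2 in E
B = v_F / (2 * math.pi) - Delta0 / (2 * v_F)   # coefficient of xi in P - k_F

curvature = A / B**2                      # E - E_0 ≈ curvature * (P - k_F)^2
print(curvature)                          # ≈ -1
```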
\begin{figure}
\includegraphics[scale=0.5]{soliton2.pdf}
\caption{\footnotesize The typical $S=0$ excitation spectrum in the semiclassical result and the exact solution. The latter is plotted for $\gamma=c_F/n=0.15$, and correspondingly the former is plotted taking the spin gap at the same coupling strength as the input parameter.}
\label{fig:soliton2}
\end{figure}
\subsection{Crossover Problem}
Here we argue that in the crossover region the $S=0$ soliton is not described by a simple mean field configuration but rather by a linear combination of the states considered in subsections A and B.
Unlike the $S=1/2$ soliton, the $S=0$ solitons on the BEC side and on the BCS side have different natures. The former is just the usual soliton formed in the condensate of bound pairs, while the latter is a singlet formed by two localized spins (one trapped by the Fock potential of the other); we refer to the latter as a dressed soliton. The dressed soliton can tunnel into the usual soliton configuration, since the state localized by the Fock potential lies in the continuous spectrum (see Fig. \ref{fig:filling}). On the deep BCS side, this tunneling is negligible. As we tune the resonant level away from the deep BCS side, the tunneling between the dressed soliton and the usual soliton becomes stronger, and the physical soliton is a linear combination of the two. Finally, on the deep BEC side, the usual soliton dominates: the two localized spins of the BCS side bind together to become one of the bound pairs on the BEC side. No abrupt change occurs in the soliton formation along the crossover, just as we observed in the excitation spectra of the exactly solvable models.
The above qualitative argument can be made more rigorous by analyzing the tunneling of the state localized by the Fock potential into the quasiparticle continuum. The analysis is performed for \req{eq:modeequv} in the regime where the chemical potential $\mu$ is the largest energy scale near the BCS side, so the off-diagonal part can be treated perturbatively. At zeroth order we have both electron-like eigenstates $\ket{\Psi^e}$ and hole-like eigenstates $\ket{\Psi^h}$, and in each sector we obtain a localized state $\ket{\Psi^{e,h}_0}$. Here we focus on the state $\ket{\Psi^h_0}$ with energy $\epsilon^h_0$ on the scale of $\mu$, which tunnels into the continuum of electron-like states $\ket{\Psi^e_k}$ once the off-diagonal perturbation sets in. The resonance width due to this tunneling can be calculated using Fermi's golden rule:
\begin{equation}
\Gamma=2\pi \nu_{\uparrow,\downarrow}(2\epsilon_F)|\mathcal{M}|^2=\frac{1}{\sqrt{2}v_F}|\mathcal{M}|^2,
\end{equation}
where we have taken the density of states $\nu_{\uparrow,\downarrow}(\epsilon)=\nu(\epsilon)/2$ for each spin at $2\epsilon_F$, since the localized level is close to the chemical potential $\mu$, and $\mathcal{M}=\bra{\Psi^e_k}\begin{pmatrix}0 & \Delta \\ \Delta^* & 0 \end{pmatrix}\ket{\Psi^h_0}$ is the matrix element between the continuum and the localized state. It can be estimated by the Fourier component $\tilde{\Delta}(\sqrt{2}k_F)$ of the soliton profile, normalized by the size of the soliton:
\begin{equation}
\label{eq:Gamma}
|\mathcal{M}|^2=\frac{1}{l_s}|\tilde{\Delta}(\sqrt{2}k_F)|^2=\frac{\Delta^2_0l_s\pi^2\sin^2\frac{\theta_s}{2}}{\sinh^2\left( \frac{l_s\pi k_F}{\sqrt{2}} \right)}
\end{equation}
In the end we obtain the following result for the resonance width near the BCS side:
\begin{equation}
\Gamma=\frac{\Delta^2_0l_s\pi^2\sin^2\frac{\theta_s}{2}}{\sqrt{2}v_F\sinh^2\left( \frac{l_s\pi k_F}{\sqrt{2}} \right)}
\end{equation}
For large chemical potential on the BCS side, the ratio between the resonance width $\Gamma$ and the energy $\epsilon^h_0$ is exponentially small, and the localized state remains well defined; equivalently, the tunneling into the usual soliton is negligible. As we tune the system away from the BCS side, to the point where the chemical potential becomes comparable to the gap parameter $\Delta_0$, the particle velocity and the Fermi momentum also become comparable in magnitude to $\Delta_0$, and the resonance width $\Gamma$ becomes comparable to the energy $\epsilon^h_0$. With further tuning toward the BEC side, we encounter a large resonance width $\Gamma\gg \epsilon^h_0$; the localized state ceases to be well defined and merges into the quasiparticle continuum. Correspondingly, the spin singlet described on the BCS side develops into a normal bound pair on the BEC side. As a result, we have a smooth crossover from the soliton of BCS type into the one of BEC type, while the mathematical description of the state obtained from the tunneling between different mean field solutions is beyond the scope of this paper.
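The exponential smallness of $\Gamma$ on the BCS side can be made explicit by evaluating the expression above at $\theta_s=\pi$ (where $l_s=v_F/\Delta_0$) as a function of $x=v_Fk_F/\Delta_0$; the values of $x$ below are illustrative:

```python
import math

def Gamma_over_Delta0(x):
    """Resonance width at theta_s = pi, in units of Delta_0, as a function
    of x = v_F k_F / Delta_0 (so that l_s = v_F/Delta_0, sin(theta_s/2) = 1)."""
    return math.pi**2 / (math.sqrt(2) * math.sinh(math.pi * x / math.sqrt(2))**2)

print(Gamma_over_Delta0(2.0))    # moderate suppression when v_F k_F ~ Delta_0
print(Gamma_over_Delta0(10.0))   # exponentially small deep on the BCS side
```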
\section{Conclusion}
In this paper, we developed a semiclassical theory of moving solitons in the one-dimensional BCS-BEC crossover; on both the deep BCS and deep BEC sides, our results capture the essential features of the exact solutions. Our theory also resolves the inconsistency between the semiclassical analysis and the exact solutions of the attractive Yang-Gaudin model. Moreover, we revealed the mechanism behind a striking phenomenon discussed in our previous paper: the minimum energy of the spin excitation is pinned at the Fermi momentum along the whole range of the BCS-BEC crossover in one dimension. Conventionally, in higher dimensions, we would expect this momentum to shift from $k_F$ on the BCS side to zero somewhere on the way to the BEC side, and it is believed that this is the only sharp change that can happen in a BCS-BEC crossover \cite{2015qgee.book..179P}. We showed that this counterintuitive pinning arises as a special feature of one-dimensional systems: the conventional quasiparticle is not stable with respect to soliton formation. Our theory serves as yet another example of the important role solitons can play in low dimensional physical systems, in addition to those well established in one dimensional lattice models \cite{PhysRevB.22.2099,PhysRevB.21.2388} and charge density waves \cite{Brazovskii_1989,1980JETP...51..342B}.
\section{Acknowledgment}
We would like to thank Victor Gurarie for helpful discussions and valuable comments on the paper. This work is supported by the Simons Foundation.
\section{Introduction}
\label{sec:introduction}
The physics of radiopulsars is still not fully understood despite the
large effort of many theoreticians in this field. It is generally
assumed that a radiopulsar has an MHD-like magnetosphere which is very
close to the force-free state -- the model first introduced by
\citet{GJ}. For many years the solution of the force-free MHD equations
was a problem, even for the simplest case of an aligned pulsar. Now
solving the Grad-Shafranov equation describing the structure of a
force-free magnetosphere of an aligned pulsar is no longer a problem
\citep[see e.g.][]{CKF,Goodwin/04,Gruzinov:PSR,Timokhin2006:MNRAS1}.
Stationary magnetosphere configurations for an aligned rotator were
also obtained as the final stage of non-stationary numerical modelling
\citep{Komissarov06,McKinney:NS:06,Bucciantini06,Spitkovsky:incl:06}.
Even the case of an inclined rotator was studied numerically
\citep{Spitkovsky:incl:06}. However, the pulsar magnetosphere is a
very complicated physical system, because most of the current carriers
(electrons and positrons) are produced inside the system, in the polar
cap cascades. Production of electron-positron pairs is a threshold
process, so it can operate only under specific conditions and,
generally speaking, not every current density can flow through
the cascade zone.
In magnetohydrodynamics (MHD) the current density distribution is not
a free ``parameter''; it is obtained in the course of solving the MHD
equations. In the case of pulsars, obtaining a solution of the MHD
equations does not solve the problem, because it may happen that the
polar cap cascade zone cannot provide the required current density
distribution and, hence, cannot support the particular configuration of
the magnetosphere. In terms of MHD, the polar cap cascade zone sets
complicated boundary conditions at the foot points of the open
magnetic field lines, and any self-consistent solution of the problem
must match them. The most ``natural'' configuration of the
magnetosphere of an aligned rotator, when the last closed field line
extends up to the light cylinder, requires a current density
distribution which cannot be supported by stationary
electromagnetic cascades in the polar cap of the pulsar
\citep[see][hereafter Paper~I]{Timokhin2006:MNRAS1}. That configuration
requires that in some parts of the polar cap the electric current
flows against the preferable direction of the accelerating electric
field. This also seems impossible for non-stationary
cascades, although the problem requires more careful investigation
than has been done before
\citep{Fawley/PhDT:1978,AlBer/Krotova:1975,Levinson05}. So, the
structure of the magnetosphere should be different from this simple
picture. The magnetosphere of a pulsar should have a configuration
with a current density distribution which can flow through the
polar cap cascade zone without suppressing electron-positron pair
creation. Whether such a configuration exists is still an open
question, i.e. the possibility that the real pulsar magnetosphere has
large domains where the MHD approximation breaks down cannot be
completely excluded either \citep[see e.g.][]{Arons79,MichelBook}.
As the pulsar magnetosphere and the polar cap cascade zone have very
different characteristic timescales, it would be hardly possible to
model the whole system at once. Therefore, these
physical systems should be modelled separately and the whole set of
solutions for each system should be found, in order to find compatible
ones. Namely, we suggest the following approach to the construction
of a pulsar magnetosphere model: one should find out which currents
can flow through the force-free pulsar magnetosphere and compare
them with the currents that are able to flow through the polar cap
cascade zone. In this work we deal with the first part of the
suggested ``program''. Namely, we consider the range of possible
current density distributions in the force-free magnetosphere of an
aligned rotator.
The force-free magnetosphere of an aligned rotator is the simplest
possible case of an MHD-like pulsar magnetosphere and needs to be
investigated in the first place. This system has two physical degrees
of freedom: i) the size of the closed field line zone, and ii) the
distribution of the angular velocity of the open magnetic field lines. In
each stationary configuration the current density distribution is
fixed. By considering different configurations, i.e. changing (i) and (ii)
while keeping them in a reasonable range, the whole set of admitted current
density distributions can be found. Differential rotation of the open
field lines is caused by the variation of the accelerating electric
potential in the cascade zone across the polar cap. Theories of
stationary polar cap cascades predict a rather small potential drop, and
in this case only one degree of freedom is left -- the size of the
zone with closed magnetic field lines. This case was studied in
detail in Paper~I, with the result that stationary polar cap cascades
are incompatible with a stationary force-free magnetosphere. So, most
probably the polar cap cascades operate in a non-stationary regime. For
non-stationary cascades the average potential drop in the accelerating
zone could be larger than the drop maintained by stationary cascades.
Hence, the open magnetic field lines may rotate with significantly
different angular velocities even in magnetospheres of young
pulsars. On the other hand, for old pulsars the potential drop in the
cascade zone is large, and magnetospheres of such pulsars should
rotate essentially differentially anyway.
The case of a differentially rotating pulsar magnetosphere was not
investigated in detail before. Some authors addressed the
case when the open magnetic field lines rotate differently from the
NS, but only the case of constant angular velocity was considered
\citep[e.g.][]{GurevichBeskinIstomin_Book,Contopoulos05}. The first
attempt to construct a self-consistent model of a pulsar magnetosphere
with a \emph{differentially} rotating open field line zone was made in
\citet{Timokhin::PSREQ2/2007}, hereafter Paper~II. In that paper we
considered only the case when the angular velocity of the open field
lines is less than the angular velocity of the NS. We have shown that
the current density can be made almost constant over the polar cap,
although at the cost of a large potential drop in the accelerating zone.
The angular velocity distribution was chosen ad hoc, and the analysis
of the admitted range of current density distributions was not
performed.
In this paper we discuss the properties of a differentially rotating
magnetosphere of an aligned rotator in general and elaborate the
limits on the differential rotation. We study in detail the case when
the current density in the polar cap is a linear function of the
magnetic flux. This allows us to obtain the main relations analytically. We
find the range in which the physical parameters of the magnetosphere can
vary, requiring that a) the potential drop in the polar cap is not
greater than the vacuum potential drop and b) the current in the polar
cap does not change its direction.
The plan of the paper is as follows. In Section~\ref{sec:basic-model}
we discuss basic properties of a differentially rotating force-free
magnetosphere of an aligned rotator and derive equations for the angular
velocity distribution, the current density and the Goldreich-Julian charge
density in the magnetosphere. In Section~\ref{sec:Equation_for_V} we
derive equations for the potential drop which supports configurations
with a linear current density distribution in the polar cap of the pulsar
and give their general solutions. In Section~\ref{sec:main-results}
we analyse the physical properties of the admitted magnetosphere
configurations: the current density distribution, the maximum
potential drop, the angular velocity of the open magnetic field lines,
the Goldreich-Julian current density, the spindown rate and the total
energy of the magnetosphere. At the end of that section we consider
as examples two sets of solutions: one with constant current
densities and another with the smallest potential drops. In
Section~\ref{sec:Discussion} we summarise the results, discuss
limitations of the approximation used, and briefly describe possible
modifications of the obtained solutions which will arise in a truly
self-consistent model. In that section we also discuss the issue of
the pulsar braking index.
\section{Differentially rotating magnetosphere: basic properties}
\label{sec:basic-model}
\subsection{Pulsar equation}
\label{sec:general-equation}
Here, as in Papers~I and II, we consider the magnetosphere of an aligned
rotator that is located at the coordinate origin and has a dipolar
magnetic field. We
use normalisations similar%
\footnote{note that here in contrast to Paper~I{} $\Psi$ is already
dimensionless}
to the ones in Paper~I, but now we write all equations in the spherical
coordinates $(r,\theta,\phi)$. We normalise all distances to the
light cylinder radius of the corotating magnetosphere
$\RLC\equiv{}c/\Omega$, where $\Omega$ is the angular velocity of the
neutron star (NS), $c$ is the speed of light. For the considered
axisymmetric case the magnetic field can be expressed through two
dimensionless scalar functions $\Psi$ and $S$ as (cf. eq.~(8) in
Paper~I)
\begin{equation}
\label{eq:B}
\B = \frac{\mu}{\RLC^3}
\frac{ \vec{\nabla}\Psi \times \vec{e_\phi} + S\vec{e_\phi} }{r\sin\theta}
\,,
\end{equation}
where $\vec{e_\phi}$ is the unit azimuthal (toroidal) vector.
$\mu=B_0\RNS^3/2$ is the magnetic moment of the NS; $B_0$ is the
magnetic field strength at the magnetic pole, $\RNS$ is the NS radius.
The scalar function $\Psi$ is related to the magnetic flux as
$\Phi_\mathrm{mag}(\varpi,Z)=2\pi\,(\mu/\RLC)\:\Psi(r,\theta)$.
$\Phi_\mathrm{mag}$ is the magnetic flux through a circle of radius
$\varpi=r\sin\theta$ with its centre at the point on the rotation axis
at the distance $Z=r\cos\theta$ from the NS. The lines of
constant $\Psi$ coincide with magnetic field lines. The scalar
function $S$ is related to the total current $J$ \emph{outflowing}
through the same circle by
$J(\varpi,Z)=1/2\,(\mu/\RLC^2)\,c\:S(r,\theta)$.
The electric field in the force-free magnetosphere is given by
\begin{equation}
\label{eq:E}
\E = - \frac{\mu}{\RLC^3}\: \beta\, \nabla \Psi
\, ,
\end{equation}
where $\beta$ is the ratio of the angular velocity of the magnetic
field lines rotation $\ensuremath{\Omega_\mathrm{F}}$ to the angular velocity of the NS,
$\beta\equiv\ensuremath{\Omega_\mathrm{F}}/\Omega$ (cf. eq.~(14) in Paper~I). The difference of
the angular velocity of a magnetic field line from the angular
velocity of the NS is due to potential drop along that line in the
polar cap acceleration zone.
For these dimensionless functions the equation describing the
stationary force-free magnetosphere, the so-called pulsar equation
\citep{Michel73:b,ScharlemannWagoner73,Okamoto74}, takes the form (cf.
eq.~20 in Paper~I)
\begin{multline}
\label{eq:PsrEq}
\left[ 1 - (\beta r \sin\theta)^2 \right] \Delta\Psi -
\frac{2}{r}
\left(
\pd_r \Psi + \frac{\cos\theta}{\sin\theta}\frac{\pd_\theta\Psi}{r}
\right) +
\\
+ S \frac{d S}{d \Psi}
- \beta \frac{d \beta}{d \Psi} \left(r \sin\theta\: \nabla \Psi \right)^2
= 0
\,.
\end{multline}
This equation expresses the force balance across the magnetic field
lines. At the light cylinder the coefficient of the Laplacian
goes to zero and the pulsar equation reduces to
\begin{equation}
\label{eq:CondAtLC}
S \frac{d S}{d \Psi} =
\frac{1}{\beta} \frac{d \beta}{d \Psi} \left( \nabla \Psi \right)^2 +
2\beta \sin\theta\,
\left(
\pd_r \Psi + \beta \cos\theta\, \pd_\theta\Psi
\right)
\,.
\end{equation}
Each smooth solution must satisfy these two equations, and the problem
of solving the pulsar equation transforms into an eigenfunction
problem for the poloidal current function $S$ (see e.g. Section~2.3
in Paper~I). Equation~(\ref{eq:CondAtLC}) can also be considered as
an equation for the poloidal current.
We adopt for the magnetosphere the configuration with the so-called
Y-null point. Namely, we assume that the magnetosphere is divided into
two zones: the first one with closed magnetic field lines, which
extend from the NS up to the neutral point at distance $x_0$
from the NS, and the second one, where magnetic field lines are open
and extend to infinity (see Fig.~\ref{fig:Magnetosphere}). In the
closed magnetic field line zone plasma corotates with the NS, there is
no poloidal current along field lines, and the magnetic field lines
there are equipotential. Obviously, this zone cannot extend beyond
the light cylinder. In the rest of the magnetosphere magnetic
field lines are open due to the poloidal current produced by outflowing
charged particles. The return current, needed to keep the NS charge
neutral, flows in a thin region (current sheet) along the equatorial
plane and then along the last open magnetic field line. We assume
that this picture is stationary on a time scale of the order of the
period of the NS rotation. As was outlined in Paper~I{}, the polar
cap cascades in pulsars are most probably non-stationary. The
characteristic time scale of the polar cap cascades,
$\sim{}h/c\sim3\cdot10^{-5}$~sec ($h$ is the length of the
acceleration zone, being of the order of $\RNS$), is much shorter than
the pulsar period (for most pulsars $\gg{}10^{-3}$~sec). So,
for the global magnetosphere structure only time averages of the
physical parameters connected to the cascade zone are important. In
the rest of the paper, when we discuss physical parameters set by the
cascade zone, we will always mean their \emph{average} values,
unless the opposite is explicitly stated.
Differential rotation of the open magnetic field lines, which is caused
by the presence of a zone with accelerating electric field in the
polar cap of the pulsar, i) contributes to the force balance across
magnetic field lines (the last term in eq.~(\ref{eq:PsrEq})), ii)
modifies the current density in the magnetosphere (the first term on the
r.h.s. of eq.~(\ref{eq:CondAtLC})), and iii) changes the position of
the light cylinder, where condition~(\ref{eq:CondAtLC}) must be
satisfied. Note that for cases i) and ii) the derivative
$d\beta/d\Psi$, i.e. the form of the distribution $\beta(\Psi)$, plays
an important role. So, for different angular velocity distributions
in the open magnetic field line zone there should exist different
magnetosphere configurations that have, in general, distinct current
density distributions. Let us now consider restrictions on the
differential rotation rate $\beta(\Psi)$.
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig1.eps}
\caption{Structure of the magnetosphere of an aligned rotator
(schematic picture). Magnetic field lines are shown by solid
lines. Outflowing current $J$ along open magnetic field lines and
returning current $J_\mathrm{return}$ in the current sheet,
separating the open and closed magnetic field line zones, are
indicated by arrows. The current sheet is along the last open
magnetic field line, corresponding to the value of the flux
function $\PsiL$. Distances are measured in units of the light
cylinder radius for the corotating magnetosphere $\RLC$, i.e. the
point with $x=1$ marks the position of the light cylinder in the
corotating magnetosphere. The null point $x_0$ could lie anywhere
inside the interval $[0,1]$. Possible positions of the real light
cylinder are shown by dotted lines. The line \textbf{I}
corresponds to the case when $1/\beta(\PsiL)<x_0$; \textbf{II} --
to $x_0<1/\beta(\PsiL)<1$; \textbf{III} -- to $1<1/\beta(\PsiL)$
(see the text further in the article). }
\label{fig:Magnetosphere}
\end{figure}
\subsection{Angular velocity of the open magnetic field lines}
\label{sec:Beat(psi)}
Due to the rotation of the NS a large potential difference arises between
the foot points of the magnetic field lines at the surface of the NS. The
potential difference between the pole and the magnetic field line
corresponding to the value of the magnetic flux function $\Psi$ is
\begin{equation}
\label{eq:Vpsi}
\Delta\mathcal{V}(\Psi) = \frac{\mu}{\RLC^2}\Psi
\end{equation}
In a perfectly force-free magnetosphere the magnetic field lines are
equipotential. However, due to the presence of the polar cap acceleration
zone, where the MHD conditions are broken, a part of this potential
difference appears as a potential drop between the surface of the NS
and the pair-formation front, above which the magnetic field line
remains equipotential. This potential drop is the reason why the open
magnetic field lines rotate differently from the NS. The normalised
angular velocity of a magnetic field line $\beta$ is expressed through
the potential drop along the field line as (e.g. \citealt{BeskinBook},
Paper~I)
\begin{equation}
\label{eq:OmF}
\beta = 1 + \frac{\RLC^2}{\mu}\frac{\pd \mathpzc{V}(\Psi)}{\pd \Psi}
\,.
\end{equation}
$\mathpzc{V}$ is the total potential drop (in statvolts) along the
magnetic field line in the polar cap acceleration zone (cf. eq.~(23)
in Paper~I).
The polar cap of the pulsar is bounded by the magnetic field line
corresponding to the value $\PsiL$ of the flux function. The potential
drop between the rotation axis and the boundary of the polar cap is
\begin{equation}
\label{eq:Vvac}
\Delta\mathcal{V}(\PsiL) = \frac{\mu}{\RLC^2}\PsiL \equiv \Delta\PC{\mathcal{V}}
\end{equation}
This is the maximum available potential drop along an open magnetic
field line. It could be achieved in vacuum, when there is no plasma in
the polar cap. We will call $\Delta\PC{\mathcal{V}}$ the vacuum
potential drop. Let us normalise the poloidal flux function $\Psi$ to
its value at the last open magnetic field line $\PsiL$ and introduce a
new function $\psi\equiv\Psi/\PsiL$. Normalising the potential drop along
field lines to the vacuum potential drop and introducing the
dimensionless function $V\equiv{}\mathpzc{V}/\Delta\PC{\mathcal{V}}$,
we rewrite the expression for the normalised angular velocity of the
open magnetic field line as
\begin{equation}
\label{eq:BetaV}
\beta = 1 + \frac{\pd V}{\pd\psi}
\,.
\end{equation}
As the potential drop \emph{along} any field line cannot be greater
than the vacuum drop and cannot have a sign different from that of the
vacuum drop, the variation of the electric potential \emph{across} the
polar cap cannot exceed the vacuum potential drop. In terms of the
dimensionless functions this condition has the form
\begin{equation}
\label{eq:DeltaV_less_1}
|V(\psi_1)-V(\psi_2)| \le 1, \quad \forall\,\psi_1,\psi_2 \in [0,1]
\,.
\end{equation}
Inequality~(\ref{eq:DeltaV_less_1}) sets the limit on the electric
potential in the polar cap of the pulsar.
\subsection{Current density in the polar cap}
\label{sec:S(psi)}
In order to obtain the current density distribution in the polar cap
of the pulsar, the pulsar equation~(\ref{eq:PsrEq}) together with the
condition at the light cylinder~(\ref{eq:CondAtLC}) must be solved.
An analytical solution of the pulsar equation exists only for the
split-monopole configuration of the poloidal magnetic field, namely,
when the flux function $\Psi$ has the form
\begin{equation}
\label{eq:PsiMonopole}
\Psi =\PsiM (1-\cos\theta)
\,,
\end{equation}
$\PsiM$ being a constant, equations~(\ref{eq:PsrEq}) and
(\ref{eq:CondAtLC}) have a smooth solution if the poloidal current
function $S$ has the form \citep[e.g.][]{Blandford/Znajek77}
\begin{equation}
\label{eq:SMonopole}
S(\Psi) = - \beta(\Psi)\, \Psi (2-\frac{\Psi}{\PsiM})
\,.
\end{equation}
Here $\PsiM$ corresponds to the value of the magnetic flux through the
upper hemisphere, i.e. it corresponds to the magnetic field line lying
in the equatorial plane. The poloidal current given by
equation~(\ref{eq:SMonopole}) is very similar to the current in the
well-known Michel solution \citep{Michel73}, but this expression is valid
for non-constant $\beta(\Psi)$ too.
In this paper we will use expression~(\ref{eq:SMonopole}) for the
poloidal current function $S$. Doing so, we assume that in the
neighbourhood of the light cylinder the geometry of the poloidal
magnetic field is close to a split monopole. This is a good
approximation if the size of the closed magnetic field line zone is
much smaller than the light cylinder size, $x_0\ll1/\beta(\psi),\
\psi<1$. For configurations where the size of the corotating zone%
\footnote{plasma in the closed field line zone corotates with the NS,
  so we will call the region with the closed magnetic field lines the
  corotating zone}
approaches the light cylinder, the poloidal current $S$ is different
from the one given by eq.~(\ref{eq:SMonopole}), but we expect that
this deviation should not exceed 10--20 per cent. Indeed, in the
numerical simulations described in Paper~I{}, where the case of constant
$\beta\equiv1$ was considered, the deviation of $S$ from the Michel
poloidal current did not exceed 20 per cent, and it got smaller for a
smaller size of the corotating zone (see Fig.~3 in Paper~I{}).
Similarly, in Paper~II{}, where we considered the case of variable
$\beta<1$, the poloidal current deviated from the values given by the
analytical formula~(\ref{eq:SMonopole}) by less than 20 per cent, and
the difference became smaller for a smaller size of the corotating zone.
So, we may hope that the same relation holds in the general case too.
We intend to find the range of admitted current density distributions
in the force-free magnetosphere. Here we use the split-monopole
approximation for the poloidal current~(\ref{eq:SMonopole}); hence, we
can study only the effect of differential rotation on the current
density distribution. The dependence of the current density on the
size of the corotating zone in a differentially rotating magnetosphere
will be addressed in a subsequent paper, where we will refine our
results by performing numerical simulations for different sizes of the
corotating zone.
So, in our approximation the last closed field line in the dipole
geometry corresponds to the field line lying in the equatorial plane in
the monopole geometry, i.e. $\PsiM=\PsiL$. In normalised variables
the expression for the poloidal current has the form
\begin{equation}
\label{eq:SMonopoleNormalized}
S(\Psi) = - \PsiL\, \beta(\psi)\, \psi (2-\psi)
\,.
\end{equation}
The poloidal current density in the magnetosphere is \citep[see
e.g.][]{BeskinBook}
\begin{equation}
\label{eq:j_general}
\Pol{j} = \frac{c}{4\pi}\frac{\mu}{\RLC^4}
\frac{\vec{\nabla} S \times \vec{e_\phi}}{r\sin\theta}=
\frac{\Omega \Pol{\vec{B}}}{2\pi{}c} c \: \frac{1}{2}\frac{d S}{d\Psi}
\,.
\end{equation}
In the polar cap of the pulsar the magnetic field is dipolar and, hence,
poloidal. The Goldreich-Julian charge density for the corotating
magnetosphere near the NS is
\begin{equation}
\label{eq:RhoGJ}
\GJNS{\rho} = - \frac{\vec{\Omega}\cdot\vec{B}}{2\pi{}c}
\,.
\end{equation}
Using expressions~(\ref{eq:SMonopoleNormalized})--(\ref{eq:RhoGJ})
we get for the current density in the polar cap of the pulsar
\begin{equation}
\label{eq:j}
j = \frac{1}{2} \GJNS{j}
\left[
2\beta(1-\psi) + \beta'\psi(2-\psi)
\right]
\,.
\end{equation}
The prime denotes differentiation with respect to $\psi$, i.e.
$\beta'\equiv d\beta/d\psi$; $\GJNS{j}\equiv\GJNS{\rho}\,c$ is the
Goldreich-Julian current density in the polar cap for the
\emph{corotating} magnetosphere. At the surface of the NS, where the
potential drop is zero and plasma corotates with the NS, $\GJNS{j}$
corresponds to the local GJ current density.
\subsection{Goldreich-Julian charge density in the polar cap for a
  differentially rotating magnetosphere}
\label{sec:RhoGJ_local}
The Goldreich-Julian (GJ) charge density is the charge density which
supports the force-free electric field:
\begin{equation}
\label{eq:RhoGJ_local_E}
\GJ{\rho} \equiv \frac{1}{4\pi}\, \vec{\nabla}\cdot\vec{E}
\,.
\end{equation}
The GJ charge density at points along a magnetic field line rotating
with an angular velocity different from that of the NS
will differ from the value given by eq.~(\ref{eq:RhoGJ}).
Substituting the expression for the force-free electric field~(\ref{eq:E})
into eq.~(\ref{eq:RhoGJ_local_E}) we get
\begin{equation}
\label{eq:RhoGJ_local_Psi}
\GJ{\rho} = - \frac{\mu}{4\pi\RLC^4}\,
(\beta \Delta\Psi + \beta'(\nabla\Psi)^2)
\,.
\end{equation}
We see that the GJ charge density depends not only on the angular
velocity of the field line rotation (the first term in
eq.~(\ref{eq:RhoGJ_local_Psi})), but also on the angular velocity
profile (the second term in eq.~(\ref{eq:RhoGJ_local_Psi})).
Near the NS the magnetic field is essentially dipolar. The magnetic
flux function $\Psi$ for dipolar magnetic field is
\begin{equation}
\label{eq:Psi_Dipole}
\Psi^\mathrm{dip} = \frac{\sin^2\theta}{r}
\,.
\end{equation}
Substituting this expression into equation~(\ref{eq:RhoGJ_local_Psi})
we get
\begin{eqnarray}
\label{eq:RhoGJ_local_theta}
\GJ{\rho} & = &
- \frac{\mu}{4\pi\RLC^4}\,\frac{1}{r^3} \times
\nonumber \\
&&
\left(
\beta\, 2 (3\cos^2\theta-1) +
\beta' \frac{\sin^2\theta}{r} (3\cos^2\theta+1)
\right)
\,.
\end{eqnarray}
In the polar cap of the pulsar $\cos\theta\simeq{}1$ and
$\mu/(r\RLC)^3\simeq{}B/2$. Recalling the expression for the magnetic
flux function of the dipole magnetic field~(\ref{eq:Psi_Dipole}), we get
for the local GJ charge density in the polar cap of the pulsar
\begin{equation}
\label{eq:RhoGJ_local}
\GJ{\rho} = \GJNS{\rho}\, (\beta + \beta'\psi)
\,.
\end{equation}
\section{Accelerating potential}
\label{sec:Equation_for_V}
In our approximation any current density distribution in the force-free
magnetosphere of an aligned rotator has the form given by
eq.~(\ref{eq:j}). The current density depends on the angular velocity
of the magnetic field lines $\beta(\psi)$, which for a given field
line depends on the total potential drop along that line via
equation~(\ref{eq:BetaV}). The potential drop in the acceleration
zone cannot exceed the vacuum potential drop, i.e. $V$ is limited by
inequality~(\ref{eq:DeltaV_less_1}).
So, if we wish to find the accelerating potential which supports a
force-free configuration of the magnetosphere for a given form of the
current density distribution%
\footnote{guessed from a model for the polar cap cascades, for
  example}
in the polar cap, we proceed as follows. We equate the expression for
the desired current density distribution to the general expression for the
current density~(\ref{eq:j}), then we express $\beta(\psi)$ in terms
of $V(\psi)$ by means of equation~(\ref{eq:BetaV}), and thus obtain an
equation for the electric potential $V$ which supports a force-free
magnetosphere configuration with the desired current density
distribution. If solutions of the obtained equation fulfil
limitation~(\ref{eq:DeltaV_less_1}), such a configuration is admitted;
if not, such a current density cannot flow in the force-free
magnetosphere of an aligned pulsar. Currently there is no detailed
model of non-stationary polar cap cascades from which we could deduce
reasonable shapes for the current density distribution. Therefore, we try
to set constraints on the current density assuming a linear dependence of
the current density on $\psi$.
In a differentially rotating magnetosphere there are two characteristic
current densities. The first one is the Goldreich-Julian current
density for the corotating magnetosphere, $\GJ{j}^0$. It corresponds to
the actual Goldreich-Julian current density in the magnetosphere at
the NS surface, where the differential rotation is not yet built up. The
second characteristic current density is the actual Goldreich-Julian
current density $\GJ{j}$ at points above the acceleration zone, where
the magnetosphere is already force-free and the final form of the
differential rotation is established; in the polar cap $\GJ{j}$ is
given by formula~(\ref{eq:RhoGJ_local}). For a magnetosphere with
strong differential rotation the current densities $\GJ{j}^0$ and
$\GJ{j}$ differ significantly. In this section we consider both
cases, namely, when the current density distribution is normalised to
$\GJ{j}^0$ and when it is normalised to $\GJ{j}$.
\subsection{Outflow with the current density being a constant fraction
of the actual Goldreich-Julian current density}
For non-stationary cascades the physics would be determined by the
response of the cascade zone to the inflowing particles and MHD waves
coming from the magnetosphere. However, the accelerating electric
field depends on the deviation of the charge density from the local
value of the GJ charge density. So, the first naive guess would be
that the preferable state of the cascade zone is the state when
(on average) the current density is equal to the GJ current density
$\GJ{j}$:
\begin{equation}
\label{eq:j=jGJ}
j(\psi) = \GJ{j}(\psi) = \GJNS{j}\, (\beta + \beta'\psi)
\,.
\end{equation}
Equating this formula to the general expression for the current
density~(\ref{eq:j}) and substituting expression~(\ref{eq:BetaV}) for
$\beta$, after algebraic transformations we get
the equation for the accelerating electric potential in the polar cap
of the pulsar
\begin{equation}
\label{eq:Eq_V_for_jGJ_local}
V'' = - 2\, \frac{1+V'}{\psi}
\,.
\end{equation}
We set the boundary conditions for $V$ at the edge of the polar cap.
As the boundary conditions we can use the value of the normalised
angular velocity at the edge of the polar cap and the value of the
electric potential there:
\begin{eqnarray}
\label{eq:Eq_V_BoundaryCond_beta}
1+V'(1) &=& \PC{\beta} \\
\label{eq:Eq_V_BoundaryCond_V}
V(1) &=& V_0
\end{eqnarray}
The solution of equation~(\ref{eq:Eq_V_for_jGJ_local}) satisfying the
boundary
conditions~(\ref{eq:Eq_V_BoundaryCond_V}) and (\ref{eq:Eq_V_BoundaryCond_beta})
is
\begin{equation}
\label{eq:SolV_for_jGJ_local}
V(\psi) = V_0 + (1-\psi) \left( 1 - \frac{\PC{\beta}}{\psi} \right)
\,.
\end{equation}
We see that unless $\PC{\beta}=0$ the potential has a singularity at the
rotation axis and, hence, such a configuration cannot be realised in the
force-free magnetosphere of a pulsar. The
condition~(\ref{eq:DeltaV_less_1}) is violated -- the potential
difference exceeds the vacuum potential drop.
If $\PC{\beta}=0$, the potential is $V=V_0 + 1-\psi$ and from
eq.~(\ref{eq:BetaV}) we have $\beta(\psi)\equiv{}0$. Substituting
this into eq.~(\ref{eq:j}) we get for the current density
$j(\psi)\equiv{}0$. So, the case with $\PC{\beta}=0$ is degenerate:
as there is no poloidal current in the magnetosphere, it corresponds
to the vacuum solution.
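The singular behaviour of solution~(\ref{eq:SolV_for_jGJ_local}) is easy to verify numerically. The illustrative Python sketch below (finite differences, sample values $V_0=0$, $\PC{\beta}=0.5$) confirms that the solution satisfies equation~(\ref{eq:Eq_V_for_jGJ_local}) and that for $\PC{\beta}\neq0$ the potential variation across the polar cap far exceeds the vacuum drop, violating condition~(\ref{eq:DeltaV_less_1}):

```python
def V(psi, V0=0.0, beta0=0.5):
    # Solution (SolV_for_jGJ_local): V = V0 + (1 - psi)(1 - beta0/psi)
    return V0 + (1.0 - psi) * (1.0 - beta0 / psi)

# Finite-difference check of the ODE V'' = -2 (1 + V') / psi
h = 1e-5
for psi in (0.2, 0.5, 0.8):
    V1 = (V(psi + h) - V(psi - h)) / (2 * h)
    V2 = (V(psi + h) - 2 * V(psi) + V(psi - h)) / h**2
    assert abs(V2 + 2 * (1 + V1) / psi) < 1e-3

# Towards the axis the potential diverges: |V(psi) - V(1)| >> 1,
# so condition (DeltaV_less_1) is violated for beta0 != 0.
assert abs(V(1e-3) - V(1.0)) > 1.0
```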
Let us consider now a more general form for the current density
distribution
\begin{equation}
\label{eq:j=A_jGJ}
j(\psi) = A \GJ{j}(\psi) = A \GJNS{j}\, (\beta + \beta'\psi)
\,,
\end{equation}
where $A$ is a constant. In that case for the accelerating electric
potential in the polar cap of pulsar we have the equation
\begin{equation}
\label{eq:EqV_for_AjGJ_local}
V'' = 2(1+V') \frac{1-A-\psi}{\psi\left[\psi+2(A-1)\right]}
\end{equation}
For the same boundary
conditions~(\ref{eq:Eq_V_BoundaryCond_V}),~(\ref{eq:Eq_V_BoundaryCond_beta})
solution of this equation is
\begin{equation}
\label{eq:SolGen_V_jGJ_local}
V(\psi) = V_0 + 1 - \psi +
\frac{\PC{\beta}( 2 A - 1 )}{2(A -1)}
\log\left[
\frac{\psi (2 A - 1)}{\psi + 2 (A - 1)}
\right]
.
\end{equation}
This solution is valid for $A\neq{}1,1/2$. There is the same problem
with the electric potential in this solution: unless
$\PC{\beta}=0$, the potential $V$ is singular%
\footnote{the singularity arises because $V''(0)$ goes to infinity
  unless $1+V'(0)=\beta(0)$ is zero, as follows from
  equation~(\ref{eq:EqV_for_AjGJ_local})}
at the rotation axis. The case with $A=1/2$ is also degenerate,
because in that case the solution for the electric potential is
$V(\psi)=V_0+1-\psi$, which yields the current density
$j(\psi)\equiv{}0$.
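Solution~(\ref{eq:SolGen_V_jGJ_local}) can be verified in the same way. The illustrative Python sketch below (sample values $A=2$, $V_0=0$, $\PC{\beta}=0.3$) checks by finite differences that it satisfies equation~(\ref{eq:EqV_for_AjGJ_local}), and that the logarithmic term diverges towards the axis for $\PC{\beta}\neq0$:

```python
import math

def V(psi, A=2.0, V0=0.0, beta0=0.3):
    # Solution (SolGen_V_jGJ_local), valid for A != 1 and A != 1/2
    C = beta0 * (2 * A - 1) / (2 * (A - 1))
    return V0 + 1 - psi + C * math.log(psi * (2 * A - 1) / (psi + 2 * (A - 1)))

# Finite-difference check of the ODE (EqV_for_AjGJ_local) for A = 2
A, h = 2.0, 1e-5
for psi in (0.2, 0.5, 0.8):
    V1 = (V(psi + h) - V(psi - h)) / (2 * h)
    V2 = (V(psi + h) - 2 * V(psi) + V(psi - h)) / h**2
    rhs = 2 * (1 + V1) * (1 - A - psi) / (psi * (psi + 2 * (A - 1)))
    assert abs(V2 - rhs) < 1e-3

# Logarithmic divergence at the axis violates |Delta V| <= 1 unless beta0 = 0.
assert abs(V(1e-8) - V(1.0)) > 1.0
```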
We see that solutions with the current density being a constant
fraction of the actual GJ current density are not allowed, except for
the trivial degenerate case corresponding to no net particle flow. The
naive physical picture does not work, and the current density in the
magnetosphere, expressed in terms of the actual GJ current density, must
vary across the polar cap. On the other hand, the GJ current density is
itself a variable function across the polar cap; it also changes with
altitude within the acceleration zone, as the potential drop
increases until it reaches its final value. So, we find it more
convenient to consider the current density in terms of the
corotational GJ current density.
\subsection{Outflow with the current density being a linear function
of the magnetic flux in terms of the corotational Goldreich-Julian
current density}
In models with the space charge limited flow (SCLF), where charged
particles can freely escape from the NS surface
\citep[e.g.][]{Arons/Scharlemann/78}, the charge density at the NS
surface is always equal to the local GJ charge density there,
$(\rho=\GJNS{\rho})|_{r=\RNS}$. For SCLF the actual current density
in the polar cap could be less than $\GJNS{j}$ if the acceleration of the
particles is periodically blocked in the non-stationary cascades. The
current density could be greater than $\GJNS{j}$ if there is an inflow
of particles, with charge of the opposite sign to the GJ charge density,
from the magnetosphere into the cascade zone \citep[e.g.][]{Lyubar92}.
Therefore, an expression for the current density in terms of the
corotational GJ current density $\GJNS{j}$ is more informative
from the point of view of the cascade physics.
Let us consider the case when the current density distribution in the
polar cap of pulsar has the form
\begin{equation}
\label{eq:j=a_psi+b}
j = \GJNS{j}(a\psi+b)
\,,
\end{equation}
where $a,b$ are constants. The Michel current density distribution is
a particular case of this formula and corresponds to the parameter
values $a=-1$, $b=1$. The equation for the electric potential for
this current density is
\begin{equation}
\label{eq:EqV_for_jGJ_NS}
V'' = 2\,\frac{a\psi+b-(1+V')(1-\psi)}{\psi(2-\psi)}
\,.
\end{equation}
The solution of equation~(\ref{eq:EqV_for_jGJ_NS}) satisfying the boundary
conditions~(\ref{eq:Eq_V_BoundaryCond_V}) and (\ref{eq:Eq_V_BoundaryCond_beta})
is
\begin{multline}
\label{eq:SolGen_V_jGJ_NS}
V(\psi) =
V_0 + (1+a)(1-\psi) + \\
+ \frac{1}{2}
\log\left[
(2-\psi )^{-\PC{\beta}-3a-2b}
\psi ^{\PC{\beta}-a-2b}
\right]
.
\end{multline}
We see that the potential is non-singular at the rotation axis if
$\PC{\beta}=a+2b$. Thus the admitted solution for the electric
potential is
\begin{equation}
\label{eq:Sol_V_jGJ_NS}
V(\psi) = V_0 + (1+a)(1-\psi) - 2(a+b)\log(2-\psi )
\,.
\end{equation}
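As a consistency check, for the Michel parameter values $a=-1$, $b=1$
both $\psi$-dependent terms in this expression vanish,
\[
V(\psi) = V_0 + (1-1)(1-\psi) - 2(-1+1)\log(2-\psi) = V_0
\,,
\]
so the potential is constant across the polar cap and there is no
accelerating potential drop, as expected for the Michel solution.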
In the rest of the paper we will use expression~(\ref{eq:Sol_V_jGJ_NS})
for the electric potential. We will analyse the physical
properties of force-free magnetosphere configurations in which the
electric potential in the acceleration zone of the polar cap has this
form.
\section{Properties of admitted configurations}
\label{sec:main-results}
\subsection{Admitted current density}
\label{sec:admitted-current}
The potential drop in the polar cap of a pulsar is limited by the vacuum
potential drop. In our notation this limit is formulated as
inequality~(\ref{eq:DeltaV_less_1}). The parameters $a,b$ from the
expression for the electric current~(\ref{eq:j=a_psi+b}) enter
the formula for the electric potential~(\ref{eq:Sol_V_jGJ_NS}).
Imposing the limitation~(\ref{eq:DeltaV_less_1}) we obtain the admitted range
of these parameters in the force-free magnetosphere. In
Appendix~\ref{sec:App--admitt-curr-dens} we perform this analysis and find
the region in the plane $(a,b)$ which is admitted by the
requirement~(\ref{eq:DeltaV_less_1}). This region is shown as a grey
area in Fig.~\ref{fig:ab_range_full}. From
Fig.~\ref{fig:ab_range_full} it is evident that for most of the
admitted values of the parameters $a,b$ the current density has different
signs in different parts of the polar cap. There is also a region
where the values of the parameters correspond to current density
distributions having the same sign as the GJ charge density in the
whole polar cap.
The physics of the polar cap cascades imposes additional limitations on
the current density and the accelerating electric potential distribution
in the polar cap. There is currently no detailed theory of non-stationary
polar cap cascades. Therefore, in setting constraints on the current
density distribution we must rely on some simple assumptions about the
possible current density. There is a preferred direction for the
accelerating electric field in the polar cap: the direction of this
field is such that it accelerates charged particles having the same
sign as the GJ charge density away from the star. It is natural to
assume that the average current in the polar cap cascade should flow
in the same direction. The average current could flow in the opposite
direction only if the accelerating electric field is screened. In
order to screen the accelerating field, a sufficient amount of particles
of the same sign as the accelerated ones must come from the
magnetosphere and penetrate the accelerating potential drop. These
particles, however, are themselves produced in the polar cap cascade.
They must be accelerated somewhere in the magnetosphere back toward the NS,
up to an energy comparable with the energy the primary particles gain
in the polar cap cascade. Even if the problem of particle
acceleration back toward the NS could be solved, screening of the electric
field would interrupt the particle creation, and, hence, there would be
not enough particles in the magnetosphere to screen the
electric field the next time. Although the real physics is more
complicated and not yet fully understood, the case of a
unidirectional current in the polar cap is worth detailed
investigation, as it is ``the most natural'' from the point of view of
the polar cap cascade physics. In the following we will call current
of the same sign as the GJ charge density ``positive'' and current of
the opposite sign ``negative''.
The linear current density distribution~(\ref{eq:j=a_psi+b}) will be
always positive if
\begin{equation}
\label{eq:ab_for_j_positive}
b\ge\max(-a,0)
\,.
\end{equation}
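This condition follows from the linearity of the current density in
$\psi$: a linear function is non-negative on the whole interval
$0\le\psi\le1$ if and only if it is non-negative at both endpoints,
\[
\frac{j(0)}{\GJNS{j}} = b \ge 0
\quad\mbox{and}\quad
\frac{j(1)}{\GJNS{j}} = a+b \ge 0
\,,
\]
which together give $b\ge\max(-a,0)$.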
Only a subset of the admitted values of $a,b$ corresponds to
positive current density distributions. Such values of the parameters
$a,b$ lie inside the triangle-like region shown in
Figs.~\ref{fig:Vmax_Contours},~\ref{fig:j/jGJ_Min},~\ref{fig:W}. We
see that a rather wide variety of positive current density
distributions is admitted in the force-free magnetosphere: current
density distributions constant across the polar cap of the pulsar
are admitted, as well as current densities decreasing or increasing
toward the polar cap boundary. So, the current density in the
force-free magnetosphere can deviate strongly from the classical
Michel current density, corresponding to the point $a=-1, b=1$. The
price for this freedom is the presence of a non-zero accelerating
electric potential in the polar cap. If the price for a particular
current density distribution is too high, i.e. if the potential drop
is too large, only magnetospheres of pulsars close to the ``death
line'' could admit such a current density. Let us now consider the
distribution of the potential drop in the parameter space $(a,b)$.
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig2.eps}
\caption{Maximum potential drop across the polar cap. The dotted
lines show contours of $\Max{\Delta{}V}$. Contours for
$\Max{\Delta{}V}=0.05,0.1,0.2,0.5,0.8$ are shown. Labels on the
lines correspond to the values of $\Max{\Delta{}V}$. The line
corresponding to $\Max{\Delta{}V}=0.05$ is not labelled.}
\label{fig:Vmax_Contours}
\end{figure}
\subsection{Electric potential}
\label{sec:electric-potential}
We have already emphasised that the shape of the function $V(\psi)$ is very
important for the resulting current density distribution. However, as
we do not understand in detail the physics of non-stationary cascades,
we cannot judge whether a particular form of $V(\psi)$ is admitted by
the cascade physics or not. On the other hand, in young pulsars the
average potential drop cannot be very large, because already a
small fraction of the vacuum potential drop would be sufficient for
massive pair creation and screening of the accelerating electric
field. So, at present we can judge the reasonableness of a
particular current density distribution only from the maximum value of
the potential drop it requires. The electric potential given by
eq.~(\ref{eq:Sol_V_jGJ_NS}) is known up to the additive constant
$V_0$, which is the value of the accelerating potential at the polar
cap boundary. $V_0$, and thus the actual potential drop in the
accelerating zone, cannot be inferred from the magnetosphere physics
and is set by the physics of the polar cap cascades. The only thing
we can say about the actual potential drop in the acceleration zone
\emph{along} field lines is that its absolute value is not smaller
than the absolute value of the maximum potential drop of $V(\psi)$
\emph{across} the polar cap.
Let us now consider possible values of the maximum potential drop
across the polar cap of a pulsar. If the potential is a monotone
function of $\psi$ in the polar cap, the maximum potential drop is the
drop between the rotation axis and the polar cap boundary. If the
potential as a function of $\psi$ has a minimum inside the polar cap,
the maximum potential drop is either between the axis and the
minimum point, or between the edge and the minimum point. We analyse
this issue in detail in Appendix~\ref{sec:App-maximum-potent-drop}.
In Fig.~\ref{fig:Vmax_Contours} the contour map of the maximum
potential drop in the plane $(a,b)$ is shown. The line given by
eq.~(\ref{eq:DVb_is0}) is the line where for fixed $a$ (or $b$) the
smallest value of the potential drop across the polar cap is achieved.
From this plot it is evident that even if the potential drop in the
polar cap is rather moderate, of the order of $\sim{}10$ per cent,
there are force-free magnetosphere configurations with current
density distributions deviating significantly from the Michel current
density distribution. So, even for young pulsars there may be some
flexibility in the current density distribution admitted by the
force-free electrodynamics.
Note that the force-free magnetosphere imposes different constraints on
pulsars in the aligned ($\vec{\mu}\cdot\vec{\Omega}>0$) and anti-aligned
($\vec{\mu}\cdot\vec{\Omega}<0$) configurations (pulsar and
antipulsar in the terms of \citet{Ruderman/Sutherland75}). For pulsars
the accelerating potential is positive, i.e. it increases from the
surface of the NS toward the force-free zone above the pair formation
front. In the case of an antipulsar the potential is negative: it decreases
toward the pair formation front, because positive charges are
accelerated. The equations for the current
density~(\ref{eq:j}),~(\ref{eq:j=a_psi+b}) we used to derive the
equation for the electric potential~(\ref{eq:EqV_for_jGJ_NS}) contain
the expression for the GJ charge density as a factor, and, hence, the
resulting expression for the electric potential is the same for either
sign of the GJ current density. So, for pulsars there is a
\emph{minimum} in the accelerating potential distribution, while for
antipulsars the distribution of the accelerating electric potential has
a \emph{maximum}. Mathematically this results from the different signs of
the integration constant $V_0$.
\subsection{Angular velocity}
\label{sec:angular-velocity}
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig3.eps}
\caption{Ratio of the actual current density to the Goldreich-Julian
current density $\iota(1)$ at the polar cap boundary, where the
    minimum value of this ratio is achieved (see text). The dotted
    lines show contours of $\iota(1)$.}
\label{fig:j/jGJ_Min}
\end{figure}
The normalised angular velocity of the open magnetic field lines in
the force-free magnetosphere with the linear current density
distribution~(\ref{eq:j=a_psi+b}) is given by
\begin{equation}
\label{eq:Beta}
\beta(\psi) =\frac{2 b + a \psi }{2-\psi }
\,.
\end{equation}
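This expression is consistent with the solution~(\ref{eq:Sol_V_jGJ_NS})
through the relation $\beta = 1 + dV/d\psi$ implicit in
eq.~(\ref{eq:EqV_for_jGJ_NS}); indeed, differentiating the potential
gives
\[
1 + \frac{dV}{d\psi}
= 1 - (1+a) + \frac{2(a+b)}{2-\psi}
= \frac{-a(2-\psi) + 2(a+b)}{2-\psi}
= \frac{2b + a\psi}{2-\psi}
= \beta(\psi)
\,.
\]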
For admitted current densities it grows with increasing $\psi$,
because the first derivative $d\beta/d\psi$ is non-negative for the
admitted values of $a,b$. So, the angular velocity either
\emph{increases} toward the polar cap boundary or remains
\emph{constant} over the cap if $a=-b$. The latter case also includes
the Michel solution. The minimum value of $\beta$
\begin{equation}
\label{eq:Beta_Min}
\beta_\mathrm{min} = b
\,,
\end{equation}
is achieved at the rotation axis, where $\psi=0$, and the maximum value
\begin{equation}
\label{eq:Beta_Max}
\beta_\mathrm{max} = 2 b + a
\,,
\end{equation}
at the polar cap boundary, where $\psi=1$. So, the open field lines
can rotate slower, as well as faster, than the NS, but the lines near
the polar cap boundary rotate no slower than the lines near the
rotation axis.
\subsection{Goldreich-Julian current density}
\label{sec:goldr-juli-curr}
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig4.eps}
  \caption{Spindown rate in terms of the Michel spindown rate. Dotted
    lines show contours of $w$. Labels on the lines correspond to the
    values of $w$.}
\label{fig:W}
\end{figure}
The expression for the GJ current density in the polar cap can be obtained
by substituting expression~(\ref{eq:Beta}) for $\beta$ into
equation~(\ref{eq:RhoGJ_local}) for the local GJ charge density. We get
\begin{equation}
\label{eq:RhoGJ_linear_j}
\GJ{j}(\psi) = \GJNS{j}\, \frac{4b + a\psi(4-\psi)}{(\psi -2)^2}
\,.
\end{equation}
For the admitted values of the parameters $a,b$ the derivative
$d\GJ{j}/d\psi$ is always non-negative and, hence, the GJ current
density either \emph{increases} toward the polar cap boundary, or
remains \emph{constant} when $a=-b$. The actual current density,
however, can decrease as well as increase toward the polar cap edge.
For a charge-separated flow the deviation of the current density
from the GJ current density generates an accelerating or a decelerating
electric field when $j<\GJ{j}$ or $j>\GJ{j}$, respectively.
Although in non-stationary cascades the particle flow would not be
charge-separated, the ratio of the actual current density to the GJ
current density may give some clues about the cascade states required by a
particular magnetosphere configuration. This ratio is given by
\begin{equation}
\label{eq:j_to_jGJ}
\iota(\psi) \equiv \frac{j(\psi)}{\GJ{j}(\psi)} =
\frac{(\psi -2)^2 (b+a \psi )}{a \psi (4-\psi) +4 b}
\,.
\end{equation}
For each admitted configuration the current density is equal to the GJ
current density at the rotation axis. For the admitted values of the
parameters $a,b$ the derivative $d\,\iota/d\psi$ is always positive,
and, hence, the current density in terms of the GJ current density
\emph{decreases} toward the polar cap boundary. So, except at the
rotation axis, the current density in the polar cap is always less than
the GJ current density. The relative deviation of the actual current
density from the GJ current density is maximal at the polar cap
boundary
\begin{equation}
\label{eq::j_to_jGJ_min}
\iota(1) = \frac{a+b}{3 a+4 b}
\,.
\end{equation}
This ratio reaches its maximum value $\iota_\mathrm{max}=1/3$ when
$b=0$, and its minimum value $\iota_\mathrm{min}=0$ when $a=-b$, which
also includes the case of the Michel current density distribution.
The contours of $\iota(1)$ are shown in Fig.~\ref{fig:j/jGJ_Min}.
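Both limiting values quoted above follow directly from
eq.~(\ref{eq::j_to_jGJ_min}):
\[
\iota(1)\big|_{b=0} = \frac{a}{3a} = \frac{1}{3}
\,,
\qquad
\iota(1)\big|_{a=-b} = \frac{-b+b}{-3b+4b} = 0
\,.
\]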
\subsection{Spin-down rate and the total energy of electromagnetic
field in the magnetosphere}
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig5.eps}
\caption{Electric potential in the polar cap of pulsar as a function
of the normalised flux function $\psi$ for magnetosphere
configurations with a constant current density across the cap. In
all cases $V_0$ is set to zero. Numbers near the lines correspond
    to the following values of $b$: 1 --- $b=0$; 2 --- $b=0.5$; 3 ---
$b=\Max{b}/2$; 4 --- $b=1$; 5 --- $b=\Max{b}$. The line
corresponding to the minimum potential drop across the cap is
shown by the thick solid line (the line 3).}
\label{fig:a=0_v}
\end{figure}
In our notation the spindown rate of an aligned rotator is
(cf. eq.~(60) in Paper~I)
\begin{equation}
\label{eq:W_general}
W = \Wmd \int_0^{\PsiL} S(\Psi) \beta(\Psi)\, d\Psi
\,,
\end{equation}
where $\Wmd$ is the magnetodipolar energy loss rate defined as
\begin{equation}
\label{eq:Wmd}
\Wmd = \frac{B_0^2\RNS^6\Omega^4}{4c^3}
\,.
\end{equation}
Substituting the expression for the poloidal
current~(\ref{eq:SMonopoleNormalized}) and using the normalised flux
function $\psi$ we get
\begin{equation}
\label{eq:W}
W = \Wmd\, \PsiL^2 \int_0^1 \beta^2(\psi) \psi(2-\psi)\, d\psi
\,.
\end{equation}
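As a simple check of the normalisation: for the Michel solution
$\beta(\psi)\equiv1$, and the integral in eq.~(\ref{eq:W}) is
elementary,
\[
\int_0^1 \psi(2-\psi)\, d\psi
= \left[ \psi^2 - \frac{\psi^3}{3} \right]_0^1
= \frac{2}{3}
\,,
\]
which reproduces eq.~(\ref{eq:W_Michel}).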
The expression for the spindown rate in the Michel solution
\begin{equation}
\label{eq:W_Michel}
W_\mathrm{M} = \frac{2}{3} \PsiL^2 \Wmd
\end{equation}
differs by a constant factor from the spindown rate obtained in
numerical simulations of the corotating aligned rotator magnetosphere.
However, it has a very similar dependence on the size of the corotating
zone $x_0$ (cf. equations~(62),~(63) in Paper~I). As our solutions are
obtained in the split-monopole approximation, they should differ from the
real solution in approximately the same way as the Michel solution
does. Because of this it is more appropriate to normalise the
spindown rate to the spindown rate in the Michel split-monopole
solution. Doing so, we are able to study the effect of
differential rotation on the energy losses separately from the
dependence of the spindown rate on the size of the corotating zone.
For the normalised spindown rate in the considered case of linear
current density we get
\begin{multline}
\label{eq:W_to_W_Michel}
w \equiv \frac{W}{W_\mathrm{M}} =
4 a^2 (3 \log2 - 2) + \\
+ 3 a b (8 \log2-5) + 6 b^2 (2 \log2-1)
\,.
\end{multline}
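As a check, for the Michel parameter values $a=-1$, $b=1$ this
expression gives
\[
w\big|_{a=-1,\,b=1}
= 4 (3\log2-2) - 3 (8\log2-5) + 6 (2\log2-1)
= 1
\,,
\]
as it must, since the spindown rate is normalised to the Michel value.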
In Fig.~\ref{fig:W} contour lines of $w$ are shown in the domain of
admitted values of the parameters $a,b$. We see that the spindown rate can
vary significantly, from zero to a value exceeding the Michel
energy losses by a factor of $\approx6$. It increases with increasing
values of the parameters $a,b$ and decreases with decreasing values,
owing to the corresponding increase or decrease of the total
poloidal current in the magnetosphere.
The total energy of the magnetosphere can be estimated from the
split-monopole solution. Using formula~(\ref{eq:W__Spindown})
derived in Appendix~\ref{sec:energy-electr-field}, we have for the
total energy of the electromagnetic field
\begin{equation}
\label{eq:W_total}
\mathcal{W}\simeq\Pol{\mathcal{W}} + (R-\RNS)\,W
\,,
\end{equation}
where $\Pol{\mathcal{W}}$ is the total energy of the poloidal magnetic
field and $R$ is the radius of the magnetosphere. The first term in
our approximation is the same for all magnetosphere configurations;
the difference in the total energy arises from the second term.
Hence, the contours of constant total energy in the plane $(a,b)$
have the same form as the contours of the spindown rate $W$ shown in
Fig.~\ref{fig:W}. So, the total energy of the magnetosphere increases
with increasing parameters $a,b$, i.e. it increases with the
increase of the poloidal current.
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig6.eps}
\caption{Normalised angular velocity of the open magnetic field
lines as a function of the normalised flux function $\psi$ for
magnetosphere configurations with a constant current density
across the cap. Labelling of the curves is the same as in
Fig.~\ref{fig:a=0_v}.}
\label{fig:a=0_beta}
\end{figure}
\subsection{Example solutions}
\label{sec:part-solutions}
As examples we consider here the properties of two particular
solutions in detail. We chose these solutions because either their
current density or their potential drop seems to correspond to
``natural'' states of the polar cap cascades. Although we do not
claim that either of these solutions is realised in a real pulsar
configuration, knowledge of their properties may be helpful in
understanding the physical conditions to which the polar cap cascades
should adjust.
\subsubsection{Configurations with constant current density}
\label{sec:constant-j}
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig7.eps}
\caption{Current density as a function of the normalised flux
function $\psi$ for magnetosphere configurations with a constant
current density across the cap. Labelling of the curves is the
same as in Fig.~\ref{fig:a=0_v}. By the thick grey line the ratio
of the actual current density to the GJ current density
$\iota(\psi)$ is shown. For this case it is the same for all
solutions.}
\label{fig:a=0_j}
\end{figure}
At first we consider the case when the current density is constant
over the polar cap, i.e. $a=0$ and $j=b\,\GJNS{j}$. A constant current
density distribution would be produced by cascades in their ``natural''
state if the current adjustment proceeds locally, without a strong
influence from the currents along adjacent field lines. The electric
potential in this case is
\begin{equation}
\label{eq:V_a_eq_0}
\Const{V}(\psi) = V_0+1-\psi-2b\log(2-\psi)
\,.
\end{equation}
This potential has the following properties (see Fig.~\ref{fig:a=0_v}
where $V(\psi)$ is shown for several values of $b$ assuming for the
sake of simplicity $V_0=0$):
\begin{itemize}
\item the admitted values of the current density in the polar cap of
  a pulsar are within the interval $[0,\Max{b}]$, where
  $\Max{b}=1/\log2\simeq{}1.443$;
\item if $0<b<\Max{b}/2\simeq{}0.721$ the value of the electric
  potential at the rotation axis $\Const{V}(0)$ is larger than the
  value at the polar cap edge $\Const{V}(1)$,
  $\Const{V}(0)>\Const{V}(1)$;
\item if $\Max{b}/2<b<\Max{b}$ the value of the electric potential at
  the rotation axis $\Const{V}(0)$ is smaller than the value at the
  polar cap edge $\Const{V}(1)$, $\Const{V}(0)<\Const{V}(1)$;
\item if $0<b<1/2$ or $1<b<\Max{b}$ the potential is a monotone
  function of $\psi$; if $1/2<b<1$ it has a minimum;
\item at the point $b=\Max{b}/2$ the maximum potential drop across the
  polar cap reaches its minimum value $\Max{\Delta{}V}=0.086$.
\end{itemize}
The reason for this behaviour of the potential is easy to understand
from Fig.~\ref{fig:ab_range} in
Appendix~\ref{sec:App-maximum-potent-drop}. The critical points where
$V(\psi)$ changes its behaviour are the points where the line $a=0$
intersects the boundaries of the regions I, II, III, and IV.
The angular velocity of the open magnetic field lines is
\begin{equation}
\label{eq:Beta_b_eq_0}
\Const{\beta}(\psi) = \frac{2 b}{2-\psi}
\end{equation}
The distribution of the corresponding angular velocity is shown in
Fig.~\ref{fig:a=0_beta}. For $b>1$ the angular velocity of rotation
of all open magnetic field lines is larger than the angular velocity
of the NS. For $b<1/2$ all magnetic field lines rotate slower than the
NS. For $1/2<b<1$ some open field lines near the rotation axis rotate
slower than the NS, while the other open field lines rotate faster
than the NS.
The current density distribution in terms of the GJ current density is
\begin{equation}
\label{eq:j2jGJ_a_eq_0}
\Const{\iota}(\psi) = \frac{1}{4}(2-\psi)^2
\,,
\end{equation}
and it does not depend on the value of the parameter $b$. The current
density is always sub-Goldreich-Julian, except at the rotation axis,
where it is equal to the GJ current density.
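The cancellation of $b$ is immediate from eq.~(\ref{eq:j_to_jGJ}):
setting $a=0$ there gives
\[
\Const{\iota}(\psi) = \frac{(\psi-2)^2\, b}{4 b} = \frac{1}{4}(2-\psi)^2
\,.
\]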
The normalised spindown rate for the considered case has a simple
quadratic dependence on the current density
\begin{equation}
\label{eq:w_a_eq_0}
\Const{w} = 6 (\log4 -1) b^2
\,.
\end{equation}
This dependence is shown in Fig.~\ref{fig:a=0_w}. The energy losses in
a configuration with a constant current density cannot be higher than
$\approx{}4.82$ times the energy losses in the corresponding Michel
solution.
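This limit corresponds to $b=\Max{b}=1/\log2$ in
eq.~(\ref{eq:w_a_eq_0}):
\[
\Const{w}\big|_{b=\Max{b}}
= \frac{6\,(2\log2-1)}{\log^{2}2}
\simeq 4.82
\,,
\]
where we used $\log4-1 = 2\log2-1$.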
The case $b=1$ is worth mentioning separately, as it is ``the most
natural'' state for the space charge limited particle flow, for which
the current density at the surface of the NS is equal to the
corotational GJ current density. In
Figs.~\ref{fig:a=0_v},~\ref{fig:a=0_beta},~\ref{fig:a=0_j} the lines
corresponding to this case are labelled with ``4''. The maximum
potential drop for the configuration with the current density
distribution equal to the corotational GJ current density is
$\Max{\Delta{}V}=0.386$ and the angular velocity of the open field
lines varies from $1$ at the rotation axis to $2$ at the polar cap
boundary.
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig8.eps}
\caption{Spindown rate of an aligned rotator normalised to the
spindown rate in the Michel solution for magnetosphere
configurations with a constant current density across the cap as a
function of the current density in the polar cap (parameter $b$).}
\label{fig:a=0_w}
\end{figure}
\subsubsection{Configurations with the smallest potential drops}
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig9.eps}
\caption{Electric potential in the polar cap of pulsar as a function
of the normalised flux function $\psi$ for magnetosphere
configurations with the smallest potential drop across the cap.
In all cases $V_0$ is set to zero. Numbered lines correspond to
the following values of $a$: 1 --- $a=-1$ (Michel's solution); 2 ---
$a=0$ (solution with a constant current density); 3 --- $a=1$; 4 ---
    $a=2$; 5 --- $a=1/(\log4-1)$.}
\label{fig:opt_v}
\end{figure}
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig10.eps}
\caption{Normalised angular velocity of the open magnetic field
lines as a function of the normalised flux function $\psi$ for
magnetosphere configurations with the smallest potential drops
across the cap. Labelling of the curves is the same as in
Fig.~\ref{fig:opt_v}.}
\label{fig:opt_beta}
\end{figure}
As the next example we consider the case when the maximum potential
drop across the polar cap for a fixed value of either $a$ or $b$ is
minimal. The points corresponding to such values of the parameters are
shown in
Figs.~\ref{fig:Vmax_Contours},~\ref{fig:j/jGJ_Min},~\ref{fig:W} by the
thick grey line. The equation of this line in the plane $(a,b)$ is
derived in Appendix~\ref{sec:App-maximum-potent-drop},
equation~(\ref{eq:DVb_is0}). In some sense this is an optimal
configuration for the cascade zone, because for a fixed value of the
current density at a given magnetic field line such a configuration
requires the smallest potential drop among all admitted
configurations. The accelerating potential for the considered class
of configurations is
\begin{equation}
\label{eq:V_ab_opt}
\Opt{V}(\psi) = V_0 - (a+1)
\left\{
\psi +\log\left[ \left( 1-\frac{\psi}{2} \right)^{\frac{1}{\log2}}\right]
\right\}
\,.
\end{equation}
The potential is shown as a function of $\psi$ in Fig.~\ref{fig:opt_v}
for several particular cases, assuming for the sake of simplicity zero
potential drop at the polar cap boundary. The potential always has a
minimum at the point
\begin{equation}
\label{eq:psi_min__ab_opt}
\Opt{\Min{\psi}} = 2-\frac{1}{\log 2} \simeq 0.557
\,,
\end{equation}
the position of this minimum does not depend on the values of $a,b$.
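The position of the minimum follows from setting the derivative of
eq.~(\ref{eq:V_ab_opt}) to zero:
\[
\frac{d\Opt{V}}{d\psi}
= -(a+1)\left[ 1 - \frac{1}{(2-\psi)\log2} \right] = 0
\quad\Longrightarrow\quad
2-\psi = \frac{1}{\log2}
\,,
\]
which gives eq.~(\ref{eq:psi_min__ab_opt}) for any $a\ne-1$.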
The minimum value of the maximum potential drop across the cap,
$\min(\Max{\Delta{}V})=0$, is achieved at the left end of the grey
line, at the point $(a=-1,b=1)$ corresponding to the Michel solution.
The maximum potential drop across the cap for this class of
configurations, $\max(\Max{\Delta{}V})=0.309$, is achieved at the
right end of the grey line, at the point $(a=1/(\log4-1),b=0)$.
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig11.eps}
\caption{Current density as a function of the normalised flux
function $\psi$ for magnetosphere configurations with the smallest
potential drops across the cap. Labelling of the curves is the
same as in Fig.~\ref{fig:opt_v}.}
\label{fig:opt_j}
\end{figure}
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig12.eps}
\caption{Ratio of the actual current density to the GJ current
density $\iota$ as a function of the normalised flux function
$\psi$ for magnetosphere configurations with the smallest
potential drops across the cap. Labelling of the curves is the
same as in Fig.\ref{fig:opt_v}.}
\label{fig:opt_j2gj}
\end{figure}
The angular velocity of the open field lines is
\begin{equation}
\label{eq:Beta_ab_opt}
\Opt{\beta}(\psi) = \frac{a+1}{(2-\psi )\log2}-a
\end{equation}
The distribution of $\Opt{\beta}(\psi)$ is shown in
Fig.~\ref{fig:opt_beta}. Before the minimum point $\Opt{\Min{\psi}}$,
$\beta$ is not greater than 1; after that point $\beta$ is not smaller
than 1. With increasing maximum potential drop, the variation
of the angular velocity across the polar cap becomes larger.
The current density distribution in the considered case has the form
\begin{equation}
  \label{eq:j__ab_opt}
  \Opt{j}(\psi) = \GJNS{j} \left[ a \psi + \frac{a (1-\log4) + 1}{\log4} \right]
  \,.
\end{equation}
These distributions are shown in Fig.~\ref{fig:opt_j}. All curves pass
through the point $\hat{\psi}=1-1/\log4$, where the current density is
$\Opt{j}(\hat{\psi})=\GJNS{j}/\log4$.
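The existence of this common point can be seen by factoring out the
dependence on $a$:
\[
\Opt{j}(\psi)
= \GJNS{j}\left[ a\left( \psi - 1 + \frac{1}{\log4} \right)
  + \frac{1}{\log4} \right]
\,,
\]
so the coefficient of $a$ vanishes at $\hat{\psi} = 1 - 1/\log4$,
independently of the value of $a$.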
The current density distribution in terms of the Goldreich-Julian
current density is
\begin{equation}
\label{eq:j2jGJ_opt}
\Opt{\iota}(\psi) =
\frac{(\psi -2)^2 \left[ (\psi -1)a\log4 + a + 1 \right]}%
{\psi{}(4-\psi )a\log4 + 4a(1-\log4) + 4}
\,.
\end{equation}
It monotonically decreases from 1 at the rotation axis to its minimum
value at the PC boundary. This minimum value lies in the range
$[0,1/3]$; the lowest value corresponds to the left end of the grey
curve (the Michel solution), and the upper value corresponds to the
right end of the grey line. The outflow is sub-GJ everywhere except at
the rotation axis.
The normalised spindown rate for the considered case is
\begin{multline}
\label{eq:w_opt}
\Opt{w} = \frac{1}{2 \log^{2}2}
\left\{
\left[ 2\log^{2}2 - 3(1-\log2) \right] a^2 -
\right.\\
\left.
- 3(2-3\log2)a - 3(1-2\log2)
\right\}
\,.
\end{multline}
It is shown as a function of the parameter $a$ in
Fig.~\ref{fig:opt_w}. We see that for the considered configuration the
spindown rate as a function of the parameter $a$ increases more slowly
than the spindown rate for configurations with a constant current
density as a function of $b$, cf. Fig.~\ref{fig:a=0_w}. The energy
losses cannot be higher than $\approx{}2.13$ times the energy losses
in the corresponding Michel solution.
\begin{figure}
\includegraphics[clip,width=\columnwidth]{fig13.eps}
\caption{Spindown rate of an aligned rotator normalised to the
spindown rate in the Michel solution for magnetosphere
configurations with the smallest potential drops across the polar
cap as a function of the parameter $a$.}
\label{fig:opt_w}
\end{figure}
\section{Discussion}
\label{sec:Discussion}
The main aim of this paper was to study the range of admitted current
density distributions in the force-free magnetosphere of an aligned
rotator. Taking into account that this subject has not been studied in
detail before, the linear model used in this work is in our opinion
an adequate approach to the problem. Knowledge of the
magnetosphere's behaviour in response to different potential drops in
the polar cap would be very useful for future modelling of
non-stationary polar cap cascades. This formalism could be used as a
tool for quickly judging whether a particular model of the polar
cap cascades is compatible with the force-free magnetosphere or not.
It may also give a clue as to how the magnetosphere would respond to a
particular current density distribution obtained at some step in the
course of a numerical solution. Although the analytical model presented
here needs to be refined by numerical simulations, the availability of
analytical relations should be very handy for numerical cascade
modelling.
We have considered here the simple case when the current density in the
polar cap of a pulsar is a linear function of the magnetic flux.
However, the generalisation of the model to the case of a more
complicated shape of the current density distribution is
straightforward. One should proceed with the steps described at the
beginning of Section~\ref{sec:Equation_for_V} for the desired form
of the current density distribution. The resulting equation for the
electric potential will be an ordinary differential equation, and the
numerical solution of such an equation for any given current density
will not be a problem.
The main conclusion we would like to draw from the presented results
is that even for a rather moderate potential drop in the acceleration
zone the current density distribution can deviate significantly from
the ``canonical'' Michel distribution, which is basically preserved
for the dipole geometry if all field lines corotate with the NS (see
Paper~I). In particular, a magnetosphere configuration with a
constant current density at the level of 73 per cent of the
Goldreich-Julian current density at the NS surface would require a
potential drop in the acceleration zone of the order of 10 per cent
of the vacuum potential drop. For time-dependent cascades this may be
realised even in young pulsars. We should note, however, that for
young pulsars a potential drop of the order of 10 per cent of the
vacuum drop could cause overheating of the polar cap by the particles
accelerated toward the NS
\citep[e.g.][]{Harding/Muslimov:heating_1::2001,Harding/Muslimov:heating_2::2002}.
In that sense such a potential drop may be too large for young pulsars.
On the other hand, without knowledge of the dynamics of non-stationary
cascades it is in our opinion too preliminary to exclude the
possibility of such configurations for young pulsars, as in the
non-stationary regime the heating of the cap may not be as strong as
in stationary cascades \citep[see][]{Levinson05}.
We used the split-monopole approximation for the poloidal current
density distribution in the magnetosphere, which would produce
accurate results only for configurations with a very small corotating
zone, smaller in size than the light cylinder,
$x_0\ll{}1/\beta(\PsiL)$, see Fig.~\ref{fig:Magnetosphere}. For the
(most interesting) case when these sizes are comparable, the results
obtained in this work could be considered only as a zeroth-order
approximation to the real problem. We should point out an important
modification introduced by the dipole geometry of the magnetic field.
For the dipole geometry there will be some magnetic field lines which
bend toward the equatorial plane at the light cylinder. For these field lines the second term%
\footnote{which in the cylindrical coordinates $(\varpi,\phi,z)$ is
$2\beta\pd{}_\varpi\Psi$}
on the r.h.s. of equation~(\ref{eq:CondAtLC}) will be negative, and in
order to get a positive current density along these lines a steeper
dependence of $\beta$ on $\psi$ would be necessary. As a result, the
potential drop in the dipole geometry would be higher than the one
obtained in our approximation. Figuratively speaking, in our model we
could correct only for the decrease of the electric current density
toward the polar cap boundary present in the Michel solution. On the
other hand, if $\beta(\PsiL)>1$, the size of the corotating zone can
be smaller as well as larger%
\footnote{for the closed magnetic field lines the angular velocity is
  $\Omega$ and they are still inside \emph{their} light cylinder; the
  adjustment of the angular velocity occurs in the current sheet,
  which is for sure a non-force-free domain.}
than the light cylinder radius at the last open field line. In the
latter case there should be fewer magnetic field lines which bend
toward the equatorial plane than in the first case, cf.
Fig.~\ref{fig:Magnetosphere}, cases \textbf{I} and \textbf{II}. Hence,
the correction introduced by the dipolar field geometry for some
subset of our solutions would be non-monotonic as the size of the
corotating zone $x_0$ increases. So there is still a possibility that a
moderate potential drop could allow a large variety of current
densities, although this issue needs careful investigation.
In this paper we ignored the electrodynamics of the polar cap zone.
Although without a theory of time-dependent cascades we cannot put
more limits on the electric potential than the one we used in
Section~\ref{sec:Beat(psi)}, there is an additional limitation coming
from basic electrodynamics. Namely, the accelerating potential
near conducting walls, which the current sheet at the polar cap edge
is believed to be, should approach zero. However, we could speculate
that there is a thin non-force-free zone at the edge of the polar cap
where the adjustment of the potential happens. In other words, the
return current and the actually nearly equipotential region may not
occupy the whole non-force-free zone. For this reason that limitation
would not necessarily restrict our solutions.
Finally, we would like to discuss briefly the issue of the pulsar
braking index. If the inner pulsar magnetosphere is force-free, then
the spindown rate of an aligned pulsar as a function of angular
velocity will deviate from the power law $W\propto\Omega^4$ if the
size of the corotating zone and/or the distribution $\beta(\psi)$
change with time. The assumption that these ``parameters'' are time
dependent seems natural to us, because with the ageing of a pulsar
the conditions in the polar cap cascade zone change and the
magnetosphere should adjust to these new conditions. In the framework of
our model we can make some simple estimates of how the braking index
of a pulsar is affected by changes of these two ``parameters''.
As an example we consider the case when the pulsar magnetosphere
evolves through a set of configurations with a constant current
density. The spindown rate for such configurations is
\begin{equation}
\label{eq:W_x0_b___a_0}
W \propto \Omega^4 \PsiL^2 b^2 \sim \Omega^4 x_0^{-2} b^2
\,,
\end{equation}
where we estimate \PsiL{} assuming a dipole field in the corotation zone.
If $b$ and/or $x_0$ are functions of time, the spindown rate will be
different from the spindown of a dipole in vacuum. If at some moment
the dependences of the size of the corotating zone and of the current
density on $\Omega$ can be approximated as $x_0 \propto \Omega^\xi$
and $b_0 \propto \Omega^\zeta$ respectively, the braking index of the
pulsar measured at that time is
\begin{equation}
\label{eq:nbreak__a_0}
n = 3 - 2\xi + 2\zeta
\,.
\end{equation}
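To illustrate equation~(\ref{eq:nbreak__a_0}), the following short script (our own numerical sketch, not part of the analysis above; the function name and sample values are ours) recovers the braking index $n=\Omega\ddot{\Omega}/\dot{\Omega}^2$ by finite differences from the spindown law $\dot{\Omega}\propto-\Omega^{3-2\xi+2\zeta}$ implied by equation~(\ref{eq:W_x0_b___a_0}).

```python
# Numerical sketch (not from the paper): W ~ Omega^4 * x0^-2 * b^2 with
# x0 ~ Omega^xi and b ~ Omega^zeta implies Omega_dot ~ -Omega^(3-2*xi+2*zeta),
# and the braking index n = Omega * Omega_ddot / Omega_dot^2 equals that exponent.

def braking_index(xi, zeta, Omega=0.7, k=1.0, h=1e-6):
    p = 3.0 - 2.0 * xi + 2.0 * zeta
    f = lambda O: -k * O ** p                           # Omega_dot(Omega)
    fprime = (f(Omega + h) - f(Omega - h)) / (2.0 * h)  # d(Omega_dot)/d(Omega)
    # Omega_ddot = f'(Omega) * Omega_dot, hence n = Omega * f'(Omega) / f(Omega)
    return Omega * fprime / f(Omega)

print(braking_index(0.0, 0.0))   # constant x0 and b: n ~ 3
print(braking_index(0.5, 0.0))   # shrinking corotating zone: n ~ 2
print(braking_index(0.0, 0.5))   # current density decreasing with spin-down: n ~ 4
```

In agreement with equation~(\ref{eq:nbreak__a_0}), each case returns $n=3-2\xi+2\zeta$.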
We see that the deviation of the braking index from the ``canonical''
value of 3 is twice as large as the exponents describing the
dependence of $b$ and $x_0$ on $\Omega$. The braking index could be
smaller as well as larger than 3, depending on the sign of the
expression $\zeta-\xi$. For instance, if in an old pulsar the
potential drop increases and, as a consequence, the current density
decreases, $\zeta$ is positive, and the braking index could be
\emph{greater} than 3. We note that there is evidence for such
values of the braking index in old pulsars
\citep{Arzoumanian/Chernoff/Cordes:2002}. On the other hand, if $x_0$
decreases with time, as was proposed in Paper~I{} for young
pulsars, the braking index would be less than 3. However, even in this
simple picture different trends may be possible. In reality the
evolution of the pulsar magnetosphere will be more complicated, and
both a steeper and a more gradual dependence of the braking index on
the current density would be possible. This would result in a rather
wide range of possible values of the pulsar braking index.
\section*{Acknowledgments}
I acknowledge J.~Arons, J.~Gil, Yu.~Lyubarsky and G.~Melikidze for
fruitful discussions. I would like to thank J.~Arons for useful
suggestions on the draft version of the article. This work was
partially supported by the Russian grants N.Sh.-5218.2006.2,
RNP.2.1.1.5940, N.Sh.-10181.2006.2 and by the Israel-US Binational
Science Foundation.
\bibliographystyle{mn2e}
\section{Setting}
In order to understand iterative decoding of low-density parity-check
(LDPC) codes, two operations need to be studied. These operations
are the variable node convolution $\otimes$ and the check node
convolution $\boxtimes$. They correspond to the merging of information
by variable nodes and by check nodes, respectively, in the iterative
decoding process. The reader is assumed to be familiar with LDPC
codes as well as with the formalism of modeling channels by densities.
A very complete introduction to the topic is \cite{mct}.
The notion of extremes of information combining (EIC) was introduced
by I. Land, P. Hoeher, S. Huettinger, and J. B. Huber in \cite{EIC03},
and further extended by I. Sutskover, S. Shamai, and J. Ziv, see \cite{EIC05} or \cite{EIC07}. The
idea of EIC is to associate to densities certain functionals, e.g.,
the entropy functional, and to see how these functionals behave
under the combining of information, i.e., the two kinds of convolutions.
The purpose of this work is to solve optimization problems that arise
in this setting. We will focus solely on the check convolution
$\boxtimes$, although many statements can be proven in the same way
for the variable node convolution.
\subsection{Notations}
There are several representations for a binary memoryless and symmetric-output channel (BMS).
As is done for instance in \cite{mct}, we see a BMS as a convex combination of binary symmetric channels (BSC),
given by a weight distribution $w$. Then we have (by definition)
\begin{ex}[Binary Symmetric Channel BSC($\epsilon$)]
\begin{align*}
w_{\text{BSC($\epsilon$)}} = \delta_{\epsilon}
\end{align*}
\end{ex}
\begin{ex}[Binary Erasure Channel BEC($\epsilon$)]
\begin{align*}
w_{\text{BEC($\epsilon$)}} = \bar{\epsilon}\delta_{0} + \epsilon\delta_{\frac{1}{2}}
\end{align*}
\end{ex}
The functionals of interest in this domain are
\begin{align*}
E(a) &= \int_0^{\frac{1}{2}} dw_a(\epsilon) \epsilon \\
H(a) &= \int_0^{\frac{1}{2}} dw_a(\epsilon) h_2\big(\epsilon\big) \\
B(a) &= \int_0^{\frac{1}{2}} dw_a(\epsilon) 2\sqrt{\epsilon (1-\epsilon)}
\end{align*}
which we call, respectively, the error probability, the entropy and
the Bhattacharyya functional. These can all be thought of as measures
of the channel quality. They are equal to $0$ for the perfect
channel and equal to $1$ for a useless channel. Applying these functionals
to the check convolution of two densities gives
\begin{align*}
E(a \boxtimes b) &= \frac{1-(1-2E(a))(1-2E(b))}{2} \\
H(a\boxtimes b) &= \int dw_a(\epsilon) dw_b(\epsilon') h_2\Big(\frac{1-(1-2\epsilon)(1-2\epsilon')}{2}\Big)\\
B(a\boxtimes b) &= \int dw_a(\epsilon) dw_b(\epsilon')\sqrt{1-\big( (1-2\epsilon)(1-2\epsilon') \big)^2}
\end{align*}
In the sequel we will frequently refer to the following two functions, $f_H$ and $f_B$:
\begin{align*}
f_H : X &\in [0;1] \mapsto h_2\big(\frac{1-X}{2}\big) \\
f_B : X &\in [0;1] \mapsto \sqrt{1-X^2}
\end{align*}
\subsection{Motivation}
A classical result in EIC, shown in \cite{EIC05} and \cite{EIC03},
is the following.
\begin{thm}\label{prev}
Let $b_0$ be any BMS channel. Amongst channels $a$ with fixed entropy $h$, $H(a \boxtimes b_0)$ is
\begin{itemize}
\item minimized by the \text{BSC}$(h_2^{-1}(h))$ and
\item maximized by the \text{BEC}$(h)$.
\end{itemize}
\end{thm}
A quick and useful application of Theorem \ref{prev} is to give bounds on
the thresholds of LDPC codes. The same statement can be made with
the Bhattacharyya functional $B$. We will give an alternative (calculus-free)
proof of the second item in Section~\ref{obounds}.
Sometimes one might need to deal with non-linear
expressions such as $H(a^{\boxtimes 4})-H(a^{\boxtimes 12})$. Let
us sketch very loosely, following \cite{URK11.2}, how such
expressions can appear. Apart from the Shannon threshold, another
threshold called the \emph{Area Threshold} can be defined. The Area
Threshold depends on the code and channel family under consideration.
In the case of a code taken from the $(d_l,d_r)$-regular ensemble,
one can compute this threshold $h^A$.
Consider a code taken from the $(d_l,d_r)$-regular ensemble, and transmission over a ``gentle'' channel family $\{ c_{\sigma}\}_{\underline{\sigma}}^{\overline{\sigma}}$, that is, a family that is smooth, ordered, and complete\footnote{The definitions of these terms can be found for instance in \cite{mct}. Examples of such families include, amongst many others, the $\{\text{BEC}(h)\}_0^{1}$ and the $\{\text{BSC}(h)\}_0^{1}$, as well as combinations of these two, and other classical families like the $\{\text{BAWGNC}(\sigma)\}_0^{\infty}$.}. Ordered means that the bigger the channel parameter $\sigma$ the worse the channel is; in other words, all the functionals introduced above increase with $\sigma$. Smooth means that we can differentiate with respect to $\sigma$ under the integrals.
Then one can define a GEXIT curve in the following manner. Take a fixed point (FP) $(c_{\sigma},x)$ of density evolution and define $y = x^{\boxtimes d_r-1}$. Then plot
$$
\big(H(c_{\sigma}), G(c_{\sigma},y^{\otimes d_l}) \big),
$$
where
$
G(c_{\sigma}, \cdot) = \frac{H(\frac{dc_{\sigma}}{d\sigma} \otimes \cdot)}{H(\frac{dc_{\sigma}}{d\sigma})}.
$
In the case of the BEC, changing the channel parameter $\sigma$ corresponds to revealing certain bits, and the kernel $G(c_{\sigma},\cdot)$ represents the probability that a given bit was not previously known from the observation of the values of other neighboring bits\footnote{Neighbors is to be understood in the sense of the Tanner graph, as usual.}. In general everything has the same meaning, but with soft information.
The kernel models how much additional information (compared to using only extrinsic observations) is gained about a generic bit if the channel is made slightly better. For instance, if the channel changes from being useless ($H(c)=1$) to slightly better, all the information we get is useful, because with a useless channel nothing is known. So there is a point at $(1,1)$.
So intuitively, the area below this curve (assuming it exists and is smooth) between $h$ and $1$ should be a measure, in bits, of the total useful information that we get through BP decoding for $\sigma$ s.t. $H(c_{\sigma})=h$. As the rate of the code is roughly $1-\frac{d_l}{d_r}$, $1-\frac{d_l}{d_r}$ bits of information are enough to fully determine a codeword.
It is then natural to define the Area Threshold $h^{A}$ as the point on the horizontal axis s.t. the area below the curve from $h^{A}$ to $1$ is equal to the design rate $1-\frac{d_l}{d_r}$. Of course this notion depends on the channel family.
However, an iterative decoder like BP might not be able to ``use'' all this information\footnote{Think of the BEC, for which what BP does is solve a system of equations by iteratively solving equations in which all variables but one are known. Even if the system is full rank, there might still be large portions that remain unknown to BP.}. So in general $h^{\text{BP}} \leq h^{A}$.
It would make sense that $h^{\text{MAP}} = h^{A}$, although in the general setting all that is known is $h^{\text{MAP}} \leq h^{A}$.
In \cite{URK11.2}, it is shown that the value of the integral from $h$ to $1$ is
\begin{align}
1-\frac{d_l}{d_r} - h-(d_l-1-\frac{d_l}{d_r})H(x^{\boxtimes d_r}) + (d_l-1)H(x^{\boxtimes d_r-1})
\end{align}
where $x$ is ``the'' BP fixed point with entropy $h$ for the channel family under consideration.
The value of $h^{A}$, turns out to be the right bound of the domain where the following holds
\begin{align}\label{area}
- h-(d_l-1-\frac{d_l}{d_r})H(x^{\boxtimes d_r}) + (d_l-1)H(x^{\boxtimes d_r-1}) \geq 0.
\end{align}
Here, $x$ is ``the'' density evolution fixed point with entropy $h$ under
belief propagation (BP) decoding. In \cite{URK11.2} it is shown
that (\ref{area}) indeed holds \emph{universally} over all BMS
channels $x$ with entropy lower than or equal to $\frac{d_l}{d_r}$, in
the asymptotic regime $d_l,d_r \to \infty$ with $\frac{d_l}{d_r}$ fixed. This
implies that the Area Threshold universally approaches the Shannon
threshold. We will give another proof of this fact in Section
\ref{sareat} (see Proposition \ref{cclarea}).
In \cite{URK11.2} it is then shown that a class of spatially coupled
codes achieves the Area Threshold under BP decoding. Combined
with the fact above, this gives a new way to achieve capacity.
\section{Results}
Our results fit in a slightly more general framework than that of Theorem \ref{prev}:
we will consider expressions of the type $\Phi(\rho(a))$
where $\rho$ is a polynomial, and $\Phi$ is either $H$ or $B$.
We use the following notation
\begin{notation}
Let $\rho(X) = \sum c_i X^{i}$ be any polynomial s.t. $\rho(0) =0$,\footnote{Instead of considering polynomials that vanish at $0$, we could use a convention like $a^{\boxtimes 0}=$``Perfect Channel''.} i.e. $c_0 = 0$, and let $\Phi$ be one of the functionals above. We will use the convention
\begin{align}\label{def1}
\Phi(\rho(a)) \stackrel{\text{def}}{=} \sum_i c_i \Phi(a^{\boxtimes i})
\end{align}
\end{notation}
The following two statements are our main results.
We prove them in the next section.
\begin{prop}\label{t1}
Let $\rho$ be any polynomial s.t. $\rho(0)=0$,
and $\Phi$ be $H$ or $B$. Consider the following problem
\begin{align*}
\mbox{\text{MAX} }&\Phi(\rho(a))\\
\mbox{s.t. }&\Phi(a) = \phi_0
\end{align*}
Then, if $\rho$ is $\cup$-convex over $[0;f_{\Phi}^{-1}(\phi_0)^2]$, the BEC solves this problem.
\end{prop}
\begin{prop}\label{t2}
Let $\rho$ be any polynomial s.t. $\rho(0)=0$, and $\Phi$ be $H$ or $B$. Consider the following problem
\begin{align*}
\mbox{\text{OPT} }&\Phi(\rho(a))\\
\mbox{s.t. }&E(a) = \epsilon
\end{align*}
Then, if $\rho$ is increasing over $[0;1-2\epsilon]$,
\begin{itemize}
\item the \text{BEC} minimizes this problem.
\item the \text{BSC} maximizes this problem.
\end{itemize}
\end{prop}
\begin{disc}
The hypotheses of these propositions are probably not tight; they just ease the proofs.
The reader should not pay too much attention to the obscure term $f_{\Phi}^{-1}(\phi_0)^2$.
The maximizing part of the previous result, Theorem \ref{prev}, follows as a special case of Proposition \ref{t1} with $\rho=X^d$.
Our improvement, technically speaking, is dealing with polynomials other than $X^d$.
\end{disc}
Proposition \ref{t1} only addresses half of the question. We suspect that
in most cases the minimizer is the BSC, and pose this as an interesting open question.
Dealing with the problem (\ref{area}) requires a lower bound.
This is the purpose of the following lemma.
\begin{lem}\label{l1}
Suppose $\rho$ is increasing over $[0;f_{\Phi}^{-1}(\phi_0)^2]$. Then, for all channels $a$ with $\Phi(a)=\phi_0$,
\begin{align}\label{b1}
\Phi(\rho(a)) \geq \rho(1) - \rho(f_{\Phi}^{-1}(\phi_0)^2)
\end{align}
\end{lem}
\vspace*{0.1cm}
\section{Proofs}
Before we start the proof, a few preliminary observations are needed.
\subsection{Preliminary observations}\label{prelim}
Let $\Phi$ be either $H$, the entropy, or $B$, the Bhattacharyya functional. In both cases the ``kernel'' $f_{\Phi}$ can be expanded in a power series,
\begin{align*}
f_\Phi(X) = 1 - \sum_{n=1}^{\infty} a_{\Phi,n} X^{2n}
\end{align*}
where equality still holds for $X=1$. The crucial property of
$\big(a_{\Phi,n}\big)_n$ is that all the terms are positive and
furthermore
$$
\sum_{n\geq1} a_{\Phi,n} = 1
$$
The explicit formulas are
\begin{align*}
a_{H,n} &= \frac{1}{2\log(2)n(2n-1)} \\
a_{B,n} &= \frac{{2n\choose n}}{(2n-1)4^n}
\end{align*}
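These coefficients are easy to check numerically; the short script below (an illustrative sketch of ours, not part of the paper) verifies that the stated $a_{H,n}$ and $a_{B,n}$ reproduce the two kernels and sum to one.

```python
import math

# Check f_Phi(X) = 1 - sum_{n>=1} a_{Phi,n} X^(2n) for the two kernels
# f_H(X) = h2((1-X)/2) and f_B(X) = sqrt(1-X^2), with the stated coefficients.

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def a_H(n):
    return 1.0 / (2.0 * math.log(2.0) * n * (2 * n - 1))

def a_B(n):
    return math.comb(2 * n, n) / ((2 * n - 1) * 4.0 ** n)

X = 0.6
fH_series = 1.0 - sum(a_H(n) * X ** (2 * n) for n in range(1, 200))
fB_series = 1.0 - sum(a_B(n) * X ** (2 * n) for n in range(1, 200))
print(fH_series - h2((1 - X) / 2))            # ~ 0
print(fB_series - math.sqrt(1 - X * X))       # ~ 0
print(sum(a_H(n) for n in range(1, 200000)))  # close to 1 (this sum converges slowly)
```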
This expansion can be plugged into the definition of $\Phi(a)$ to yield
\begin{align*}
1-\Phi(a) &= 1-\int dw_a(\epsilon)f_{\Phi}(1-2\epsilon)\\
&=\sum a_{\Phi,n} \int dw_a(\epsilon)(1-2\epsilon)^{2n}
\end{align*}
and we can proceed in a similar fashion for $\Phi(a \boxtimes b)$ or more complicated expressions.
\begin{defn}[moments]
For a channel $a$, its $n$-th moment is defined by
$$\gamma_{a,n} = \int dw_a(\epsilon)(1-2\epsilon)^{2n}.$$
\end{defn}
We call the $\gamma_{a,n}$s \emph{moments} even if, strictly speaking,
they are not. Note that in terms of moments, the \text{BEC} is
characterized by having all its moments equal, and the \text{BSC}
by having moments that decrease geometrically.
\begin{ex}
Fix $\phi_0$, where $\Phi$ is either the Bhattacharyya functional or the entropy.
Consider the \text{BEC} and the \text{BSC} for which $\Phi(\cdot)$ is equal to $\phi_0$. Then,
\begin{align*}
\gamma_{\text{BEC},n} &= 1-\phi_0 \\
\gamma_{\text{BSC},n} &= f_{\Phi}^{-1}(\phi_0)^{2n}
\end{align*}
\end{ex}
With this definition
\begin{align}\label{w1}
1-\Phi(a) = \sum_{n\geq 1} a_{\Phi,n} \gamma_{a,n}
\end{align}
Note also that if $\Phi=H$, then $1-\Phi$ is none other than $C$, the capacity functional. Also, using Fubini, we see that
$
\int dw_a(\epsilon)\, dw_b(\epsilon')(1-2\epsilon)^{2n}(1-2\epsilon')^{2n} = \gamma_{a,n} \gamma_{b,n}
$
and it follows that
\begin{align}{\label{e1}}
1-\Phi(a\boxtimes b) = \sum a_{\Phi,n} \gamma_{a,n} \gamma_{b,n}
\end{align}
and this yields straightforwardly
\begin{align}{\label{f1}}
1-\Phi(a^{\boxtimes i}) = \sum a_{\Phi,n} \gamma_{a,n}^{i}
\end{align}
More generally, if $\rho=\sum_{i \geq 1} c_i X^{i}$ is a polynomial
\begin{align*}
\Phi(\rho(a)) &\stackrel{(\ref{def1})}{=} \sum_i c_i \Phi(a^{\boxtimes i}) \\
&\stackrel{(\ref{f1})}{=}\sum_i c_i \left( 1-\sum_n (a_{\Phi,n} \gamma_{a,n}^i)\right) \\
&\stackrel{\text{def}}{=} \sum_i c_i-\sum_n a_{\Phi,n} \underbrace{\sum_i c_i \gamma_{a,n}^i}_{\rho(\gamma_{a,n})}
\end{align*}
which can be rewritten as
\begin{align}\label{phirhogen}
\Phi(\rho(a)) = \rho(1) - \sum_n a_{\Phi,n}\rho(\gamma_{a,n})
\end{align}
Although very simple, the expansion above gives an efficient way
to derive numerous bounds. All the proofs presented here rely heavily
on it.
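As a sanity check of expansion (\ref{e1}), the following snippet (our own illustration; the two example channels are made up) compares $H(a\boxtimes b)$ computed directly with its moment expansion, for channels given as weighted mixtures of BSC components.

```python
import math

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def a_H(n):
    return 1.0 / (2.0 * math.log(2.0) * n * (2 * n - 1))

def H_box(a, b):
    # direct computation of H(a box b); a, b are lists of (weight, epsilon) pairs
    return sum(wa * wb * h2((1 - (1 - 2 * ea) * (1 - 2 * eb)) / 2)
               for wa, ea in a for wb, eb in b)

def gamma(a, n):
    # the n-th "moment" of channel a
    return sum(w * (1 - 2 * e) ** (2 * n) for w, e in a)

a = [(0.3, 0.05), (0.7, 0.25)]     # some BMS channel
b = [(0.6, 0.0), (0.4, 0.5)]       # BEC(0.4) written as a mixture of BSCs
series = 1.0 - sum(a_H(n) * gamma(a, n) * gamma(b, n) for n in range(1, 400))
print(H_box(a, b), series)         # the two numbers agree
```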
It will be convenient in the sequel to know the range that the
moments can achieve. They are positive and decreasing in $n$, so the
biggest moment is the first one, $\gamma_{a,1}$. The next lemma states
which channel $a$ maximizes $\gamma_{a,1}$.
\begin{lem}\label{l2}
Amongst all channels $a$, s.t. $\Phi(a) = \phi_0$, the \text{BSC} maximizes $\gamma_{a,1}$.
\end{lem}
\begin{proof}
The function $x \mapsto x^n$ is $\cup$-convex. Using Jensen's inequality
\begin{align*}
\gamma_{a,n} = \int dw_a(\epsilon) (1-2\epsilon)^{2n} \geq \Big(\int dw_a(\epsilon) (1-2\epsilon)^2\Big)^n = \gamma_{a,1}^n
\end{align*}
Then notice
\begin{align*}
1-\phi_{0} = \sum a_{\Phi,n} \gamma_{a,n} &\geq \sum a_{\Phi,n} \gamma_{a,1}^n \\
&=1-f_{\Phi}(\sqrt{\gamma_{a,1}})
\end{align*}
Inverting this inequality ($f_{\Phi}^{-1}$ is decreasing because $f_{\Phi}$ is) gives
\begin{align*}
\gamma_{a,1} \leq (f_{\Phi}^{-1}(\phi_0))^2
\end{align*}
The bound is attained by and only by the \text{BSC}, for which indeed
\begin{align}\label{g1bsc}
\gamma_{\text{BSC},1} = f_{\Phi}^{-1}(\phi_0)^2
\end{align}
\end{proof}
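This bound is easy to test numerically. The sketch below (our own; the sampling scheme is arbitrary) draws random two-mass-point channels, computes their entropy $\phi_0$ and first moment, and checks $\gamma_{a,1} \leq f_H^{-1}(\phi_0)^2$, inverting $h_2$ by bisection.

```python
import math, random

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2inv(y, tol=1e-12):
    # inverse of h2 on [0, 1/2] by bisection (h2 is increasing there)
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < y else (lo, mid)
    return (lo + hi) / 2

random.seed(0)
for _ in range(1000):
    alpha = random.random()
    e1, e2 = 0.5 * random.random(), 0.5 * random.random()
    phi0 = alpha * h2(e1) + (1 - alpha) * h2(e2)          # entropy of the mixture
    gamma1 = alpha * (1 - 2 * e1) ** 2 + (1 - alpha) * (1 - 2 * e2) ** 2
    assert gamma1 <= (1 - 2 * h2inv(phi0)) ** 2 + 1e-9
print("gamma_{a,1} <= f_H^{-1}(phi0)^2 on 1000 random mixtures")
```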
\begin{notation}
We may write $\gamma_1$ instead of $\gamma_{\text{BSC},1}$.
\end{notation}
Bounds can be used at two different levels. Either we bound the moments themselves, as in the derivation of Lemma \ref{l1}; that is the first level. Or we can look at the expressions from one step further and see
$
\sum a_{\Phi,n} \gamma_{a,n}
$
as an expectation
$
E(\gamma)
$.
Here the expectation is taken w.r.t. a discrete measure given
by the weights $(a_{\Phi,n})$. In this second setup we can then use classical inequalities, like
Jensen's inequality. That is the idea of the proof of Proposition \ref{t1}.
\subsection{Proof of Proposition \ref{t1}}\label{pt1}
Notice that, by assumption and Lemma \ref{l2}, the range over which $\rho$ is convex covers the values that the moments can take.
\begin{align*}
\Phi(\rho(a)) &\stackrel{(\ref{phirhogen})}{=} \rho(1) - \sum_n a_{\Phi,n}\rho(\gamma_{a,n}) \\
&\stackrel{\text{Jensen}}{\leq} \rho(1) - \rho\big( \sum_n a_{\Phi,n}\gamma_{a,n}\big)\\
&\stackrel{(\ref{w1})}{=}\rho(1) - \rho(1-\phi_0)
\end{align*}
To conclude notice that
$
\rho(1) - \rho(1-\phi_0) = \Phi(\rho(\text{BEC}(\phi_0)))
$.
\subsection{Proof of Proposition \ref{t2}}
Proposition \ref{t2} is a direct corollary of the following lemma.
\begin{lem}\label{l3}
For all $n\in\field{N}$, amongst the channels $a$ with fixed error probability $E(a)=\epsilon$, the one that minimizes (resp. maximizes) $\gamma_{a,n}$ is the \text{BSC}$(\epsilon)$ (resp. the \text{BEC}$(2\epsilon)$).
\end{lem}
\begin{proof}[Proof of Lemma \ref{l3}]
Even though it is not mandatory to do so, by the Caratheodory Principle (see Section \ref{conv}) we can restrict ourselves to combinations of two $\delta$'s:
$$ a = \alpha \delta_{\epsilon_1}+\bar{\alpha} \delta_{\epsilon_2}$$
Then, using the $\cup$-convexity of $\epsilon \mapsto (1-2\epsilon)^{2n}$ (Jensen's inequality for the lower bound, and the chord bound $(1-2\epsilon)^{2n}\leq 1-2\epsilon$ for the upper bound), we have
\begin{align*}
\underbrace{1-2\epsilon}_{\gamma_{\text{BEC},n}} \geq \underbrace{\alpha (1-2\epsilon_1)^{2n} + \bar{\alpha} (1-2\epsilon_2)^{2n}}_{\gamma_{a,n}} \geq \underbrace{(1-2\epsilon)^{2n}}_{\gamma_{\text{BSC},n}}
\end{align*}
\end{proof}
The polynomial $\rho$ is assumed to be increasing over $[0;1-2\epsilon]$, that is, over a range that covers all the values the moments can take. Using this and Lemma \ref{l3}, the optimizers of each term in the series expansion of
$
\Phi(\rho(a)) \stackrel{(\ref{phirhogen})}{=} \rho(1) - \sum_n a_{\Phi,n}\rho(\gamma_{a,n})
$
are the same, so we know they are the global optimizers.
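The two-sided bound on the moments used above is again easy to verify numerically; in the sketch below (ours, with arbitrary sampling parameters) every random two-mass-point channel with error probability $\epsilon$ has its $n$-th moment squeezed between those of BSC($\epsilon$) and BEC($2\epsilon$).

```python
import random

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 8)
    alpha = random.random()
    e1, e2 = 0.5 * random.random(), 0.5 * random.random()
    eps = alpha * e1 + (1 - alpha) * e2                    # E of the mixture
    gamma_a = alpha * (1 - 2 * e1) ** (2 * n) + (1 - alpha) * (1 - 2 * e2) ** (2 * n)
    # gamma_{BSC(eps),n} <= gamma_{a,n} <= gamma_{BEC(2eps),n}
    assert (1 - 2 * eps) ** (2 * n) - 1e-12 <= gamma_a <= (1 - 2 * eps) + 1e-12
print("moment extremality of BSC/BEC verified")
```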
\subsection{Proof of Lemma \ref{l1}}
\begin{proof}
We simply use $\gamma_{a,n} \leq \gamma_{a,1}$ and the monotonicity of $\rho$ to get
$$\forall n \, \rho(\gamma_{a,n}) \leq \rho(\gamma_{a,1})$$
Then
\begin{align*}
\Phi(\rho(a)) \stackrel{(\ref{phirhogen})}{=} \rho(1) - \sum_n a_{\Phi,n}\rho(\gamma_{a,n}) \geq \rho(1) - \rho(\gamma_{a,1})\underbrace{\sum_n a_{\Phi,n}}_{1},
\end{align*}
and using Lemma \ref{l2} and again the monotonicity of $\rho$
\begin{align*}
\rho(\gamma_{a,1}) \leq \rho(\gamma_{\text{BSC},1}) = \rho\big(f_{\Phi}^{-1}(\phi_0)^2\big)
\end{align*}
(\ref{b1}) follows.
\end{proof}
\input{obounds}
\section{An application: studying the area threshold}\label{sareat}
Remember that our initial problem was to study when (\ref{area}) holds.
Fix $c_0>0$; we would first like to know when{\small
\begin{align}\label{targ2}
-A = - h - (d_l-1-\frac{d_l}{d_r})H(a^{\boxtimes d_r})+(d_l-1)H(a^{\boxtimes d_r-1}) \geq c_0
\end{align}
}
holds. We are going to show
\begin{lem}\label{neglem}
If the following two conditions are fulfilled then (\ref{targ2}) holds.
\begin{itemize}
\item[(i)] $\big(1-2h_2^{-1}(h)\big)^2 \leq \big(\frac{c_0}{d_l-1}\big)^{\frac{1}{d_r-1}}$
\item[(ii)] $h\leq \frac{d_l}{d_r}-2c_0 $
\end{itemize}
\end{lem}
\begin{proof}
Define
\begin{align*}
d & =d_r & \kappa &= \frac{d_l-1-\frac{d_l}{d_r}}{d_l-1} &\rho &= X^{d-1}-\kappa X^{d}
\end{align*}
We are going to use the bound from Lemma \ref{l1}.
The condition for $\rho$ to be increasing over the range of interest is
$\big(1-2h_2^{-1}(h)\big)^2 \leq \frac{d_r-1}{\kappa d_r}$
which is always true when $\kappa$ is given the value $\kappa = \frac{d_l-1-\frac{d_l}{d_r}}{d_l-1}$. So by Lemma \ref{l1}
{\small
\begin{align}\label{b12}
H(\rho(a)) \geq 1-\kappa - \left(1-2h_2^{-1}(h)\right)^{2(d-1)} + \kappa \left(1-2h_2^{-1}(h)\right)^{2d}
\end{align}
}
and then,
{\small
\begin{align*}
&- h - (d_l-1-\frac{d_l}{d_r})H(a^{\boxtimes d_r})+(d_l-1)H(a^{\boxtimes d_r-1}) \\
&= - h+(d_l-1)H(\rho(a)) \\
&\stackrel{(\ref{b12})}{\geq} -h+(d_l-1)\Big[1-\kappa-\rho\big((1-2h_2^{-1}(h))^2\big)\Big]
\end{align*}
}
Also $(d_l-1)(1-\kappa) = \frac{d_l}{d_r}$. So for (\ref{targ2}) to hold it is enough that
\begin{align*}
&-h+(d_l-1)\Big[1-\kappa-\rho\big((1-2h_2^{-1}(h))^2\big)\Big] \geq c_0 \\
&\Leftrightarrow \frac{d_l}{d_r}-h - (d_l-1)\rho\big((1-2h_2^{-1}(h))^2\big) \geq c_0
\end{align*}
which can be rewritten
\begin{align}\label{int1}
\frac{d_l}{d_r}-h \geq \underbrace{(d_l-1)\rho\big((1-2h_2^{-1}(h))^2\big)}_{\xi(h)} +c_0
\end{align}
Condition (i) ensures that $\xi(h) \leq c_0$, and then (ii) makes (\ref{int1}) true. Indeed,
{\small
\begin{align*}
\xi(h) &= (d_l-1)(1-2h_2^{-1}(h))^{2d_r-2}- (d_l-1-\frac{d_l}{d_r})(1-2h_2^{-1}(h))^{2d_r} \\
&\leq (d_l-1)(1-2h_2^{-1}(h))^{2d_r-2}\\
&\leq (d_l-1)\Big(\frac{c_0}{d_l-1}\Big)^{\frac{d_r-1}{d_r-1}} = c_0
\end{align*}
}
If we are interested only in the sign of $A(h)$, and not in how far it is from $0$, we can let $c_0 = f(d_l,d_r)$ decay, to increase the range of valid $h$. For instance, taking $c_0 = (d_l-1) \exp(-\sqrt{d_r-1})$,
\begin{align*}
(i) \Leftrightarrow h_2^{-1}(h) \geq \frac{K}{\sqrt{d_r}}(1 + o(1))
\end{align*}
where $K$ is some constant. Asymptotically this can be taken (changing the constant) to be simply
$$
h_2^{-1}(h) \geq \frac{K}{\sqrt{d_r}}.
$$
\end{proof}
In the end, we are left with
\begin{prop}\label{cclarea}
For $d_l,d_r$ large enough, the range for which (\ref{area}) holds contains an interval of the form $[L(d_l,d_r);R(d_l,d_r)]$, where
\begin{align*}
L(d_l,d_r)&= h_2(\frac{K}{\sqrt{d_r}}) \\
R(d_l,d_r)&= \frac{d_l}{d_r} - o(d_r \exp(-\sqrt{d_r}))
\end{align*}
\end{prop}
\begin{rem}
Actually, changing $c_0(d_l,d_r)$, we could replace any $\sqrt{.}$ by $(.)^{\alpha}$ for any $\alpha <1$.
\end{rem}
Proposition \ref{t1} in this context can be rephrased as
\begin{cor}
Define $\kappa$ as above,
$$
\kappa = \frac{d_l-1-\frac{d_l}{d_r}}{d_l-1}.
$$
Amongst channels $a$ with fixed entropy $h$, and assuming the following condition is fulfilled,
\begin{align}\label{cond}
(1-2h_2^{-1}(h))^{2} \leq \frac{d_r-2}{\kappa d_r},
\end{align}
the BEC($h$) maximizes
$$
-h-(d_l-1-\frac{d_l}{d_r})H(x^{\boxtimes d_r}) + (d_l-1)H(x^{\boxtimes d_r-1})
$$
\end{cor}
\section{Convex optimization and the shape of extremal densities}\label{conv}
Classical convex analysis provides powerful tools that allow (at least in the case where the target functional is linear) to drastically reduce the range of possible optimizers. Remember that we represented channels by probability measures over $[0;1]$. The basic principle is as follows.
\begin{thm}[Dual Caratheodory]\label{car}
Take $\Phi$ \emph{any} continuous linear functional over BMS channels, like all those discussed above, and consider the following problem
\begin{align*}
\mbox{OPT }&\Phi(a)\\
\mbox{s.t. }&(\Phi_1(a),\ldots, \Phi_m(a)) = (\phi_1,\ldots,\phi_m)\\
\end{align*}
Then there are extremal densities $a_{+}$ and $a_{-}$ with support of cardinality at most $m+1$.
\end{thm}
\begin{disc}
The constraints are also assumed to be linear. A more extensive source on the topic is \cite{conv}.
This principle sheds some light on why the \text{BSC} (which has one mass point in our representation) and the \text{BEC} (which has two) appear so often as extremal densities when we consider problems with a single constraint. Indeed, one constraint corresponds to at most two mass points.
Extensions of the Caratheodory Principle were amongst the tools used in \cite{EIC07} to track \emph{two} channel parameters (namely $H$ and $E$) through the process of iterative decoding. As a result, new bounds on iterative decoding were shown.
\end{disc}
It seems hard to derive proofs using solely Theorem \ref{car}. However, it can be used for numerical experiments. One way to proceed is as follows. Consider the target functional $\Phi(\rho(a))$ where $\rho$ is of degree $d$. Introduce $d$ variables $a_1,a_2,\ldots,a_d$ and replace (for $k\leq d$)
\begin{align*}
\Phi(a^{\boxtimes k}) \rightsquigarrow {d \choose k}^{-1}\sum_{1\leq i_1<\ldots < i_k\leq d} \Phi(a_{i_1}\boxtimes \ldots\boxtimes a_{i_k})
\end{align*}
Denote by $\tilde{\Phi}(a_1,\ldots,a_d)$ the expression we get. If it is maximized by a tuple in which all the $a_i$ are the same, then we know the initial expression has the same maximizer. To maximize $\tilde{\Phi}$, a simple tractable heuristic is to optimize coordinate after coordinate: starting from random $a_i$'s, fix every coordinate except coordinate $i$, find the best combination of two $\delta$'s for this coordinate, and repeat for all $i\leq d$. This gave good results for the motivational expression (\ref{area}) and led to the following claim.
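A minimal implementation of this heuristic for $d=2$ and $\Phi=H$ is sketched below (our own code, not from the paper; the grid size, entropy value and stopping rule are arbitrary choices). Candidate channels are two-mass-point mixtures whose weight is pinned down by the entropy constraint, and each coordinate step is a grid search.

```python
import math, itertools

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def H_box(a, b):
    # H(a box b) for channels given as lists of (weight, epsilon) pairs
    return sum(wa * wb * h2((1 - (1 - 2 * ea) * (1 - 2 * eb)) / 2)
               for wa, ea in a for wb, eb in b)

def candidates(h, grid):
    # all two-mass-point channels alpha*delta_{e1} + (1-alpha)*delta_{e2}
    # whose entropy alpha*h2(e1) + (1-alpha)*h2(e2) equals h
    out = []
    for e1, e2 in itertools.product(grid, repeat=2):
        lo, hi = h2(e1), h2(e2)
        if lo <= h <= hi and hi > lo:
            alpha = (hi - h) / (hi - lo)
            out.append([(alpha, e1), (1 - alpha, e2)])
    return out

h = 0.3
grid = [0.5 * k / 20 for k in range(21)]      # includes the endpoints 0 and 1/2
cands = candidates(h, grid)
a1 = a2 = cands[0]                            # arbitrary starting point
for _ in range(4):                            # alternate over the two coordinates
    a1 = max(cands, key=lambda a: H_box(a, a2))
    a2 = max(cands, key=lambda a: H_box(a1, a))
print(H_box(a1, a2), 2 * h - h * h)           # heuristic value vs. H(BEC box BEC)
```

In accordance with Theorem \ref{prev}, the search settles on the BEC pair, whose value is $1-(1-h)^2=2h-h^2$.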
\begin{claim}
The expression in (\ref{area}), for any $h$ and when $(d_l,d_r) = (3,6)$ or $(5,10)$ (the cases we tested) is always minimized by the BSC and maximized by the BEC.
\end{claim}
\section*{Acknowledgment}
The author would like to thank R\"udiger Urbanke for his guidance and support.
\bibliographystyle{IEEEtran}
\section{Other Inequalities}\label{obounds}
Here we give other inequalities that can be derived using the power series expansion, just as in the proofs of Propositions \ref{t1} and \ref{t2}. We will only prove (\ref{i5}), along with its equality case, which gives the second part of Theorem \ref{prev}. Remember that $\Phi$ stands for either $H$ or $B$. The reals $\alpha$ and $\beta$ sum to $1$.
{\small
\begin{align}
1-\Phi(a\boxtimes b) &\geq (1-\Phi(a))(1-\Phi(b)) \label{i5}\\
\Phi(a^{\boxtimes d}) \leq \Phi(a)\Big(d -& \Phi(a) - \Phi(a^{\boxtimes 2}) - \ldots - \Phi(a^{\boxtimes d-1}) \Big) \label{i*}\\
1-\Phi(a) &\leq \sqrt{1-\Phi(a\boxtimes a)}\label{i2}\\
1-\Phi(a \boxtimes b) &\leq \sqrt{1-\Phi(a\boxtimes a)}\sqrt{1-\Phi(b \boxtimes b)}\label{i3} \\
1-\Phi(a \boxtimes b) &\leq \sqrt{1-\Phi(a\boxtimes a\boxtimes b)}\sqrt{1-\Phi(b)}\label{i4} \\
\Phi \left((\alpha a + \beta b)^{\boxtimes d} \right)&\geq \alpha \Phi(a^{\boxtimes d}) + \beta \Phi(b^{\boxtimes d})\label{i18}\\
\sqrt[d]{1-\Phi((\alpha a + \beta b )^{\boxtimes d})} &\leq \alpha\sqrt[d]{1-\Phi(a^{\boxtimes d})} +\beta \sqrt[d]{1-\Phi(b^{\boxtimes d})} \label{i6}\\
\Phi(a) &\leq f_\Phi(1-2E(a))\label{i1}\\
1-\Phi(a \boxtimes b) &\leq (1-\Phi(a))(1-2E(b))\label{i7}
\end{align}
}
\begin{proof}
(\ref{i5}): We proceed as in Section \ref{pt1}, except using an inequality other than Jensen's. Recall from (\ref{e1}) that
$$1-\Phi(a \boxtimes b) = \sum a_{\Phi,n} \gamma_{a,n} \gamma_{b,n}.$$
We use the following corollary of the FKG inequality,
$$\field{E}(fg) \geq \field{E}(f)\field{E}(g),$$
valid whenever $f$ and $g$ have the same monotonicity. The equality case is when $f$ or $g$ is constant a.e. Here $f:n \mapsto \gamma_{a,n}$, $g:n\mapsto \gamma_{b,n}$ and $\field{E}(f)=\sum a_{\Phi,n} f_n$.
So, since the moments are decreasing, we get
\begin{align*}
1-\Phi(a \boxtimes b) &= \sum a_{\Phi,n} \gamma_{a,n} \gamma_{b,n} \\
&\geq \sum a_{\Phi,n} \gamma_{a,n} \sum a_{\Phi,n} \gamma_{b,n}\\
&= (1-\Phi(a))(1-\Phi(b))
\end{align*}
with equality when $a$ or $b$ is from the \text{BEC} family.
\end{proof}
\section{Introduction}
\label{sec:intr}
The Higgs mechanism gives rise to clear and detectable signatures that can be probed at hadron colliders. The Tevatron is now leading a race~\cite{Collaboration:2009je}, which the LHC will soon join, to find this spinless particle. The most relevant searches for a SM Higgs boson by the LEP experiments~\cite{Schael:2006cr} were based on the associated production mechanism, via $e^+ e^- \to Z h( \to b \bar{b})$ and $e^+ e^- \to Z h (\to \tau^+ \tau^-)$. A limit of $m_h > 114.4$~GeV was obtained by combining all LEP analyses based on such searches. As the coupling of a Higgs boson to gauge bosons is fixed in the SM, it is clear that a lighter Higgs can only exist in a model where its couplings to gauge bosons (and possibly those to fermions) are reduced relative to the SM. Moreover, any complementary process, such as $e^+e^-\to Ah$, where $A$ is a pseudo-scalar Higgs boson and $h$ is the lightest scalar Higgs boson (i.e., the one with the same quantum numbers as the SM state) in the model, which appears, e.g., in a pure 2-Higgs Doublet Model (2HDM) or in the Minimal Supersymmetric Standard Model (MSSM), has to be kinematically forbidden. This is a consequence of the sum rule $g_{hZZ}^2 + g_{hAZ}^2 = g_{h_{\rm SM}ZZ}^2$, valid in a pure 2HDM and in the MSSM. There were also searches at LEP based on the Yukawa processes $e^+ e^- \to b \bar{b} h(\to \tau^+ \tau^-)$ in the Higgs mass range $m_h =4-12$~GeV in~\cite{Abbiendi:2001kp}, and in the channels $b \bar{b} b \bar{b}$, $b \bar{b} \tau^+ \tau^-$ and $ \tau^+ \tau^- \tau^+ \tau^-$ for Higgs masses up to 50~GeV in~\cite{Abdallah:2004wy}. These studies are only relevant for large values of the Yukawa couplings and will be discussed in Section~\ref{sec:bounds}.
Such a scenario, with a very light Higgs, can easily arise by adding Higgs scalar singlets and/or doublets to the Higgs sector of the SM. Starting from the first simple extension that allows for a lighter Higgs state - i.e., adding a neutral singlet - we will explore all possible scenarios to a maximum of a Democratic 3-Higgs Doublet Model (3HDM (D))~\cite{Grossman:1994jb} plus one neutral scalar singlet. We will see that from the very simplest extension to all models built thereafter, a very light Higgs state is allowed via a reduction of the couplings to gauge bosons. As expected, the more freedom is added by the new fields the more parameter space is available to accommodate a light Higgs boson. The possibility of the existence of a very light Higgs boson, with a mass below 80 GeV, in the context of models with two Higgs doublets, was discussed in~\cite{Kalinowski:1995dm}. More recently, a very light Higgs boson in the context of the MSSM, with a mass as low as about 60 GeV, was discussed in~\cite{Belyaev:2006rf}.
Assuming this light object would have eluded all LEP searches due to a small enough $g_{hVV}$ coupling (where $V=W,Z$), the production mechanisms that involve such $g_{hVV}$ couplings, like vector boson fusion or Higgs-strahlung, have consequently negligible cross sections. Therefore, we have to rely on the production modes where Higgs couplings to fermions are present. Under these conditions, the process with the largest cross section is gluon fusion. Hence, we have performed a detailed parton level study on the feasibility of the detection of a very light Higgs state (below $\sim 100 \, GeV$) at the LHC in the production process $pp \to h j \to \tau^+ \tau^- j$~\cite{Ellis:1987xu}, where $j$ represents a resolved jet (a process that indeed proceeds mainly via gluon fusion). We have recently presented a similar study for the SM Higgs boson~\cite{us0} (see also \cite{Mellado:2004tj}) for the LHC and a study for the Tevatron was performed in~\cite{Belyaev:2002zz}.
The reason to choose such a final state, involving $\tau$'s, amongst those accessible at hadron colliders, is clear. Had we chosen $h \to b \bar{b}$ instead, we would have been overwhelmed by QCD background, while the final state with $h \to \gamma \gamma$ was not considered because, when we vary the Higgs boson mass from 120 $GeV$ (where the Branching Ratio (BR) roughly peaks for a SM-like Higgs) down to 20 $GeV$, the corresponding BR drops by a factor of $\sim$ 10. Therefore, unless one is exploring a model where the Higgs decay to photons is enhanced, the $h \to \tau^+ \tau^-$ mode is the most appropriate channel for light Higgs states. Other production channels, like $pp \to h t \bar{t}$, will be explored in the future~\cite{us}. We will show that such a very light Higgs could be detected with this process in several extended models and that for particular scenarios an early detection is also possible. At the very least, an effort should be made to definitively exclude such a light particle, and the LHC certainly has the means to do so.
The plan of the paper is as follows. The next section is devoted to describing the signal and background processes in the SM, while the following one extrapolates our findings to a variety of beyond the SM scenarios with an enlarged Higgs sector. Sect. IV introduces the experimental and theoretical bounds enforced in our analysis. Final results are presented in Sect. V (for various scenarios separately) while Sect. VI draws our conclusions. Finally, an appendix will help us in classifying the four types of 2HDM.
\section{Signal and backgrounds in the SM}
\label{sec:signal}
In a recent work~\cite{us0} we have performed a detailed parton level study on the feasibility of the detection of a Higgs boson in the gluon fusion process $pp (gg+gq) \to h j\to \tau^+ \tau^- j$ at the LHC. In this section we will extend the study to Higgs masses below 100 $GeV$, to probe extensions of the SM where a very light Higgs is still allowed. The results will be presented for the case where all Higgs couplings to fermions are the SM ones, so that they can be reused in extensions of the SM with the same final state. As a detailed discussion of the parton level analysis was already presented in~\cite{us0}, here we will just highlight the main points and refer the reader to reference~\cite{us0} for details. The signal, $pp \to gg (q) \to hg (q) \to \tau^+ \tau^- g (q)$, is a one loop process with partonic contributions from
$gg \to hg$, $gq \to hq$, which is approximately 20 \% of the total cross section, and $qq \to hg$, which was shown to be negligible~\cite{Abdullin:1998er} and was not taken into account in our study. We note that this is a parton level study: effects of initial and final state radiation as well as hadronisation were not taken into account.
The SM signal and all background processes were generated with CalcHEP~\cite{Pukhov:2004ca} and cross checked with MadGraph/MadEvent~\cite{Alwall:2007st}. The Higgs BRs to $\tau^+ \tau^-$ were evaluated with the HDECAY~\cite{Djouadi:1997yw} package (and modifications thereof). In the models with an extended scalar sector, the one loop amplitudes for the signal $pp \to gg (q) \to hg (q) \to \tau^+ \tau^- g (q)$ were generated and calculated with the packages FeynArts~\cite{feynarts} and FormCalc~\cite{formcalc}. The scalar integrals were evaluated with LoopTools~\cite{looptools} and the CTEQ6L parton distribution functions~\cite{cteq} were used. The jet (leptons) energies were smeared according to the following Gaussian distribution
\begin{equation}
\frac{\Delta E}{E}=\frac{0.5 \, (0.15)}{\sqrt{E/GeV}},
\end{equation}
to take into account the respective detector energy resolution effects, where $0.5$ is the factor for jets while $0.15$ is the corresponding factor for leptons.
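As an illustration of the smearing, a jet with $E = 100$ $GeV$ is smeared with a relative resolution of
\begin{equation}
\frac{\Delta E}{E} = \frac{0.5}{\sqrt{100}} = 5 \, \% \, ,
\end{equation}
i.e., $\Delta E = 5$ $GeV$, while a lepton of the same energy is measured with a $1.5 \, \%$ ($1.5$ $GeV$) resolution.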
Each $\tau$ can either decay leptonically or hadronically. As a three jet final state is very hard to identify at a hadron collider, we will concentrate on the other two possibilities - two taus decaying leptonically ($ll$) or one tau decaying leptonically and the other hadronically ($lj$). These are also the final states with robust trigger signatures - the events are selected by an isolated electron with $p_T^e > 22 \, GeV$ or an isolated muon with $p_T^\mu > 20 \, GeV$. In both analyses we have considered the main source of irreducible background: $pp \to Z/\gamma^* j \to l l j$ for $ll$ and $pp \to Z/\gamma^* j \to l j j$, where one jet originates from a tau, for the $l j$ case. In $pp \to Z/\gamma^* j \to l l j$ we include all possible combinations of $l=e,\mu$ and in $pp \to Z/\gamma^* j \to l j j$ only the intermediate state $\tau^+ \tau^- j$ is included - the $jjj$ signature, where a jet would fake a lepton with a given probability, is taken into account in the $jjj$ background.
The main source of reducible background for the $ll$ analysis comes from $pp\to W^+ W^- j$ while for the $lj$ case it is the process $pp\to W j j$ that dominates~\cite{us0}. The tau reconstruction efficiency was taken to be 0.3 and accordingly we have used a tau rejection factor against jets as a function of the jet $p_T$ using the values presented in the ATLAS study in~\cite{Aad:2009wy}. Finally, we have included the $pp \to t\bar{t}$ background taken at Next-to-Leading-Order (NLO). By vetoing the events if the tagging jet is consistent with a $b$-jet hypothesis for $|\eta| < 2.5$ we were able to discard most of the $t\bar{t}$ background. The $t\bar{t}$ background is larger for $ll$ as there are more possible combinations when the $W$ bosons decay leptonically.
\begin{widetext}
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.5cm]{missing60-ll.eps}
\includegraphics[width=7.5cm]{missing60-lj.eps}
\end{center}
\caption{Transverse missing energy distribution for signal and backgrounds for a Higgs mass of 60 $GeV$.
On the left for the $ll$ analysis and on the right for the $lj$ case. In this figure all cuts described in the text
were applied except for the missing energy cut.}
\label{fig:missing}
\end{figure}
\end{widetext}
The two other a priori very important sources of reducible background can be brought to a manageable level through a judicious cut in the transverse missing energy. This is clearly seen in fig.~\ref{fig:missing} where the transverse missing energy (imbalance of all observed momenta) distribution for a Higgs mass of 60 $GeV$ is shown. The huge QCD $pp \to jjj$ background drops five orders of magnitude as soon as we cross the 25 $GeV$ threshold for the missing energy. There is still a tail due to the leptonic decays of $c$ and $b$-quarks, which involve a considerable amount of missing energy, as is clear from fig.~\ref{fig:missing}. In accordance with CMS~\cite{CMS} and ATLAS studies~\cite{Aad:2009wy}, we have used 0.001 as the probability of a jet faking an electron and, as explained earlier, a tau reconstruction efficiency of 0.3 together with a tau rejection factor against jets, as a function of the jet $p_T$~\cite{Aad:2009wy}, that ranges from 0.01 to 0.001.
The identification of the Higgs boson signal can only be accomplished by an effective reduction of the dominating irreducible $Zj$ background. Therefore, the reconstruction of the mass peak $m_{\tau \tau}$ at $m_h$ is essential. For a more detailed discussion see~\cite{us0, Belyaev:2002zz, Ellis:1987xu}. The reconstructed mass distribution is presented in fig.~\ref{fig:window} for a Higgs mass of 60 $GeV$ after all cuts for $ll$ on the left panel and for $lj$ on the right one. In both analyses we have sharp mass peaks for the signal and also clear peaks at $m_Z$ for the $Zj$ background.
\begin{widetext}
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.5cm]{mass60-ll.eps}
\includegraphics[width=7.5cm]{mass60-lj.eps}
\end{center}
\caption{Reconstructed mass $m_{\tau \tau}$ distributions for $\tau^+ \tau^-$
decaying leptonically on the left and semi-leptonically on the right for a Higgs mass of 60 $GeV$.}
\label{fig:window}
\end{figure}
\end{widetext}
Here we summarise our analysis by specifying the following event selection procedure.
\begin{itemize}
\item We require one electron with $p_T^e > 22 \, GeV$ or one muon with $p_T^\mu > 20 \, GeV$ for
triggering purposes. An additional lepton in the event has $p^{e}_T > 15 \, GeV$ and $p^{\mu}_T
> 10 \, GeV$. A 90 \% efficiency is assumed for the reconstruction of the electron and muon and
the separation between leptons and/or jets was chosen as $\Delta R_{j(l)j(l)} > 0.4$ and $|\eta_{l} | < 3.5$ for all leptons.
\item We require that at least one jet has $p^{j}_T > 40 \, GeV$ and $|\eta_{j}| < 4$.
\item We require that the hadronic tau has $p^{j}_T > 20 \, GeV$ and $|\eta_{j}| < 4$.
\item We veto the event if there is an additional jet with $p^{j}_T > 20 \, GeV$ and $|\eta_{j}| < 5$.
\item We apply a mass window $m_h - 15 \, GeV < m_{\tau \tau} < m_h + 15 \, GeV$.
\item Events are vetoed if the tagging jet consistent with a $b$-jet hypothesis is found with $|\eta| < 2.5$
(we assume a $b$-jet tagging efficiency of 60 \%).
\item Finally, we require the transverse missing energy to be $\slashed{E}_T > 30 \, GeV$.
\end{itemize}
In tab.~\ref{tab:sigbac} we present the SM signal and sum of all background cross sections, signal-to-background ($\sigma_{S}/\sigma_{B}$) ratios and the significance ($\sigma_{S}/\sqrt{\sigma_{B}}$) as a function of the Higgs mass. In the first and second columns we show the results for the $ll$ analysis while columns three and four are for the $lj$ case. In the last two columns we present the combined values for the signal-to-background $\sigma_{S}/\sigma_{B}$ ratio and the significance $\sigma_{S}/\sqrt{\sigma_{B}}$. This study could still be extended to values below 20 GeV, as an experimental analysis could well be carried out for such very small values of the Higgs mass. However, the computational tools we are using here are not reliable in this Higgs mass regime, so we refrain from investigating this phenomenological possibility now.
It is clear that the signal observation can be systematically challenging for the larger Higgs masses, but the values of the $\sigma_{S}/\sigma_{B}$ ratio can be improved at the expense of the significance by shrinking the Higgs mass window, especially when its mass is close to the mass of the $Z$ boson. The highest significance and the highest $\sigma_{S}/\sigma_{B}$ ratio occur for $m_h=20$ $GeV$ because this is the value farthest away from the $Z$ peak of the irreducible $Z j$ background.
In tab.~\ref{tab:lumi} we present the luminosities required for a 95 \% Confidence Level (CL) exclusion, 3$\sigma$ and 5$\sigma$ discovery of a Higgs boson with SM-like Higgs couplings to the fermions at $\sqrt{s} = 14$ TeV as a function of the Higgs mass. A light Higgs boson with SM-like couplings to the fermions can be excluded at 95 \% CL in the mass range 20--60 $GeV$ with less than 1 $fb^{-1}$ of total integrated luminosity.
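The luminosities above follow directly from the combined significances $\sigma_{S}/\sqrt{\sigma_{B}}$ quoted before: since the numbers of events scale linearly with $L$, a Gaussian significance $Z$ obeys
\begin{equation}
Z = \frac{\sigma_{S} \, L}{\sqrt{\sigma_{B} \, L}} = \frac{\sigma_{S}}{\sqrt{\sigma_{B}}} \, \sqrt{L} \qquad \Longrightarrow \qquad L = \left( \frac{Z}{\sigma_{S}/\sqrt{\sigma_{B}}} \right)^2 \, .
\end{equation}
For instance, for $m_h = 20$ $GeV$, where $\sigma_{S}/\sqrt{\sigma_{B}} = 3.24$ $\sqrt{fb}$, a 95 \% CL exclusion ($Z = 1.96$) requires $L = (1.96/3.24)^2 \simeq 0.37$ $fb^{-1}$, while a $5 \sigma$ discovery needs $L = (5/3.24)^2 \simeq 2.4$ $fb^{-1}$.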
\begin{table}[ht]
\begin{center}
\begin{tabular}{c c c c c c c c c c c c c c c c c c c c c c c c } \hline \hline
Mass ($GeV$) &&& $\sigma_{S_{(ll)}}$ (fb) &&&& $\sum \sigma_{B_{(ll)}}$(fb)&&&& $\sigma_{S_{(lj)}}$ (fb) &&&& $\sum \sigma_{B_{(lj)}}$ (fb)&&&& $\sigma_{S}/\sigma_{B}$ (\%) &&&& $\sigma_{S}/\sqrt{\sigma_{B}}$ ($\sqrt{fb}$ ) \\
\hline
20 &&& 11.6 &&&& 14.1 &&&& 5.4 &&&& 28.8 &&&& 84.1 &&&& 3.24 \\ \hline
30 &&& 11.8 &&&& 23.0 &&&& 9.9 &&&& 35.4 &&&& 58.4 &&&& 2.97 \\ \hline
40 &&& 11.5 &&&& 27.1 &&&& 10.6 &&&& 41.2 &&&& 49.7 &&&& 2.76 \\ \hline
50 &&& 11.3 &&&& 30.4 &&&& 10.9 &&&& 47.9 &&&& 43.4 &&&& 2.58 \\ \hline
60 &&& 11.9 &&&& 41.2 &&&& 11.3 &&&& 63.3 &&&& 34.0 &&&& 2.34 \\ \hline
70 &&& 13.0 &&&& 169.5 &&&& 12.0 &&&& 149.0 &&&& 11.2 &&&& 1.41 \\ \hline
80 &&& 13.8 &&&& 890.0 &&&& 12.9 &&&& 856.2 &&&& 2.2 &&&& 0.64 \\ \hline
90 &&& 14.4 &&&& 1178.3 &&&& 14.1 &&&& 1145.6 &&&& 1.7 &&&& 0.60 \\ \hline
100 &&& 14.9 &&&& 1124.7 &&&& 15.5 &&&& 1142.6 &&&& 1.9 &&&& 0.64 \\ \hline \hline
\end{tabular}
\caption{Cross sections for signal and sum of all backgrounds after all cuts as a function of the Higgs mass for a Higgs boson with SM-like couplings to the fermions. In the first and second columns we show the results for the $ll$ analysis while columns three and four are for the $lj$ case. In the last two columns we present the combined values for $\sigma_{S}/\sigma_{B}$ and $\sigma_{S}/\sqrt{\sigma_{B}}$, summed in quadrature. The analysis was done for masses between 20 and 100 $GeV$.}
\label{tab:sigbac}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{c c c c c c c c c c c c c c c c c c c} \hline \hline
Mass ($GeV$) &&& 95 \% CL exclusion $L~(fb^{-1})$ &&&& 3$\sigma$ discovery $L~ (fb^{-1})$&&&& 5$\sigma$ discovery $L~ (fb^{-1})$ \\
\hline
20 &&& 0.38 &&&& 0.86 &&&& 2.38 \\ \hline
30 &&& 0.45 &&&& 1.02 &&&& 2.84 \\ \hline
40 &&& 0.53 &&&& 1.18 &&&& 3.28 \\ \hline
50 &&& 0.60 &&&& 1.35 &&&& 3.76 \\ \hline
60 &&& 0.73 &&&& 1.64 &&&& 4.56 \\ \hline
70 &&& 2.02 &&&& 4.56 &&&& 12.7 \\ \hline
80 &&& 9.76 &&&& 22.0 &&&& 61.0 \\ \hline
90 &&& 11.4 &&&& 25.7 &&&& 71.3 \\ \hline
100 &&& 9.87 &&&& 22.2 &&&& 62.0 \\ \hline \hline
\end{tabular}
\caption{Integrated luminosities needed to reach a 95 \% CL exclusion, 3$\sigma$ and 5$\sigma$ discovery for a Higgs boson with SM-like
couplings to the fermions, at the LHC. Luminosities shown are for the combined results of the two analyses
(leptonic and semi-leptonic final states) - signal to background ratio and sensitivities summed in quadrature.}
\label{tab:lumi}
\end{center}
\end{table}
\section{Extensions of the Higgs sector}
\label{sec:ext}
What are the simplest extensions of the Higgs sector of the SM that can accommodate a very light Higgs boson? One can write an extensive list of models with very light (pseudo)scalars - it is enough to enlarge the parameter space by adding an arbitrary number of fields to accomplish such a goal. However, we want these models to reproduce the SM results and to have high predictive power at the LHC. For simplicity we will restrict ourselves to models where Charge and Parity
(CP) is conserved in the Higgs sector and where natural flavour conservation is assumed~\cite{Glashow}. In the SM, the Higgs couplings to gauge bosons are fixed by the gauge structure and the Higgs Vacuum Expectation Value (VEV). One of the simplest extensions of the SM scalar sector is to add a neutral singlet
(i.e., $I=0$ and $Y=0$), where $I$ is the isospin and $Y$ is the hypercharge. Such a singlet does not couple to gauge bosons nor does it couple to fermions. The CP-even component of this singlet can mix with the CP-even Higgs field from the doublet. Therefore, the coupling to fermions and gauge bosons can only change due to the rotation angle related to this mixing. Denoting by $f (\chi)$ the suppression function of the rotation angle $\chi$, the couplings are then just redefined as
\begin{equation}
g_{VVh}^{SM} \to f (\chi) \, g_{VVh}^{SM}, \qquad \qquad g_{ffh}^{SM} \to f (\chi) \, g_{ffh}^{SM},
\label{eq:singlet}
\end{equation}
where $V$ stands for a gauge boson (i.e., $V=W,Z$) and $f$ is a generic fermion~\footnote{If only one singlet is added, $f (\chi) = \sin \chi$ (or $\cos \chi$) depending on how one defines the rotation angle.}. In this case, having a light Higgs implies that $f (\chi) \ll 1$ and therefore not only will the searches based on production and decay processes that proceed via couplings with gauge bosons yield negligible rates, but the same is true for the ones that rely on fermion-Higgs couplings. Such a light Higgs state $h$ could then only be produced in processes involving Higgs self-couplings, like for example the gluon fusion
or vector boson fusion reactions, $gg \to H \to hh$ or in $qq \to qqH \to qqhh$,
respectively. In this scenario, the other CP-even Higgs boson, $H$, is SM-like in its couplings to vector bosons and to fermions. As for decays, the light Higgs BRs are the SM ones because all Higgs couplings to fermions and gauge bosons are rescaled by the same factor, with the decay to light fermions ($b$'s, $c$'s and $\tau$'s) dominating by virtue of the small Higgs mass, for which the gauge boson decay channels are not yet open. Obviously, this scenario cannot be studied with the analysis presented in this work because of a negligible production rate for the $h$ state.
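As a concrete illustration, with a single extra singlet and $f (\chi) = \sin \chi$, all production rates of $h$ are rescaled uniformly,
\begin{equation}
\frac{\sigma (e^+ e^- \to Z h)}{\sigma_{SM}} = \frac{\sigma (gg \to h)}{\sigma_{SM}} = \sin^2 \chi \, ,
\end{equation}
so that for $\sin \chi = 0.1$ every rate sits at the 1 \% level of the SM one, while all BRs retain their SM values.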
The next step is to add one doublet to the SM to obtain what is known as a 2HDM. Here we will not be concerned about any specific 2HDM potential - we just require CP conservation in the Higgs sector. The couplings to gauge bosons are universal and we define the coupling of gauge bosons to the lightest Higgs as $\sin (\beta - \alpha) \, g_{SM}$, where $\beta$ is the mixing angle in the CP-odd and charged sectors ($\tan \beta$ is also the ratio of the Higgs VEVs) and $\alpha$ is the mixing angle in the CP-even Higgs sector. The Yukawa Lagrangian can be built in four different and independent ways~\cite{catalogue} if Flavour Changing Neutral Currents (FCNCs) are to be avoided. Regarding the Yukawa Lagrangian, there are two clearly different scenarios to be considered. The first one is a SM-like scenario where only one doublet, say $\phi_2$, gives mass to all fermions usually referred to as type I model. The second class of models is the one where both doublets participate in the mass generation process. With natural flavour conservation, one can build the following models: type II is the model where $\phi_2$ couples to up-type quarks and $\phi_1$ couples to down-type quarks and leptons; in a type III model $\phi_2$ couples to up-type quarks and to leptons and $\phi_1$ couples to down-type quarks; a type IV model is instead built such that $\phi_2$ couples to all quarks and $\phi_1$ couples to all leptons. We present all Yukawa couplings in appendix A. Adding extra neutral singlets to 2HDMs amounts to a redefinition of the couplings to the SM particles equivalent to~(\ref{eq:singlet}), that is
\begin{equation}
g_{VVh}^{\rm 2HDM} \to f_{\chi_i} \, g_{VVh}^{\rm 2HDM}, \qquad \qquad g_{ffh}^{\rm 2HDM} \to f_{\chi_i} \, g_{ffh}^{\rm 2HDM},
\label{eq:singlet2}
\end{equation}
where $f_{\chi_i}$ depends now on the number of extra singlets that are added and on the explicit form of the scalar potential.
A simple class of extensions of the 2HDM is obtained by adding an arbitrary number, $n$, of doublets that \textit{do not couple} to the fermions (we will call these models 2HDM+nD). Below we will follow closely the discussion in (and notation of) \cite{Barger:2009me}, where a thorough analysis of all models discussed in this work is presented. Now we have to distinguish between type I and the other types II, III and IV. In type I models, only one doublet gives mass to the fermions - one can then build a new field which is a linear combination of all remaining $(n-1)$ doublets that do not couple to the fermions. If again we choose $\phi_2$ to give mass to the fermions and $\phi_1$ as the combined field with VEVs $v_2$ and $v_1$, respectively, we will have
\begin{equation}
v_1^2 + v_2^2 = v^2 \omega^2, \qquad \qquad 0 < \omega \leq 1,
\end{equation}
where $\omega$ is a function of the remaining $(n-1)$ VEVs. $\omega = 1$ is the 2HDM case while $\omega <1 $ is a signal that a non-zero VEV is carried by the linear combination of the $(n-1)$ fields orthogonal to the light Higgs state.
The situation is slightly more complicated for models II, III and IV. Taking Model II as an example and following~\cite{Barger:2009me} we examine the case where just one more doublet is added, and parametrise the mixing with the extra doublet in terms of an angle $\theta$,
\begin{equation}
h= \cos \theta h' + \sin \theta h_0,
\end{equation}
where $h'$ is the usual 2HDM lightest CP-even Higgs boson, defined as $h'= \cos \alpha \, \phi_u - \sin \alpha \, \phi_d$ in terms of the CP-even components of the original doublets $\phi_u$ and $\phi_d$ responsible for giving mass to the fermions; $h_0$ is the CP-even state of the new doublet $\phi_0$ that does not participate in the process of mass generation. There is no mixing between the two fields when $\sin \theta =0$ and in this case $h = h'$. With the usual definition $\tan \beta = v_u/v_d$, we define $\cos \Omega =\sqrt{v_u^2 + v_d^2}/v$ and $\sin \Omega = v_0 /v$, with $0 \leq \Omega < \pi/2$, to write the couplings to gauge bosons as
\begin{equation}
g_{VVh} = (\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta) \, g_{VVh}^{SM}
\end{equation}
and to fermions
\begin{equation}
g_{\bar{u} u h} = \frac{\cos \theta}{\cos \Omega} \frac{\cos \alpha}{\sin \beta} \, g_{ffh}^{SM}, \qquad \qquad g_{\bar{d} d h} = g_{\bar{l} l h} = - \frac{\cos \theta}{\cos \Omega} \frac{\sin \alpha}{\cos \beta} \, g_{ffh}^{SM}.
\end{equation}
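As a consistency check, when the extra doublet decouples, $\theta \to 0$ and $v_0 \to 0$ (i.e., $\Omega \to 0$), the above expressions reduce to the pure type II 2HDM ones,
\begin{equation}
g_{VVh} \to \sin (\beta - \alpha) \, g_{VVh}^{SM}, \qquad g_{\bar{u} u h} \to \frac{\cos \alpha}{\sin \beta} \, g_{ffh}^{SM}, \qquad g_{\bar{d} d h} = g_{\bar{l} l h} \to - \frac{\sin \alpha}{\cos \beta} \, g_{ffh}^{SM} \, ,
\end{equation}
as they must.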
From the physical point of view, adding more doublets will not bring anything new to the particular case we are discussing. The same is true if we add an arbitrary number of singlets - the effect is the same as for the SM: all couplings of the 2HDM Higgs fields are reduced by a common factor.
Finally we discuss the case where the fermion masses arise from couplings to three different Higgs doublets. We restrict our study to the democratic model described and constrained in~\cite{Grossman:1994jb}, where up-type quarks, down-type quarks and charged leptons all get their mass from a different doublet. This model is known as the democratic 3HDM and will be represented by 3HDM(D). Following~\cite{Barger:2009me} we define $\tan \beta = v_u/v_d$, $\cos \Omega =\sqrt{v_u^2 + v_d^2}/v$ and $\sin \Omega = v_l /v$, where $v_u$, $v_d$ and $v_l$ are the VEVs of the doublets that couple to the up-type quarks, down-type quarks and charged leptons, respectively. With these definitions the couplings to gauge bosons are
\begin{equation}
g_{VVh} = (\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta) \, g_{VVh}^{SM}
\end{equation}
while the ones to fermions can be written as
\begin{equation}
g_{\bar{u} u h} = \frac{\cos \theta}{\cos \Omega} \frac{\cos \alpha}{\sin \beta} \, g_{ffh}^{SM}, \qquad g_{\bar{d} d h} = - \frac{\cos \theta}{\cos \Omega} \frac{\sin \alpha}{\cos \beta} \, g_{ffh}^{SM}, \qquad g_{\bar{l} l h} = \frac{\sin \theta}{\sin \Omega} \, g_{ffh}^{SM}.
\label{eq:3HDMf}
\end{equation}
Again, one can add more singlets and doublets to the democratic 3HDM but this will just increase our freedom to have a light Higgs boson. We refer the reader to~\cite{Barger:2009me} for a discussion on extensions of the 3HDM.
In tab.~\ref{tab:models} we present for each model the cross sections for the processes $e^+ e^- \to Z h$ and $pp \to gh$ relative to the respective SM cross sections. In the last column, the BR$(h \to \tau^+ \tau^-)$ relative to the SM one is shown as a function of the SM BRs to up-type quarks, down-type quarks and charged leptons. It should be noted that, while the expressions for the cross sections are exact, the ones for the BRs are only truly accurate when the Higgs decay to gluons is negligible or when it proceeds mainly through a top loop (like in the SM), in which case it can easily be included in the expression using the respective $g_u$ coupling.
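As an example of how the entries in the last column of tab.~\ref{tab:models} are obtained, rescaling each partial width by the square of the corresponding coupling (the $g_i$ being normalised to the SM Yukawa couplings) and using $BR_u + BR_d + BR_l \simeq 1$ for a light Higgs, one finds for the 3HDM (D)
\begin{equation}
\overline{BR} (h \to \tau^+ \tau^-) = \frac{g_l^2}{g_u^2 \, BR_u + g_d^2 \, BR_d + g_l^2 \, (1 - BR_u - BR_d)} = \left[ 1 + \left( \frac{g_u^2}{g_l^2} - 1 \right) BR_u + \left( \frac{g_d^2}{g_l^2} - 1 \right) BR_d \right]^{-1} \, ,
\end{equation}
where the $BR_i$ are the SM values; the 2HDM entries follow in the appropriate limits (e.g., $g_d = g_l$ in type II).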
\begin{table}[ht]
\begin{center}
\begin{tabular}{c c c c c c c c c c c c c c c c} \hline \hline
Model &&& $\bar{\sigma} (e^+ e^- \to Z h)$ &&&& $\bar{\sigma} (gg \to gh)$ &&&& $\overline{BR} (h \to \tau^+ \tau^-)$ \\ \hline
2HDMI &&& $\sin^2 (\beta - \alpha) $ &&&& $\frac{\cos^2 \alpha}{\sin^2 \beta}$ &&&& $\approx$ 1 \\ \hline
2HDMI+nD &&& $\omega^2 \sin^2 (\beta - \alpha) $ &&&& $\frac{1}{\omega^2} \frac{\cos^2 \alpha}{\sin^2 \beta}$ &&&& $\approx$ 1 \\ \hline
2HDMII &&& $\sin^2 (\beta - \alpha) $ &&&& $F_{loop}$ &&&& $[1 + (\frac{g_u^2}{g_l^2}-1)BR_u ]^{-1}$ \\ \hline
2HDMII+nD &&& $(\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta)^2$ &&&& $ \frac{\cos^2 \theta}{\cos^2 \Omega} \, F_{loop}$ &&&& $[1 + (\frac{g_u^2}{g_l^2}-1)BR_u ]^{-1}$ \\ \hline
2HDMIII &&& $\sin^2 (\beta - \alpha) $ &&&& $F_{loop}$ &&&& $[1 + (\frac{g_d^2}{g_l^2}-1)BR_d ]^{-1}$ \\ \hline
2HDMIII+nD &&& $(\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta)^2$ &&&& $\frac{\cos^2 \theta}{\cos^2 \Omega} \, F_{loop}$ &&&& $[1 +(\frac{g_d^2}{g_l^2}-1)BR_d ]^{-1}$ \\ \hline
2HDMIV &&& $\sin^2 (\beta - \alpha) $ &&&& $\frac{\cos^2 \alpha}{\sin^2 \beta}$ &&&& $[1 + (\frac{g_u^2}{g_l^2}-1) \, (1 - BR_{\tau}) ]^{-1}$ \\ \hline
2HDMIV+nD &&& $(\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta)^2$ &&&& $\frac{\cos^2 \theta}{\cos^2 \Omega} \frac{\cos^2 \alpha}{\sin^2 \beta}$ &&&& $[1 + (\frac{g_u^2}{g_l^2}-1) \, (1 - BR_{\tau}) ]^{-1}$ \\ \hline
3HDM (D) &&& $(\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta)^2$ &&&& $\frac{\cos^2 \theta}{\cos^2 \Omega} \, F_{loop}$ &&&& $[1 + (\frac{g_u^2}{g_l^2}-1)BR_u + (\frac{g_d^2}{g_l^2}-1)BR_d ]^{-1}$ \\ \hline \hline
\end{tabular}
\caption{Cross sections for $e^+ e^- \to Z h$ and $gg \to gh$ relative to the respective SM cross sections for the models discussed in the text. In the last column, the BR$(h \to \tau^+ \tau^-)$ relative to the SM one is shown; in each row, $g_i$ refers to the model on that particular row. For the 2HDMs the couplings are presented in the appendix; moreover, adding doublets does not alter the $g_i$ ratios, so the expressions for 2HDM and 2HDM+nD are the same. In the case of the 3HDM, the Yukawa couplings are shown in Eq.~(\ref{eq:3HDMf}).}
\end{center}
\label{tab:models}
\end{table}
The function $F_{loop}$ represents the loop contribution, which cannot be written as a function of the SM cross section. In the SM, the top loop contribution dominates over the bottom loop one by a factor $(m_t/m_b)^2$. This is also true in type I and IV models and their extensions. In all other models the factor that multiplies the top loop is different from the one that multiplies the bottom loop. We write this function symbolically as
\begin{equation}
F_{loop}=\left|\frac{\cos \alpha}{\sin \beta} \, t_{loop}^{SM}-\frac{\sin \alpha}{\cos \beta} \, b_{loop}^{SM}\right|^2 \, .
\end{equation}
In this case we cannot use the SM results and the process has to be recalculated.
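As a check, in the SM-like limit $\sin (\beta - \alpha) \to 1$, i.e., $\alpha \to \beta - \pi/2$, one has $\cos \alpha / \sin \beta \to 1$ and $- \sin \alpha / \cos \beta \to 1$, so that
\begin{equation}
F_{loop} \to \left| t_{loop}^{SM} + b_{loop}^{SM} \right|^2 \, ,
\end{equation}
and the SM gluon fusion amplitude, dominated by the top loop, is recovered.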
\section{Experimental and theoretical bounds}
\label{sec:bounds}
In this section we present an overview of the bounds on the extensions of the Higgs sector discussed in the previous section. We start by noting that we would like to keep this study as general as possible. Hence, we will disregard bounds that require knowledge of the specific Higgs potential being used. This means that only the bounds stemming from couplings to gauge bosons and to fermions will be used. We start with the lightest neutral Higgs state. As discussed in the introduction, a SM Higgs boson with a mass below 114 $GeV$ was excluded by the LEP experiments. What we are interested in, in this work, is to know under what circumstances a light CP-even Higgs boson from a more general model could have escaped detection at LEP. All topological searches based on the processes $e^+ e^- \rightarrow H_1 Z$ and $e^+ e^- \rightarrow H_1 H_2$, where $H_1$ can be any CP-even Higgs boson and $H_2$ can be either a CP-even or a CP-odd Higgs boson, were presented in~\cite{Schael:2006cr}. Considering a scenario where all other neutral bosons are heavy enough to have eluded searches based on the various $H_1H_2$ combinations in the generic process $e^+ e^- \rightarrow H_1 H_2$, we just need to be concerned with the bounds from $e^+ e^- \rightarrow H_1 Z$. Therefore, in this study, the masses of the remaining Higgs bosons have no bearing on our analysis. For the pure 2HDMs, the smaller $\sin (\beta-\alpha)$ is, the more suppressed the $ZZh$ coupling becomes. Therefore, a very light Higgs could only have escaped detection in a parameter region where $\sin (\beta-\alpha) \to 0$. Depending on the Yukawa model chosen, there are two bounds we have to take into account from direct searches: one that comes from
\begin{equation}
\frac{\sigma(e^+e^-\to {H_1^{\rm 2HDM} Z)} \, {\rm BR} (H_1^{\rm 2HDM} \rightarrow b \bar{b})}{\sigma(e^+e^-\to
{H_1^{\rm SM} Z}) \, {\rm BR} (H_1^{\rm SM} \rightarrow b \bar{b})} \, \, = \, \sin^2 (\alpha - \beta) \, \frac{{\rm BR} (H_1^{\rm
2HDM} \rightarrow b \bar{b})}{{\rm BR} (H_1^{\rm SM} \rightarrow b \bar{b})} \, \, ,
\end{equation}
and the other one originates from the process
\begin{equation}
\frac{\sigma(e^+e^-\to {H_1^{\rm 2HDM} Z)} \, {\rm BR} (H_1^{\rm
2HDM} \rightarrow \tau^+ \tau^-)}{\sigma(e^+e^-\to
{H_1^{\rm SM} Z}) \, {\rm BR} (H_1^{\rm
SM} \rightarrow \tau^+ \tau^-)} \, = \, \sin^2 (\alpha - \beta) \, \frac{{\rm BR} (H_1^{\rm
2HDM} \rightarrow \tau^+ \tau^-)}{{\rm BR} (H_1^{\rm
SM} \rightarrow \tau^+ \tau^-)} \, \, .
\end{equation}
The reason is that, whatever the values of the 2HDM parameters chosen are, the main decays for a Higgs in the mass region below $\sim$ 100 $GeV$ are to $b \bar{b}$ and $\tau^+ \tau^-$, if decays to other Higgs bosons are not kinematically allowed. The BRs involved depend mainly on the values of $\alpha$, $\tan \beta$ and $m_h$, that are constrained to obey
\begin{equation}
\sin^2 (\beta - \alpha) \, \frac{{\rm BR}_{th}^j (h \to b \bar{b} \, (\tau^+ \tau^-))}{{\rm BR}_{th}^{SM} (h \to b \bar{b} \, (\tau^+ \tau^-))} < \left( \frac{\sigma^{\rm 2HDM}}{\sigma^{SM}} (h \to b \bar{b}\, (\tau^+ \tau^-))\right)_{exp}
\end{equation}
where $j=$ I, II, III, IV, BR$_{th}$ is the theoretical 2HDM BR and the subscript $exp$ stands for the experimentally measured value. Note that the limit cannot be shown in the ($(\beta - \alpha)$, $m_h$) plane because of the angle dependence of the Higgs BRs to $b \bar{b}$ and to $\tau^+ \tau^-$. It is straightforward to check that when $\sin (\beta - \alpha) \approx 0.1$ there is essentially no bound on the lightest Higgs boson mass. To be more precise, in the above mass range, a $100\, \%$ BR to $b \bar{b}$ forces $\sin (\beta - \alpha) \gtrsim 0.13$ while a $100\, \%$ BR to $\tau^+ \tau^-$ implies $\sin (\beta - \alpha) \gtrsim 0.16$, regardless of the Higgs mass. Above these values the limit on $\sin (\beta - \alpha)$ strongly depends on the value of the Higgs mass. Taking as an example Model II, for $\sin (\beta - \alpha) \approx 0.2$ the limit immediately jumps to $m_h > 75.6$ $GeV$. We will therefore use $\sin (\beta - \alpha) = 0.1$ as a benchmark value. Note that in the limit $\sin (\beta - \alpha) = 0$ we would have a light gaugephobic~\cite{Pois:1993ay} Higgs boson, which is again not experimentally excluded. Because $\tan \beta$ is an important parameter in our analysis, we note that for a Higgs mass of 40 $GeV$ and $\tan \beta = 1$ we have $0.57 \leq \sin \alpha \leq 0.82$, while if $\tan \beta = 30$ we have $0.98 \leq \sin \alpha \leq 1$. For larger masses the bounds are obviously relaxed, but the most important conclusion is that the allowed values of $\sin \alpha$ are always positive and for large $\tan \beta$ we have $\sin \alpha \approx 1$. It is important to note that these bounds do not depend on the total number of (pseudo)scalars in the model under consideration. In the remainder of this section we will discuss the limits that do depend on the number of Higgs states.
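To put the benchmark in perspective, for $\sin (\beta - \alpha) = 0.1$ the Higgs-strahlung rate is suppressed by
\begin{equation}
\frac{\sigma (e^+ e^- \to Z h)}{\sigma_{SM}} = \sin^2 (\beta - \alpha) = 0.01 \, ,
\end{equation}
a factor of 100 below the SM rate, which, as discussed above, lies below the LEP sensitivity for any value of $m_h$, even for a 100 \% BR into $b \bar{b}$ or $\tau^+ \tau^-$.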
The most restrictive bound for a light scalar is the one coming from the muon anomalous magnetic moment $(g-2)_\mu$~\cite{g-2}. A very detailed account of the subject, including the present status of experiment and theory, can be found in~\cite{Jegerlehner:2009ry}. A detailed study of the new physics contributions would force us to redo the calculations and subtract the diagrams where a SM Higgs takes part. In what follows we will restrict our discussion to the pure 2HDM case - all further extensions of the scalar sector will add more freedom to the model and are hence even less constrained.
\begin{widetext}
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.61cm]{gm2v2.eps}
\end{center}
\caption{$\Delta a_{\mu}= (a^{exp}_{\mu}-a^{th}_{\mu}) + a^{th-2HDM}_{\mu}$ as a function of $\tan \beta$ for $m_h=$ 40 $GeV$ and $m_A=200$ $GeV$.}
\label{fig:gm2}
\end{figure}
\end{widetext}
There are two important contributions from extended models to the muon anomalous magnetic moment: a one loop contribution, first calculated for the 2HDM in~\cite{Haber:1978jt}, and the two loop Barr-Zee contribution~\cite{Barr:1990vd}. The one loop diagram for the light Higgs is proportional to $g^2_{h \mu^+ \mu^-}$ and will therefore have the SM sign, which gives a positive contribution to $(g-2)_\mu$. This could help cure the present $3 \sigma$ deviation relative to the SM. However, the two loop contribution is proportional to $g_{h \mu^+ \mu^-} \, g_{h \bar{b} b}$ and $g_{h \mu^+ \mu^-} \, g_{h \bar{t} t}$ and for SM-like couplings this amounts to a negative addition that, if large, will increase the difference between theory and experiment. In fig.~\ref{fig:gm2} we plot $\Delta a_{\mu}= (a^{exp}_{\mu}-a^{th}_{\mu}) + a^{{\rm th-2HDM}}_{\mu}$ as a function of $\tan \beta$ for a light Higgs mass of 40 $GeV$ and $m_A=200$ $GeV$. We have included both the one loop and the two loop contributions and the calculation is presented in the limit $\alpha \approx \beta$ to comply with the LEP bound on the Higgs mass. We have checked that our results for model II agree with the ones presented in~\cite{Jegerlehner:2009ry} for the same limit. For small values of $\tan \beta$ the 2HDM contributions can all be safely neglected. As $\tan \beta$ grows, model II makes the discrepancy between theory and experiment grow. The most interesting scenario is the one in model IV - the leptonic 2HDM. In this model, the new contribution moves the theoretical calculation closer to the experimental result. This is a very interesting fact: of all 2HDMs, this is the only one that actually helps to cure the problem. Finally, the new contributions in models I and III do not vary with $\tan \beta$.
There are other bounds that constrain the pure 2HDMs and deserve a brief comment. Values of $\tan \beta$ smaller than $\approx 1$ are disallowed both by the constraints coming from $R_b$ (the $b$-jet fraction in $e^+e^-\to Z\to$ jets) \cite{LEPEWWG,SLD} and from $B_q \bar{B_q}$ mixing~\cite{Oslandk}. Limits from $B_{s} \to \mu^+ \mu^-$ are not likely to affect the models discussed here - it is sufficient to take a large charged Higgs boson mass to avoid these bounds in pure 2HDM models~\cite{Bll1}. New contributions to the $\rho$ parameter stemming from Higgs states \cite{Rhoparam} have to comply with the current limits from precision measurements \cite{pdg4}: $|\delta\rho| \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 10^{-3}$. Again for the pure 2HDMs, there are limiting cases though, related to an underlying custodial symmetry, where the extra contributions to $\delta\rho$ vanish. Since we are in the limit of very small $\sin (\beta - \alpha)$ there are two limits to consider: one is $m_h \approx m_{H^\pm}$ and $\sin (\beta - \alpha) \approx 0$ while the other is $m_{H^\pm} = m_A^{}$. As we want a light CP-even Higgs we choose $m_{H^\pm} \approx m_A$. Note however that our results only depend on the mass of the light Higgs boson - any change in one of the other Higgs boson masses does not affect our results.
As mentioned in the introduction, there were also searches at LEP to test the Yukawa couplings~\cite{Abbiendi:2001kp,Abdallah:2004wy} based on the channels $b \bar{b} b \bar{b}$, $b \bar{b} \tau^+ \tau^-$ and $ \tau^+ \tau^- \tau^+ \tau^-$ for Higgs masses up to 50 GeV. In a pure 2HDM we are forced to be in a region where $\alpha \approx \beta$. Hence, the light Higgs coupling to fermions is either proportional to $\tan \beta$ or to $1/\tan \beta$. The limits obtained~\cite{Abbiendi:2001kp,Abdallah:2004wy} are for
\begin{equation}
\frac{g_{h f \bar{f}}^{\rm 2HDM}}{g_{h f \bar{f}}^{\rm SM}} \sqrt{BR^{\rm 2HDM} (h \to f \bar{f})} \, ,
\end{equation}
which in the most interesting scenarios reduces to $\tan \beta \, \sqrt{BR^{\rm 2HDM} (h \to f \bar{f})}$ - otherwise the data give no useful bounds on the parameters of the 2HDM. The results can be easily applied to model III and model IV because for large $\tan \beta$, $BR(h \to b \bar{b}) \approx 100 \%$ in model III and $BR(h \to \tau^+ \tau^-) \approx 100 \%$ in model IV. In any case, even for a Higgs as light as 20 GeV, the bounds obtained are $\tan \beta \lesssim 21$ for model III and $\tan \beta \lesssim 62$ for model IV.
The theoretical bounds related to tree level unitarity~\cite{unit1} and vacuum stability~\cite{vac1} (boundedness from below) will not influence our results either.
Finally we note that although the pure 2HDMs play a special role here because they are protected against charge and CP breaking~\cite{Ferreira:2004yd}, the same is not true for 3HDMs or for Higgs models with even more doublets~\cite{Barroso:2006pa}.
\section{Results and discussion}
\subsection{2HDM I to IV}
\begin{widetext}
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.4cm]{BRtanm4080v2.eps}
\end{center}
\caption{Higgs BRs into $\tau^+ \tau^-$ relative to the SM one as a function of $\tan \beta$ in 2HDM Models I to IV with $\sin (\beta - \alpha)=0.1$ and $m_h=40$ $GeV$ (and also $m_h=80$ $GeV$ for model IV).}
\label{fig:BR}
\end{figure}
\end{widetext}
With the LEP bounds forcing $\sin (\beta - \alpha)$ to be small for a very light Higgs to still be allowed, the BRs and cross sections for pure 2HDMs depend almost exclusively on two parameters which we choose to be $m_h$ and $\tan \beta$. The BR$(h \to \tau^+ \tau^-)$ has a Higgs mass dependence very similar to the SM one. Therefore, large differences can only arise in the $\tan \beta$ dependence. In fig.~\ref{fig:BR} we present the Higgs BRs into $\tau^+ \tau^-$ relative to the SM one as a function of $\tan \beta$ in 2HDM Models I to IV with $\sin (\beta - \alpha)=0.1$. Model I has by construction the SM BRs except for exceptional singular points where a given parameter is null\footnote{The 2HDM fermiophobic Higgs~\cite{fermio} originates from model I by setting $\alpha= \pi/2$.}. In model II, the down quarks and charged leptons couple to the same doublet and since these are the main Higgs decays in the mass region under consideration, the behaviour is again very similar to the SM one. For the remaining two models we should distinguish the region of low $\tan \beta$ and the region of high $\tan \beta$. For $\tan \beta \approx 1$ and for model III, there will be an enhancement in BR$(h \to \tau^+ \tau^-)$ but not in the production cross section. When $\tan \beta \gg 1$ it is now model IV that sees its BR enhanced, and this growth depends on the Higgs mass. Therefore the global trend is as follows: in models I and II BR$(h \to \tau^+ \tau^-)$ is SM-like in all the parameter space discussed, in model III it is enhanced for small $\tan \beta$ and in model IV it grows with $\tan \beta$ when compared with the SM value (for other studies on the different Yukawa versions of the 2HDM see~\cite{oldnew}).
The process $pp \to gg (qg) \to h j$ proceeds via a quark loop. As the Yukawa couplings are proportional to the quark masses, only top and bottom loops give non-negligible contributions to the cross section. In the SM, the top loop is always the dominant one. In our study there are two cases to consider. In type I and IV models the light Higgs couples to up and down quarks with the same strength. In this case we can write
\begin{equation}
\sigma^{\rm 2HDM} (pp \to gg \to h) = \frac{\cos^2 \alpha}{\sin^2
\beta} \quad \sigma^{SM} (pp \to gg \to h)
\end{equation}
which in turn means that, in the limit $\sin (\beta - \alpha) \to 0$, the relation is approximately
\begin{equation}
\sigma^{\rm 2HDM} (pp \to gg \to h) = \frac{1}{\tan^2 \beta}
\quad \sigma^{SM} (pp \to gg \to h).
\end{equation}
As we saw earlier, the available constraints force $\tan \beta$ to be above 1. Therefore, in models I and IV, the 2HDM cross section equals the SM cross section for $\tan \beta = 1$ and then drops like $1/\tan^2 \beta$ with growing $\tan \beta$. This means that for $\tan \beta =10$ the cross section is 100 times smaller and, even if the decay to $\tau^+ \tau^-$ reaches 100 $\%$, the signal rate will still be 10 times smaller than the corresponding SM one. In type II and III models the light Higgs couples to the up quarks as $\cos \alpha /\sin \beta$ and to the down quarks as $- \sin \alpha /\cos \beta$. Hence the contribution from each loop depends heavily on the value of $\tan \beta$. The larger $\tan \beta$ is, the more negligible the decay to $\tau^+ \tau^-$ becomes in model III. In model II the width $\Gamma (h \to \tau^+ \tau^-)$ preserves the SM proportionality to $\Gamma (h \to b \bar{b})$ and, as the cross section grows for large $\tan \beta$, the ratio of $\sigma({pp \to hg}) \, {\rm{BR}}({h \to \tau^+ \tau^-})$ to the SM $\sigma({pp \to h_{SM} g}) \, {\rm{BR}}({h_{SM} \to \tau^+ \tau^-})$ will also increase. In fig.~\ref{fig:2HDM} we present this ratio for model II with $\tan \beta=1,\, 30$ (left) and for model IV with $\tan \beta =2, \, 3$ (right), as a function of the Higgs mass and with $\sin (\beta - \alpha)= 0.1$. All cross sections in this section were calculated at leading order. As stated before, we have chosen the benchmark $\sin (\beta - \alpha)=0.1$, which is representative of a Higgs whose mass is not bounded by the available experimental data. We present luminosity lines of $1 \, fb^{-1}$ and $100 \, pb^{-1}$ for Model II and $1 \, fb^{-1}$ and $500 \, pb^{-1}$ for Model IV. These lines represent the integrated luminosity needed to exclude the model at 95 \% CL. Obviously, as the Higgs mass approaches the $Z$ boson mass, the required luminosity grows.
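As a rough numerical check of the suppression quoted above (taking, for illustration only, a SM branching ratio ${\rm BR}^{SM}(h \to \tau^+ \tau^-)$ of order $10\, \%$ in this mass range - an assumption of ours, not a number taken from the analysis),
\begin{equation}
\frac{\sigma^{\rm 2HDM}\, {\rm BR}^{\rm 2HDM}}{\sigma^{SM}\, {\rm BR}^{SM}} = \frac{1}{\tan^2 \beta}\, \frac{{\rm BR}^{\rm 2HDM}}{{\rm BR}^{SM}} \approx \frac{1}{100} \times \frac{1}{0.1} = \frac{1}{10} \, ,
\end{equation}
for $\tan \beta = 10$ and ${\rm BR}^{\rm 2HDM} = 100\, \%$, which reproduces the factor of 10 quoted above.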
However, there are regions of the parameter space that can be probed with less than $100 \, pb^{-1}$ of integrated luminosity with the LHC working at an energy of 14 TeV. The regions easily probed always correspond to the low mass region (below 60 GeV), with large $\tan \beta$ values in Model II and small to moderate values of $\tan \beta$ in Model IV.
\begin{widetext}
\begin{figure}
\begin{center}
\includegraphics[width=7.55cm]{lightFmassIIv3.eps}
\includegraphics[width=7.65cm]{lightFmassIVv3.eps}
\end{center}
\caption{In the left panel we present the ratio between $\sigma({pp \to hg}) \, {\rm{BR}}({h \to \tau^+ \tau^-})$ in model II and the SM $\sigma({pp \to h_{SM} g}) \, {\rm{BR}}({h_{SM} \to \tau^+ \tau^-})$ as a function of $m_h$ for $\tan \beta=$ 1 and 30. In the right panel we show the corresponding ratio for model IV, now for $\tan \beta=$ 2 and 3. In both cases we take $\sin (\beta - \alpha)= 0.1$. We also show lines of total integrated luminosity $100 \, pb^{-1}$ and $1 \, fb^{-1}$ for model II and $500 \, pb^{-1}$ and $1 \, fb^{-1}$ for model IV.}
\label{fig:2HDM}
\end{figure}
\end{widetext}
\vspace*{0.25cm}
\subsection{Beyond 2HDMs}
\begin{widetext}
\begin{figure}
\begin{center}
\includegraphics[width=7.7cm]{tsinal1v3.eps}
\includegraphics[width=7.7cm]{tsinal30v3.eps}
\end{center}
\caption{In both plots the ratio between $\sigma({pp \to hg}) \, {\rm{BR}}({h \to \tau^+ \tau^-})$ and the SM $\sigma({pp \to h_{SM} g}) \, {\rm{BR}}({h_{SM} \to \tau^+ \tau^-})$ is shown, multiplied by the factor $\cos^2 \Omega$, for $m_h = 40 \, GeV$ and two values of $\tan \beta$, $1$ (left) and $30$ (right), for all extensions of the 2HDM in the limit described in the text. We also present the total integrated luminosities $100 \, pb^{-1}$ and $2 \, fb^{-1}$ (left) and $100 \, pb^{-1}$ and $1 \, fb^{-1}$ (right).}
\label{fig:2HDMext}
\end{figure}
\end{widetext}
When we proceed to more general models, the first step is again to make sure the LEP bounds are respected. In tab.~\ref{tab:models} we present the Higgs couplings to gauge bosons for models with more than two doublets, which can be written as
\begin{equation}
(\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta)^2
\end{equation}
where there are two new parameters to consider: $\sin \Omega$ which is a measure of the contribution of the new VEV(s) (it can come from the third doublet or from a combination of more than one) and $\sin \theta$ which determines the amount of mixing of the new CP-even field(s) to the lightest CP-even state from the 2HDM. Note that, except for the 3HDM case, just the first two doublets give mass to the fermions. There are several ways to make this quantity small enough to avoid the LEP bound. We start by considering the scenario where almost no mixing occurs between the new doublet and the remaining ones and the new VEV is maximal, which can be translated into $\cos \Omega \ll 1$ and $\sin \theta \ll 1$. In this scenario, the fermion Yukawa couplings have to be increased to give the fermions the required masses. All cross sections, independently of the Yukawa model under study, are now rescaled according to
\begin{equation}
\sigma^{\rm 2HDM} \to \frac{1}{\cos^2\Omega } \, \sigma^{\rm 2HDM}
\end{equation}
and this means that there is an enhancement already at the level of the cross section. Both $\alpha$ and $\tan \beta$ are now free to vary in all the allowed range, provided theoretical and experimental constraints are fulfilled. In fig.~\ref{fig:2HDMext} we present the ratio between $\sigma({pp \to hg}) \, {\rm{BR}}({h \to \tau^+ \tau^-})$ and the SM $\sigma({pp \to h_{SM} g}) \, {\rm{BR}}({h_{SM} \to \tau^+ \tau^-})$, multiplied by the factor $\cos^2 \Omega$, for $m_h = 40 \, GeV$ and two values of $\tan \beta$, $1$ (left) and $30$ (right), for all Yukawa extensions of the 2HDM+nD. Note that to get the actual value of the cross section one needs to multiply it by $1/\cos^2 \Omega$ and therefore the numbers will always be larger than the ones shown in the figures. We also present several values of the total integrated luminosity needed to probe the models at 95 \% CL. In the left panel we can see that $100 \, pb^{-1}$ is enough to constrain a large portion of model III+nD, while with $2 \, fb^{-1}$ just marginal regions of the models are left untested. Between the two vertical lines, the allowed region of the parameter space for the pure 2HDM is shown - there $\alpha$ is constrained due to our choice of $\sin (\beta - \alpha) = 0.1$ and $\tan \beta = 1$. In the right panel, for $\tan \beta = 30$, it is now models II and IV that have the largest cross sections. We show the 95 \% CL lines for $100 \, pb^{-1}$ and $1 \, fb^{-1}$ of total integrated luminosity. For large $\tan \beta$ only model III+nD and regions close to $\sin \alpha = 0$ in the other models will not be excluded with a few $fb^{-1}$ of integrated luminosity. The decay to tau pairs is negligible in this limit for the 3HDM(D).
We now consider the scenario $\sin \Omega \ll 1$ and $\alpha \approx \beta$ and we will explore, as an example, model IV+nD and the democratic 3HDM. Production cross sections are now rescaled as
\begin{equation}
\sigma^{\rm 2HDM} \to \cos^2 \theta \, \sigma^{\rm 2HDM}
\end{equation}
which means that they will now be smaller than the corresponding 2HDM cross sections. In fig.~\ref{fig:3HDM} we show the ratio between $\sigma({pp \to hg}) \, {\rm{BR}}({h \to \tau^+ \tau^-})$ and the SM $\sigma({pp \to h_{SM} g}) \, {\rm{BR}}({h_{SM} \to \tau^+ \tau^-})$ as a function of $\sin \theta$ for $\tan \beta= 3$ and $m_h = 40 \, GeV$. For definiteness we take $\sin \Omega = 0.1$ and $\alpha = \beta$. The prospect of excluding a light Higgs in model IV+nD is good, but even with $5 \, fb^{-1}$ of integrated luminosity only a small portion of the 3HDM(D) will be probed at 95 \% CL. Hence, only after several years of accumulating integrated luminosity will it be possible to exclude a light Higgs from a 3HDM(D) in this limit.
\begin{widetext}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{3HDMv3.eps}
\end{center}
\caption{Ratio between $\sigma({pp \to hg}) \, {\rm{BR}}({h \to \tau^+ \tau^-})$ and the SM $\sigma({pp \to h_{SM} g}) \, {\rm{BR}}({h_{SM} \to \tau^+ \tau^-})$ as a function of $\sin \theta$ for $\tan \beta= 3$ and $m_h = 40 \, GeV$. Models IV+nD and 3HDM(D) are compared.}
\label{fig:3HDM}
\end{figure}
\end{widetext}
There are other scenarios where the LEP bound could be avoided. One is $\sin \theta \ll 1$ and $\alpha \approx \beta$. The first relation eliminates the 3HDM(D) light Higgs decay to leptons. For all 2HDM+nD the cross sections are again rescaled via the factor $1/\cos^2 \Omega$ as compared to the pure 2HDM. Therefore, the results presented in fig.~\ref{fig:2HDM} will be rescaled with $1/\cos^2 \Omega$ to apply to the corresponding 2HDM+nD models. There is also the possibility of having $\sin \Omega \ll 1$ and $\cos \theta \ll 1$. Due to the second relation all production cross sections will be severely reduced. The only model that has a small chance of being probed in such a limit is the 3HDM(D) because the BR to leptons can be of the order of 100 \%.
Finally, there is a completely different class of options where none of the previous limits is required. By taking
\begin{equation}
g_{VVh} = (\cos \Omega \cos \theta \sin (\beta - \alpha) + \sin \Omega \sin \theta) \, g_{VVh}^{SM} = 0
\end{equation}
and therefore
\begin{equation}
\sin (\beta - \alpha) = - \tan \Omega \tan \theta \, ,
\end{equation}
we do not need to require any special limit to avoid the LEP bound. As an example, if $\sin (\beta - \alpha) = - \tan \Omega = - \tan \theta = - 1$ the LEP bound is avoided and the lightest Higgs from the 3HDM(D) will have SM-like couplings to fermions. Therefore, the SM results shown in table II can be applied to the 3HDM(D). Note that there is no such limit in the pure 2HDM case, where a small $\sin (\beta - \alpha)$ is required.
\section{Conclusions}
\label{sec:conclusions}
We have discussed the possibility of finding a very light Higgs boson at the LHC. The existence of such a very light CP-even scalar is severely constrained by the LEP bounds, which assume the SM coupling for the vertex $ZZh$. If however this coupling is smaller than the SM one, the bound is relaxed and in some scenarios the bound on the Higgs mass even ceases to exist. This can be accomplished by the introduction of extra Higgs fields. We have seen that the introduction of a neutral singlet is enough to avoid the bounds, but for this particular process the production cross section becomes too small. Next, we have introduced an arbitrary number of doublets and singlets with the following restrictions: no FCNC are generated at tree level and CP is conserved. We have shown that in pure 2HDMs the limit of very small $\sin (\beta - \alpha)$ is the only way to avoid the LEP bounds - if one further extends the scalar sector, several combinations of different limits in the parameter space lead to the same result. We have also shown that, whatever the model is, a few $fb^{-1}$ of integrated luminosity are enough to probe large portions of the associated parameter space.
With the LHC running and with the search for the Higgs boson under way, we should ask ourselves what to do if we do not find a SM Higgs boson. It seems clear that we should turn our attention to more general potentials and in particular to the ones where a light Higgs boson is allowed. However, even if a Higgs boson is found, and even if it looks very much like the SM Higgs boson, we should make sure that we did not miss any other (pseudo)scalar particle potentially present in the data. We believe this work is a very important contribution towards achieving such a goal.
\section{Introduction}
\label{submission}
Computer aided medical image analysis has become a critical component in the diagnosis and treatment of various diseases \cite{chakraborty2023overview,duncan2000medical}. Deep learning models, such as convolutional neural networks (CNNs), have shown exceptional performance in analyzing medical images, including magnetic resonance imaging (MRI), computed tomography (CT), and histology images \cite{chan2020deep}. However, the performance of these models can be affected by several factors, including variations in image quality, lighting conditions, and magnification scales. In particular, changes in magnification scales between training and testing datasets can significantly impact the accuracy and robustness of deep learning models in medical image analysis \cite{gupta2017breast}.
CNNs are cited to be the most commonly used deep learning architecture for medical image analysis \cite{li2014medical}. However, CNNs can struggle when it comes to handling medical images with anatomical features at varying magnification scales. In general, training a CNN on images at a specific magnification scale may result in good performance on that scale, but this performance may not generalize well to other magnification scales \cite{alkassar2021going}. This is a significant limitation when analyzing medical imaging modalities like histology images, where slight to moderate magnification variability is common. The inability of CNNs to generalize across magnification scales leads to sub-optimal inference performance on external datasets \cite{gupta2017breast}. Though augmenting input images with scale perturbations can slightly improve the performance of CNNs, it is also important to explore or develop more robust deep learning architectures that generate features that are inherently invariant to changes in the scale of input images. Such architectures should be designed to capture the important features in the images, regardless of the shift in magnification scale, in order to provide robust performance for medical image analysis in clinical settings.
In this study, we evaluate the robustness of multiple popular deep learning architectures, including CNN-based architectures such as ResNet \cite{he2016deep} and MobileNet \cite{Howard2017-rm}, self-attention based architectures such as Vision Transformers (ViT) \cite{dosovitskiy2021an} and Swin Transformers \cite{liu2021swin}, and token-mixing models such as Fourier-Net (FNet) \cite{lee2021fnet}, ConvMixer \cite{trockman2022patches}, Multi-Layer Perceptron-Mixer (MLP-Mixer) \cite{tolstikhin2021mlp}, and WaveMix \cite{https://doi.org/10.48550/arxiv.2205.14375}. Our aim is to compare the performance of these deep learning models when the magnification of the test data differs from that of the training data. The BreakHis \cite{spanhol2015dataset} dataset, which includes breast cancer histopathological images at varying magnification levels, is utilized for our experiments. The empirical performance differences between the deep learning models will be used to determine the most robust architecture for histopathological image analysis.
\section{Experiments}
\subsection{Dataset}
We utilize the BreakHis \cite{spanhol2015dataset} dataset, which is a well-known public dataset in the field of digital breast histopathology, for our experiments. It has been widely used in the development and evaluation of computer-aided diagnosis (CAD) systems for breast cancer diagnosis. It provides a challenging benchmark for the development of CAD systems due to the inherently large variations in tissue appearance.
The dataset consists of 7,909 microscopy images of breast tissue biopsy specimens from 82 patients diagnosed with either benign or malignant breast tumors. The images are collected from four different institutions and are of four different magnification scales: 40X, 100X, 200X and 400X.
In addition to the malignancy information of each image, the dataset is further annotated with information such as the patient's age, the sub-type of malignancy and the type of biopsy. The dataset is slightly imbalanced in terms of the distribution of benign and malignant cases and the distribution of different magnifications: there are 5,429 malignant cases but only about 2,480 benign cases.
As the BreakHis \cite{spanhol2015dataset} dataset contains multiple images at different magnification levels, it serves as a challenging and representative testbed for evaluating the robustness of deep learning architectures across different magnification levels or scales. These evaluations are carried out by training each of the recently reported deep learning architectures on one magnification level of the BreakHis \cite{spanhol2015dataset} dataset and testing the trained models across multiple held-out magnification levels. Observing the average test accuracy on the different magnification levels can hence reveal the robustness of deep learning architectures to varying image magnification at inference.
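The evaluation protocol just described can be sketched as follows (a minimal illustration of ours; the helper names `train_fn` and `eval_fn` stand in for the actual training and testing routines and are not taken from our codebase):

```python
# Cross-magnification evaluation: train on one magnification level,
# test on every held-out level, and average the held-out accuracies.

MAGNIFICATIONS = ["40X", "100X", "200X", "400X"]

def cross_magnification_scores(train_fn, eval_fn):
    """train_fn(mag) -> model; eval_fn(model, mag) -> test accuracy."""
    scores = {}
    for train_mag in MAGNIFICATIONS:
        model = train_fn(train_mag)
        held_out = [m for m in MAGNIFICATIONS if m != train_mag]
        accs = [eval_fn(model, m) for m in held_out]
        scores[train_mag] = sum(accs) / len(accs)
    return scores
```

A smaller spread of these per-magnification averages indicates an architecture that is more robust to magnification shift.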
\subsection{Models}
\begin{figure*}[ht]
\vskip 0.2in
\begin{center}
\includegraphics[scale=0.75]{tokenmixer.drawio.pdf}
\caption{Architectures of various token-mixers along with the general MetaFormer block. The token-mixing operation in different models is performed by different operations, such as spatial MLP, depth-wise convolution, self-attention, Fourier and Wavelet transforms}
\label{fig:token}
\end{center}
\vskip -0.2in
\end{figure*}
For CNN-based models, we compared performance using ResNet-18, ResNet-34 and ResNet-50 from the ResNet family \cite{he2016deep}, and MobileNetV3-small-0.50, MobileNetV3-small-0.75 and MobileNetV3-small-100 from the MobileNet family of models. We used ViT-Tiny, ViT-Small and ViT-Base (all using patch size of 16, see \cite{dosovitskiy2021an}) along with Swin-Tiny and Swin-Base (all using patch size of 4 and window size of 7, see \cite{liu2021swin}) for the experiments.
\subsubsection{Token-Mixers}
Token-mixers are the family of models which use an architecture similar to MetaFormer \cite{yu2022metaformer} as their fundamental block, as shown in ~\Cref{fig:token}. Transformer models can be considered token-mixing models that use self-attention for token-mixing. Other token-mixers use Fourier transforms (FNet) \cite{lee2021fnet}, Wavelet transforms (WaveMix) \cite{https://doi.org/10.48550/arxiv.2205.14375}, spatial MLPs (MLP-Mixer) \cite{tolstikhin2021mlp} or depth-wise convolutions (ConvMixer) \cite{trockman2022patches} for token-mixing. Token-mixing models have been shown to be more efficient in terms of parameters and computation compared to attention-based transformers \cite{yu2022metaformer}.
FNet \cite{lee2021fnet} was originally designed for natural language processing (NLP) tasks and handles 1D input sequences. We have used a 2D-FNet, i.e., a modified FNet that uses a 2D Fourier transform for spatial token-mixing instead of the 1D Fourier transform used in FNet. The 2D-FNet can process images in 2D form without the need to unroll them into a sequence of patches or pixels as done in transformers and FNet. We experimented by varying the embedding dimension and number of layers to get the best model.
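A minimal sketch of this 2D spatial mixing step (our illustration in NumPy, not the actual model code): the feature map is mixed by a 2D FFT over the spatial axes and, as in FNet, only the real part is kept.

```python
import numpy as np

def fnet2d_mix(x):
    """2D-FNet token mixing for a (H, W, C) feature map: apply a 2D
    Fourier transform over the two spatial axes and keep the real part,
    mirroring what FNet does along its 1D sequence dimension."""
    return np.real(np.fft.fft2(x, axes=(0, 1)))
```

The mixing itself has no learned parameters; in the MetaFormer-style block it is followed by a channel MLP.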
WaveMix \cite{https://doi.org/10.48550/arxiv.2205.14375} uses the 2D discrete wavelet transform (2D-DWT) for token-mixing. We experimented by varying the embedding dimension, the number of layers and the number of levels of 2D-DWT used in WaveMix to get the model that gives the highest validation accuracy on the dataset.
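To make the token-mixing operation concrete, here is a one-level 2D Haar DWT written out by hand (a sketch of ours; the actual WaveMix block also wraps the transform with convolution, MLP and deconvolution layers):

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2D Haar DWT on an (H, W) array with even H and W.
    Returns the half-resolution approximation band and three detail bands."""
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation (low-low)
    lh = (a - b + c - d) / 2.0  # detail band
    hl = (a + b - c - d) / 2.0  # detail band
    hh = (a - b - c + d) / 2.0  # detail band
    return ll, lh, hl, hh
```

Each output band has half the spatial resolution of the input, which is what lets a single block mix information across a larger neighbourhood.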
ConvMixer \cite{trockman2022patches} uses depth-wise convolutions for spatial token-mixing and point-wise convolutions for channel token-mixing. We used ConvMixer-1536/20, ConvMixer-768/32, and ConvMixer-1024/20 available in the timm model library for our experiments.
MLP-Mixer \cite{tolstikhin2021mlp} uses a spatial MLP and a channel MLP to mix tokens. We used MLP-Mixer-Small (patch size of 16) and MLP-Mixer-Base (patch size of 16) in our experiments.
\subsection{Implementation details}
The dataset was divided into train, validation and test sets in the ratio 7:1:2 for each magnification. Due to limited computational resources, the maximum number of training epochs was set to 300. All experiments were done with a single 80 GB Nvidia A100 GPU. \emph{No pre-trained weights were used for any of the models}. We used the ResNet, MobileNet, Vision transformer, Swin transformer, ConvMixer and MLP-Mixer implementations available in the timm (PyTorch Image Models) library~\cite{rw2019timm}\footnote{available at \url{http://github.com/rwightman/pytorch-image-models/}}. Since WaveMix and FNet were unavailable in the timm library, they were implemented from the original papers. The timm training script~\cite{rw2019timm} with default hyper-parameter values was used to train all the models. Cross-entropy loss was used for training. We used automatic mixed precision in PyTorch during training to optimize speed and memory consumption.
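The 7:1:2 split can be sketched as below (our illustration; the text does not state whether the split is per image or per patient, so this per-item version is an assumption):

```python
import random

def split_7_1_2(items, seed=0):
    """Shuffle a list and split it into train/validation/test
    sets in the ratio 7:1:2."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```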
The images were resized to $672\times448$ for the experiments. Transformer-based models and MLP-Mixer required the images to be resized to specific sizes such as $224\times224$ or $384\times384$. We trained models of varying sizes belonging to the same architecture on the training set and evaluated them on the validation set to find the model size that gives the best performance on the BreakHis \cite{spanhol2015dataset} dataset. The model size with the highest average validation performance over all magnifications was used for evaluation on the test set.
The maximum batch-size was set to 128. For larger models, we reduced the batch-size so that they could fit in the GPU. Top-1 accuracy on the test set for the best of three runs with random initialization is reported as a generalization metric, based on prevailing protocols~\cite{hassani2021escaping}.
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\includegraphics[scale=0.68]{graphbreak.pdf}
\caption{Average of all the test accuracies reported for the various training magnifications, for each of the models compared}
\label{fig:test}
\end{center}
\vskip -0.2in
\end{figure}
\section{Results and Discussions}
\begin{table*}[]
\centering
\caption{Results of inter-magnification classification performance of all CNN, transformer and token-mixer models on the BreakHis \cite{spanhol2015dataset} dataset. Accuracy on the test set is reported.}
\vspace{0.5 cm}
\centering
\includegraphics[scale=0.70]{mod_expt1.pdf}
\label{tab:result}
\end{table*}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\includegraphics[scale=0.83]{graphhis.pdf}
\caption{Average test accuracy when training and testing were done at the same magnification, compared for each model}
\label{fig:diag}
\end{center}
\vskip -0.2in
\end{figure}
The inter-magnification classification performance of the best performing model variants of the CNN, transformer and token-mixer families is shown in ~\Cref{tab:result}. We can see that WaveMix performs better than all the other models in maintaining high performance across different testing magnifications. Only ConvMixer, another token-mixer, could perform better than WaveMix at one magnification ($200\times$). We also observe that the accuracy of WaveMix is the most stable, never falling below 87\%. Other models that perform well, such as ConvMixer and ResNet-34, suffer from unstable performance, with accuracy falling to 81\% and 78\% respectively. The better performance of WaveMix is due to the ability of the 2D wavelet transform to efficiently mix token information and the subsequent use of deconvolution layers, which also aid in the rapid expansion of the receptive field after each wavelet block.
We also see from~\Cref{fig:test} that WaveMix performs the best among all models when we take the overall average of the testing accuracies over all magnifications. We observe that the performance of token-mixers (green) like MLP-Mixer and FNet is comparable to that of transformer-based models (red). CNN-based models (blue) perform better than transformer models.
\Cref{fig:diag} shows the average test accuracy when training and testing were done at the same magnification. We observe that ConvMixer performs better than WaveMix when the train and test magnifications are the same. Even ResNet-34 performs almost on par with WaveMix and ConvMixer. This shows that even though other models perform well when the magnifications of the training and test data are the same, they cannot maintain that performance when the magnifications of the training and test sets differ. WaveMix is mostly invariant to this change of magnification between train and test data and is able to provide consistent performance compared to other CNN, transformer and token-mixing models.
FNet consumed the largest GPU RAM (4-8$\times$ more) compared to the other architectures. CNN-based models perform much better than transformer-based models in BreakHis classification. There is a significant drop in performance when the transformer-based models are trained on 40$\times$ magnification and tested on other magnifications. A similar drop in accuracy for 40$\times$ magnification training was observed for MLP-Mixer.
\section{Conclusions}
In conclusion, our study evaluated the robustness of various deep learning models for histopathological image analysis under different testing magnifications. We compared ResNet, MobileNet, Vision Transformers, Swin Transformers, Fourier-Net, ConvMixer, MLP-Mixer, and WaveMix using the BreakHis \cite{spanhol2015dataset} dataset. Our experiments demonstrated that the WaveMix architecture, which intrinsically incorporates multi-resolution features, is the most robust model to changes in inference magnification. We observed a stable accuracy of at least 87\% across all test scenarios. These findings highlight the importance of implementing a robust architecture, such as WaveMix, not only for histopathological image analysis but also for medical image analysis in general. This would help to ensure that anatomical features of diverse scales do not influence the accuracy of deep learning-based systems, thereby improving the reliability of diagnostic inference in clinical practice.
\section{\label{sec:level1}Introduction}
Solid state structures seem quite promising for implementing
quantum computers. The first proposal of a solid state quantum
computer based on quantum dots was put forward in 1998 by D. Loss
and P. DiVincenzo \cite{loss}. There, quantum calculations were to
be performed on single electron spins placed in quantum dots. To
read out the result, one should measure the state of a quantum dot
register consisting of single spins. Several possibilities were
proposed; in particular, two of them exploited a spin blockade
regime. One could use spin-dependent tunneling of a target electron
into a quantum dot containing a reference electron with a definite
spin orientation. The final charge state of the dot (whether
tunneling occurred or not) could then be tested by a single electron
transistor (SET), which is quite sensitive to environmental charge.
In the same year, Kane proposed a solid state quantum computer based
on $^{31}$P atoms embedded in a Si substrate \cite{kane}. The
computation was to be performed on $^{31}$P nuclear spins mediated
by outer shell electrons. An inventive procedure to transfer the
resulting nuclear spin state into an electron spin state was
proposed. The latter could be measured by single electron tunneling
into the reference atom, i.e. by almost the same means as in Ref.
\cite{loss}. This idea was developed further in Ref. \cite{recher},
where it was proposed to test a single electron spin in a quantum
dot by a spin-polarized current passing through the dot sandwiched
between leads. An inspiring experiment demonstrating such a readout
of a single electron spin placed in an open quantum dot by detecting
the current passing through the dot was presented in \cite{giorga}.
Long before the first solid state quantum computer implementations
were put forward, a series of scanning tunneling microscopy
experiments had begun to observe the evolution of a single spin with
the help of a spin-polarized current emanating from a magnetic tip
\cite{S1, S2, S3, S4}.
Most proposals for spin-based quantum computers relate to electron
spins, even though their relaxation is much faster than that of
nuclear spins. The reason is that electron spins admit relatively
easier and faster operation, and the state of an individual electron
spin can be measured. All suggestions for individual electron spin
measurement made so far are based on the exchange interaction
between a target electron and a reference electron whose spin
orientation is known.
One publication on this topic \cite{engel} concerns the problem of
how to perform a particular measurement of spin states via tunneling
in adjacent quantum dots, namely, to determine whether the spins are
parallel or antiparallel. This kind of measurement could be employed
for quantum calculations instead of organizing an interaction
between qubits, which can hardly be controlled with fairly high
accuracy. The recent paper \cite{sarovar} discusses a possibility of
testing a single spin via spin-dependent scattering inside a field
effect transistor channel. However, spin-flip (Kondo-like) phenomena
were unreasonably ignored there.
Here we examine an opportunity to employ a quantum wire carrying a
spin-polarized current to measure a single electron spin placed in a
nearby quantum dot. It was first pointed out in Ref. \cite{vyurkov1}
that a current passing through a quantum wire could be quite
sensitive to a nearby charge due to Coulomb blockade. The
sensitivity of a quantum wire with a spin-polarized current to the
spin state of an electron in an adjacent quantum dot was then
clarified in Ref. \cite{vyurkov2}, where the possibility of spin
blockade of the current was elucidated. In this paper we propose to
employ Fano resonances in a quantum wire to make the measurement
much more sensitive. We also address the question of how to make the
measurement non-demolishing by virtue of spin-orbit interaction.
\section{\label{sec:level1}Quantum wire to measure a single spin state}
We suggest a measurement which allows one, by detecting the current,
to conclude whether the electron in the quantum dot has the same
spin orientation as the electrons in the quantum wire or the
opposite one.
\begin{figure*}
\includegraphics{Fig1}
\caption{\label{fig:wide} A quantum wire coupled to a nearby
quantum dot via tunneling.}
\end{figure*}
The structure under consideration is schematically depicted in
Fig. 1. It consists of a quantum wire transmitting a spin
polarized current coupled to a nearby quantum dot via tunneling.
The current through the wire is determined by the well-known
Landauer formula
\begin{equation}
\label{eq1} I(V_{sd} ) = {\frac{{e}}{{h}}}{\sum\limits_{i =
0}^{\infty} {}} \int {dE \ T_{i} (E){\left[ {f_{s} (E) - f_{d}
(E)} \right]}} ,
\end{equation}
\noindent where the summation runs over all modes of transversal
quantization in the wire, $T_{i}(E)$ is the transmission coefficient
for the i-th mode dependent on the total energy $E$, the factor
$e/h$ arises from the conductance quantum $e^{2}/h$ for spin
polarized current in a ballistic wire, and $f(E)$ is the Fermi-Dirac
distribution function
\begin{equation}
\label{eq2} f(E) = {\frac{{1}}{{1 + \exp {\frac{{E - \mu}}
{{kT}}}}}},
\end{equation}
\noindent where the chemical potentials in the source contact $\mu
_{{\rm s}}$ and in the drain contact $\mu _{{\rm d}}$ are shifted by
the bias: $\mu _{{\rm s}}-\mu _{{\rm d}}=e V_{{\rm s}{\rm d}}$. The
transmission coefficients $T_{i}(E)$ may be different for different
spin states of the quantum dot electron with respect to the spin
polarization of electrons in the quantum wire, which results in a
different current. The measurement is operated by potentials on gate
electrodes.
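As a numerical illustration of Eq.~(\ref{eq1}), the following sketch evaluates the Landauer integral for a single ballistic mode with ideal transmission $T(E)=1$; the temperature, bias, and integration window are illustrative values of our own choosing, not parameters of the device discussed here.

```python
import numpy as np

# CODATA constants
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s
kT = 0.0259           # thermal energy in eV (room temperature, illustrative)

def fermi(E, mu):
    """Fermi-Dirac occupation; energies and chemical potential in eV."""
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def landauer_current(T_of_E, mu_s, mu_d, E):
    """I = (e/h) * integral T(E) [f_s(E) - f_d(E)] dE for one spin-polarized mode.

    The extra factor of e converts the energy integral from eV to J."""
    integrand = T_of_E(E) * (fermi(E, mu_s) - fermi(E, mu_d))
    return (e / h) * np.sum(integrand) * (E[1] - E[0]) * e

V = 1e-3                                  # 1 mV source-drain bias
E = np.linspace(-0.5, 0.5, 20001)         # energy grid, eV
I = landauer_current(lambda E: np.ones_like(E), V / 2, -V / 2, E)
G0 = e ** 2 / h                           # conductance quantum, spin-polarized
print(I, G0 * V)                          # both ~3.9e-8 A for T(E) = 1
```

For a perfectly transmitting mode the integral of $f_{s}-f_{d}$ equals $eV_{sd}$, so the sketch simply recovers $I=e^{2}V_{sd}/h$.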
Recently, proposals appeared to exploit Fano resonances for
measurement of an individual electron spin state with the help of a
quantum wire or a quantum constriction, which in fact can be
regarded as merely a short quantum wire \cite{mourokh,vyurkov3}.
Indeed, Fano resonances make such a measurement more sensitive. The
Fano effect in the considered structure means the following. An
electron moving along the quantum wire can partially penetrate into
the quantum dot due to tunneling. The interference between two
routes, one of which passes through the dot while the other does
not, determines the transmission coefficient of an electron through
the wire. In other words, the discrete energy spectrum in the dot
interferes with the continuum in the wire. This interference becomes
destructive when the energy of an electron in the wire coincides
with that in the dot. The result is backscattering, which can be
detected as a dip in the current-voltage curve, the so-called Fano
antiresonance.
The transmission coefficient $T$ for an electron with energy
detuning $\varepsilon$ from the resonance is given by the
expression
\begin{equation}
\label{eq0} T = {\frac{{{\left| {\varepsilon + q\Gamma}
\right|}^{2}}}{{\varepsilon ^{2} + \Gamma ^{2}}}},
\end{equation}
\noindent where $\Gamma$ stands for the level broadening and $q$ is
the Fano asymmetry factor. In general, $q$ is a complex number
depending on scattering and relaxation in the system. For a
ballistic quantum wire and a quantum dot without relaxation $q
\approx 0$; here we assume that this is the case.
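A quick numerical check of Eq.~(\ref{eq0}) for $q=0$ (the ballistic case assumed here) shows the symmetric antiresonance: transmission vanishes exactly at resonance and recovers to unity far from it. The value of $\Gamma$ below is illustrative.

```python
import numpy as np

def fano_T(eps, Gamma, q=0.0):
    # Fano transmission, Eq. (3): T = |eps + q*Gamma|^2 / (eps^2 + Gamma^2)
    return np.abs(eps + q * Gamma) ** 2 / (eps ** 2 + Gamma ** 2)

Gamma = 1.0
print(fano_T(0.0, Gamma))     # 0.0 at resonance: complete backscattering
print(fano_T(Gamma, Gamma))   # 0.5 at one linewidth of detuning
print(fano_T(100.0, Gamma))   # ~1.0 far from resonance
```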
However, the model put forward in \cite{mourokh} does not take into
account the possibility of spin flip, i.e. Kondo-like phenomena.
Indeed, the spin exchange between an incident electron and the
electron in the quantum dot takes the same time as is required for
an incident electron to be backscattered. Therefore, the initial
spin state of the measured electron is demolished after only one
incident electron is backscattered. Here we endeavor to circumvent
this shortcoming by introducing spin-orbit interaction into the
system.
\begin{figure*}
\includegraphics{Fig2}
\caption{\label{fig:wide} An electron tunnels from a quantum wire
(QW) to the excited state in a quantum dot (QD) with total spin
$S=1$.}
\end{figure*}
However, for the sake of clarity we first discard spin-orbit
interaction to highlight the disadvantages of such an approach to
the measurement. The general scheme of the measurement is as
follows. A conducting electron (an electron at the Fermi level) in
the quantum wire tunnels to an excited state in the quantum dot
which is split with respect to the total spin $S$, because it is the
total spin that determines the exchange energy. We also suppose that
the state $S=1$ has a smaller exchange energy than the state $S=0$
(Fig. 2). This is valid, at least, for the excited state with
orbital moment $L=1$. It should be noted that the Lieb-Mattis
theorem is not valid for excited states; this theorem only claims
that the ground state definitely has $S=0$ for an even number of
electrons. Anyhow, this supposition is not crucial for the
measurement.
We suppose that the initial state of the system is the following.
For the sake of definiteness, the spin of an electron in the quantum
wire is always directed up ($\uparrow $), while the spin of the
target electron in the ground state of the quantum dot is directed
either up ($\uparrow$) or down ($\downarrow$). Evidently, only the
mutual spin orientation of the two electrons matters. Due to
tunneling this state evolves into some excited state of two
electrons in the quantum dot.
The Hamiltonian describing tunneling of an electron from the
quantum wire into an exited level in the quantum dot in absence of
spin-orbit interaction reads
\begin{equation}
\label{eq3} H_{0} = \varepsilon _{0} + (\varepsilon _{1} + U_{C} -
J\vec {S}_{0} \vec {S}_{1} )a_{1}^{ +} a_{1} +
\\{\sum\limits_{j}
{}} \{T_{j} (a_{1}^{ +} b_{j} + b_{j}^{ +} a_{1} ) + \varepsilon
_{j} b_{j}^{ +} b_{j}\},
\end{equation}
\noindent where $\varepsilon _{{\rm 0}}$ is the ground level energy
of a single electron in the quantum dot, $\varepsilon _{{\rm 1}}$ is
the excited level energy, $U_{C}$ is the direct Coulomb interaction
between an electron in the ground state and an electron in the
excited state, $a_{1}^{ +}$ and $a_{1}$ are the operators of
creation and annihilation of an electron in the excited level of the
quantum dot, $b_{j}^{ +}$ and $b_{j}$ are the operators of creation
and annihilation of an electron in the j-th state of the quantum
wire (in general, the longitudinal momentum and the transversal
quantization subband (mode) number are ascribed to the index $j$),
$J$ is the strength of the exchange interaction, and $\vec {S}_{0}$
and $\vec {S}_{1}$ are the spin operators for an electron in the
ground level and in the excited level, respectively. In general, the
sign of the exchange energy (and therefore the sign of $J$) could be
positive or negative. It is only known for sure that the ground
state of a system composed of two electrons corresponds to the total
spin $S=0$, owing to the Lieb-Mattis theorem.
It is convenient to present the spin part of the Hamiltonian in the
form
\begin{equation}
\label{eq4} \vec {S}_{0} \vec {S}_{1} = S_{0}^{z} S_{1}^{z} +
{\frac{{1}}{{2}}}{\left( {S_{0}^{ +} S_{1}^{ -} + S_{0}^{ -}
S_{1}^{ +}}  \right)},
\end{equation}
\noindent where $S_{0}^{z}$ and $S_{1}^{z}$ are the operators of the
$z$-projection, and $S^{ +} $ and $S^{ -} $ are the raising and
lowering operators. The second term describes spin-flip processes,
i.e. Kondo-like phenomena.
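The standard identity $\vec{S}_{0}\vec{S}_{1} = S_{0}^{z}S_{1}^{z} + \frac{1}{2}(S_{0}^{+}S_{1}^{-} + S_{0}^{-}S_{1}^{+})$, splitting the exchange into an Ising part and a spin-flip part, can be verified numerically for two spin-1/2 particles ($\hbar=1$); this is a self-contained check, not part of the derivation.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sp = Sx + 1j * Sy   # raising operator
Sm = Sx - 1j * Sy   # lowering operator

kron = np.kron
dot    = kron(Sx, Sx) + kron(Sy, Sy) + kron(Sz, Sz)          # S0 . S1
decomp = kron(Sz, Sz) + 0.5 * (kron(Sp, Sm) + kron(Sm, Sp))  # Ising + spin-flip

print(np.allclose(dot, decomp))                # True
print(np.round(np.linalg.eigvalsh(dot), 6))    # [-0.75, 0.25, 0.25, 0.25]
```

The eigenvalues $-3/4$ (singlet) and $+1/4$ (triplet, threefold) confirm the usual singlet-triplet exchange splitting.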
Hereafter, we incorporate into consideration only the three lowest
states of two electrons in the quantum dot: the ground state with
total spin $S=0$ and two excited states corresponding to the same
spatial wave function but different total spin, $S=0$ and $S=1$. The
energies of these excited states differ due to exchange interaction.

We suppose that the tunneling coupling $T$ of both states to the
quantum wire continuum is the same, since the spatial wave functions
are the same. Hereafter, we employ designations analogous to those
in Ref. \cite{engel}.
There are two triplet states with $S=1$ which can originate after
an electron with spin up tunnels from a quantum wire into a
quantum dot
\begin{equation}
\label{eq-1} \uparrow _{D} \uparrow _{D}
\end{equation}
\noindent for parallel spins and
\begin{equation}
\label{eq-2}{{{\left[ { \uparrow _{D} \downarrow _{D} + \downarrow
_{D} \uparrow _{D}} \right]}} \mathord{\left/ {\vphantom {{{\left[
{ \uparrow _{D} \downarrow _{D} + \downarrow _{D} \uparrow _{D}}
\right]}} {\sqrt {2}}} } \right. \kern-\nulldelimiterspace} {\sqrt
{2}}}
\end{equation}
\noindent for antiparallel spins. There could also occur a singlet
state with $S=0$
\begin{equation}
\label{eq-3}{{{\left[ { \uparrow _{D} \downarrow _{D} - \downarrow
_{D} \uparrow _{D}} \right]}} \mathord{\left/ {\vphantom {{{\left[
{ \uparrow _{D} \downarrow _{D} - \downarrow _{D} \uparrow _{D}}
\right]}} {\sqrt {2}}} } \right. \kern-\nulldelimiterspace} {\sqrt
{2}}}
\end{equation}
\noindent for antiparallel spins. The above formulae are sensitive
to position: the first place corresponds to the ground state.
Tunneling into the ground state from the quantum wire seems
inappropriate for the measurement because the target electron could
escape from the quantum dot in the same way as the reference
electron. Therefore, tunneling into the excited state is preferable,
as it leaves the target electron in the dot.
First of all, we must discuss the initial state of the system, when
only the target electron is situated in the ground state of the
quantum dot and the reference electron is in the quantum wire.
For parallel spin orientation
\begin{equation}
\label{eq-4} \uparrow _{D} \uparrow _{W}
\end{equation}
This state corresponds to a definite value of the total spin, $S=1$,
and there is no entanglement between spin and space coordinates.
For antiparallel spins the state is
\begin{equation}
\label{eq-5} \downarrow _{D} \uparrow _{W} = {{{\left[ {
\downarrow _{D} \uparrow _{W} - \uparrow _{D} \downarrow _{W}}
\right]}} \mathord{\left/ {\vphantom {{{\left[ { \downarrow _{D}
\uparrow _{W} - \uparrow _{D} \downarrow _{W}} \right]}} {\sqrt
{2}}} } \right. \kern-\nulldelimiterspace} {\sqrt {2}}} +
{{{\left[ { \uparrow _{D} \downarrow _{W} + \downarrow _{D}
\uparrow _{W}} \right]}} \mathord{\left/ {\vphantom {{{\left[ {
\uparrow _{D} \downarrow _{W} + \downarrow _{D} \uparrow _{W}}
\right]}} {\sqrt {2}}} } \right. \kern-\nulldelimiterspace} {\sqrt
{2}}}
\end{equation}
Indeed, this state is a superposition of $S=0$ and $S=1$; moreover,
there exists an entanglement of space and spin coordinates. It is
worth noting that the two components of this state do not interfere
because they are orthogonal with respect to the total spin.
When the conditions shown in Fig. 2 are realized and electrons from
the Fermi level in the quantum wire can tunnel to the state with
$S=1$ in the quantum dot, the reflection coefficient for
antiparallel spins (10) is exactly twice smaller than that for
parallel spins (9). This constitutes the basis of a spin state
measurement.
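The factor of two can be checked directly: the antiparallel initial state (10) carries only half of its weight in the $S=1$ subspace that couples to the resonance, whereas the parallel state (9) lies entirely in it. A minimal numerical check:

```python
import numpy as np

up = np.array([1.0, 0.0])
dn = np.array([0.0, 1.0])
kron = np.kron

par  = kron(up, up)   # parallel spins, Eq. (9): pure S = 1
anti = kron(dn, up)   # antiparallel spins, Eq. (10)

# Basis of the S = 1 (triplet) subspace
t_up = kron(up, up)
t_0  = (kron(up, dn) + kron(dn, up)) / np.sqrt(2)
t_dn = kron(dn, dn)

def triplet_weight(state):
    """Weight of a two-spin state on the S = 1 subspace."""
    return sum(abs(np.vdot(t, state)) ** 2 for t in (t_up, t_0, t_dn))

w_par, w_anti = triplet_weight(par), triplet_weight(anti)
print(w_par, w_anti)   # 1.0 and 0.5: reflection is twice smaller for Eq. (10)
```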
Actually, it is an illusion that the foregoing procedure is already
satisfactory for the measurement. In reality, Kondo-like phenomena,
i.e. spin-flip events, should be taken into consideration. A spin
flip occurs within a period of time $\tau \sim (J / \hbar )^{ - 1}$.
The broadening of the levels with $S=0$ and $S=1$ depends on the
tunneling rate, so that $\Gamma \approx T$. The requirement that
these levels be clearly distinguished is $J \gg T$. It follows that
the spin-flip process is then much faster than tunneling.
We propose to employ spin-orbit interaction inside the quantum dot
to prevent spin flips. Hereafter, we analyze the simplest case, when
a coin-like quantum dot is formed from a quantum well, as sketched
in Fig. 3. The arrows there indicate the interface electric field at
the different interfaces caused by the conduction (or valence) band
discontinuity. For instance, this kind of dot could be fabricated by
etching with subsequent overgrowth of a host semiconductor with a
wider gap. Rashba spin-orbit interaction \cite{rashba} originates in
the interface electric field and the strong coupling of the
conduction and valence bands in narrow-gap semiconductors
\cite{zakharova,wissinger}. This mechanism of spin-orbit interaction
turns out to be several orders of magnitude stronger than that based
on the electric field in a one-band model. It seems the most
suitable for achieving a non-demolishing measurement of a spin qubit
state.
The original Rashba Hamiltonian describing spin-orbit interaction
in a two-dimensional electron gas (2DEG) reads \cite{rashba}:
\begin{equation}
\label{eq5} \hat {H}_{R} = \alpha _{R} [\hat {\vec {S}}\times \hat
{\vec {k}}]\vec {\upsilon} \equiv \alpha _{R} [\hat {\vec
{k}}\times \vec {\upsilon} ]\hat {\vec {S}},
\end{equation}
\noindent where $\alpha _{{\rm R}}$ is the Rashba constant, $\vec {k}
= - i{\frac{{\partial}} {{\partial \vec {r}}}}$ is the operator of
the in-plane momentum, $\vec {S}$ is the spin operator, and the unit
vector $\vec {\upsilon}$ is directed perpendicular to the 2DEG
plane. The Rashba constant $\alpha _{{\rm R}}$ is non-zero for a
2DEG in a non-symmetric quantum well. Unfortunately, the Rashba
Hamiltonian (\ref{eq5}) results in an entanglement of the spin and
space variables of an electron state in a quantum dot cut out of
such a quantum well; this occurs even in the ground state. This kind
of quantum dot is not suitable for spin qubit applications. An
appropriate quantum dot should be made of a symmetric quantum well
with a vanishing original Rashba term (\ref{eq5}).
The Rashba Hamiltonian (\ref{eq5}) is widely used for a 2DEG
originating at a single interface in a heterostructure, for example
at a common GaAs/AlGaAs interface. We adapt this description to
introduce a spin-orbit interaction caused by the interface field at
the side wall of the dot. To that end, we propose the model
Rashba-like Hamiltonian
\begin{equation}
\label{eq6} \hat {H}_{RS} = \beta [\hat {\vec {S}}\times \hat
{\vec {k}}]\hat {\vec {r}} = \beta [\hat {\vec {k}}\times \hat
{\vec {r}}]\hat {\vec {S}} = \beta \hat {\vec {L}}\hat {\vec {S}},
\end{equation}
\noindent where $\hat {\vec {L}} = [\hat {\vec {k}}\times \hat
{\vec {r}}]$ is the operator of the angular momentum of an electron
in the dot, which appears after the permutation, $\hat {\vec {S}}$
is the spin operator, and $\beta $ is a coefficient analogous to the
Rashba constant; it can be roughly evaluated as $\beta \approx
\alpha _{{\rm R}}/D$, where $D$ is the dot diameter.
\begin{figure}
\includegraphics{Fig3}
\caption{\label{fig:epsart} A coin-like quantum dot formed of a
symmetric quantum well. Arrows indicate the interface electric
field.}
\end{figure}
\begin{figure*}
\includegraphics{Fig4}
\caption{\label{fig:wide} Two-electron energy levels in a quantum
dot split by exchange and spin-orbit interaction. The arrow
indicates the level to which an electron tunnels from a quantum
wire.}
\end{figure*}
For the measured electron in the ground state with $L=0$ the
spin-orbit interaction vanishes, and the electron can possess any
orientation of spin. Therefore, this is a good quantum dot for spin
qubit manipulation. For the excited state with angular momentum
$L=1$ the Hamiltonian (\ref{eq6}) takes the form
\begin{equation}
\label{eq7} H_{RS} = \beta L_{1Z} S_{1Z} ,
\end{equation}
\noindent where $L_{1Z} = \pm 1$ is the $z$-projection of the
orbital moment and $S_{1Z} = \pm 1 / 2$ is the $z$-projection of the
electron spin in the excited state. The spin-orbit term (\ref{eq7})
should be added to the Hamiltonian (\ref{eq3}) to obtain the
resultant Hamiltonian
\begin{equation}
\label{eq8} H = H_{0} + H_{RS} .
\end{equation}
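To illustrate how the spin-orbit term suppresses spin flips, consider a toy model of our own construction (not the full Hamiltonian above): the two-spin exchange term with the spin-orbit shift $\beta L_{1Z}S_{1Z}$ at fixed $L_{1Z}=+1$. When the splitting $\beta$ greatly exceeds the exchange $J$, the spin-flip admixture in the eigenstates is of order $(J/2\beta)^{2}$.

```python
import numpy as np

Sz = 0.5 * np.diag([1.0, -1.0])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sm = Sp.T
Sx = 0.5 * (Sp + Sm)
Sy = -0.5j * (Sp - Sm)
I2 = np.eye(2)
kron = np.kron

def spin_flip_weight(J, beta):
    """Weight of the flipped component |down,up> in the eigenstate with the
    largest overlap with |up,down>, for H = -J S0.S1 + beta*S1z (L_z = +1)."""
    Hex = -J * (kron(Sx, Sx) + kron(Sy, Sy) + kron(Sz, Sz))
    Hso = beta * kron(I2, Sz)      # spin-orbit shift of the excited-level spin
    vals, vecs = np.linalg.eigh(Hex + Hso)
    up_dn = kron([1, 0], [0, 1])
    dn_up = kron([0, 1], [1, 0])
    k = np.argmax(np.abs(vecs.conj().T @ up_dn))
    return abs(np.vdot(dn_up, vecs[:, k])) ** 2

print(spin_flip_weight(J=1.0, beta=0.0))    # 0.5: full singlet-triplet mixing
print(spin_flip_weight(J=1.0, beta=50.0))   # ~1e-4: spin flip off-resonant
```

Without the splitting the spin-flip term mixes the antiparallel configurations completely; a splitting much larger than $J$ detunes them and freezes the spin of the measured electron.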
The eigenstates of the Hamiltonian (\ref{eq8}) relevant to two
electrons in the dot are depicted in Fig. 4. Tunneling of an
electron from the quantum wire into the quantum dot occurs only if
this electron has the same spin orientation as the measured electron
in the dot (the arrow in Fig. 4 marks the proper level).
Spin-orbit splitting makes the spin flip impossible owing to the
energy conservation law. As reported in Ref. \cite{nitta}, the
Rashba spin-orbit splitting in $A_{{\rm I}{\rm I}{\rm I}}B_{{\rm
V}}$ heterostructures may reach several meV. At the same time, the
Zeeman energy splitting proposed in Ref. \cite{engel} for the same
purpose of suppressing spin-flip processes in a dot is only about
$0.3$ meV, even in a rather large magnetic field of $5$ T.
In accordance with expression (3), the mean reflection coefficient
$R=1-T$ in the range $-\Gamma<\varepsilon<+\Gamma$ is approximately
$1/3$. It means that the relative decrease in the current for
parallel spins is around $1/3$ if the bias $V \le \Gamma/e$. For
antiparallel spins it is twice smaller, i.e. around $1/6$. For the
sake of better sensitivity, the optimal bias $V$ for a ballistic
quantum wire should be chosen around $V=\Gamma/e$. Then the absolute
value of the current can be roughly estimated for a single-mode
quantum wire as $I=G_{0}V$, where $G_{0}=e^{2}/h=(26\,kOhm)^{-1}$ is
the conductance quantum for spin-polarized current in the wire. The
possible level broadening $\Gamma $ is restricted only by the
spin-orbit splitting. Supposing the latter to be several meV, we can
choose the broadening as 1 meV. Substituting $V=1\,mV$, one arrives
at a current of $4\cdot10^{-8}\,A$, which can easily be measured by
up-to-date equipment. Moreover, this current exceeds that in a
single electron transistor (SET); the greater the current, the
faster the measurement. One more significant advantage of the
quantum wire is that it can be emptied during computation and,
therefore, unlike a SET, the quantum wire does not introduce
additional decoherence into the system during that time.
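The order-of-magnitude estimate above is simple arithmetic and easy to reproduce:

```python
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

G0 = e ** 2 / h       # conductance quantum for spin-polarized current
R0 = 1.0 / G0         # ~25.8 kOhm, quoted as 26 kOhm in the text
V  = 1e-3             # bias of 1 mV, i.e. V = Gamma/e for Gamma = 1 meV
I  = G0 * V
print(R0 / 1e3, I)    # ~25.8 kOhm and ~3.9e-8 A
```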
It is worth noting that a quantum wire allows one to perform a
partial measurement of the state of two adjacent spin qubits, as in
\cite{engel}, or even of distant ones: whether they are parallel or
antiparallel. This also provides a possibility of quantum
computation without organizing a perfectly controllable interaction
between qubits.
If for some reason the reflection coefficient is too low, the
sensitivity may be augmented by placing N identical qubits in series
along the wire. When the qubits are situated randomly, so that
interference does not matter, the sensitivity rises as $\sim N$.
When there is order in the qubit positions and interference is
significant, the sensitivity increases as $\sim N^{{\rm 2}}$.
The feasibility of the proposed measurement is confirmed by the
findings of Ref. \cite{sato}. Schematically, the structure there was
almost the same as that in Fig. 1. Combined Fano-Kondo
anti-resonances were observed in the $I-V$ curve and exploited to
test relaxation in a multi-electron quantum dot. In principle, this
setup could serve as a prototype of our proposal.
\section{\label{sec:level1}Conclusion}
We have examined the possibility of using Fano-Rashba resonances for
a non-demolishing measurement of the spin state (whether it is up or
down) of a single electron in a quantum dot (a spin qubit) via a
spin-polarized current in an adjacent quantum wire. The spin-orbit
interaction in the quantum dot prohibits spin-flip events
(Kondo-like phenomena), which makes the measurement non-demolishing.
\begin{acknowledgments}
The research was supported by NIX Computer Company
([email protected]), grant F793/8-05, via the grant of The Royal
Swedish Academy of Sciences, and also by Russian Basic Research
Foundation, grants \# 08-07-00486-a-a and \# 06-01-00097-a.
\end{acknowledgments}
\section*{Supplementary material}
\subsection*{The teacher/student example}
In this article we have described the conceptual significance of the theory using a detailed narrative of a student sitting a series of tests. Here, to aid the reader, we present a table relating all the symbols to the corresponding objects in the narrative.
\begin{table*}
\begin{tabular}{||c|c|c|c||}
\hline
Narrative & Classical & Quantum & Dimension \\
\hline
Book & $Y$ & $Y$ & $d^{2}$\\
Answers to each chapter & $Y_{0}$, $Y_{1}$ & $Y_{0}$, $Y_{1}$ & d\\
Possible answers to test & $y$ &$y$ & $d^{2}$\\
Individual answers &$y_{0}$, $y_{1}$ &$y_{0}$, $y_{1}$ & $d$\\
Study notes & $\mathcal{P}_{y}$ &$\rho_{y}^{E}$ & $d$\\
Question & $\mathcal{M}$ &$M_{y}$ & $d$ \\
Outcome of test & $P(y\vert \mathcal{M}, \mathrm{P_{y}})$ &$\mathrm{tr}(\rho_{y}^{E} M_{y})$ & -\\
Teacher's correlated system & $C$ &$\sigma_{y}^{C}$ & $2$\\
\hline
\end{tabular}
\caption{This table shows the object in the narrative and the corresponding mathematical object in both the classical and quantum case. We also include the dimension of the system. }
\label{Tab:table}
\end{table*}
Common language can sometimes unintentionally obscure the more nuanced aspects of the problem. Here is a briefer, more direct overview of the protocol.
\begin{itemize}
\item A random dit string of length 2 is selected uniformly at random $y=y_{0}y_{1}$.
\item It must be encoded into a single qudit $\rho_{y}^{E}$ in the register $E$. This process erases some information, therefore it is impossible to know the whole $y$.
\item The qudit is then measured using a positive operator valued measure (POVM) $M_{y}$. The measurement operator $M_{y}$ is maximised to reveal either the parts $y_{0}$ or $y_{1}$ or the whole $y$ depending on the test.
\item The probability of successfully guessing $y$ is then given by the trace operator $\mathrm{tr}(\rho_{y}^{E} M_{y})$.
\item To obtain a measurement of the whole $Y$, we sum over all combinations of $y$, multiplying the probability of each outcome by the probability that $y$ was selected. Hence $p_\mathrm{guess}(Y|E) = \underset{\lbrace M_{y} \rbrace}{\max} \sum_{y} P_{Y}(y)\mathrm{tr}(\rho_{y}^{E} M_{y})$ where $P_{Y}(y)=1/d^{2}$. Similarly, we can measure the parts in the same way, e.g. $p_{\mathrm{guess}}(Y_{0}|E) = \underset{\lbrace M_{y_{0}} \rbrace}{\max} \sum_{y} P_{Y_{0}}(y_{0})\mathrm{tr}(\rho_{y}^{E} M_{y_{0}})$ where $P_{Y_{0}}(y_{0})=1/d$.
\item The min-entropy $H_{\infty}(Y|E) = - \log p_{\mathrm{guess}}(Y|E)$ can then be computed by taking the log of each guessing probability.
\end{itemize}
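For intuition only (this is our own toy illustration, not the encoding used here): when a single uniformly random bit is encoded in one of two known pure states, the maximisation over POVMs reduces to the Helstrom bound, $p_{\mathrm{guess}} = \frac{1}{2} + \frac{1}{2}\Vert p_{0}\rho_{0} - p_{1}\rho_{1}\Vert_{1}$. The sketch below evaluates it for two conjugate-basis qubit states.

```python
import numpy as np

def helstrom_pguess(rho0, rho1, p0=0.5):
    """Optimal guessing probability for two known states:
    p = 1/2 + 1/2 * || p0*rho0 - (1-p0)*rho1 ||_1 (trace norm)."""
    diff = p0 * rho0 - (1.0 - p0) * rho1
    return 0.5 + 0.5 * np.abs(np.linalg.eigvalsh(diff)).sum()

# |0> versus |+>: eigenstates of Z and X, a conjugate pair as in the encoding
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0)
rho1 = np.outer(ketp, ketp)

p = helstrom_pguess(rho0, rho1)
h_min = -np.log2(p)
print(p)       # 1/2 + 1/(2*sqrt(2)) ~ 0.854
print(h_min)   # ~0.23 bits of min-entropy
```

Identical states give $p_{\mathrm{guess}}=1/2$ (pure guessing), and orthogonal states give $p_{\mathrm{guess}}=1$, as expected.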
\subsection*{The encoding}
The encoding used to violate the inequality is an equal superposition of two mutually unbiased bases (MUB)
\begin{equation}
\vert \Psi_{y} \rangle {=} \frac{1}{\sqrt{2\left(1+1/\sqrt{d}\right)}}X_{d}^{y_{0}}Z_{d}^{y_{1}}\left(\mathbb{I} + F\right)\vert{0}\rangle
\end{equation}
where $F$ is the quantum Fourier transform and $X_{d}$ and $Z_{d}$ are the generalised Pauli operators that are the generators of the Heisenberg-Weyl group. They are defined as
\begin{equation}
X_{d} \vert y_{0} \rangle = \vert y_{0}+1\,\mathrm{mod} \,d \rangle \quad \mathrm{and} \quad Z_{d} \vert y_{0} \rangle = \omega^{y_{0}} \vert y_{0} \rangle\,,
\end{equation}
where $\omega = \exp \left(2\pi i /d\right)$. The Pauli operators form a canonical conjugate pair and are related by $Z_{d} = F^{\dag} X_{d} F$. In quantum mechanics the notion of canonically conjugate quantities is central irrespective of Hilbert space dimension. If the state of a system is such that one canonical variable takes a definite value, then the conjugate must be maximally uncertain. In the original proof of the VW-inequality, the authors assumed $d$ was prime to complete the proof \cite{vidick_does_2011}. This assumption is required due to the unanswered questions relating to MUBs for non-prime dimensions \cite{durt_mutually_2010}. Currently, it is not known how many MUBs exist for composite Hilbert dimensions, but is well known for prime powers and the continuous limit.
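The operators above are easy to construct numerically. The sketch below builds $X_{d}$, $Z_{d}$ and the discrete Fourier matrix (with the sign convention of $F$ chosen so that $Z_{d} = F^{\dag}X_{d}F$ holds as quoted), checks the Weyl commutation relation $Z_{d}X_{d}=\omega X_{d}Z_{d}$, and verifies that the prefactor in $\vert\Psi_{y}\rangle$ indeed normalises the state.

```python
import numpy as np

def gen_paulis(d):
    """Generalised Pauli (shift and clock) operators and the Fourier matrix.

    The sign convention of F is chosen so that Z = F^dag X F."""
    w = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)          # X|j> = |j+1 mod d>
    Z = np.diag(w ** np.arange(d))             # Z|j> = w^j |j>
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    F = w ** (-j * k) / np.sqrt(d)
    return X, Z, F

def encode(y0, y1, d):
    """|Psi_y> = N X^{y0} Z^{y1} (I + F)|0> with N = 1/sqrt(2(1+1/sqrt(d)))."""
    X, Z, F = gen_paulis(d)
    ket0 = np.zeros(d, dtype=complex)
    ket0[0] = 1.0
    pref = 1.0 / np.sqrt(2 * (1 + 1 / np.sqrt(d)))
    Xy = np.linalg.matrix_power(X, y0).astype(complex)
    Zy = np.linalg.matrix_power(Z, y1)
    return pref * (Xy @ Zy @ (np.eye(d) + F) @ ket0)

d = 5
X, Z, F = gen_paulis(d)
w = np.exp(2j * np.pi / d)
print(np.allclose(Z, F.conj().T @ X @ F))   # conjugate-pair relation
print(np.allclose(Z @ X, w * X @ Z))        # Weyl commutation relation
psi = encode(1, 2, d)
print(np.linalg.norm(psi))                  # 1.0: the prefactor normalises
```

The normalisation check follows from $\Vert(\mathbb{I}+F)\vert 0\rangle\Vert^{2} = 2 + 2\,\mathrm{Re}\langle 0\vert F\vert 0\rangle = 2(1+1/\sqrt{d})$.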
This type of encoding is needed to create a quantum superposition of the $X_{d}$ and $Z_{d}$ eigenstates. The inequality holds in all non-contextual hidden-variable models where $E$ is a classical distribution. Hence, violation of the VW-inequality can only be achieved if $E$ is not a deterministic combination of $X_{d}$ and $Z_{d}$.
\subsection*{$d$-rail qudits}
Here we present an overview of the $d$-rail qudit analysis. Let $d$ represent the number of available modes; then the total Hilbert space is the tensor product of Fock spaces spanned by the states
\begin{equation}
\vert n_{1}, n_{2}, ..., n_{d} \rangle \equiv \vert n_{1} \rangle \otimes \vert n_{2} \rangle \otimes ... \otimes \vert n_{d} \rangle\,.
\end{equation}
We will further assume that we are working in a subspace in which every state is an eigenstate of the total photon number operator, $\hat{N} \vert \psi \rangle = N \vert \psi \rangle$, where $\hat{N} = \sum_{j=1}^{d} a_{j}^{\dag}a_{j}$, $N$ is a positive integer, and $a_{j}^{\dag},a_{j}$ are the creation and annihilation operators, which satisfy $[a_{i},a_{j}^{\dag}] = \delta_{ij}$. A single photon ($N=1$) in two modes has a two-dimensional Hilbert space spanned by the Fock states $\lbrace | 1,0 \rangle, | 0,1\rangle\rbrace$, often called a dual-rail qubit \cite{kok_linear_2007}. Here we consider the case of a single $N=1$ photon in $d$ modes; hence a Hilbert space of dimension $d$ is spanned by the Fock states with the photon in one of the $d$ modes.
A reliable single-photon source would be required to produce the Fock states described above. We will show that a weak coherent state is sufficient for our purposes. In the case of a single mode, a coherent state is defined as
\begin{equation}
\vert \alpha \rangle = e^{-\vert \alpha \vert^{2}/2} \sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n!}} \vert n \rangle\,,
\end{equation}
where $\alpha$ is an arbitrary complex number. Coherent states do not have a fixed photon number but the average is bounded and equal to $\vert \alpha \vert^{2} = \langle n \rangle$. We generate $d$ modes each in a coherent state
\begin{equation}
\label{eq:state}
\vert \Psi \rangle = \vert \alpha_{1} \rangle \otimes \vert \alpha_{2} \rangle \otimes...\otimes \vert \alpha_{d} \rangle = \vert \alpha_{1}, \alpha_{2},..., \alpha_{d} \rangle\,.
\end{equation}
We can see that this state will be contaminated by undesired multi-photon Fock states. Furthermore, linear optics can only transform coherent product states into coherent product states: no entanglement is possible.
It is possible to get around this restriction by introducing a hypothetical measurement device that makes a total photon-number measurement on the $d$ modes without absorbing any photons or mixing the modes. We can now ask: what is the conditional state if such a measurement is made on a $d$-fold product of coherent states, conditioned on the result of the measurement being $N$? We can write this conditional state as
\begin{equation}
\label{eq:conditional}
\vert \Psi : N \rangle = \mathcal{N} \hat{\Pi}_{N} \vert \alpha_{1}, \alpha_{2},..., \alpha_{d} \rangle\,,
\end{equation}
where $\hat{\Pi}_{N}$ is the projection operator onto the subspace of total photon number $N$ and $\mathcal{N} = p_{N}^{-1/2}$.
Because all our modes are generated at the SLM from the same input beam, we will make the assumption that the coefficients of the modes are equal up to a multiplicative factor, $\alpha_{i} = \alpha \beta_{i}$, where $\vert \beta_{i} \vert < 1$ and $\sum_{i=1}^{d}|\beta_{i}|^{2} = 1$. We can interpret $|\beta_{i}|^{2}$ as the probability associated with each of the $d$ modes.
This allows us to define the probability of having $N$ photons in the experiment as
\begin{equation}
p_{N} = \mathrm{tr}\left( \vert \Psi : N \rangle \langle \Psi: N \vert\right) = \frac{\vert \alpha\vert ^{2N}}{N!}e^{-\vert \alpha \vert^{2}} \,,
\end{equation}
which is equivalent to the single mode case as we would expect.
We are interested in the single photon case $N=1$ where $\hat{\Pi}_{1} = \vert 1 \rangle\langle 1\vert$. The probability of measuring a single-photon state is $p_{1} = e^{-\vert \alpha \vert^{2}}\vert \alpha \vert^{2}$. In our experiment, vacuum states are not counted and do not contribute to the statistics. We measure a mean photon number $\vert \alpha \vert^{2} \sim 0.01$, which means $p_{1}\approx0.01$ and $p_{2}\approx5\times10^{-5}$. A three-photon event is expected only once in every $\sim$60,000 counts, i.e. $\sim 17$ instances per $10^{6}$ counts, which is negligible. Therefore, we will only consider the possibility that we have up to two photons in the experiment.
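These event-rate estimates follow directly from the Poissonian photon statistics of the coherent state; a short numerical check (our own sketch):

```python
from math import exp, factorial

def p_N(alpha_sq, N):
    """Probability of N photons in a coherent state with mean photon number alpha_sq."""
    return alpha_sq**N * exp(-alpha_sq) / factorial(N)

alpha_sq = 0.01                       # measured mean photon number
p1, p2, p3 = (p_N(alpha_sq, n) for n in (1, 2, 3))
three_photon_rate = 1e6 * p3 / p1     # three-photon events per 10^6 counts
```

With $\vert\alpha\vert^2 = 0.01$ this gives $p_1 \approx 0.01$, $p_2 \approx 5\times 10^{-5}$, and roughly 17 three-photon events per $10^6$ counts.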
These two-photon events will create two types of errors in our counting statistics. The first error occurs when two photons were in the experiment but only one was counted, i.e. photon loss. This is modelled using a beam splitter with transmissivity $\eta$ standing in for the non-unit quantum efficiency. Such a filter is a linear optical device that transforms the input annihilation operator $a_{k}\rightarrow \sqrt{\eta}a_{k} + \sqrt{1-\eta} b_{k}$, where $b_{k}$ is an auxiliary mode that is initially in the vacuum state. The second error occurs when two photons are both detected and show up as a single event. We do not use number-resolving detectors, so we cannot distinguish photon number and must account for this possibility.
Our input beam is a highly attenuated coherent state, $\vert \alpha \vert^{2} \ll 1$, so we truncate the expansion at two photons. In this limit, our product state can be approximated by
\begin{equation}
\vert \Psi \rangle \approx e^{-\vert \alpha \vert^{2}/2}\left(1+ \alpha\sum_{i=1}^{d} \beta_{i}a_{i}^{\dagger} + \frac{\alpha^{2}}{2}\left[\sum_{i=1}^{d}\beta_{i} a_{i}^{\dagger}\right]^{2}\right)\bigotimes_{i=1}^{d}\vert 0\rangle_{i}\,.
\end{equation}
Making the beam splitter transformations $\vert \Psi \rangle \rightarrow \vert \Psi' \rangle$ we obtain
\begin{widetext}
\begin{equation}
\vert \Psi'\rangle= e^{-\vert \alpha \vert^{2}/2}\left(1 + \alpha\sum_{i=1}^{d} \beta_{i} \left(\sqrt{\eta}a_{i}^{\dagger} + \sqrt{1-\eta}b_{i}^{\dagger}\right) + \frac{\alpha^{2}}{2}\left[\sum_{i=1}^{d}\, \beta_{i}\left(\sqrt{\eta}a_{i}^{\dagger} + \sqrt{1-\eta}b_{i}^{\dagger}\right)\right]^{2} \right)
\bigotimes_{i=1}^{d}\vert 0 \rangle_{a,i}\vert 0 \rangle_{b,i}\,.
\end{equation}
\end{widetext}
Because we cannot resolve the photon number, we will not be able to distinguish between one- and two-photon events. As a result, we will have a statistical mixture of the conditional states defined in \eqref{eq:conditional},
\begin{equation}
\rho_{N} = p\vert \Psi':1 \rangle\langle \Psi':1\vert + (1-p)\vert \Psi':2 \rangle\langle \Psi':2\vert\,,
\end{equation}
where we ignore the vacuum state because it never enters our counting statistics.
Here $p=\left(1+ \vert \alpha \vert^{2} /2\right)^{-1}$, which can be interpreted as the probability that a single photon was present, conditioned on the event that a detection was made.
We also cannot measure the lost photons in the $b$ modes and must trace this subsystem out of $\rho_{N}$. We again condition on the events where there was a detection in the $a$ mode---dropping all independent $b$ modes---leading us to a final mixed state,
\begin{equation}
\rho = p\vert \Psi \rangle\langle \Psi \vert + (1-p)\left[\eta \vert \Psi\rangle \langle \Psi \vert_{2a} + (1-\eta) \vert \Psi\rangle \langle \Psi \vert_{ab}\right]\,.
\end{equation}
Here $\vert \Psi \rangle$ is our desired single-photon state; $\vert \Psi \rangle_{2a}$ corresponds to the case where two photons were detected as a single event; $\vert \Psi \rangle_{ab}$ is the case where one photon was correctly detected but the other was lost. These last two states will corrupt our statistics.
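For reference, the weight of the desired single-photon term can be obtained from the Poisson probabilities by conditioning on a detection and truncating at two photons, $p = p_1/(p_1+p_2)$; a minimal sketch (ours):

```python
from math import exp, factorial

def p_N(alpha_sq, N):
    """Probability of N photons in a coherent state with mean photon number alpha_sq."""
    return alpha_sq**N * exp(-alpha_sq) / factorial(N)

def single_photon_weight(alpha_sq):
    """Weight of the single-photon term, conditioned on a detection and
    truncating the coherent state at two photons: p = p1 / (p1 + p2)."""
    p1, p2 = p_N(alpha_sq, 1), p_N(alpha_sq, 2)
    return p1 / (p1 + p2)

p = single_photon_weight(0.01)   # close to 1: the desired state dominates the mixture
```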
We can now compute the effect that non-unit quantum efficiency $\eta$ has on our results using the encoding $\vert \Psi_{y} \rangle$, shown for $\vert \alpha \vert^{2} = 0.01$ and $\eta = 0.6$---the approximate quantum efficiency of our APD---in Fig.~\ref{fig:measure}. Here we plot the absolute difference in the guessing probability computed with the ideal versus the mixed state, $\Delta\% = \vert P_{\mathrm{guess}} - P_{\mathrm{guess}}'\vert$. The contamination states introduce an extremely small error---on the order of $\sim 0.2 \%$---to our measurement statistics. This can easily be understood by computing the fidelity $\mathcal{F}$ of the mixed state we observe in the lab with our desired state, which is also shown in Fig.~\ref{fig:measure}.
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{fid.png}
\caption{The difference in guessing probability $\Delta\%$ (blue) and the fidelity $\mathcal{F}$ (red) of the desired state with the mixed state, as a function of dimension. The guessing probability is only minutely corrupted by the presence of the contamination states in the single-photon limit for a coherent state with $\vert \alpha \vert^{2}=0.01$; the error increases with dimension but begins to taper off. A similar behaviour is seen in the fidelity, beginning high but then decreasing with dimension.}
\label{fig:measure}
\end{figure}
\end{document} |
\section{Introduction}
Group-level duality of $G(N)_K$ Chern-Simons theory \cite{r1,r2,r3,r4,r5} has provided a number of insights into the structure of topological theories. Properties of pairs of theories are tightly constrained by duality maps $G(N)_{K}\longleftrightarrow G(K)_N$, where $N$ is the dimension of the defining representation of $SU(N)$ and $K$ the dimension of the associated affine Lie algebra, with analogous definitions for $SO(N)_K,Sp(N)_{K}$, and the exceptional groups.
There are dualities of primary fields, conformal dimensions, braid matrices, skein relations, and knot and link invariants \cite{r3,r4,r5}. Specifically, every knot in $S^3$ is either a torus knot, a satellite knot, or a hyperbolic knot \cite{r6}. A torus knot is a knot that can be embedded without crossings on the surface of a torus in $S^3$. A satellite knot is a knot that can be embedded in the regular neighborhood of another knot in $S^3$ \cite{r7,r8}. Every non-split alternating link that is not a torus link is hyperbolic. An example of a hyperbolic knot is the figure-eight knot; examples of hyperbolic links are the Whitehead link and the Borromean link.
Chern-Simons knot theory was pioneered by Witten \cite{r10,r11}. The level-rank dualities of knot and link invariants are presented in section \ref{sec:2}, with explicit examples for $SU(2)_K \longleftrightarrow SU(K)_2$ given in subsection \ref{sub23}. Section \ref{sec:3} considers hyperbolic knots and links. A conjectured criterion for distinguishing torus knots and links from hyperbolic knots and links is based on systematic features of the tables of Kaul \cite{r13}. This criterion is satisfied by the Whitehead link and the Borromean link. In section \ref{sec:4}, the level-rank duality of minimal models is discussed.
In appendix \ref{sec:A} we give evidence for symmetries of $U(N)$ hyperbolic knot and link invariants.
\section{$G(N)_{K}\longleftrightarrow G(K)_N$}\label{sec:2}
The conformal dimension of a $\lambda$-primary of $G(N)_K$ \cite{r3},
\begin{equation}
h_\lambda =\frac{1}{2}Q_{\lambda}(N) / (K+g)
\end{equation}
is related to
\begin{equation}
\tilde{h}_{\tilde{\lambda}} =\frac{1}{2}Q_{\tilde{\lambda}}(K)/ (N+\tilde{g})
\end{equation}
where $g$ is the dual Coxeter number for $G(N)$, and $\tilde{g}$ is the same for $G(K)$, with $Q_{\lambda}(N)$ the quadratic Casimir operator for representation $\lambda$, and similarly $Q_{\tilde{\lambda}}(K)$ for representation $\tilde{\lambda}$. The Young tableau for representation $\lambda$ is related to $\tilde{\lambda}$, which is the transpose of the tableau $\lambda$.
The conformal dimensions of $SU(N)_K$ satisfy the duality relation \cite{r4,r5}
\begin{equation}
h(a)+\tilde{h}(\tilde{a})=\frac{1}{2}r(a)\left[1-\frac{r(a)}{NK}\right]
\end{equation}
where $r(a)$ is the number of boxes in the Young tableau for representation $a$. Also relevant is the root of unity
\begin{equation}
q=e^{\frac{2 \pi i}{N+K}}
\end{equation}
for $SU(N)_K$.
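As an illustration, the sum rule for conformal dimensions can be verified numerically in the family $SU(2)_K \longleftrightarrow SU(K)_2$, where the $r$-box symmetric row (spin $j=r/2$) transposes to the antisymmetric representation $[1^r]$ of $SU(K)$. The Casimir expressions below are the standard ones, and the check is our own sketch.

```python
from fractions import Fraction as F

def h_su2(r, K):
    """Conformal dimension of the spin j = r/2 primary of SU(2)_K: j(j+1)/(K+2)."""
    return F(r * (r + 2), 4 * (K + 2))

def h_antisym(a, K):
    """Conformal dimension of the antisymmetric primary [1^a] of SU(K)_2,
    using the standard Casimir C_2([1^a]) = a(K - a)(K + 1)/(2K)."""
    return F(a * (K - a) * (K + 1), 2 * K * (K + 2))

# The sum depends only on the box number r = 2j, as the duality relation predicts
# (here N = 2, so r/(NK) = r/(2K)):
for K in range(2, 9):
    for r in range(K + 1):
        assert h_su2(r, K) + h_antisym(r, K) == F(r, 2) * (1 - F(r, 2 * K))
```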
The modular transformation and fusion matrices of $SU(N)_K$ and $SU(K)_N$ satisfy \cite{r4}
\begin{equation}\label{eq:25}
S_{ab}=\sqrt{\frac{K}{N}}\exp\left[\frac{-2\pi i r(a)r(b)}{NK}\right]\tilde{S}^*_{\tilde{a}\tilde{b}}
\end{equation}
and
\begin{equation}
N_{ab}^c=\tilde{N}_{\tilde{a}\tilde{b}}^{\sigma^\Delta(\tilde{c})}
\end{equation}
for
\begin{equation}\label{eq:27}
\Delta=\frac{r(a)+r(b)-r(c)}{N}
\end{equation}
In general the number of boxes of $c$ may be less than the sum of those of $a$ and $b$, because $c$ has been ``reduced''. In that case $c$ is dual to the representation $\sigma^\Delta(\tilde{c})$, which is ``cominimally equivalent'' to $\tilde{c}$. For example, for $SU(2)_K$ the fusion matrix is
\begin{equation}
N_{ab}^c, \quad \text{where } c=a+b\mod K
\end{equation}
If $(a+b)<K$, then $c=(a+b)$. But if $(a+b)\geq K$, then $c=(a+b)-K\leq (a+b)$.
In general, to construct the Young tableau for $\sigma^\Delta(\tilde{c})$, there is a specific procedure.
\begin{enumerate}
\item First transpose the reduced diagram $c$. The row lengths (not necessarily reduced) transform to $\tilde{l}_i(\tilde{c})$.
\item Reduce this diagram.
\item Add a row of length $N$ boxes to the top of this diagram.
\item Reduce this diagram.
\item Repeat these steps $(\Delta-1)$ times.
\end{enumerate}
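The transposition and reduction steps can be made concrete for diagrams given as weakly decreasing lists of row lengths. The helper names below are ours, and since the repetition convention in step 5 is stated only briefly, we sketch a single application of the shift.

```python
def transpose(diagram):
    """Step 1: transpose a Young diagram given as a list of row lengths."""
    if not diagram:
        return []
    return [sum(1 for row in diagram if row > i) for i in range(diagram[0])]

def reduce_diagram(diagram, n):
    """Steps 2/4: delete full columns of n boxes (trivial for SU(n))."""
    d = diagram + [0] * (n - len(diagram))
    return [r - d[n - 1] for r in d[:n] if r > d[n - 1]]

def sigma_shift(diagram, N, K):
    """Steps 3-4: prepend a row of N boxes on the SU(K) side, then reduce."""
    return reduce_diagram([N] + diagram, K)

assert transpose([3, 1]) == [2, 1, 1]
assert reduce_diagram([3, 1], 2) == [2]   # a full column of 2 boxes is trivial in SU(2)
assert sigma_shift([2], 3, 2) == [1]
```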
The final reduced diagram will have
\begin{equation}
r(c)+N\Delta-K\tilde{l}_{K-\Delta}
\end{equation}
boxes. That is
\begin{equation}
(a,b,c) \longrightarrow (\tilde{a},\tilde{b},\sigma^\Delta(\tilde{c}))
\end{equation}
for the transform of the fusion matrix.
\subsection{Knot dualities}\label{sub21}
With these ingredients one can compute $S_{0a}/S_{00}$ and its level-rank dual to show that \cite{r3}
\begin{equation}
\langle\text{unknot};a\rangle_{G(N)_K}=\langle\text{unknot};\tilde{a}\rangle_{G(K)_N}
\end{equation}
More generally, for knots that can be untied by skein relations (not effective in all cases) \cite{r3},
\begin{equation}\label{eq:212}
\langle \mathcal{K},\lambda\rangle_{G(N)_{K}}=\langle \tilde{\mathcal{K}},\tilde{\lambda}\rangle_{G(K)_N}
\end{equation}
in vertical framing, where $\tilde{\mathcal{K}}$ is the mirror image of knot $\mathcal{K}$.
\subsection{Link dualities}\label{sub22}
The duality \cite{r5} for two strands is
\begin{equation}\label{eq:213}
\mathcal{L}(a,b;\{n_i\})_{G(N)_K} = e^{-i\pi \Phi(a,b)\sum_in_i}\tilde{\mathcal{L}}(\tilde{a},\tilde{b};\{-n_i\})_{G(K)_N}
\end{equation}
for vertical framing, where $\sum_i n_i$ is even, each $n_i$ being the sum of crossings of the neighboring braids, and
\begin{equation}\label{eq:214}
\Phi(a,b)=\frac{r(a)r(b)}{NK}
\end{equation}
for $SU(N)_K$. More generally, in vertical framing
\begin{equation}\label{eq:215}
\mathcal{L}(\{a_i\})_{G(N)_K}=e^{i\pi \sum_iw(i,i)r(a_i)}e^{-i\pi\sum_{i,j}w(i,j)\Phi(a_i,a_j)}\tilde{\mathcal{L}}(\{\tilde{a}_i\})_{G(K)_N}
\end{equation}
The mirror image of link $\mathcal{L}$ is $\tilde{\mathcal{L}}$, and $w(i,j)$ is the sum of crossings between components $i$ and $j$.
It is to be emphasized that the derivations of \eqref{eq:212} and \eqref{eq:215} require the reduction to planar graphs and planar pentagons. Is this a coincidence, since \emph{hyperbolic} knots and links involve non-planar polyhedra? Or is there a deeper connection \cite{r7,r8}?
\subsection{$SU(2)_K\longleftrightarrow SU(K)_2$}\label{sub23}
In this subsection we present some examples of link invariants to illustrate the dualities discussed in subsections \ref{sub21} and \ref{sub22}.
\paragraph{Hopf link: $2^2_1$ in Rolfsen notation}
The Hopf link for two strands belonging to representation $j_1$ and $j_2$ in $SU(2)_K$ is \cite{r11}
\begin{equation}
|j_1,j_2\rangle=S_{j_1,j_2}/S_{00},
\end{equation}
where $j_1=\frac{a_1}{2}$ and $j_2=\frac{a_2}{2}$, with $a_1$ and $a_2$ the numbers of boxes in the respective representations. From equations \eqref{eq:25} and \eqref{eq:214},
\begin{equation}
\begin{split}
|j_1,j_2\rangle &= \exp \left[-\frac{2\pi ir(a_1)r(a_2)}{2K}\right]\frac{\tilde{S}^*_{\tilde{a}_1\tilde{a}_2}}{\tilde{S}^*_{00}}\\
&=\exp \left[-2\pi i\Phi(a_1,a_2)\right]\frac{\tilde{S}^*_{\tilde{a}_1\tilde{a}_2}}{\tilde{S}^*_{00}}\\
&=\exp \left[-2\pi i\Phi(a_1,a_2)\right]|\tilde{j}_1,\tilde{j}_2\rangle
\end{split}
\end{equation}
in agreement with \eqref{eq:213}, with $\sum_in_i=2$.
\paragraph{Sum of two Hopf links}
From \cite{r11,r12},
\begin{equation}
\begin{split}
|2^2_1+2^2_1\rangle &= |j_1,j_2,j_3\rangle\\
&=\frac{S_{j_1j_2}S_{j_3j_2}}{S_{0j_2}S_{00}}
\end{split}
\end{equation}
Using equations \eqref{eq:25} and \eqref{eq:214}, we find
\begin{equation}
|j_1j_2j_3\rangle =\exp\left(-\frac{2\pi i}{2K}r(a_2)\left[r(a_1)+r(a_3)\right]\right)|\tilde{j}_1\tilde{j}_2\tilde{j}_3\rangle
\end{equation}
in agreement with \eqref{eq:215}, with $w(1,2)=w(2,3)=2$.
\paragraph{Link $6^3_3$}
For fixed framing \cite{r12},
\begin{equation}
|j_1j_2j_3\rangle =\sum_m\exp\left[2\pi i\left(h_m-h_{j_1}-h_{j_3}\right)\right]\frac{N_{a_1a_3}^mS_{mj_2}}{S_{00}^3}
\end{equation}
From \eqref{eq:25}-\eqref{eq:27} and \eqref{eq:214}, the result is
\begin{equation}
|j_1j_2j_3\rangle =\exp\left[-i\pi\left(\Phi(a_1,a_2)+\Phi(a_2,a_3)\right)\right]\exp\left[-2\pi i\left(\tilde{h}_{a_1+a_3}-\tilde{h}_{a_1}-\tilde{h}_{a_3}\right)\right]|\tilde{j}_1\tilde{j}_2\tilde{j}_3\rangle.
\end{equation}
for the special case where $a_1+a_3<K$, and more generally
\begin{equation}
|j_1j_2j_3\rangle =\exp\left[-i\pi\left(\Phi(a_1,a_2)+\Phi(a_2,a_3)\right)\right]\exp\left[-2\pi i\left(\tilde{h}_{\sigma^\Delta(\tilde{c})}-\tilde{h}_{a_1}-\tilde{h}_{a_3}\right)\right]|\tilde{j}_1\tilde{j}_2\tilde{j}_3\rangle.
\end{equation}
This satisfies \eqref{eq:213}, which is stated in vertical framing, up to framing factors. Vertical framing can be restored using
\begin{equation}
T|m\rangle = e^{2\pi ih_m}|m\rangle
\end{equation}
where $T$ is a modular transformation operator.
These examples of torus links illustrate that they transform under level-rank duality in the expected way.
\section{Hyperbolic knots and links: $SU(2)_K$}\label{sec:3}
The purpose of this section is to present evidence for a criterion which distinguishes hyperbolic knots and links from torus knots and links. We begin with the hyperbolic Whitehead link with two strands and the Borromean link with three strands.
\paragraph{Whitehead link: $5_1^2$}
The result of the construction of Kaul \cite{r12,r13} is
\begin{equation}\label{eq:41}
\begin{split}
C_{5_1^2}(j_1j_2)&=\left[2j_1+1\right]^2\left[2j_2+1\right]\sum_{m,n,p}\lambda^{-1}_{p_1,-}(j_1,j_2)\lambda_{p_2,+}(j_1,j_2) \lambda^{-1}_{n_1,+}(j_1,j_2) \lambda^{-1}_{m_1,-}(j_1,j_2) \lambda^{-1}_{m_2,+}(j_1,j_2) \\
&\times a_{(0,p)}\begin{pmatrix}
j_1 & j_1\\
j_2 & j_2\\
j_1 & j_1
\end{pmatrix}
a_{(n,p)}\begin{pmatrix}
j_1 & j_2\\
j_1 & j_1\\
j_2 & j_1
\end{pmatrix}
a_{(n,m)}\begin{pmatrix}
j_1 & j_2\\
j_1 & j_1\\
j_2 & j_1
\end{pmatrix}
a_{(0,m)}\begin{pmatrix}
j_1 & j_1\\
j_2 & j_2\\
j_1 & j_1
\end{pmatrix}.
\end{split}
\end{equation}
where the $a_{(n,p)}$ are duality transformations acting on 6-point conformal blocks on $S^2$, and the $\lambda$'s are phases which these blocks pick up under the action of braid generators. Equivalently, the $a_{(n,p)}$ are related to the quantum $6j$ symbols. Explicit calculations verify that there is no $SU(2)_K \longleftrightarrow SU(K)_2$ duality for \eqref{eq:41}. The bracket $[2j+1]$ is the quantum dimension
\begin{equation}
[x]=\frac{q^{\frac{x}{2}}-q^{-\frac{x}{2}}}{q^{\frac{1}{2}}-q^{-\frac{1}{2}}}
\end{equation}
where $q=\exp[2\pi i/(K+N)]$ for $SU(N)_K$. See also \cite{r20}.
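At this root of unity the quantum integer reduces to a ratio of sines, $[x] = \sin\!\left(\pi x/(K+N)\right)/\sin\!\left(\pi/(K+N)\right)$, which is convenient for numerical checks; a small sketch of ours:

```python
from math import sin, pi

def qint(x, N, K):
    """Quantum integer [x] at q = exp(2*pi*i/(K + N))."""
    return sin(pi * x / (K + N)) / sin(pi / (K + N))

# [1] = 1 always, and [2] = 2*cos(pi/(K+N)); e.g. SU(2)_2 gives [2] = sqrt(2):
assert abs(qint(1, 2, 2) - 1) < 1e-12
assert abs(qint(2, 2, 2) - 2**0.5) < 1e-12
# In the classical limit K + N -> infinity, [x] -> x:
assert abs(qint(3, 2, 10**6) - 3) < 1e-6
```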
\paragraph{Borromean link: $6_2^3$}
The construction of Kaul \cite{r12,r13} gives
\begin{equation}\label{eq:43}
\begin{split}
C_{6_2^3}(j_1j_2j_3)&=\left[2j_1+1\right]\left[2j_2+1\right]\left[2j_3+1\right]\sum_{l,m,n,p,q}\lambda_{l_1,-}(j_1,j_2)\lambda_{l_2,-}(j_1,j_3) \lambda^{-1}_{m_1,-}(j_2,j_3)\lambda^{-1}_{n_2,+}(j_1,j_2)\\
&\times\lambda_{p_0,+}(j_1,j_2)\lambda^{-1}_{p_1,-}(j_1,j_3)\lambda^{-1}_{q_1,}(j_1,j_2)\lambda_{q_2,-}(j_2,j_3) \\
&\times a_{(0,l)}\begin{pmatrix}
j_2 & j_2\\
j_1 & j_1\\
j_3 & j_3
\end{pmatrix}
a_{(m,l)}\begin{pmatrix}
j_2 & j_1\\
j_2 & j_3\\
j_1 & j_3
\end{pmatrix}
a_{(m,n)}\begin{pmatrix}
j_2 & j_1\\
j_3 & j_2\\
j_1 & j_3
\end{pmatrix}\\
&\times a_{(p,n)}\begin{pmatrix}
j_2 & j_1\\
j_3 & j_1\\
j_2 & j_3
\end{pmatrix}
a_{(p,q)}\begin{pmatrix}
j_1 & j_2\\
j_1 & j_3\\
j_2 & j_3
\end{pmatrix}
a_{(0,q)}\begin{pmatrix}
j_1 & j_1\\
j_2 & j_2\\
j_3 & j_3
\end{pmatrix}.
\end{split}
\end{equation}
Again there is no level-rank duality for \eqref{eq:43}.
\paragraph{Knot invariants}
Table \RNum{2}A of Kaul \cite{r13} provides a list of knot invariants where all knots carry spin $j$ of $SU(2)_K$. The torus knots listed are $3_1$, $5_1$, and $7_1$, which do not have factors of $a_{ml}$. All other knots are hyperbolic and have one or more factors of $a_{ml}$.
\paragraph{Link invariants}
Table \RNum{2}B of Kaul \cite{r13} lists the invariants of two component links up to seven crossings for spins $j_1$ and $j_2$ on the component strands. The torus links $0_1$, $2_1$, $4_1$, and $6_1$ do not have factors of $a_{(m)(l)}$.
The consistent pattern is that torus knots and links are distinguished from hyperbolic knots and links by the absence or presence, respectively, of factors of $a_{ml}$ or $a_{(m)(l)}$. It would be interesting to prove this observation. See also \cite{r14,r15,r16,r17}. Brunnian links are also a class of hyperbolic links \cite{r35}.
\paragraph{Symmetry}
The torus knot and link invariants exhibit a level-rank duality, as discussed in section \ref{sec:2}, but the hyperbolic invariants do not have this duality. Since the hyperbolic invariants are related to $SL(2,\mathbb{C})$ Chern-Simons theory, one may ask if there is an analogous symmetry for $SL(2,\mathbb{C})$ (see Appendix \ref{sec:A}). For other applications of $SU(N)_K$ Chern-Simons theory see \cite{r17,r18,r19,r24,r25,r26}.
\section{Minimal models}\label{sec:4}
Minimal models can be described by the coset \cite{r31,r32,r36}
\begin{equation}\label{eq:441}
SU(2)_K \times SU(2)_1 / SU(2)_{K+1}
\end{equation}
Since the central charge of $SU(N)_K$ is
\begin{equation}
c^{N,K}=\frac{(N^2-1)K}{N+K},
\end{equation}
it is easy to show that
\begin{equation}
c_{min}=1-\frac{6}{(K+2)(K+3)}
\end{equation}
For low values of $K$, one has
\begin{equation}
\begin{split}
&K=1;\quad c=\frac{1}{2} \quad \text{Ising model,}\\
&K=2;\quad c=\frac{7}{10} \quad \text{tricritical Ising model,}\\
&K=3;\quad c=\frac{4}{5} \quad \text{three-state Potts model.}
\end{split}
\end{equation}
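These values follow directly from the coset central charge; a short exact-arithmetic check (our own sketch):

```python
from fractions import Fraction as F

def c_sun(N, K):
    """Central charge of SU(N)_K: (N^2 - 1) K / (N + K)."""
    return F((N * N - 1) * K, N + K)

def c_min(K):
    """Central charge of the coset SU(2)_K x SU(2)_1 / SU(2)_{K+1}."""
    return c_sun(2, K) + c_sun(2, 1) - c_sun(2, K + 1)

assert c_min(1) == F(1, 2)     # Ising
assert c_min(2) == F(7, 10)    # tricritical Ising
assert c_min(3) == F(4, 5)     # three-state Potts
assert all(c_min(K) == 1 - F(6, (K + 2) * (K + 3)) for K in range(1, 50))
```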
The level-rank dual of \eqref{eq:441} is
\begin{equation}\label{eq:445}
SU(K+1)_2 / \left[SU(K)_2 \times U(1)_2\right]
\end{equation}
with central charge
\begin{equation}
c_{dual}=c_{min}
\end{equation}
In this sense, the minimal models are level-rank self-dual.
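The matching of central charges can likewise be checked exactly, taking $c(U(1)_2)=1$ and $c(SU(1)_2)=0$ (normalization assumptions on our part):

```python
from fractions import Fraction as F

def c_sun(N, K):
    """Central charge of SU(N)_K: (N^2 - 1) K / (N + K)."""
    return F((N * N - 1) * K, N + K)

def c_dual(K):
    """Central charge of SU(K+1)_2 / [SU(K)_2 x U(1)_2], with c(U(1)_2) = 1."""
    return c_sun(K + 1, 2) - c_sun(K, 2) - 1

# Agrees with the minimal-model central charge for every level:
assert all(c_dual(K) == 1 - F(6, (K + 2) * (K + 3)) for K in range(1, 50))
```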
Recall from \eqref{eq:215} that, for a link $L$ with representations $r_{1},r_2,\cdots, r_n$ on the component knots, and with $a(r_{i})$ denoting the number of boxes of representation $r_i$, one has
\begin{equation}\label{eq:447}
\mathcal{L}(\{r_i\})_{SU(2)_K}=e^{i\pi \sum_iw(i,i)a(r_i)}e^{-i\pi\sum_{i,j}w(i,j)\Phi(r_i,r_j)}\tilde{\mathcal{L}}(\{\tilde{r}_i\})_{SU(K)_2}
\end{equation}
The mirror image of Theorem 1 equation (25) of reference \cite{r30a} for minimal models is
\begin{equation}
V_{(\bar{r}_1,\bar{s}_1)\cdots (\bar{r}_n,\bar{s}_n)}[L]=V^{(K)}_{\bar{r}_1\cdots \bar{r}_n}[\bar{L}]V^{(K+1)}_{\bar{s}_1\cdots \bar{s}_n}[L]V^{(1)}_{\bar{\epsilon}_1\cdots \bar{\epsilon}_n}[\bar{L}]
\end{equation}
where $V^{(K)}_{\{\bar{r}_i\}}[\bar{L}]$, $V^{(K+1)}_{\{\bar{s}_i\}}[L]$, and $V^{(1)}_{\{\bar{\epsilon}_i\}}[\bar{L}]$ are the link invariants for $SU(K)_2$, $SU(K+1)_2$, and $U(1)_2$ respectively, with $\bar{L}$ the mirror image of the link $L$.
Analogous to \eqref{eq:447} one has
\begin{equation}\label{eq:449}
\tilde{\mathcal{L}}(\{s_i\})_{SU(2)_{K+1}}=e^{-i\pi \sum_iw(i,i)a(s_i)}e^{i\pi\sum_{i,j}w(i,j)\Phi(s_i,s_j)}\mathcal{L}(\{\tilde{s}_i\})_{SU(K+1)_2}.
\end{equation}
Further, $V^{(1)}_{\epsilon_1\cdots \epsilon_n}[L]$ is the torus link invariant for $SU(2)_1$, or equivalently $U(1)_2$, where $\epsilon_i=1$ (the identity representation) if $(r_i-s_i)$ is even, and $\epsilon_i=2$ (the representation with one box) if $(r_i-s_i)$ is odd.
Putting this all together,
\begin{equation}\label{eq:4410}
V^{(K)}_{r_1\cdots r_n}[L]V^{(K+1)}_{s_1\cdots s_n}[\bar{L}]V^{(1)}_{\epsilon_1\cdots \epsilon_n}[L]=(phases)\tilde{V}^{(K)}_{\tilde{r}_1\cdots \tilde{r}_n}[\bar{L}]\tilde{V}^{(K+1)}_{\tilde{s}_1\cdots \tilde{s}_n}[L]\tilde{V}^{(1)}_{\epsilon_1\cdots \epsilon_n}[\bar{L}]
\end{equation}
where $\Phi$ is as in \eqref{eq:214}, and $w(i,j)$ is the sum of the crossings between components $i$ and $j$. Therefore $(phases)$ is obtained from the product of phases from \eqref{eq:447},\eqref{eq:449}, and for $SU(2)_1\longrightarrow U(1)_2$. Thus \eqref{eq:4410} relates the link invariants of \eqref{eq:441} to the mirror image link invariants of \eqref{eq:445}; explicitly expressing the self-duality of the minimal models.
An interesting question is how these invariants could emerge from lattice theories of minimal models. This subject is not as well developed as the approach obtained directly from the rational conformal field theory described by the coset models. However, special cases may be relevant. For example, non-abelian anyons on a spin-$\frac{1}{2}$ honeycomb lattice have been studied by Kitaev \cite{r34}\footnote{We thank Djordje Radicevic for informing us of this reference}. The fusion and braid rules for the non-abelian anyons are described in Secs. 6 and 8 of that paper, where the boundary CFT is the Ising model. Once one has the braid group, one can construct torus knot and link invariants following Kaul and collaborators \cite{r14,r15,r16,r30a}. It is likely that to construct hyperbolic knot and link invariants in lattice theories, one will require a lattice version of the $6j$ symbols, as suggested by Sec.~\ref{sec:3} above.
\section{Concluding remarks}
In this paper we discussed the level-rank duality for $G(N)_K\longleftrightarrow G(K)_N$ Chern-Simons theory, with examples which distinguish torus knot and link invariants from hyperbolic knot and link invariants. An open question is whether there is a similar duality for $SL(N,\mathbb{C})$ Chern-Simons theory.
\acknowledgments
We thank Jonathan Harper and Isaac Cohen-Abbo for their help in preparing the manuscript, and Djordje Radicevic for insightful comments.
\section{Introduction}
\vspace{.1in}
\paragraph{} This paper is a companion to \cite{BS}, where a categorical approach to the rings of diagonal invariants
$$\boldsymbol{\Lambda} \hspace{-.06in} \boldsymbol{\Lambda}=\mathbb{C}[x_1^{\pm 1}, \ldots, y_1^{\pm 1}, \ldots]^{\mathfrak{S}_{\infty}}, \qquad
\boldsymbol{\Lambda} \hspace{-.06in} \boldsymbol{\Lambda}^+=\mathbb{C}[x_1^{\pm 1}, \ldots, y_1, \ldots ]^{\mathfrak{S}_{\infty}}$$
was described in terms of the so-called Hall algebra $\mathbf{U}^+_{{X}}$ of the category of coherent sheaves on an elliptic curve ${X}$ (defined over a finite field $\mathbb{F}_q$).
This Hall algebra turns out to be a two-parameter deformation of $\boldsymbol{\Lambda} \hspace{-.06in} \boldsymbol{\Lambda}^+$, the two deformation parameters being the Frobenius eigenvalues $\sigma, \bar{\sigma}$ of the particular elliptic curve. In fact, the structure constants for $\mathbf{U}^+_{{X}}$ are Laurent polynomials in $\sigma$ and $\bar{\sigma}$, and $\mathbf{U}^+_{{X}}$ is the specialization of some ``universal'' Hall algebra $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$ defined over the ring $\mathbf{R}=\mathbb{C}[\sigma^{\pm 1/2}, \bar{\sigma}^{\pm 1/2}]$. This provides a generalization, to the rings of diagonal invariants, of Steinitz and Hall's well-known realization of the ring of symmetric functions in terms of the classical Hall algebra (see \cite{Mac}, Chap.II).
\vspace{.1in}
The aim of the present work is to turn the above ``categorical'' construction into a purely geometric one. For this, rather than the category $Coh({X})$ of coherent sheaves on ${X}$, we consider the moduli spaces (stacks) $\underline{Coh}^{\alpha}$ of coherent sheaves on ${X}$, where $\alpha$ runs among the set of all possible pairs $(r,d)$ of ranks and degrees. We construct (in a manner reminiscent of \cite{Luszchar}, \cite{L1}, and also of \cite{S1}) a certain category $\mathcal{Q}^{\alpha}$ of constructible complexes on $\underline{Coh}^\alpha$ along with two convolution functors
\begin{align*}
\text{Ind}^{\alpha,\beta}&: \mathcal{Q}^{\alpha} \boxtimes \mathcal{Q}^{\beta} \to \mathcal{Q}^{\alpha+\beta},\\
\text{Res}^{\alpha,\beta}&: \mathcal{Q}^{\alpha+\beta} \to \mathcal{Q}^{\alpha} \; \hat{\boxtimes}\; \mathcal{Q}^{\beta}.
\end{align*}
This allows us to endow the Grothendieck group $\mathfrak{U}^+_{{X}}={\bigoplus}_{\alpha} K_0(\mathcal{Q}^{\alpha})$ with an algebra and a coalgebra structure. Moreover, working over $k=\overline{\mathbb{F}_q}$ and using Grothendieck's faisceaux-function correspondence yields a trace morphism
$$Tr_1: \widehat{\mathfrak{U}}^+_{{X}} \stackrel{\sim}{\to} \widehat{\mathbf{U}}^+_{{X}}$$
which is compatible with the bialgebra structures on both sides. Here $\widehat{\mathfrak{U}}^+_{{X}}$, $\widehat{\mathbf{U}}^+_{{X}}$ are certain explicit completions of $\mathfrak{U}^+_{{X}}$ and ${\mathbf{U}}^+_{{X}}$.
\vspace{.1in}
The collection $\mathcal{P}$ of all simple objects (simple perverse sheaves) in the categories $\mathcal{Q}^{\alpha}$ can be completely determined and yields, via the above trace map a ``canonical'' basis of $\widehat{\mathbf{U}}^+_{{X}}$. In fact, we show that this canonical basis comes from a unique ``universal'' basis $\{{\mathbf{b}}_{\mathbb{P}}\}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ and that this latter basis admits a purely algebraic characterization in terms of an involution and a lattice.
This may be viewed as a generalization, in the setting of diagonal invariants, of Lusztig's realization in \cite{LusGreen} of the Schur polynomials as the trace of some simple perverse sheaves on the nilpotent variety of $\mathfrak{gl}(n)$. Motivated by this analogy, we define the \textit{elliptic Kostka polynomials} as the coefficients $\daleth_{\mathbb{P},\mathbb{Q}}(\sigma,\bar{\sigma})$ of the transition matrix between the canonical basis $\{{\mathbf{b}}_{\mathbb{P}}\}$ and a natural ``PBW''-type basis $\{\rho_{\mathbb{Q}}\}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ which plays the role, in our context, of the basis of Hall-Littlewood symmetric functions. By construction, these polynomials satisfy some positivity property, but should not be confused with the $(q,t)$-Kostka polynomials of \cite{Mac}, Chap.VI. The relation between these two families of polynomials is explained in \cite{SV}. We hope and expect that the polynomials $\daleth_{\mathbb{P},\mathbb{Q}}(\sigma,\bar{\sigma})$ will play an interesting role in combinatorics and representation theory.
Finally, we note that it is crucial for us to work with the \textit{whole} moduli stack $\underline{Coh}^{\alpha}$ and not just its (semi)stable locus; indeed it is precisely the singularities of the unstable locus which the polynomials $\daleth_{\mathbb{P},\mathbb{Q}}(\sigma,\bar{\sigma})$ describe.
\vspace{.1in}
To finish this introduction, we state several more possible interpretations of the above constructions.
\vspace{.05in}
\noindent
First of all, as shown in \cite{SV}, the algebra $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$ projects onto the (positive) \textit{spherical double affine Hecke algebra} $\mathbf{S}\ddot{\mathbf{H}}^+_n$ of type $\mathfrak{gl}(n)$ for any $n$. In analogy with the case of the spherical affine Hecke algebras, $\mathbf{S}\ddot{\mathbf{H}}^+_n$ should be isomorphic to a certain convolution algebra of perverse sheaves on a Schubert variety $X_n$ of a ``double affine Grassmannian'' $\ddot{Gr}_n$. Though such an object doesn't exist at the moment, it appears that the stacks $\underline{Coh}^{\alpha}$ provide a model for $X_n$ in the stable limit $n \to \infty$ (to be more precise, the categories $\mathcal{Q}^{\alpha}$ provide a model for the categories of equivariant perverse sheaves on the limit of $X_n$ as $n \to \infty$). There are however two noteworthy differences between our present situation and the classical picture of the affine Grassmannian: the presence of simple perverse sheaves associated to nontrivial local systems, and the fact that the Frobenius eigenvalues of the stalks of these perverse sheaves are not all equal to $q^{i/2}$ for some integer $i$ but rather belong to $\sigma^{\mathbb{Z}}\bar{\sigma}^{\mathbb{Z}}$. This accounts for the fact that $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$ depends on two parameters rather than one.
\vspace{.05in}
\noindent
The second interpretation is based on an observation of Looijenga (unpublished, see \cite{GV}).
Let $\mathcal{L}G$ be the \textit{holomorphic} loop group of $GL(n)$ and let ${\mathcal{L}G}\rtimes \mathbb{C}^*$ denote its one-dimensional universal central extension. Fix $q \in \mathbb{C}^*$ and denote by ${X}=\mathbb{C}^*/q^{\mathbb{Z}}$ the associated elliptic curve. Then there is a one-to-one correspondence between conjugacy classes in $\mathcal{L}G \times \{q\}$ and isomorphism classes of holomorphic vector bundles on ${X}$ of rank $n$. Thus, the holomorphic analogs of the categories $\mathcal{Q}^{\alpha}$ may (heuristically!) be thought of as a category of $\mathcal{L}G$-equivariant perverse sheaves on $\mathcal{L}G \times \{q\}$.
For a finite-dimensional reductive group such categories of perverse sheaves were considered and studied in detail by Lusztig (the \textit{character sheaves}, see \cite{Luszchar}) and have been shown to be of fundamental importance in representation theory. The $\mathcal{Q}^{\alpha}$ thus provide a possible model for an extension of Lusztig's character sheaf theory to holomorphic loop groups of type $A$. We thank Victor Ginzburg for kindly explaining to us this point of view.
\vspace{.05in}
\noindent
The final interpretation is in terms of the geometric Langlands program. Recall that this program for the group $GL(n)$ aims at setting up a correspondence between rank $n$ local systems on a smooth projective curve ${X}$ defined over a finite field $\mathbb{F}_q$ and a certain collection of perverse sheaves on the moduli stack $\underline{Bun}_n({X})$ of rank $n$ vector bundles on $X$. In \cite{Lau}, Laumon constructed a category of perverse sheaves on the stacks $\underline{Bun}_n({X})$ of rank $n$ vector bundles (the so-called \textit{Eisenstein sheaves}) which should be relevant to the above Langlands correspondence (in the case of the \textit{trivial} local system on ${X}$, or more precisely the formal neighborhood of the trivial local system on ${X}$). Our categories $\mathcal{Q}^{\alpha}$ are precisely the categories generated by the simple factors of the Eisenstein sheaves when ${X}$ is an elliptic curve. In particular, we obtain a complete description of all the simple factors of the automorphic sheaves in that case, closed formulas for their induction/restriction products, and an algorithm to compute the Poincar\'e polynomial of their cohomology stalks. We refer the reader to \cite{SV2} for more in this direction.
\vspace{.1in}
\paragraph{\textbf{Plan of the paper.}} In Section 1 we recall the main notions and results of \cite{BS} concerning the Hall algebra $\mathbf{U}^+_{{X}}$ and its generic version $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$. In Section 2 we give a first--purely algebraic--definition of a canonical basis $\mathbf{B}=\{{\mathbf{b}}_{\mathbf{p}}\}$ of a completion $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ in terms of an involution and a lattice. Section 3 introduces the stacks of coherent sheaves $\underline{Coh}^{\alpha}$ and the convolution functors $\text{Ind}$ and $\text{Res}$. This part closely follows \cite{S1}, which was in turn inspired by \cite{L1}. We also define the category $\mathcal{A}^{\alpha}$ of semisimple complexes and provide a complete description of the simple perverse sheaves appearing there. In Section 4 we study these simple perverse sheaves, and in particular we prove that they are all pointwise pure. In Section 5 we use the trace map to relate the (completed) Grothendieck group $\widehat{\mathfrak{U}}^+_{{X}}={\bigoplus}_{\alpha} \widehat{K_0}(\mathcal{Q}^{\alpha})$ and the (completed) Hall algebra
$\widehat{\mathbf{U}}^+_{{X}}$. This allows us to define a second canonical basis $\{{\mathbf{b}}_{\mathbb{P}}\}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ by taking the traces of the simple perverse sheaves in $\bigsqcup_{\alpha}\mathcal{Q}^{\alpha}$. We then show the equality of the two canonical bases in Section 6, using some support and degree argument. Finally, in the last section, we give the definition of the elliptic Kostka polynomials $\daleth_{\mathbf{p},\mathbf{q}}(\sigma,\bar{\sigma})$ as well as some of their first properties (like $SL(2,\mathbb{Z})$-invariance).
\vspace{.1in}
\paragraph{\textbf{Notations.}} We consider a smooth elliptic curve ${X}$ defined over a finite field $\boldsymbol{{k}}=\mathbb{F}_q$ and we set $\overline{{X}}={X} \times_{Spec\; \boldsymbol{{k}}} Spec\; \bar{\boldsymbol{{k}}}$. We denote by $Coh({X})$ and $Coh(\overline{{X}})$ the categories of coherent sheaves on ${X}$ and $\overline{{X}}$ respectively. We fix a line bundle $\mathcal{L}$ of degree one on ${X}$ and let $x_0$ be the $\mathbb{F}_q$-rational point of ${X}$ such that $\mathcal{L}=\mathcal{O}(x_0)$. We will also use the standard notations for partitions: if $\lambda$ is a partition then $l(\lambda)$ and $|\lambda|$ are the length and size of $\lambda$ respectively. Finally, since we will never consider higher extension groups, we denote $\mathrm{Ext}^1$ simply by $\mathrm{Ext}$.
\pagebreak
\section{Reminder on coherent sheaves over an elliptic curve}
\vspace{.1in}
In this section we very briefly recall various classical results describing the categories $Coh({X})$ and $Coh(\overline{{X}})$.
\vspace{.1in}
\paragraph{\textbf{1.1.}} Here we let $Y$ stand for either ${X}$ or $\overline{{X}}$.
Define the slope of a sheaf $\mathcal{F} \in Coh(Y)$ by $\mu(\mathcal{F})=\frac{\textbf{deg}(\mathcal{F})}{\textbf{rank}(\mathcal{F})} \in \mathbb{Q} \cup \{\infty\}$. Recall that a sheaf $\mathcal{F}$ is semistable (resp. stable) if for any subsheaf $0 \neq \mathcal{G} \subsetneq \mathcal{F}$ we have
$\mu(\mathcal{G}) \leq \mu(\mathcal{F})$ (resp. $\mu(\mathcal{G}) < \mu(\mathcal{F})$). The category
$\textbf{C}_{\nu}$ of all semistable sheaves of slope $\nu$ is abelian, artinian, and stable under extensions. We will use the following facts which go back to Atiyah \cite{A}~:
\vspace{.1in}
\paragraph{\textbf{a)}} If $\nu > \mu$ then $\text{Hom}(\textbf{C}_{\nu}, \textbf{C}_{\mu})=\text{Ext}(\textbf{C}_{\mu},\textbf{C}_{\nu})=0$.
\vspace{.1in}
\paragraph{\textbf{b)}} Every sheaf $\mathcal{F}$ possesses a canonical filtration (the Harder-Narasimhan filtration)
$$0=\mathcal{F}_0 \subset \mathcal{F}_1 \subset \cdots \subset \mathcal{F}_r=\mathcal{F}$$
for which $\mathcal{F}_i/\mathcal{F}_{i-1}$ is semistable of slope, say, $\mu_i$, and $\mu_1 > \cdots > \mu_r$. By the $\text{Ext}$-vanishing property stated above, this filtration splits. In particular, every indecomposable sheaf belongs to $\textbf{C}_{\nu}$ for some $\nu$.
\vspace{.1in}
\paragraph{\textbf{c)}} For any pair $\nu < \nu'$, let $\textbf{C}_{[\nu,\nu']}$ be the full subcategory whose objects are the sheaves isomorphic to direct sums of indecomposables in $\textbf{C}_{\tau}$ for $\tau \in [\nu,\nu']$. Then $\textbf{C}_{[\nu,\nu']}$ is closed under extensions.
\vspace{.1in}
\paragraph{\textbf{d)}} For any $\nu,\mu \in \mathbb{Q} \cup \{\infty\}$ there is an exact equivalence $\epsilon_{\nu,\mu}: \textbf{C}_{\mu} \stackrel{\sim}{\to} \textbf{C}_{\nu}$.
In particular, any $\textbf{C}_{\mu}$ is equivalent to $\textbf{C}_{\infty}=\mathcal{T}or$, the category of torsion sheaves.
For later use we now give, following \cite{Ku} and \cite{LM}, an explicit construction of such an equivalence $\epsilon_{\mu_2,\mu_1}:~\textbf{C}_{\mu_1} \stackrel{\sim}{\to} \textbf{C}_{\mu_2}$ using the concept of \textit{mutations}. If $\mathcal{F}, \mathcal{G}$ are coherent sheaves we define the left (resp. right) mutation of $\mathcal{G}$ with respect to $\mathcal{F}$ via the following canonical sequences~:
\begin{equation}\label{E:leftmut}
\xymatrix{ \mathrm{Hom} (\mathcal{F}, \mathcal{G}) \otimes \mathcal{F} \ar[r] & \mathcal{G} \ar[r] & L_{\mathcal{F}} \mathcal{G} \ar[r] & 0},
\end{equation}
\begin{equation}\label{E:rightmut}
\xymatrix{ 0 \to \mathrm{Ext} (\mathcal{G}, \mathcal{F})^* \otimes \mathcal{F} \ar[r] & R_{\mathcal{F}}\mathcal{G} \ar[r] & \mathcal{G} \ar[r] & 0}.
\end{equation}
Put
\begin{align*}
\mathcal{F}^{\Vdash}&=\{\mathcal{H} \in Coh({X})\;|\; \mathrm{Hom}(\mathcal{H},\mathcal{F})=0\;\},\\
\mathcal{F}^{\vdash}&=\{\mathcal{H} \in \mathcal{F}^{\Vdash}\;|\;\ \mathrm{Hom}(\mathcal{F}, \mathcal{H}) \otimes
\mathcal{F} \to \mathcal{H} \;\mathrm{is\;a\;monomorphism\;}\}.
\end{align*}
The functor $L_{\mathcal{F}}$ induces
an equivalence of categories
$\mathcal{F}^{\vdash} \to \mathcal{F}^{\Vdash}$, with inverse given by $R_{\mathcal{F}}$ (see \cite{LM} Theorem~4.4).
\vspace{.1in}
Let $F_m$, $m \in \mathbb{N}$, be the collection of all Farey sequences; that is, we have $F_0=\{\frac{0}{1}, \frac{1}{0}\}$ and if $F_n=\{\frac{a_1}{b_1}, \ldots, \frac{a_l}{b_l}\}$ then
$$F_{n+1}=\{\frac{a_1}{b_1}, \frac{a_1+a_2}{b_1+b_2}, \ldots, \frac{a_i}{b_i}, \frac{a_i+a_{i+1}}{b_i+b_{i+1}} ,\frac{a_{i+1}}{b_{i+1}}, \ldots, \frac{a_l}{b_l}\}.$$
It is a standard fact that every positive rational number belongs to $F_n$ for some $n \gg 0$ and that
$a_{i+1}b_{i}-a_{i}b_{i+1}=1$ for any $i$ and $n$.
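For instance, the first Farey sequences after $F_0$ are
$$F_1=\Big\{\frac{0}{1}, \frac{1}{1}, \frac{1}{0}\Big\}, \qquad F_2=\Big\{\frac{0}{1}, \frac{1}{2}, \frac{1}{1}, \frac{2}{1}, \frac{1}{0}\Big\},$$
and for the consecutive entries $\frac{1}{2}, \frac{1}{1}$ of $F_2$ one indeed checks that $1 \cdot 2 - 1 \cdot 1=1$.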
Let us call a stable sheaf $\mathcal{F}$ absolutely simple if $\frac{\textbf{deg}(\mathcal{F})}{\textbf{rank}(\mathcal{F})}$ is a reduced fraction, i.e.\ if $\textbf{g.c.d}(\textbf{rank}(\mathcal{F}),\textbf{deg}(\mathcal{F}))=1$ (in particular, a torsion sheaf is absolutely simple if and only if it is of degree one).
\begin{prop}[see \cite{LM}] Let $\mu_1=\frac{a}{b}, \mu_2=\frac{c}{d}$ be two consecutive entries of $F_n$ and let $\mathcal{F} \in \textbf{C}_{\mu_1}$ be an absolutely simple sheaf. Then $R_{\mathcal{F}}$ restricts to an exact equivalence $\textbf{C}_{\mu_2} \stackrel{\sim}{\to} \textbf{C}_{\mu}$ where $\mu=\frac{a+c}{b+d}$. \end{prop}
Using this Proposition we first construct a distinguished absolutely simple object $S_{\mu} \in \textbf{C}_{\mu}$ for all $\mu \in \mathbb{Q}^+ \cup \{\infty\}$ : we put $S_{\infty}=\mathcal{O}_{x_0}, S_0=\mathcal{O}$, and if $S_\mu$ is defined for all entries in $F_n$ and $\{\mu_1,\mu',\mu_2\}$ are consecutive entries in $F_{n+1}$ with $\mu_1,\mu_2 \in F_n$ then we set $S_{\mu'}=R_{S_{\mu_1}} S_{\mu_2}$. From this we may define an equivalence $\epsilon_{\mu,\infty}: \textbf{C}_{\infty} \stackrel{\sim}{\to} \textbf{C}_{\mu}$ inductively for entries $\mu \in \mathbb{Q}^+$~: $\epsilon_{\infty,\infty}=Id$ and if $\epsilon_{\mu,\infty}$ is known for all $0 \neq \mu \in F_n$ and $\{\mu_1,\mu',\mu_2\}$ are consecutive entries in $F_{n+1}$ with $\mu_1,\mu_2 \in F_n$ then we put $\epsilon_{\mu',\infty}=R_{S_{\mu_1}} \circ
\epsilon_{\mu_2,\infty}$. Finally, it is easy to check that for $\mu \in \mathbb{Q}^+$ we have
$\epsilon_{\mu,\infty} \simeq ( \cdot \otimes \mathcal{L}^*) \circ \epsilon_{\mu+1,\infty}$; we may thus define unambiguously $\epsilon_{\mu, \infty}= ( \cdot \otimes (\mathcal{L}^*)^{\otimes N}) \circ \epsilon_{\mu+N,\infty}$ for any $\mu \in \mathbb{Q}$. We may also define an inverse equivalence $\epsilon^{-1}_{\mu,\infty}~: \textbf{C}_{\mu} \stackrel{\sim}{\to} \textbf{C}_{\infty}$ in a similar way using left mutations and
we put $\epsilon_{\mu_1,\mu_2}=
\epsilon_{\mu_1,\infty} \circ \epsilon^{-1}_{\mu_2,\infty}$. We extend the definition of the object
$S_{\mu}$ to an arbitrary $\mu$ by putting $S_{\mu}=(\mathcal{L}^*)^{\otimes N} \otimes S_{\mu+N}$ for $N \gg 0$. Observe that $\epsilon_{\mu_1,\mu_2}(S_{\mu_2})=S_{\mu_1}$ for any $\mu_1, \mu_2$.
\vspace{.1in}
\paragraph{\textbf{e)}} By the class of a sheaf $\mathcal{F}$ we will mean the pair $\overline{\mathcal{F}}=(\textbf{rank}(\mathcal{F}), \textbf{deg}(\mathcal{F})) \in \mathbb{Z}^2$. More of the structure of $\mathcal{F}$ is encoded in its HN type, which is defined as $HN(\mathcal{F})=(\overline{\mathcal{H}_1}, \ldots, \overline{\mathcal{H}_r})$, if
$\mathcal{F}=\mathcal{H}_1 \oplus \cdots \oplus \mathcal{H}_r$ where $\mathcal{H}_i$ belongs to $\textbf{C}_{\mu_i}$ and $\mu_1 < \cdots < \mu_r$. We introduce an order on the set of HN types as follows : $((r_1,d_1), \ldots, (r_s,d_s)) \preceq ((r'_1,d'_1), \ldots,
(r'_t,d'_t))$ if there exists $l$ such that $(r_{s-i},d_{s-i})=(r'_{t-i},d'_{t-i})$ for $i <l$ while either
$\frac{d_{s-l}}{r_{s-l}} > \frac{d'_{t-l}}{r'_{t-l}}$, or $\frac{d_{s-l}}{r_{s-l}} = \frac{d'_{t-l}}{r'_{t-l}}$ and $d_{s-l} > d'_{t-l}$. For such an order, the maximal HN type of a given class $\alpha \in \mathbb{Z}^2$ is simply $(\alpha)$ (which corresponds to semistable sheaves of class $\alpha$).
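To illustrate the order, take $\alpha=(2,0)$: besides the maximal type $((2,0))$ one has, for each $d \geq 1$, the HN type $((1,-d),(1,d))$, and since its top slope is $d > 0 = \mu(\alpha)$ we get $((1,-d),(1,d)) \prec ((2,0))$, as predicted by the maximality of $(\alpha)$.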
\vspace{.2in}
\paragraph{\textbf{1.2.}} We will freely use the definitions and notations of \cite{BS}. We briefly recall the basic notions for the reader's convenience. Let $\mathbf{H}_{{X}}$ be the Hall algebra of ${X}$ and let $\mathbf{U}^+_{X}$ be the spherical subalgebra of $\mathbf{H}_{X}$ introduced in \cite{BS}, Section 4. The definition of $\mathbf{H}_{{X}}$ and $\mathbf{U}^+_{X}$ requires the choice of a square root $v$ of $q$; it is important for us to take $v=-q^{-1/2}$ here.
Recall that $\mathbf{H}_{{X}}=\bigoplus_{\mathcal{F} \in Coh({X})} \mathbb{C} [\mathcal{F}]$ has a basis indexed by isoclasses of objects in $Coh(X)$, and that $\mathbf{U}^+_{{X}}$ is the subalgebra generated by elements $\{ \mathbf{1}^{\textbf{ss}}_{\alpha}\;|\; \alpha \in \mathbf{Z}^+\}$ where
$$\mathbf{Z}^+=\{(r,d) \in \mathbb{Z}^2\;|\; r >0\;\text{or}\;r=0, d>0\},$$
$$ \mathbf{1}^{\textbf{ss}}_{\alpha}=\sum_{\substack{\overline{\mathcal{F}}=\alpha \\ \mathcal{F} \in \textbf{C}_{\mu(\alpha)}}} [\mathcal{F}].$$
One also introduces elements $\{ \widetilde{T}_{\alpha}\;|\; \alpha \in \mathbf{Z}^+\}$ and $\{ \mathbf{1}_{\alpha}\;|\; \alpha \in \mathbf{Z}^+\}$ satisfying
$$1+\sum_{l \geq 1} \mathbf{1}^{\textbf{ss}}_{l\alpha_0}s^l=exp\bigg( \sum_{l \geq 1} \widetilde{T}_{l\alpha_0}s^l\bigg)$$
for any $\alpha_0=(r,d)$ with $\textbf{g.c.d}(r,d)=1$, and
$$ \mathbf{1}_{\alpha}=\sum_{\overline{\mathcal{F}}=\alpha} [\mathcal{F}].$$
Note that $\mathbf{1}_\alpha$ only belongs to a completion of $\mathbf{U}^+_{X}$. The notion of a path $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_l)$ in $\mathbf{Z}^+$ is defined in \cite{BS}, Section 5. The set of all convex paths in $\mathbf{Z}^+$ is denoted $\textbf{Conv}^+$. For any path $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_l)$ we set $\widetilde{T}_\mathbf{p}=\widetilde{T}_{\mathbf{x}_1} \cdots \widetilde{T}_{\mathbf{x}_l}$. Then
$$\mathbf{U}^+_{X}=\bigoplus_{\mathbf{p} \in \textbf{Conv}^+} \mathbb{C} \widetilde{T}_\mathbf{p}.$$
\vspace{.15in}
Put $\mathbf{R}=\mathbb{C}[\sigma^{\pm 1/2}, \bar{\sigma}^{\pm 1/2}]$ where $\sigma, \bar{\sigma}$ are formal variables. A generic version $\boldsymbol{\mathcal{E}}^+_\mathbf{R}$ of $\mathbf{U}^+_{X}$ is defined in \cite{BS}, Section 6, which specializes to $\mathbf{U}^+_{X}$ when $\sigma, \bar{\sigma}$ are set to be equal to the Frobenius eigenvalues in $H^1(\overline{{X}}, \overline{\mathbb{Q}_l})$. The algebra $\boldsymbol{\mathcal{E}}^+_\mathbf{R}$ does not depend on ${X}$. Generic forms $\mathbf{1}^{\textbf{ss}}_{\alpha}, \tilde{t}_{\alpha}, \mathbf{1}_{\alpha}$ of the elements $\mathbf{1}^{\textbf{ss}}_{\alpha}, \widetilde{T}_{\alpha}, \mathbf{1}_{\alpha}$ are also defined for any $\alpha \in \mathbf{Z}^+$, and we have $$\boldsymbol{\mathcal{E}}^+_\mathbf{R}=\bigoplus_{\mathbf{p} \in \textbf{Conv}^+} \mathbf{R} \tilde{t}_\mathbf{p}.$$
\vspace{.1in}
Both $\mathbf{U}^+_{{X}}$ and $\boldsymbol{\mathcal{E}}^+_\mathbf{R}$ are $\mathbf{Z}^+$-graded. The components of degree $\alpha$ are equal to
$$\mathbf{U}^+_{X}[\alpha]= \bigoplus_{\substack{\mathbf{p} \in \textbf{Conv}^+ \\ wt(\mathbf{p})=\alpha}} \mathbb{C}\widetilde{T}_\mathbf{p}, \qquad \boldsymbol{\mathcal{E}}^+_\mathbf{R}[\alpha]= \bigoplus_{\substack{\mathbf{p} \in \textbf{Conv}^+ \\ wt(\mathbf{p})=\alpha}} \mathbf{R} \tilde{t}_\mathbf{p}$$
where by definition $wt((\mathbf{x}_1, \ldots, \mathbf{x}_l))=\sum \mathbf{x}_i$.
\vspace{.2in}
\section{Algebraic construction of $\mathbf{B}$}
\vspace{.1in}
\paragraph{\textbf{2.1.}} The basis $\{\tilde{t}_{\mathbf{p}}\;|\; \mathbf{p} \in \mathbf{Conv}^+\}$
plays the role of a monomial basis for the algebra $\boldsymbol{\mathcal{E}}_{\mathbf{R}}^+ $.
In analogy with the case of the Hall algebra of a quiver we provide here a tentative definition for a ``canonical basis'' of $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$ using an involution and a lattice. We will show in Section~6 that this ``canonical basis'' can also be realized geometrically.
\vspace{.1in}
For our purpose, it is necessary to consider a certain formal completion of $\boldsymbol{\mathcal{E}}_{\mathbf{R}}^+ $. Define an adic valuation $\nu$ on $\boldsymbol{\mathcal{E}}^+_\mathbf{R}$ by $ \nu(\tilde{t}_{\mathbf{p}})=-\mu(\mathbf{x}_1)$ if $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_l) \in \textbf{Conv}^+$. Fix some $C>0$ and denote by $|\;| : u \mapsto C^{- \nu(u)}$ the associated adic norm on $\boldsymbol{\mathcal{E}}^+_\mathbf{R}$. For any $\alpha \in \mathbf{Z}^+$ we let $\widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}[\alpha]$ be the completion of $\boldsymbol{\mathcal{E}}^+_\mathbf{R}[\alpha]$ with respect to $|\;|$ and set $\widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}=\bigoplus_\alpha \widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}[\alpha]$. It is easy to see that, for any fixed $\alpha$ and $n \in \mathbb{Z}$, there exist only finitely many convex paths $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_l)$ of weight $\alpha$ satisfying $\mu(\mathbf{x}_1) \geq n$. Hence there is an identification
$$\widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}[\alpha] =\prod_{\substack{\mathbf{p} \in \textbf{Conv}^+\\ wt(\mathbf{p})=\alpha}} \mathbf{R} \tilde{t}_\mathbf{p}.$$
As proved in \cite{BS}, Section 2 the multiplication map is continuous with respect to $|\;|$ and thus $\widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}$ is an algebra as well.
\vspace{.1in}
Note that $\mathbf{1}_\alpha \in \widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}$ for any $\alpha$. For any path $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_l)$ we set ${\mathbf{1}}_{\mathbf{p}}={\mathbf{1}}_{\mathbf{x}_1} \cdots {\mathbf{1}}_{\mathbf{x}_l}$. We also put
${\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}={\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}_1} \cdots {\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}_l}$.
The elements $\{{\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}\}_{\mathbf{p}}$ are obtained from $\{\tilde{t}_{\mathbf{p}}\}_{\mathbf{p}}$ by an invertible matrix. Therefore $\{{\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}\;|\; \mathbf{p} \in \mathbf{Conv}^+\}$ is also a basis of $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$.
\vspace{.2in}
\paragraph{\textbf{2.2.}} We define a weak partial order on $\mathbf{Conv}^+$ as follows. For any $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_r)$ and any slope $\mu \in \mathbb{Q} \cup \{\infty\}$ we put $deg_{\mu}(\mathbf{p})=\sum_{\mu(\mathbf{x}_i)=\mu} deg(\mathbf{x}_i)$. The symbols $deg_{\geq \mu}(\mathbf{p})$, $deg_{> \mu}(\mathbf{p})$, etc.\ have similar meanings. Now, for two convex paths $\mathbf{p}, \mathbf{q}$ we write $\mathbf{p} \preceq \mathbf{q}$ if there exists $\mu$ such that $deg_{\kappa}(\mathbf{p})=deg_{\kappa}(\mathbf{q})$ for any $\kappa > \mu$ while $deg_{\mu}(\mathbf{p})\geq deg_{\mu}(\mathbf{q})$. There is a natural projection map from the set of convex paths $\mathbf{Conv}^+$ to the set of HN types (see Section~1.1.), which simply assigns to a path $\mathbf{p}=(\mathbf{x}_1, \ldots)$ the HN type $HN(\mathbf{p})=(\alpha_1, \ldots, \alpha_l)$ with
$$\alpha_1=\mathbf{x}_1 + \cdots + \mathbf{x}_{i_1}, \; \alpha_2=\mathbf{x}_{i_1+1} + \cdots + \mathbf{x}_{i_2}, \ldots$$
$$\mu(\mathbf{x}_1)=\cdots=\mu(\mathbf{x}_{i_1}) < \mu(\mathbf{x}_{i_1+1})= \cdots=\mu(\mathbf{x}_{i_2}) < \cdots.$$
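For example, the convex path $\mathbf{p}=((1,0),(1,1),(0,1),(0,2))$, whose entries have slopes $0 < 1 < \infty=\infty$, projects to the HN type $HN(\mathbf{p})=((1,0),(1,1),(0,3))$.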
The weak order on $\mathbf{Conv}^+$ coincides with the pullback of the order on HN types defined in Section~1.1.e).
We make the following useful observation, which may be deduced from the above together with
1.1.c)~: for any convex paths $\mathbf{p}, \mathbf{q}$ and any slope $\mu$ we have
\begin{equation}\label{E:observation}
deg_{\geq \mu}(\mathbf{o}) \geq deg_{\geq \mu}(\mathbf{q})
\end{equation}
for any $\tilde{t}_\mathbf{o}$ appearing in the product $\tilde{t}_{\mathbf{p}}\tilde{t}_{\mathbf{q}}$.
Finally, note that we may have $\mathbf{p} \sim \mathbf{q}$ (i.e.\ $\mathbf{p} \preceq \mathbf{q}$ and $\mathbf{q} \preceq \mathbf{p}$) but $\mathbf{p} \neq \mathbf{q}$. In fact, for any $\mathbf{p}$ there holds
\begin{equation}\label{E:psimq}
{\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}} \in \bigoplus_{\mathbf{q} \sim \mathbf{p}}\mathbf{R}\tilde{t}_{\mathbf{q}}, \qquad
\tilde{t}_{\mathbf{p}} \in \bigoplus_{\mathbf{q} \sim \mathbf{p}} \mathbf{R}{\mathbf{1}}^{\textbf{ss}}_{\mathbf{q}}.
\end{equation}
\vspace{.1in}
\begin{prop}\label{P:basisone}
The set $\{{\mathbf{1}}_{\mathbf{p}}\;|\; \mathbf{p} \in \mathbf{Conv}^+\}$ is a topological basis
of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$, i.e.\ any element $z \in \widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ may be written in a unique way as a convergent sum
$z=\sum_{i} a_i {\mathbf{1}}_{\mathbf{p}_i}$ with $a_i \in \mathbf{R}$ and $\mathbf{p}_i \in \mathbf{Conv}^+$.
\end{prop}
\noindent
\textit{Proof.} We will prove by induction that there exists a convergent sum ${\mathbf{1}}_{\mathbf{q}} = {\mathbf{1}}^{\textbf{ss}}_{\mathbf{q}} + \sum_{\mathbf{p} \prec \mathbf{q}} a_\mathbf{p}\tilde{t}_{\mathbf{p}}$, with $a_\mathbf{p} \in \mathbf{R}$. The statement is obvious if $\mathbf{q}=(\mathbf{x})$ is of length one. So let us fix some $\mathbf{q}=(\mathbf{x}_1, \ldots, \mathbf{x}_r)$ and let us assume that the statement holds for any $\mathbf{q}'=(\mathbf{x}'_1, \ldots, \mathbf{x}'_{r'})$ with $r'<r$. As
$${\mathbf{1}}_{\mathbf{x}_r}\in{\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}_r} \oplus \prod_{deg_{>\mu(\mathbf{x}_r)}(\mathbf{o})>0} \mathbf{R}\tilde{t}_{\mathbf{o}},$$
we deduce using (\ref{E:observation}) that ${\mathbf{1}}_{\mathbf{q}} \in
{\mathbf{1}}_{(\mathbf{x}_1, \ldots, \mathbf{x}_{r-1})} {\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}_r} \oplus
\prod_{\mathbf{p} \prec \mathbf{q}}\mathbf{R} \tilde{t}_{\mathbf{p}}$.
Using (\ref{E:observation}) again, we have $\tilde{t}_{\mathbf{p}'} {\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}_r}
\in \prod_{\mathbf{p} \prec \mathbf{q}} \mathbf{R}\tilde{t}_{\mathbf{p}}$ for any $\mathbf{p}' \prec (\mathbf{x}_1, \ldots, \mathbf{x}_{r-1})$. By the induction hypothesis it thus follows that
${\mathbf{1}}_{\mathbf{q}} \in {\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}_1} \cdots {\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}_r} \oplus \prod_{\mathbf{p} \prec \mathbf{q}} \mathbf{R}\tilde{t}_{\mathbf{p}}$
as desired. \\
Using (\ref{E:psimq}) we have shown that $\{{\mathbf{1}}_{\mathbf{p}}\}$ is related to $\{{\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}\}$ by an upper triangular matrix with ones on the diagonal. The Proposition is proved since
$\{{\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}\}_{\mathbf{p}}$ is a basis of $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$.$\hfill \checkmark$
\begin{cor}\label{C:unssun}
For any $\mathbf{q} \in \mathbf{Conv}^+$,
$${\mathbf{1}}_{\mathbf{q}} \in {\mathbf{1}}^{\textbf{ss}}_{\mathbf{q}} \oplus \prod_{\mathbf{p} \prec \mathbf{q}} \mathbf{R} {\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}},\qquad
{\mathbf{1}}^{\textbf{ss}}_{\mathbf{q}} \in {\mathbf{1}}_{\mathbf{q}} \oplus
\prod_{\mathbf{p} \prec \mathbf{q}} \mathbf{R}{\mathbf{1}}_{\mathbf{p}}.$$
\end{cor}
\noindent
\textit{Proof.} This is a consequence of (\ref{E:psimq}). $\hfill \checkmark$
\vspace{.2in}
\paragraph{\textbf{2.3.}} By Proposition~\ref{P:basisone}, there exists a unique antilinear involution $z \mapsto \overline{z}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ such that $\sigma \mapsto \sigma^{-1}, \bar{\sigma}\mapsto \bar{\sigma}^{-1}$ and $\overline{{\mathbf{1}}_{\mathbf{p}}}={\mathbf{1}}_{\mathbf{p}}$ for all $\mathbf{p}$. By
Corollary~\ref{C:unssun}, we have
\begin{equation}\label{E:unssbar}
\overline{{\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}} \in {\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}} \oplus \prod_{\mathbf{q} \prec \mathbf{p}} \mathbf{R}{\mathbf{1}}^{\textbf{ss}}_{\mathbf{q}}.
\end{equation}
\vspace{.1in}
Recall that by \cite{BS} Section 4.2, for any fixed $\mathbf{y} \in \mathbf{Z}^+$ satisfying $\textbf{deg}(\mathbf{y})=1$, the subalgebra $\boldsymbol{\mathcal{E}}^{+,(\mu(\mathbf{y}))}_{\mathbf{R}}$ generated by $\{\tilde{t}_{r\mathbf{y}}\;|\; r \in \mathbb{N}\}$ is canonically isomorphic to the ring of symmetric polynomials $\boldsymbol{\Lambda}^+ \otimes \mathbf{R}= \mathbf{R}[x_1, x_2, \ldots]^{\mathfrak{S}_{\infty}}$. This isomorphism $i_{\mu(\mathbf{y})}$ is determined by the condition $i_{\mu(\mathbf{y})}({\mathbf{1}}^{\textbf{ss}}_{r\mathbf{y}})=s_r$ with $s_r$ being the Schur function. More generally, if $\lambda=(\lambda_1, \ldots, \lambda_s)$ is any partition we set $\beta_{(\lambda_1 \mathbf{y}, \ldots, \lambda_s \mathbf{y})}=i_{\mu(\mathbf{y})}^{-1}(s_{\lambda})$. Finally, for any convex path $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_r)$ with $\mu(\mathbf{x}_1)= \cdots
= \mu(\mathbf{x}_{i_1})< \mu(\mathbf{x}_{i_1+1})= \cdots < \mu(\mathbf{x}_{i_t+1})= \cdots =\mu(\mathbf{x}_r)$ we put
$$\beta_{\mathbf{p}}=\beta_{(\mathbf{x}_{1}, \ldots, \mathbf{x}_{i_1})} \cdots \beta_{(\mathbf{x}_{i_t+1}, \ldots, \mathbf{x}_r)}.$$
Observe that $\beta_{\mathbf{x}}={\mathbf{1}}^{\textbf{ss}}_{\mathbf{x}}$ for any $\mathbf{x} \in \mathbf{Z}^+$. Moreover, from
(\ref{E:unssbar}) we see that
\begin{equation}\label{E:lastsal}
\overline{\beta_{\mathbf{p}}} \in \beta_{\mathbf{p}} \oplus \prod_{\mathbf{q} \prec \mathbf{p}} \mathbf{R} \beta_{\mathbf{q}}.
\end{equation}
\vspace{.1in}
We are now at last ready to give the definition of the canonical basis. Let $\mathbf{R}^> \subset \mathbf{R}$ be the $\mathbb{C}$-linear span of monomials
$\sigma^a\bar{\sigma}^b$ with $a+b <0$.
\vspace{.1in}
\addtocounter{theo}{1}
\noindent
\textbf{Definition \thetheo.} For any $\mathbf{p} \in \mathbf{Conv}^+$ we denote by ${\mathbf{b}}_{\mathbf{p}}$ the unique element of $\widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}$ satisfying
$$\overline{{\mathbf{b}}_{\mathbf{p}}}={\mathbf{b}}_{\mathbf{p}}, \qquad {\mathbf{b}}_{\mathbf{p}} \in \beta_{\mathbf{p}} \oplus \prod_{\mathbf{p}' \prec \mathbf{p}} \mathbf{R}^> \beta_{\mathbf{p}'}.$$
We call the set ${\mathbf{B}}=\{{\mathbf{b}}_{\mathbf{p}}\;|\; \mathbf{p} \in \mathbf{Conv}^+\}$ the \textit{canonical basis} of
$\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$.
\vspace{.1in}
The existence and uniqueness of ${\mathbf{b}}_{\mathbf{p}}$ results, by a classical argument of Kazhdan and Lusztig, from (\ref{E:lastsal}), together with the fact that all the structure constants in $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$ are symmetric in $\sigma, \bar{\sigma}$. We leave the details to the reader.
\vspace{.1in}
\addtocounter{theo}{1}
\noindent
\textbf{Examples \thetheo.} i) For any $\mathbf{p}=(\mathbf{x}_1, \ldots, \mathbf{x}_r)$ for which $\mu(\mathbf{x}_i)=\infty$ for all $i$ we have
$\mathbf{b}_{\mathbf{p}}=\beta_{\mathbf{p}}$ (indeed, there are no paths $\mathbf{p}'$ of the same weight as $\mathbf{p}$ such that $\mathbf{p}' \prec \mathbf{p}$). In other words, the restriction of the canonical basis to the algebra corresponding to the vertical line in $\mathbf{Z}^+$ coincides with the usual canonical basis of $\boldsymbol{\Lambda}^+$ (constructed from the Jordan quiver or the nilpotent variety).\\
ii) For any $\mathbf{x} \in \mathbf{Z}^+$ we have ${\mathbf{b}}_{\mathbf{x}}={\mathbf{1}}_{\mathbf{x}}$. Indeed,
$${\mathbf{1}}_{\mathbf{x}}=\beta_{\mathbf{x}} + \sum_{r \geq 2} \sum_{\underset{\mu(\mathbf{x}_1)< \cdots <\mu(\mathbf{x}_r)}{\mathbf{x}_1 + \cdots + \mathbf{x}_r=\mathbf{x}}}\nu^{\sum_{i<j}\langle \mathbf{x}_i,\mathbf{x}_j\rangle} \beta_{(\mathbf{x}_1, \ldots, \mathbf{x}_r)}$$
where $\nu=-(\sigma\bar{\sigma})^{-1/2}$ and $\langle\;,\;\rangle$ is the Euler form of ${X}$.
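As a small illustration of ii), take $\mathbf{x}=(1,0)$: the only decompositions of $\mathbf{x}$ into classes of strictly increasing slopes are $(1,-d)+(0,d)$ for $d \geq 1$, and $\langle (1,-d),(0,d)\rangle=d$ since $\langle (r,d),(r',d')\rangle=rd'-r'd$ on an elliptic curve. Hence
$${\mathbf{1}}_{(1,0)}=\beta_{(1,0)} + \sum_{d \geq 1} \nu^{d}\, \beta_{((1,-d),(0,d))}.$$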
\vspace{.2in}
\section{Stacks of coherent sheaves and convolution functors}
\vspace{.1in}
In this section, we employ the method of \cite{S1} to construct a geometric incarnation of $\widehat{\boldsymbol{\mathcal{E}}}^+_\mathbf{R}$ along with ``canonical bases'' which enjoy some integrality and positivity properties. This algebra will turn out to be isomorphic to the spherical Hall algebra $\widehat{\mathbf{U}}^+_{{X}}$, and the specialization morphism will map ${\mathbf{B}}$ to the canonical basis constructed in this fashion. This will give us in turn some positivity and integrality properties of ${\mathbf{B}}$.
Whenever possible, we refer to \cite{S1} where a similar construction is given in the context of weighted projective lines of genus one. Until the end of the paper we set $\mathbf{k}=\overline{\mathbb{F}_q}=\overline{\boldsymbol{{k}}}$.
\vspace{.2in}
\paragraph{\textbf{3.1.}} For the notions of algebraic stacks, we refer to the book \cite{LauMo}. We view stacks as sheaves of categories and work in the fppf topology. Hence to define a stack over a field $l$ it is enough to give the functor of $T$-valued points for any scheme $T$ over $l$. If $\mathcal{C}$ is any category, we write $\langle \mathcal{C} \rangle$ for the category with the same objects but in which the morphisms are the isomorphisms of $\mathcal{C}$. By the descent property of stacks, an algebraic stack is uniquely determined by the corresponding functor of $T$-valued points for affine schemes over $l$.
\vspace{.1in}
For any $\alpha \in \mathbf{Z}^+$, we let $\underline{Coh}^\alpha$ be the stack of coherent sheaves on $\overline{X}$ of class $\alpha$, given by the functor $(Aff/\mathbf{k}) \to Groupoids$
\begin{equation*}
\begin{split}
\texttt{Coh}^{\alpha}:~T \mapsto &\langle T-{flat,\;coherent\;sheaves\;} \mathcal{F} {\;on\;}T \times \overline{X}\;{such\;that\;} \\ &\qquad \qquad \qquad \qquad \overline{\mathcal{F}_{|t}}=\alpha\;{for\;any\;closed\;point\;} t \in T\rangle
\end{split}
\end{equation*}
The stack $\underline{Coh}^{\alpha}$ is smooth, locally of finite type, and is an increasing union of smooth open substacks $\underline{Coh}_n^{\alpha}$ defined as follows.
For $\mathcal{E} \in Coh(\overline{X})$ and $\alpha \in \mathbf{Z}^+$
consider the functor $\texttt{Quot}_{\mathcal{E}}^{\alpha}$ from the category of smooth schemes over $\mathbf{k}$ to the category of sets defined by
\begin{equation*}
\begin{split}
\texttt{Quot}_{\mathcal{E}}^{\alpha}~: \Sigma \mapsto&\{\phi:
\mathcal{E} \boxtimes \mathcal{O}_\Sigma
\twoheadrightarrow \mathcal{F}\;|
\mathcal{F}\;is\;a\;coherent\;\Sigma-flat\; sheaf\;on\;\Sigma \times \overline{X},\\
& \qquad \qquad\;
\mathcal{F}_{|\sigma}\;is\;
of\;class\;\alpha\;for\;all\;closed\;points\;\sigma \in \Sigma\}
\end{split}
\end{equation*}
In the above, two maps $\phi,\phi'$ are identified if their
kernels coincide. It is a well-known theorem of Grothendieck that $\texttt{Quot}_{\mathcal{E}}^{\alpha}$ is represented by a projective scheme $Quot_{\mathcal{E}}^{\alpha}$ (see e.g. \cite{LP}). In particular, if
$n \in \mathbb{Z}$ and $\alpha \in \mathbf{Z}^+$ are such that $\langle [\mathcal{O}(n)], \alpha\rangle \geq 0$ we set $\mathcal{L}_n^\alpha=\mathcal{O}(n) \otimes \mathbf{k}^{\langle [\mathcal{O}(n)], \alpha \rangle}$ and put $Quot_{n}^{\alpha}=Quot_{\mathcal{L}_n^\alpha}^{\alpha}$.
The scheme $Quot_{n}^{\alpha}$ is singular in general; the open subfunctor of $\texttt{Quot}_{\mathcal{L}_n^\alpha}^{\alpha}$ defined by
\begin{equation*}
\begin{split}
{}'{\texttt{Quot}}_{\mathcal{L}_n^\alpha}^{\alpha}~: \Sigma\mapsto &\{(\phi:
\mathcal{L}_n^\alpha \boxtimes \mathcal{O}_\Sigma
\twoheadrightarrow \mathcal{F}) \in \texttt{Quot}^\alpha_{\mathcal{L}_n^\alpha}\;|\; \phi_{*|\sigma}: \mathbf{k}^{\langle [\mathcal{O}(n)], \alpha \rangle} \stackrel{\sim}{\to} \mathrm{Hom}(\mathcal{O}(n), \mathcal{F}_{\sigma})\\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad {\;for\;all\;closed\;points}\;\sigma \in \Sigma\}
\end{split}
\end{equation*}
is represented by an open subset $Q_n^\alpha \subset Quot_{n}^{\alpha}$. The group $G_n^\alpha=
\mathrm{Aut}(\mathcal{L}_n^\alpha) \simeq GL(\langle [\mathcal{O}(n)], \alpha \rangle)$ naturally acts on $Q_n^\alpha$.
We will say that a sheaf $\mathcal{F}$ is strictly generated by $\mathcal{O}(n)$ if $\mathcal{F}$ is generated by $\mathcal{O}(n)$ and $\mathcal{F}$ has no direct summand belonging to $\textbf{C}_n$.
\begin{lem}\label{L:1} The scheme $Q_n^\alpha$ is smooth and the set of $G_n^\alpha$-orbits is in natural bijection $\mathcal{F} \leftrightarrow \mathbf{O}_{\mathcal{F},n}$ with the set of sheaves $\mathcal{F}$ of class $\alpha$ strictly generated by $\mathcal{O}(n)$.
\end{lem}
\begin{proof} See e.g. \cite{LP}, Section~8.2, or \cite{S1}, Section~2.2.
\end{proof}
\vspace{.1in}
We let $\underline{Coh}_n^{\alpha}$ be the quotient stack $Q_n^{\alpha}/G_n^{\alpha}$. As $\mathcal{O}(2)$ is generated by $\mathcal{O}$, there are open embeddings of stacks $\underline{Coh}_n^{\alpha} \subset \underline{Coh}_{m}^{\alpha}$ for any $n,m$ with $n \geq m+2$, and $\underline{Coh}^{\alpha}$ is the limit of the corresponding direct system. For any sheaf $\mathcal{F}$ of class $\alpha$, we denote by $\mathbf{O}_{\mathcal{F}}=\mathbf{O}_{\mathcal{F},n}/G_n^{\alpha} $ the locally closed substack of $\underline{Coh}^\alpha$ parametrizing coherent sheaves isomorphic to $\mathcal{F}$ (this definition is independent of $n$ for $n$ sufficiently negative).
\vspace{.15in}
The following remark will be useful.
\begin{lem}\label{L:localsyst} Any $G_n^{\alpha}$-invariant local system on an orbit $\mathbf{O}_{\mathcal{F},n}$ is constant.
\end{lem}
\noindent
\textit{Proof.} For any $z=(\phi: \mathcal{L}_n^{\alpha} \twoheadrightarrow \mathcal{F}) \in \mathbf{O}_{\mathcal{F},n}$ we have $Stab_{G_n^{\alpha}}(z) \simeq \text{Aut}(\mathcal{F})$ (see \cite{S1}, Lemma~2.4.), hence $\mathbf{O}_{\mathcal{F},n} \simeq G_n^{\alpha} /\text{Aut}(\mathcal{F})$. Thus $G_n^{\alpha}$-invariant local systems on $\mathbf{O}_{\mathcal{F},n}$ are parametrized by representations of the component group of $\text{Aut}(\mathcal{F})$. We now prove that $\text{Aut}(\mathcal{F})$ is connected.
Let us write $\mathcal{F}=\mathcal{H}_1 \oplus \cdots \oplus \mathcal{H}_r$ with $\mathcal{H}_i$ semistable and $\mu(\mathcal{H}_1) < \cdots < \mu(\mathcal{H}_r)$. The group $\text{Aut}(\mathcal{F})$ is an affine fibration over
$\text{Aut}(\mathcal{H}_1) \times \cdots \times \text{Aut}(\mathcal{H}_r)$. Hence it is enough to show that
$\text{Aut}(\mathcal{G})$ is connected for any semistable $\mathcal{G}$. This is easily checked if $\mathcal{G}$ is a torsion sheaf, and follows for an arbitrary $\mathcal{G}$ using the equivalences $\epsilon_{\mu,\infty}$.$\hfill \checkmark$
\vspace{.1in}
Let $\mathbb{P}$ be a constructible sheaf of $\overline{\mathbb{Q}_l}$-vector spaces on $\underline{Coh}^{\alpha}$. By restriction it gives rise, for any $n$, to a $G_n^{\alpha}$-equivariant constructible sheaf on $Q_n^{\alpha}$, and hence to a $G_n^{\alpha}$-equivariant local system $\mathfrak{L}_n$ on $\mathbf{O}_{\mathcal{F},n}$. By the above Lemma, such a local system is constant. Moreover, by construction, there are canonical maps $\Gamma(\mathbf{O}_{\mathcal{F},m},\mathfrak{L}_m) \to \Gamma(\mathbf{O}_{\mathcal{F},n},\mathfrak{L}_n)$ for $n \geq m+2$, which are isomorphisms for $n,m \ll 0$. We may now define the \textit{stalk} of $\mathbb{P}$ over $\mathbf{O}_{\mathcal{F}}$ to be
$\mathbb{P}_{|\mathbf{O}_{\mathcal{F}}}=\underset{\longleftarrow}{\text{Lim}}\;\Gamma(\mathbf{O}_{\mathcal{F},n},\mathfrak{L}_n)$ (a $\overline{\mathbb{Q}_l}$-vector space).
\vspace{.2in}
\paragraph{\textbf{3.2.}} Let ${D}^b(\underline{Z})$ stand for the derived category of constructible $\overline{\mathbb{Q}_l}$-sheaves on an algebraic stack $\underline{Z}$ defined over $\mathbf{k}$. All the stacks $\underline{Z}$ considered in this paper will be increasing unions of open substacks $\underline{Z}_n$, $n \in \mathbb{Z}$, each of which will be a quotient stack $\underline{Z}_n=Q_n/G_n$ of a $\mathbf{k}$-scheme $Q_n$ by a reductive group $G_n$. For such stacks $\underline{Z}$ there exists a six operations formalism, as well as a notion of a dualizing sheaf, and a category of perverse sheaves
(see \cite{LauMo}, Chap. 18.8). In that situation, we let $D^b(\underline{Z})^{ss}$ stand for the category of semisimple $\overline{\mathbb{Q}_l}$-constructible complexes of geometric origin.
Such a complex $\mathbb{P}$ gives rise by restriction to (and is essentially equivalent to) the data of a collection of $G_n$-equivariant semisimple complexes $\mathbb{P}_n \in D^b_{G_n}(Q_n)^{ss}$ for $n \in \mathbb{Z}$, together with certain maps between them satisfying some compatibility conditions. Here $D^b_{G_n}(Q_n)$ is the equivariant derived category of constructible sheaves over $Q_n$, as in \cite{BL}. Given a smooth locally closed substack $\underline{Y} \subset \underline{Z}$ together with a semisimple local system $\mathfrak{L}$ on it we denote by $\mathbf{IC}(\underline{Y}, \mathfrak{L}) \in D^b(\underline{Z})^{ss}$ the associated intersection cohomology sheaf.
\vspace{.2in}
\paragraph{\textbf{3.3.}} Following \cite{L1}, we define functors of induction and restriction on the collection of categories $D^b(\underline{Coh}^{\alpha})^{ss}$ for $\alpha \in \mathbf{Z}^+$. Consider the diagram
\begin{equation}\label{E:diagind}
\xymatrix{\underline{Coh}^\beta \times \underline{Coh}^\alpha& \underline{\mathcal{E}}^{\alpha,\beta} \ar[l]_-{p_1} \ar[r]^-{p_2}
&
\underline{Coh}^{\alpha +\beta}}
\end{equation}
where the following notations are used~:\\
-$\underline{\mathcal{E}}^{\alpha,\beta}$ is the stack associated to the functor $(Aff/\mathbf{k}) \to Groupoids$ given by
\begin{equation*}
\begin{split}
\texttt{{E}}^{\alpha,\beta}:~T \mapsto \langle &\text{exact sequences } 0 \to \mathcal{G} \to \mathcal{F} \to \mathcal{H} \to 0 \text{ of } T\text{-flat coherent sheaves}\\
&\text{on } T \times \overline{{X}} \text{ such that } \overline{\mathcal{G}_{|t}}=\alpha, \overline{\mathcal{F}_{|t}}=\alpha+\beta \text{ for any closed point } t \in T\rangle
\end{split}
\end{equation*}
-The 1-morphism $p_1$ is induced by the natural transformation $\texttt{E}^{\alpha,\beta} \to \texttt{Coh}^{\beta} \times \texttt{Coh}^{\alpha}$ given on the objects by $ (0 \to \mathcal{G} \to \mathcal{F} \to \mathcal{H} \to 0) \mapsto (\mathcal{H}, \mathcal{G})$,\\
-The 1-morphism $p_2$ is induced by the natural transformation $\texttt{E}^{\alpha,\beta} \to \texttt{Coh}^{\alpha+\beta}$ given on the objects by $ (0 \to \mathcal{G} \to \mathcal{F} \to \mathcal{H} \to 0) \mapsto (\mathcal{F})$.
The morphism $p_1$ is smooth with connected fibers (see \cite{S1}, Lemma~3.2), while the morphism $p_2$ is proper (its fibers are isomorphic to certain projective Quot schemes).
\vspace{.15in}
We set
$$\widetilde{\mathrm{Ind}}^{\beta,\alpha}=
p_{2!} p_1^*: D^b(\underline{Coh}^{\beta})^{ss} \boxtimes D^b(\underline{Coh}^{\alpha})^{ss} \to D^b(\underline{Coh}^{\alpha+\beta})^{ss}$$
and
$\mathrm{Ind}^{\beta,\alpha}=\widetilde{\mathrm{Ind}}^{\beta,\alpha}[-\langle\beta,\alpha\rangle]$. Recall that the $\underline{Coh}^{\alpha}$ are locally quotient stacks $Q_n^{\alpha}/G_n^{\alpha}$, and all semisimple complexes considered here are assumed to be of geometric origin, so that the equivariant version of the Decomposition
theorem \cite{BBD} (see \cite{BL}) implies that
$\mathrm{Ind}^{\beta,\alpha}$ does indeed take its values in
$D^b(\underline{Coh}^{\alpha+\beta})^{ss}$.
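For the reader's convenience, we record the value of the Euler form $\langle\,,\,\rangle$ entering the shift above. The following is a standard Riemann--Roch computation on an elliptic curve; we state it under the (labeling) assumption that a class $\gamma \in \mathbf{Z}^+$ records the pair $(\mathrm{rank},\mathrm{degree})$ of a sheaf. If $\mathcal{G}, \mathcal{G}'$ are coherent sheaves of classes $\gamma=(r,d)$ and $\gamma'=(r',d')$ then
\begin{equation*}
\langle \gamma,\gamma'\rangle = \mathrm{dim}\,\mathrm{Hom}(\mathcal{G},\mathcal{G}') - \mathrm{dim}\,\mathrm{Ext}^1(\mathcal{G},\mathcal{G}') = rd'-r'd,
\end{equation*}
since $\chi(\mathcal{O}_{\overline{X}})=0$ in genus one. In particular the Euler form is antisymmetric.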
The functor $\mathrm{Ind}^{\beta,\alpha}$ is associative, i.e.\ for each triple $\alpha, \beta, \gamma \in \mathbf{Z}^+$ there are (canonical) natural transformations
$\mathrm{Ind}^{\beta+\gamma,\alpha} \circ \mathrm{Ind}^{\gamma,\beta}
\simeq \mathrm{Ind}^{\gamma,\alpha+\beta} \circ
\mathrm{Ind}^{\beta,\alpha}$. This allows us to define an iterated induction functor $\mathrm{Ind}^{\alpha_r, \ldots, \alpha_1}:\; D^b(\underline{Coh}^{\alpha_r})^{ss}
\times \cdots \times D^b(\underline{Coh}^{\alpha_1})^{ss}
\to D^b(\underline{Coh}^{\alpha_1 + \cdots + \alpha_r})^{ss}.$
Note that the functor $\text{Ind}^{\beta,\alpha}$ commutes with Verdier duality.
\vspace{.15in}
Similarly, we set
$$\widetilde{\text{Res}}^{\beta,\alpha}=p_{1!}p_{2}^*~:D^b(\underline{Coh}^{\alpha+\beta})^{ss} \to D^b({\underline{Coh}}^{\beta} \times {\underline{Coh}}^{\alpha})$$
and
$\mathrm{Res}^{\beta,\alpha}=\widetilde{\mathrm{Res}}^{\beta,\alpha}
[-\langle \beta,\alpha\rangle]$. The functor $\text{Res}^{\beta,\alpha}$ in fact restricts to a functor $D^b_{G_n^{\alpha+\beta}}(Q_n^{\alpha+\beta})^{ss} \to {D}^b_{G_n^{\beta} \times G_n^{\alpha}}(Q_n^{\beta} \times Q_n^{\alpha})$, whose definition we unravel in detail for later purposes. For this, we fix
a subspace $V \subset \mathbf{k}^{\langle \mathcal{O}(n), \alpha+\beta\rangle}$ of dimension $\langle \mathcal{O}(n),\alpha \rangle$ along with isomorphisms
$a: V \stackrel{\sim}{\to} \mathbf{k}^{\langle \mathcal{O}(n),\alpha \rangle}, b:
\mathbf{k}^{\langle \mathcal{O}(n),\alpha+\beta\rangle}/V \stackrel{\sim}{\to}
\mathbf{k}^{\langle \mathcal{O}(n), \beta \rangle}$, we put $\underline{V}=V \otimes \mathcal{O}(n)$ and we consider the diagram
\begin{equation}\label{E:12}
\xymatrix{
Q_n^{\alpha+\beta} & F \ar[l]_-{i} \ar[r]^-{\kappa} & Q_n^\beta
\times Q_n^\alpha}
\end{equation}
where\\
- $F$ is the subvariety of $Q_n^{\alpha+\beta}$ whose points are the quotients $(\phi:
\mathcal{L}^{\alpha+\beta}_n
\twoheadrightarrow \mathcal{F})$ such that $\overline{\phi(\underline{V})}=\alpha$
and $i: F \hookrightarrow Q_n^{\alpha+\beta}$ is the (closed) embedding,\\
- $\kappa(\phi)=( b_*
\phi_{|\mathcal{L}^{\alpha+\beta}_n/\underline{V}},a_{*} \phi_{|\underline{V}})$.
By \cite{S1}, Lemma~3.2., $\kappa$ is a vector
bundle of rank $\langle \overline{\mathcal{L}^{\beta}_n}-\beta,
\alpha \rangle$. If $\mathbb{P}$ is an arbitrary complex in $D^b(\underline{Coh}^{\alpha+\beta})^{ss}$ and if $\mathbb{P}_n$ denotes its restriction to $\underline{Coh}_n^{\alpha+\beta}$ then the restriction of $\widetilde{\text{Res}}^{\beta,\alpha}(\mathbb{P})$ to $\underline{Coh}_n^{\beta} \times \underline{Coh}_n^{\alpha}$ is isomorphic to $\widetilde{\text{Res}}^{\beta,\alpha}_n(\mathbb{P}_n)$ where by definition
$$\widetilde{\mathrm{Res}}^{\beta, \alpha}_n=\kappa_{!}i^*:
D^b_{G_n^{\alpha+\beta}}(Q_n^{\alpha+\beta})
\to {D}^b_{G_n^\beta \times G_n^\alpha}(Q_n^{\beta} \times Q_n^\alpha).$$
As $\mathrm{Res}^{\beta,\alpha}$ does not a priori preserve semisimple complexes, it does not lift to a functor from $D^b(\underline{Coh}^{\alpha+\beta})^{ss}$ to $D^b(\underline{Coh}^\beta)^{ss} \boxtimes D^b(\underline{Coh}^\alpha)^{ss}$.
\vspace{.2in}
\paragraph{\textbf{3.4.}} The collection of constant complexes $\big(\overline{\mathbb{Q}_l}_{Q_n^\alpha}[\mathrm{dim}\;Q_n^\alpha]\big)_{n \in \mathbb{Z}}$ gives rise to a simple perverse sheaf on $\underline{Coh}^\alpha$ which we denote by $\mathbbm{1}_\alpha$.
Let $\mathcal{P}^\alpha$ be the set of all simple objects in $D^b(\underline{Coh}^\alpha)^{ss}$ which appear (up to a shift) in an induction product
$\mathrm{Ind}^{\alpha_r,\ldots, \alpha_1}(\mathbbm{1}_{\alpha_r} \boxtimes \cdots \boxtimes \mathbbm{1}_{\alpha_1})$
for some $\alpha_1, \ldots, \alpha_r$ such that $\sum_i \alpha_i=\alpha$ and $rank(\alpha_i) \leq 1$ for all $i$. We also set
$\mathcal{P}=\bigsqcup_{\alpha} \mathcal{P}^\alpha$. We will be concerned here with the full triangulated subcategory $\mathcal{Q}^\alpha$ of $D^b(\underline{Coh}^\alpha)^{ss}$ whose objects are the complexes isomorphic to a locally finite sum ${\bigoplus}_i \mathbb{P}^i[d_i]$ with $\mathbb{P}^i \in \mathcal{P}^\alpha$ for all $i$. We also define triangulated categories $\mathcal{Q}^\beta \hat{\boxtimes}
\mathcal{Q}^\alpha$ and $\mathcal{Q}^\beta {\boxtimes}
\mathcal{Q}^\alpha$ whose objects respectively consist of locally finite, resp. finite sums of objects of the form $\mathbb{P}_\beta \boxtimes
\mathbb{P}_{\alpha}$ with
$\mathbb{P}_\beta \in \mathcal{Q}^\beta$ and $\mathbb{P}_\alpha \in
\mathcal{Q}^\alpha$ (see \cite{S1}, Section~4.2).
\vspace{.1in}
\begin{lem} For any $\alpha, \beta \in \mathbf{Z}^+$, the induction and restriction functors induce functors
\begin{align*}
\mathrm{Ind}^{\beta,\alpha}&:\; \mathcal{Q}^\beta \boxtimes
\mathcal{Q}^\alpha \to \mathcal{Q}^{\alpha+\beta},\\
\mathrm{Res}^{\beta,\alpha}&:\; \mathcal{Q}^{\alpha+\beta} \to \mathcal{Q}^\beta
\hat{\boxtimes} \mathcal{Q}^\alpha.
\end{align*}
\end{lem}
\noindent
\textit{Proof.} Identical to \cite{S1}, Lemma~4.1.$\hfill \checkmark$
\vspace{.1in}
In particular, the collection of categories $\mathcal{Q}^\alpha, \alpha \in \mathbf{Z}^+$ is stable under the restriction functor. As shown in \cite{S1}, Section~4, the functor $\text{Res}^{\beta,\alpha}$ is coassociative. We will often need to consider the iterated restriction functor
$$\mathrm{Res}^{\alpha_r,\ldots,\alpha_1}:
\mathcal{Q}^{\alpha_1+\cdots+\alpha_r} \to \mathcal{Q}^{\alpha_r}
\hat{\boxtimes}
\cdots \hat{\boxtimes} \mathcal{Q}^{\alpha_1}.$$
\vspace{.2in}
\paragraph{\textbf{3.5.}} In this section we provide a parametrization and a complete description of the perverse sheaves appearing in $\mathcal{P}$ (that is, we give for each of these simple perverse sheaves a corresponding smooth locally closed subvariety along with an irreducible local system on it).
\vspace{.15in}
\paragraph{\textbf{3.5.1.}} We first introduce certain stratifications of the stacks
$\underline{Coh}^{\alpha}$. Recall from Section~1.1.e) that the HN type of a sheaf $\mathcal{F}$ with HN filtration $0 \subset \mathcal{F}_1 \subset \cdots \subset \mathcal{F}_{r-1} \subset \mathcal{F}$ is $HN(\mathcal{F})=(\overline{\mathcal{F}_1}, \overline{\mathcal{F}_2/\mathcal{F}_1}, \ldots, \overline{\mathcal{F}/\mathcal{F}_{r-1}}) \in (\mathbf{Z}^+)^r$. The stack $\underline{Coh}^\alpha$ admits a locally finite stratification by locally closed substacks
$$\underline{Coh}^\alpha = \bigsqcup_{(\alpha_1, \ldots, \alpha_r)} \underline{HN}^{-1}(\alpha_1, \ldots, \alpha_r)$$
where $\underline{HN}^{-1}(\alpha_1, \ldots, \alpha_r) \subset \underline{Coh}^\alpha$ is the substack parametrizing sheaves
$\mathcal{F} \in Coh({X})$ whose HN type is $(\alpha_1, \ldots, \alpha_r)$.
For simplicity, the open substack $\underline{HN}^{-1}(\alpha) \subset \underline{Coh}^\alpha$ corresponding to semistable sheaves will be denoted $\underline{Coh}^{(\alpha)}$. Note
that $\underline{Coh}^{\alpha} \setminus \underline{Coh}^{(\alpha)}=\bigcup_{\underline{\beta}
\prec \alpha} \underline{HN}^{-1}(\underline{\beta})$, where $\prec$ stands for the order defined in Section~1.1.e).
\vspace{.1in}
Any $\alpha \in \mathbf{Z}^+$ may be written in a unique way as $\alpha=l \delta_{\mu(\alpha)}$ where $l \geq 1$ and $\delta_{\mu(\alpha)}=(p,q) \in \mathbf{Z}^+$ with $p,q$ relatively prime (so that $\mu(\alpha)=\frac{p}{q}$). If $\mu \in \mathbb{Q} \cup \{\infty\}$
then $\underline{Coh}^{(\delta_{\mu})}$ actually corresponds to stable sheaves and $\underline{Coh}_n^{(\delta_{\mu})}$ is a (smooth) geometric quotient $Q_n^{(\delta_{\mu})} / G_n^{\delta_{\mu}}$ for $n \ll 0$. Arguing (using mutations) in the same way as in \cite{S1}, Section~10., we obtain, for each $\mu_1, \mu_2 \in \mathbb{Q} \cup \{\infty\}$ a canonical isomorphism $\rho_{\mu_1,\mu_2}: \underline{Coh}^{(\delta_{\mu_2})} \stackrel{\sim}{\to} \underline{Coh}^{(\delta_{\mu_1})}$. In particular there is an isomorphism $\rho_{\infty,\mu}:~\underline{Coh}^{(\delta_{\mu})} \stackrel{\sim}{\to}
\underline{Coh}^{((0,1))} \simeq \overline{X}/\mathbf{k}^*$, where the multiplicative group $\mathbf{k}^*$ acts trivially. In a similar vein, fix $l \geq 1$ and $\mu \in \mathbb{Q} \cup \{\infty\}$ and let us consider the open substack $\underline{U}^{(l\delta_{\mu})}$ of $\underline{Coh}^{(l\delta_{\mu})}$ parametrizing semistable sheaves $\mathcal{F}$ isomorphic to direct sums of stable sheaves in $\textbf{C}_{\mu}$ with distinct support~:
$\mathcal{F} \simeq \epsilon_{\mu,\infty}(\bigoplus_{i=1}^l \mathcal{O}_{x_i}),\; x_i \in \overline{X}, x_i \neq x_j \; \text{if}\; i \neq j.$
The stack $\underline{U}^{(l\delta_{\mu})}$ may also be realized as a geometric quotient $Y_n^{(l\delta_{\mu})}/G_n^{l\delta_{\mu}}$ for some smooth open subscheme $Y_n^{(l\delta_{\mu})}$ of $Q_n^{l\delta_{\mu}}$ (for $n \ll 0$) and there is a canonical isomorphism $\rho_{\infty,\mu}^l: \underline{U}^{(l\delta_\mu)} \stackrel{\sim}{\to} (S^l\overline{X} \setminus \underline{\Delta}) / (\mathbf{k}^*)^l$ where $\underline{\Delta}=\{(x_i)\;|
x_i=x_j\; \text{for\;some}\; i \neq j\}$ and again $(\mathbf{k}^*)^l$ acts trivially. As a consequence, there is a projection $\pi_1(\underline{U}^{(l\delta_\mu)}) \twoheadrightarrow \mathfrak{S}_l$ where $\mathfrak{S}_l$ is the symmetric group on $l$ letters.
\vspace{.1in}
Finally, if $\mathbb{P}$ belongs to $\mathcal{P}^\alpha$ then there exists a unique HN type $(\alpha_1, \ldots, \alpha_r)$ such that $supp(\mathbb{P}) \subset
\overline{\underline{HN}^{-1}(\alpha_1, \ldots , \alpha_r)}$ and $supp(\mathbb{P}) \cap \underline{HN}^{-1}(\alpha_1, \ldots, \alpha_r)$ is nonempty; we write $HN(\mathbb{P})=(\alpha_1, \ldots, \alpha_r)$ and call $(\alpha_1, \ldots, \alpha_r)$ the \textit{generic HN type} of $\mathbb{P}$. The set of all generically semistable $\mathbb{P}$ of weight $\alpha$ will be denoted $\mathcal{P}^{(\alpha)}$.
\vspace{.15in}
\paragraph{\textbf{3.5.2.}} Any representation $\sigma$ of $\mathfrak{S}_l$ gives rise to a local system on $S^l \overline{X} \setminus \underline{\Delta}$ and hence to a local system $\mathfrak{L}_{\sigma}$ on $\underline{U}^{(l\delta_{\mu})}$. The following proposition is proved in the same fashion as Proposition~9.7 in \cite{S1}.
\begin{prop}\label{P:descript-perv} For any $\mu \in \mathbb{Q} \cup \{\infty\}$ and any $l \geq 1$ we have
$$\mathcal{P}^{(l\delta_{\mu})}=\{\mathbf{IC}(\underline{U}^{(l\delta_{\mu})},\mathfrak{L}_{\sigma})\;|\;\sigma \in \mathrm{Irrep}\;\mathfrak{S}_l\}.$$
Furthermore, for each $\alpha$ there is a canonical bijection
$$\theta_{\alpha}:\;\mathcal{P}^\alpha \stackrel{\sim}{\to} \bigsqcup_{\underset{\mu(\alpha_1)< \cdots < \mu(\alpha_r)}{\alpha_1 + \cdots + \alpha_r=\alpha}} \mathcal{P}^{(\alpha_1)} \times \cdots \times \mathcal{P}^{(\alpha_r)}$$
such that if $\theta_{\alpha}(\mathbb{P})=(\mathbb{P}^1, \ldots, \mathbb{P}^r)$ then $\mathrm{Ind}(\mathbb{P}^1 \boxtimes \cdots \boxtimes \mathbb{P}^r)\simeq \mathbb{P} \oplus \mathbb{P}'$ with $supp(\mathbb{P}') \subset supp(\mathbb{P})$ and $\text{dim\;}supp(\mathbb{P}') < \text{dim\;}supp(\mathbb{P})$.
Finally, every $\mathbb{P} \in \mathcal{P}$ is self-dual, i.e.\ $D(\mathbb{P})=\mathbb{P}$ where $D$ is the Verdier duality functor.
\end{prop}
\vspace{.2in}
\section{Purity}
\vspace{.1in}
\paragraph{\textbf{4.1.}} Recall that $\boldsymbol{{k}}=\mathbb{F}_q$, $\mathbf{k}=\overline{\mathbb{F}_q}$ and that $\overline{X}$ is equal to ${X} \times_{\small{Spec(\boldsymbol{{k}})}}Spec(\mathbf{k})$ for some smooth elliptic curve ${X}$ defined over $\boldsymbol{{k}}$. The functor ${}^0\texttt{Quot}_{\mathcal{L}_n^{\alpha}}^{\alpha}$ from the category of smooth $\mathbb{F}_q$-schemes to sets defined in the same manner as ${\texttt{Quot}}_{\mathcal{L}_n^{\alpha}}^{\alpha}$ but replacing $\overline{X}$ by ${X}$, is represented by a projective $\mathbb{F}_q$-scheme ${}^0Quot_{\mathcal{L}_n^{\alpha}}^{\alpha}$ and there is a canonical identification $Quot_{\mathcal{L}_n^{\alpha}}^{\alpha}\simeq {}^0Quot_{\mathcal{L}_n^{\alpha}}^{\alpha}\times_{\small{Spec(\boldsymbol{{k}})}}Spec(\mathbf{k})$. A similar statement holds for the open subschemes ${}^0Q_n^{\alpha} \subset {}^0Quot_{\mathcal{L}_n^\alpha}^{\alpha}$ and $Q_n^\alpha \subset Quot_{\mathcal{L}_{n}^\alpha}^{\alpha}$. In particular, there is a natural (geometric) Frobenius automorphism
$\tilde{F}: Q_n^\alpha \to Q_n^\alpha$. As a consequence, there are quotient stacks ${}^0\underline{Coh}_n^{\alpha}$ defined over $\boldsymbol{{k}}$ such that $\underline{Coh}_n^{\alpha} \simeq {}^0\underline{Coh}_n^{\alpha} \times_{\small{Spec(\boldsymbol{{k}})}}Spec(\mathbf{k})$. These stacks form an open cover of a stack ${}^0\underline{Coh}^{\alpha}$ defined over $\boldsymbol{{k}}$ and we have $\underline{Coh}^{\alpha} \simeq {}^0\underline{Coh}^{\alpha} \times_{\small{Spec(\boldsymbol{{k}})}}Spec(\mathbf{k})$. This equips $\underline{Coh}^\alpha$ with a geometric Frobenius automorphism still denoted $\tilde{F}$.
\vspace{.1in}
We refer to \cite{Del} and \cite{FW} for matters concerning the notions of Weil sheaves, of purity and of pointwise purity for complexes of constructible $\overline{\mathbb{Q}_l}$-sheaves on a scheme. We will say that a complex $\mathbb{P} \in D^b(\underline{Coh}^{\alpha})^{ss}$ equipped with a mixed structure $h: \mathbb{P} \stackrel{\sim}{\to} \tilde{F}^* \mathbb{P}$ is pure (resp. pointwise pure) of weight $l$ if for any $n \in \mathbb{Z}$ the corresponding $G_n^{\alpha}$-equivariant complex
$\mathbb{P}_n$ over $Q_n^{\alpha}$ is pure (resp. pointwise pure) of weight $l$.
Note that for any coherent sheaf $\mathcal{F} \in Coh({X})$ the action of the Frobenius on the stalks of $\mathbb{P}_n$ over $\mathbf{O}_{\mathcal{F},n}$ for all $n$ gives rise to a similar Frobenius action on the stalk
$\mathbb{P}_{|\mathbf{O}_{\mathcal{F}}}$ (see Section~3.1.).
A mixed complex $\mathbb{P} \in D^b(\underline{Coh}^{\alpha})^{ss}$ is pointwise pure of weight zero if and only if for any $\mathcal{F} \in Coh({X})$ the eigenvalues of $\tilde{F}^*$ on the stalks $H^i(\mathbb{P})_{|\mathbf{O}_{\mathcal{F}}}$ are all of
complex norm $q^{i/2}$.
We let $\overline{\mathbb{Q}_l}(1/2)$ stand for the square root of the Tate sheaf, whose Frobenius map has eigenvalue $q^{-1/2}$. We write $\mathbb{P}(n/2)$ for $\mathbb{P} \otimes (\overline{\mathbb{Q}_l}(1/2))^{\otimes n}$. From now on the definitions of the functors $\text{Ind}$ and $\text{Res}$ are understood to include the Tate twist as well, i.e.\ we replace everywhere the shifts $[n]$ appearing in Section~3 by $[n](n/2)$.
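As an elementary check of this normalization (a direct computation from the definitions), consider the complex $\mathbbm{1}_{\alpha}$, now normalized as $\overline{\mathbb{Q}_l}[d_n](d_n/2)$ over $Q_n^{\alpha}$ with $d_n=\mathrm{dim}\,Q_n^{\alpha}$. Its only nonvanishing stalk cohomology at a point $z \in Q_n^{\alpha}$ is
\begin{equation*}
H^{-d_n}(\mathbbm{1}_{\alpha})_{|z}=\overline{\mathbb{Q}_l}(d_n/2),
\end{equation*}
on which the Frobenius acts with eigenvalue $q^{-d_n/2}$, of complex norm $q^{i/2}$ for $i=-d_n$. Hence $\mathbbm{1}_{\alpha}$ is pointwise pure of weight zero.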
\vspace{.1in}
Our aim in this section is to establish the following purity statement.
\begin{theo}\label{P:purity} For any $\alpha \in \mathbf{Z}^+$ and any $\mathbb{P} \in \mathcal{P}^\alpha$ there exists a (unique up to a scalar) isomorphism $h_{\mathbb{P}}: \mathbb{P} \stackrel{\sim}{\to} \tilde{F}^* \mathbb{P}$, with respect to which $\mathbb{P}$ is pointwise pure of weight zero.
\end{theo}
\noindent
\textit{Proof.} By definition, the complexes $\mathbbm{1}_{\alpha}$ are equipped with a mixed structure which is pointwise pure of weight zero.
From the definition of the induction product and \cite{Del}, Prop. 6.2.6, $\mathrm{Ind}^{\alpha_1, \ldots, \alpha_r}(\mathbbm{1}_{\alpha_1} \boxtimes \cdots \boxtimes \mathbbm{1}_{\alpha_r})$ is also (globally) pure of weight zero for any $\alpha_1, \ldots, \alpha_r$.
The key to Theorem~\ref{P:purity} is to show the next statement.
\begin{prop}\label{L:purityun} For any $\alpha_1, \ldots, \alpha_r$ the complex $\mathrm{Ind}^{\alpha_1, \ldots, \alpha_r}(\mathbbm{1}_{\alpha_1} \boxtimes \cdots \boxtimes \mathbbm{1}_{\alpha_r})$ is pointwise pure of weight zero.
\end{prop}
\noindent
\textit{Proof.} To ease the notation let us put $\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}=\text{Ind}^{\alpha_1, \ldots, \alpha_r}(\mathbbm{1}_{\alpha_1} \boxtimes \cdots \boxtimes \mathbbm{1}_{\alpha_r})$.
Set $\alpha=\sum \alpha_i$ and let $\mathcal{F} \in Coh(\overline{X})$ be a sheaf of class $\alpha$. We will work over
each open substack $Q_m^{\alpha}/G_m^{\alpha}$. There exists $e$ such that for any $z=(\phi: \mathcal{L}_m^{\alpha} \twoheadrightarrow \mathcal{F}) \in \mathbf{O}_{\mathcal{F},m}$ we have $\tilde{F}^e(z)=z$. We need to show the following property~:
\vspace{.05in}
\noindent
(a) \textit{For any $z \in \mathbf{O}_{\mathcal{F},m}$, the eigenvalues of} $(\tilde{F}^*)^e$ \textit{on the stalk} $H^i(\mathbbm{1}_{\alpha_1,\ldots,\alpha_r})_{|z}$ \textit{are all of complex norm} $q^{ei/2}$.
\vspace{.05in}
Unraveling the definitions, one has that the stalk at $z$ of the complex $\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$ is equal to the stalk at $z$ of $p_{!}(\overline{\mathbb{Q}_l}_{E''})$ where~: $E''$ is the variety of pairs $(\phi, V_r \subset \cdots \subset V_1 =\mathbf{k}^{\langle \mathcal{O}(m), \alpha\rangle})$ such that $(\phi: \mathcal{L}_m^{\alpha} \twoheadrightarrow \mathcal{F}) \in \mathbf{O}_{\mathcal{F},m}$, $V_i/V_{i+1}$ is of dimension $\langle \mathcal{O}(m), \alpha_i \rangle$, and $\phi(V_i \otimes \mathcal{O}(m))/\phi(V_{i+1} \otimes \mathcal{O}(m))$ is a coherent sheaf of class $\alpha_i$; $p$ is the projection on the first factor. Hence the fiber of $p$ at $z$ is identified with the projective (hyperquot) scheme $Quot_{\mathcal{F}}^{\alpha_2, \ldots, \alpha_r}$ parametrizing successive quotients $\psi: \mathcal{F}= \mathcal{G}_1 \twoheadrightarrow \cdots \twoheadrightarrow \mathcal{G}_{r}$ with $\mathcal{G}_i$ of class $\alpha_i + \cdots + \alpha_r$ (see \cite{S1}, Lemma~4.2). In particular, the stalk of $H^i( p_{!}(\overline{\mathbb{Q}_l}_{E''}))$ at $z$ is isomorphic to $H_c^i(Quot_{\mathcal{F}}^{\alpha_2, \ldots, \alpha_r}, \overline{\mathbb{Q}_l})$ and statement (a) for a sheaf $\mathcal{F}$ is equivalent to
\vspace{.05in}
\noindent
(a') \textit{The hyperquot scheme $Quot_{\mathcal{F}}^{\alpha_2, \ldots, \alpha_r}$ is cohomologically pure, i.e.\ the eigenvalues of $(\tilde{F}^*)^e$ on $H^i_c({Quot}_{\mathcal{F}}^{\alpha_2, \ldots, \alpha_r})$ are all of complex norm} $q^{ei/2}$.
\vspace{.05in}
Note that by base change it is enough to prove this for any $\mathcal{F}$ for which $e=1$ and we will assume this from now on. We will prove (a) (or (a')) in three steps. First, we reduce (a) for an arbitrary sheaf $\mathcal{F}$ to (a) for its semistable subquotients, then to (a) for stable sheaves, and finally we prove (a) for stable sheaves.
In the course of the proof we will often need the following lemma~:
\begin{lem}\label{L:s151} Let $\alpha_1, \ldots, \alpha_r, \beta, \sigma \in \mathbf{Z}^+$ be such that $\sum \alpha_i=\beta+\sigma$. Then
$$\text{Res}^{\sigma,\beta}(\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})=\bigoplus_{\underline{\beta}, \underline{\sigma}} \mathbbm{1}_{\sigma_1, \ldots, \sigma_r} \boxtimes \mathbbm{1}_{\beta_1, \ldots, \beta_r}[u(\underline{\sigma},\underline{\beta})](u(\underline{\sigma},\underline{\beta})/2)$$
where $(\underline{\beta}=(\beta_1, \ldots, \beta_r), \underline{\sigma}=(\sigma_1, \ldots, \sigma_r))$ run through the set of all tuples satisfying $\alpha_i=\sigma_i+\beta_i, \sum \beta_i=\beta, \sum\sigma_i=\sigma$, and where $u(\underline{\sigma},\underline{\beta})$ are certain integers.\end{lem}
\noindent
\textit{Proof}. Identical to \cite{S1}, Lemma~4.1 (see also \cite{S1}, Corollary~4.3).$\hfill \checkmark$
\vspace{.1in}
\textit{Step 1.} Let $\mathcal{F}$ be an arbitrary coherent sheaf on $\overline{X}$ of class $\alpha$ and let us write $\mathcal{F}=\mathcal{H}_1 \oplus \cdots \oplus \mathcal{H}_s$ where $\mathcal{H}_i$ are semistable sheaves such that $\mu(\mathcal{H}_1)\leq \mu(\mathcal{H}_2) \leq \cdots \leq \mu(\mathcal{H}_s)$. We may and will further assume that for any $i$, $\epsilon_{\infty,\mu(\mathcal{H}_i)}(\mathcal{H}_i)$ is a torsion sheaf supported at a single point $x_i \in \overline{X}$ and that $x_i \neq x_j$ if $\mu(\mathcal{H}_i) = \mu(\mathcal{H}_j)$ but $i \neq j$. Now let $\mathbb{P} \in D^b(\underline{Coh}^\alpha)^{ss}$ be any complex and let us consider the stalk of $\mathrm{Res}^{\overline{\mathcal{H}_1}, \ldots, \overline{\mathcal{H}_s}}(\mathbb{P})$ over a point $\underline{x}=(x_1, \ldots, x_s)$ with $x_i \in \mathbf{O}_{\mathcal{H}_i,m} $. Setting $\overline{\mathcal{H}_i}=\beta_i$ and using the notation in the restriction diagram
\begin{equation}\label{E:Proppur1}
\xymatrix{
Q_m^{\alpha} & F \ar[l]_-{i} \ar[r]^-{\kappa} & Q_m^{\beta_1} \times \cdots
\times Q_m^{\beta_s}}
\end{equation}
we have $\text{Res}^{\beta_1, \ldots, \beta_s}(\mathbb{P})_{|\underline{x}}=j_{\underline{x}}^*\kappa_{!}i^*(\mathbb{P})=\kappa_{!}(j')^*(\mathbb{P})$ where $j_{\underline{x}}: \{\underline{x}\} \to Q_m^{\beta_1} \times \cdots \times Q_m^{\beta_s}$ and
$j': \kappa^{-1}(\underline{x}) \to Q_m^{\alpha}$ are the embeddings. By construction, $\text{Ext}(\mathcal{H}_i, \mathcal{H}_j)=0$ if $i<j$ and hence $\kappa^{-1}(\underline{x}) \subset \mathbf{O}_{\mathcal{F},m}$. As $\mathbb{P}_m$ is $G_m^{\alpha}$-equivariant, its restriction to $\mathbf{O}_{\mathcal{F},m}$ is constant by Lemma~\ref{L:localsyst}. As $\kappa$ is a vector bundle, we deduce from this that $\text{Res}^{\beta_1, \ldots, \beta_s}(\mathbb{P})_{|\underline{x}}\simeq\mathbb{P}_{|z}[-2rk(\kappa)](-rk(\kappa))$ for any $z \in \mathbf{O}_{\mathcal{F},m}$. Hence $\mathbb{P}$ is pure at the point $z$ if and only if $\text{Res}^{\beta_1, \ldots, \beta_s}(\mathbb{P})$ is pure at the point $\underline{x}$. In particular, taking $\mathbb{P}=\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$ we obtain that $\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$ is pure at any $z \in \mathbf{O}_{\mathcal{F},m}$ if and only if
$\mathrm{Res}^{\beta_1, \ldots, \beta_s}(\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})$ is pure at $\underline{x}=(x_1, \ldots , x_s)$. But by Lemma~\ref{L:s151} the stalk at $\underline{x}$ of $\text{Res}^{\beta_1, \ldots, \beta_s}(\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})$ is a sum of complexes which are each, up to shift, an external product of stalks of the form $(\mathbbm{1}_{\delta_1, \ldots, \delta_r})_{|x_i}$. Therefore (a) for $\mathcal{F}$ is a consequence of (a) for each of the semistable sheaves $\mathcal{H}_i$.
\vspace{.1in}
\textit{Step 2.} Let $\mathcal{F}$ be a semistable sheaf of slope $\mu$ and assume that $\epsilon_{\infty,\mu}(\mathcal{F})$ is a torsion sheaf supported at a single point $x \in \overline{X}$. Thus $\mathcal{F}$ is an iterated extension of a stable sheaf $\mathcal{H}$. Set $\delta_{\mu}=[\mathcal{H}]$ so that $[\mathcal{F}]=l\delta_{\mu}$ for some $l \in \mathbb{N}$. Assume that (a) holds for the stable sheaf $\mathcal{H}$. Then arguing as in Step 1 we see that $\text{Res}^{\delta_{\mu}, \ldots, \delta_{\mu}}(\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})$ is pure over any point $\underline{x}=(x_1, \ldots, x_l)$ with $x_i \in \mathbf{O}_{\mathcal{H},m}$. In order to deduce from this that $\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$ is itself pure over any point $z \in \mathbf{O}_{\mathcal{F},m}$, we consider for any $n >0$ the closed subscheme $Q_{m,x}^{n\delta_{\mu}} \subset Q_{m}^{(n\delta_{\mu})}$ parametrizing quotients
$(\psi:\mathcal{L}_m^{n\delta_{\mu}} \twoheadrightarrow \mathcal{G})$ for which $\mathcal{G}$ is semistable and $\epsilon_{\infty,\mu}(\mathcal{G})$ is supported at $x$. The quotient stack $Q_{m,x}^{n\delta_{\mu}}/G^{n\delta_{\mu}}_m$ is isomorphic to the quotient stack $\mathcal{N}_n/G_n$ where $\mathcal{N}_n \subset \mathfrak{gl}(n,\mathbf{k})$ is the nilpotent cone and $G_n=GL(n,\mathbf{k})$ acts by conjugation. The restriction functor $\text{Res}^{\delta_{\mu}, \ldots, \delta_{\mu}}$ induces a functor
$T: D^b_{G_l}(\mathcal{N}_l) \to D^b_{G_1 \times \cdots \times G_1}(\mathcal{N}_1 \times \cdots \times \mathcal{N}_1)$.
\begin{lem} Let $\mathbb{P} \in D^b_{G_l}(\mathcal{N}_l)$ be a semisimple complex equipped with a mixed structure. Assume that
$T(\mathbb{P})$ is pointwise pure of weight zero. Then $\mathbb{P}$ is also pointwise pure of weight zero.\end{lem}
\noindent
\textit{Proof.} Since any direct summand of a pointwise pure complex is again pointwise pure, it is enough to consider the case of a simple complex $\mathbb{P}=\mathbf{IC}(\mathcal{O}_{\lambda})[c](d/2)$, where $\mathcal{O}_{\lambda}$ is a nilpotent orbit and $c, d$ are integers. By \cite{KazhLusztig} Theorem~5.5 (see also \cite{LusGreen}), $\textbf{IC}(\mathcal{O}_{\lambda})$ is pointwise pure of weight zero. Moreover, we have $T(\mathbf{IC}(\mathcal{O}_{\lambda}))\simeq (\overline{\mathbb{Q}_l} \boxtimes \cdots \boxtimes \overline{\mathbb{Q}_l})^{\oplus d_{\lambda}}$ where $d_{\lambda}$ is the multiplicity of the irreducible $\mathfrak{gl}(l)$-module $V_{\lambda}$ of highest weight $\lambda$ in the tensor product $V_{(1)} \otimes \cdots \otimes V_{(1)}$. Hence $T(\mathbb{P})$ is pointwise pure if and only if $c=d$. But then $\mathbb{P}$ is itself pointwise pure. $\hfill \checkmark$
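To illustrate the multiplicities $d_{\lambda}$, consider the smallest nontrivial case $l=2$ (a standard instance of Schur--Weyl duality, included here as an example). One has
\begin{equation*}
V_{(1)} \otimes V_{(1)} \simeq V_{(2)} \oplus V_{(1,1)},
\end{equation*}
so that $d_{(2)}=d_{(1,1)}=1$. Correspondingly, $\mathcal{N}_2$ consists of exactly two orbits, the regular orbit $\mathcal{O}_{(2)}$ and $\mathcal{O}_{(1,1)}=\{0\}$, and $T$ sends each of $\mathbf{IC}(\mathcal{O}_{(2)})$ and $\mathbf{IC}(\mathcal{O}_{(1,1)})$ to a single copy of $\overline{\mathbb{Q}_l} \boxtimes \overline{\mathbb{Q}_l}$.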
\vspace{.05in}
Now, the restriction of $\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$ to $Q_{m,x}^{l\delta_{\mu}}$ is a semisimple complex and since we have assumed that (a) holds for stable sheaves, it satisfies the conditions of the above Lemma. Hence it is pointwise pure as desired.
\vspace{.1in}
\textit{Step 3.} Finally, we deal with the case of a stable sheaf $\mathcal{F}$. If $\mathcal{F}$ is a stable torsion sheaf then $\mathcal{F}=\mathcal{O}_x$ for some $x \in \overline{X}$ and $Quot_{\mathcal{F}}^{\alpha_2, \ldots, \alpha_r}$ is either empty or reduced to a point so that (a') clearly holds.
Let us now suppose that $\mathcal{F}$ is a line bundle. In that case, the stalk
$(\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})_{|z}$ is nonzero only if $\alpha_2, \ldots, \alpha_{r}$ are torsion classes. For any line bundle $\mathcal{L}$ and any torsion class $\beta$, the scheme $Quot_{\mathcal{L}}^{\beta}$ is isomorphic to the symmetric product $S^{deg(\beta)}\overline{X}$. It follows that $Quot_{\mathcal{F}}^{\alpha_2, \ldots, \alpha_r}$ is an iterated fibre bundle with fibres $S^{deg(\alpha_i)}\overline{X}$. Thus it is smooth projective and by \cite{DeligneWeilI}, Theorem I.6, its cohomology is pure and (a') holds. We now argue by induction on the rank of $\mathcal{F}$. Let $\mathcal{F}$ be a stable sheaf of rank $r>1$ and assume that the stalk of any complex $\mathbbm{1}_{\beta_1, \ldots, \beta_s}$ is pure over a point $(\phi: \mathcal{L}_m^{\alpha} \twoheadrightarrow \mathcal{G})$ whenever $\mathcal{G}$ is a stable sheaf of rank less than $r$. By Atiyah's construction (see Section 1.1 d)) $\mathcal{F}$ is the universal extension of two stable sheaves $\mathcal{G}, \mathcal{H}$ of smaller rank satisfying $\text{dim}\;\text{Ext}(\mathcal{G},\mathcal{H})=1$. Consider the complex $\mathbb{R}=\text{Res}^{\overline{\mathcal{G}},\overline{\mathcal{H}}}(\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})$. By Lemma~\ref{L:s151} and the induction hypothesis the stalk $\mathbb{R}_{|(x,y)}$ is pure of weight zero when $x \in \mathbf{O}_{\mathcal{G},m}$ and $y \in \mathbf{O}_{\mathcal{H},m}$. By definition, $\mathbb{R}=\kappa_{!}i^*(\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})$ where
\begin{equation}
\xymatrix{
Q_m^{\alpha} & F \ar[l]_-{i} \ar[r]^-{\kappa} & Q_m^{\overline{\mathcal{G}}}
\times Q_m^{\overline{\mathcal{H}}}}
\end{equation}
is the restriction diagram. Since $\mathcal{F}$ is the only nontrivial extension of $\mathcal{G}$ by $\mathcal{H}$ we have $\kappa^{-1}((x,y)) \subset \mathbf{O}_{\mathcal{F},m} \cup \mathbf{O}_{\mathcal{G} \oplus \mathcal{H},m}$. Moreover $V:=\kappa^{-1}(x,y)$ is a vector space of dimension $d:=\langle \overline{\mathcal{L}^{{\mathcal{G}}}_m}-\overline{\mathcal{G}},\overline{\mathcal{H}}\rangle$ and $W:=\kappa^{-1}(x,y) \cap \mathbf{O}_{\mathcal{G} \oplus \mathcal{H},m}$ is a vector subspace of $\kappa^{-1}(x,y)$ of dimension $\langle \overline{\mathcal{L}^{{\mathcal{G}}}_m},\overline{\mathcal{H}}\rangle=d-1$. Thus from the decomposition $V=W \sqcup (V \setminus W)$ we deduce a long exact sequence in cohomology
with compact support
$$
\xymatrix{
\cdots \ar[r] &H_c^{l-1}(j_{!}j^*\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}) \ar[r]^-{x_{l-1}} & H^l_c(\iota_{!}\iota^*\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}) \ar[r]^-{y_l} & H^l(\mathbb{R})_{|(x,y)} \ar[r] & \cdots}$$
where $\iota: V\setminus W \to V$ and $j: W \to V$ are the embeddings. Put $\mathbb{P}=\iota^* \mathbbm{1}_{\alpha_1, \ldots, \alpha_r}, \mathbb{P}'=j^*\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$. By Lemma~\ref{L:localsyst}, $\mathbb{P}$ and $\mathbb{P}'$ are constant so that
$$H^{l-1}_c(j_{!}j^*\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})=\bigoplus_h H^{l-1-h}_c(W) \otimes H^h(\mathbb{P}')\simeq H^{l+1-2d}(\mathbb{P}')(2d-2)$$
and
\begin{equation*}
\begin{split}
H^l_c(\iota_{!}\iota^*\mathbbm{1}_{\alpha_1, \ldots, \alpha_r})&=\bigoplus_h H^{l-h}_c(V\setminus W) \otimes H^h(\mathbb{P})\\
&\simeq H^{l-2d}(\mathbb{P})(2d) \oplus H^{l-2d+1}(\mathbb{P})(2d-2).
\end{split}
\end{equation*}
In addition since we have assumed that (a) holds for $\mathcal{G}, \mathcal{H}$ it follows from Step 1. that (a) holds also for $\mathcal{G} \oplus \mathcal{H}$ and hence $\mathbb{P}'$ is pure. Now, since
$\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$ is globally pure of weight zero, the Frobenius weights in $H^{l-2d+1}(\mathbb{P})(2d-2)$ are all at most $l-1$. On the other hand $\mathbb{R}_{|(x,y)}$ is pure, hence the weights in $H^l(\mathbb{R})_{|(x,y)}$ are all equal to $l$ and therefore $H^{l-2d+1}(\mathbb{P})(2d-2) \subset \text{Ker}\;y_l=\text{Im}\; x_{l-1}$. But all the weights in
$H^{l+1-2d}(\mathbb{P}')(2d-2)$ are equal to $l-1$ as $\mathbb{P}'$ is pure and hence the weights
in $H^{l-2d+1}(\mathbb{P})(2d-2)$ are all equal to $l-1$, so that $\mathbb{P}$ is itself pure. By definition this means that $\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}$ is pure over $\mathbf{O}_{\mathcal{F},m}$, and this closes the induction argument. Thus Step 3. is finished, and Proposition~\ref{L:purityun} is proved.$\hfill \checkmark$
\vspace{.2in}
\noindent
\textit{Proof of Theorem~\ref{P:purity}.}
Let us first assume that $\mathbb{P} \in \mathcal{P}^{(\alpha)}$. By Proposition~\ref{P:descript-perv}, $\mathbb{P}_m$ is of the form $\mathbf{IC}(Y_m^{(\alpha)}, \mathfrak{L}_\sigma)$ for some irreducible $\mathfrak{S}_l$-module $\sigma$. Since $\mathfrak{L}_{\sigma}$ is the extension to $\mathbf{k}$ of a local system ${}^0\mathfrak{L}_{\sigma}$ on the $\boldsymbol{{k}}$-scheme ${}^0Y^{(\alpha)}_m$, it follows from \cite{FW} Section III, Cor. 9.2 that the unique isomorphism
$h_{\mathbb{P}_m}: \mathbb{P}_m \stackrel{\sim}{\to} \tilde{F}^*\mathbb{P}_m$ whose restriction to the stalk of a $\boldsymbol{{k}}$-point
of ${}^0Y^{(\alpha)}_m$ is the identity endows $\mathbb{P}_m$ with a mixed structure which
is globally pure of weight zero. In the case of a general $\mathbb{P} \in \mathcal{P}^\alpha$ we may, by Proposition~\ref{P:descript-perv}, identify
$\mathbb{P}$ with an isotypical component of a product $\mathrm{Ind}(\mathbb{P}^1 \boxtimes \cdots \boxtimes \mathbb{P}^r)$ for certain simple perverse sheaves $\mathbb{P}^i \in \mathcal{P}^{(\alpha_i)}$. By the above, each $\mathbb{P}^i$ is equipped with a mixed structure, globally pure of weight zero. From the definition of the induction product and \cite{Del}, Prop. 6.2.6 it follows that $\mathrm{Ind}(\mathbb{P}^1 \boxtimes \cdots \boxtimes \mathbb{P}^r)$ is also pure of weight zero. The same holds for its isotypical components, and hence for $\mathbb{P}$.
\vspace{.05in}
We now turn to the pointwise purity property. By construction, each $\mathbb{P} \in \mathcal{P}$ appears in some induction product $\mathrm{Ind}^{\alpha_1, \ldots, \alpha_r}(\mathbbm{1}_{\alpha_1} \boxtimes
\cdots \boxtimes \mathbbm{1}_{\alpha_r})$. By Proposition~\ref{L:purityun} above, the complex $\mathbb{R}=\mathrm{Ind}^{\alpha_1, \ldots, \alpha_r}(\mathbbm{1}_{\alpha_1}\boxtimes \cdots \boxtimes \mathbbm{1}_{\alpha_r})$ is pointwise pure of weight zero. On the other hand, it decomposes as $\mathbb{R}=
\bigoplus_{\mathbb{P}} \mathbb{V}_{\mathbb{P}} \otimes \mathbb{P}$, where $\mathbb{V}_{\mathbb{P}}=
\bigoplus_j\mathrm{Hom}(\mathbb{P},\;^p \hspace{-.03in}H^j(\mathbb{R}))[-j]$ and $\;^p \hspace{-.03in}H^j$ denotes perverse cohomology. Restricting to the stalk over a point $x \in Q_m^\alpha$ for some $m \in \mathbb{Z}$, we deduce that for any $i$,
$H^i(\mathbb{R}_m)_{|x}=\bigoplus_{h+l=i}\bigoplus_{\mathbb{P}} H^h(\mathbb{V}_{\mathbb{P}}) \otimes H^l(\mathbb{P}_m)_{|x}$ (as $\tilde{F}$-modules).
Since $\mathbb{P}$ and $\mathbb{R}$ are both globally pure, the complex $\mathbb{V}_{\mathbb{P}}$, viewed as a sheaf over a point, is pure of weight zero. The purity of $(\mathbb{P}_m)_{|x}$ is now a consequence of the purity of $\mathbb{V}_{\mathbb{P}}$ and of $(\mathbb{R}_m)_{|x}$.
The Theorem is proved. $\hfill \checkmark$
\vspace{.2in}
\section{The trace map}
\vspace{.1in}
\paragraph{\textbf{5.1.}} We may at this point use Theorem~\ref{P:purity} to construct a collection of (topological) bialgebras $\widehat{\mathfrak{U}}^{+}_{{X},e}$ over $\mathbb{C}$ indexed by the set of positive integers $e \in \mathbb{N}$. Let us fix an isomorphism $ \overline{\mathbb{Q}_l} \simeq \mathbb{C}$, and for any complex of finite-dimensional $\overline{\mathbb{Q}_l}$-vector spaces $V^\bullet$ equipped with an action of $\tilde{F}$ let us set $tr_e(V)=\sum_i (-1)^itr_{H^i(V)}(\tilde{F}^e) \in \mathbb{C}$.
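As an elementary illustration of this definition (assuming, as is standard, that $\tilde{F}$ acts as the geometric Frobenius): the compactly supported cohomology of the affine line is concentrated in degree two, where Frobenius acts with eigenvalue $q$, so that
$$tr_e\big(H_c^{\bullet}(\mathbb{A}^1_{\overline{\mathbb{F}_q}},\overline{\mathbb{Q}_l})\big)=(-1)^2\, tr_{H^2_c}(\tilde{F}^e)=q^e.$$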
We consider the $\mathbb{C}$-vector space $\mathfrak{U}^{+}_{{X},e}:= \bigoplus_{\mathbb{P} \in \mathcal{P}} \mathbb{C} \mathfrak{b}_{\mathbb{P}}$ with a basis $\{\mathfrak{b}_{\mathbb{P}}\}_{\mathbb{P}}$ indexed by $\mathcal{P}$. We say that an element
$\mathfrak{b}_{\mathbb{P}}$ is of $h$-degree $n$ if
$\mathbb{P}=(\mathbb{P}_m)_{m \in \mathbb{Z}}$ with $\mathbb{P}_m=0$ for $m > -n$ and
$\mathbb{P}_{-n} \neq 0$. Denote by
$\widehat{\mathfrak{U}}_{{X},e}^{+}$
the completion of
$\mathfrak{U}^{+}_{{X},e}$ with respect to the $h$-adic topology.
There is a natural multiplication map $\mathfrak{U}^{+}_{{X},e}
\otimes\mathfrak{U}^{+}_{{X},e} \to
\widehat{\mathfrak{U}}^{+}_{{X},e}$ defined by
$\mathfrak{b}_{\mathbb{P}'} \mathfrak{b}_{\mathbb{P}''}=\sum_{\mathbb{P} \in \mathcal{P}} tr_e(\mathbb{V}_{\mathbb{P}}) \mathfrak{b}_{\mathbb{P}}$ where $\mathrm{Ind}(\mathbb{P}' \boxtimes \mathbb{P}'')=\bigoplus_{\mathbb{P}} \mathbb{V}_{\mathbb{P}} \otimes \mathbb{P}$. Here, the multiplicity spaces are equipped with a mixed structure coming from that on $\mathbb{P}, \mathbb{P}', \mathbb{P}''$ and
on $\mathrm{Ind}(\mathbb{P}' \boxtimes \mathbb{P}'')$.
In a similar way, there is a comultiplication map
$\mathfrak{U}^{+}_{{X},e} \to
\widehat{\mathfrak{U}}^{+}_{{X},e} \hat{\otimes}
\widehat{\mathfrak{U}}^{+}_{{X},e}$ defined by $\Delta(\mathfrak{b}_{\mathbb{P}})=\sum_{\mathbb{P}', \mathbb{P}''} tr_e(\mathbb{V}^{\mathbb{P}',\mathbb{P}''}) \mathfrak{b}_{\mathbb{P}'} \otimes \mathfrak{b}_{\mathbb{P}''}$, where
$\mathrm{Res}(\mathbb{P}) =\bigoplus_{\mathbb{P}', \mathbb{P}''} \mathbb{V}^{\mathbb{P}',\mathbb{P}''}\otimes( \mathbb{P}' \boxtimes \mathbb{P}'')$. Moreover, these maps are continuous (see \cite{S1}, Section~10.1.) and we may extend them
to
$$m_e: \widehat{\mathfrak{U}}^{+}_{{X},e}
\otimes\widehat{\mathfrak{U}}^{+}_{{X},e} \to
\widehat{\mathfrak{U}}^{+}_{{X},e},$$
$$\Delta_{e}: \widehat{\mathfrak{U}}^{+}_{{X},e} \to
\widehat{\mathfrak{U}}^{+}_{{X},e} \hat{\otimes}
\widehat{\mathfrak{U}}^{+}_{{X},e}.$$
The associativity and coassociativity of $m_e$ and $\Delta_e$ follow from the analogous properties of the functors $\mathrm{Ind}$ and $\mathrm{Res}$ (see \cite{S1}, Section~3).
\vspace{.1in}
\paragraph{\textbf{Remark.}} The choice of the identification $\overline{\mathbb{Q}_l} \simeq \mathbb{C}$ is not essential~: as all complexes considered here are pure, the eigenvalues of Frobenius are all algebraic integers and in addition the set of eigenvalues is invariant under $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$. In particular, the algebra $\widehat{\mathfrak{U}}_{{X},e}^{+}$ is independent of that choice.
\vspace{.1in}
For any fixed elliptic curve ${X}$ we define the completions $\widehat{\mathbf{H}}_{{X}}$ and $\widehat{\mathbf{U}}^+_{{X}}$ of $\mathbf{H}_{{X}}$ and $\mathbf{U}^+_{{X}}$ respectively in the same way as was done for $\boldsymbol{\mathcal{E}}^+_{\mathbf{R}}$ in Section~2.1 (these completions are also used in \cite{BS}, Section~2.).
Recall that for any coherent sheaf $\mathcal{F} \in Coh(\overline{{X}})$ of class $\alpha$ there is a substack $\mathbf{O}_{\mathcal{F}} \subset \underline{Coh}^{\alpha}$ parametrizing sheaves isomorphic to $\mathcal{F}$, and that to any complex $\mathbb{P} \in D^b(\underline{Coh}^{\alpha})^{ss}$ is associated its stalk $\mathbb{P}_{|\mathbf{O}_{\mathcal{F}}}$, which is a complex of $\overline{\mathbb{Q}_l}$-vector spaces (see Section~3.1). The same holds for any sheaf $\mathcal{F} \in Coh({X})$ if we replace $\underline{Coh}^{\alpha}$ by ${}^0\underline{Coh}^{\alpha}$. Moreover, if $\mathcal{F} \in Coh({X})$ then $\mathbf{O}_{\mathcal{F}} \subset \underline{Coh}^{\alpha}$ is (pointwise) fixed by $\tilde{F}$ and if $\mathbb{P} \in \mathcal{P}^{\alpha}$ is equipped with the mixed structure provided by Theorem~\ref{P:purity} then there is an action of $\tilde{F}$ on the cohomology stalks $H^i(\mathbb{P}_{|\mathbf{O}_{\mathcal{F}}})$. This allows us to define a $\mathbb{C}$-linear trace function $Tr_1: \widehat{\mathfrak{U}}^+_{{X},1} \to \widehat{\mathbf{H}}_{{X}}$ given by
$$Tr_1(\mathfrak{b}_{\mathbb{P}})=\sum_{\mathcal{F}} tr_1(H^{\bullet}(\mathbb{P}_{|\mathbf{O}_{\mathcal{F}}}))[\mathcal{F}].$$
By definition, $Tr_1(\mathfrak{b}_{\mathbb{P}})$ is equal to the limit as $n$ tends to $-\infty$ of the elements
$$Tr^{(n)}_1(\mathfrak{b}_{\mathbb{P}})=v^{-\mathrm{dim}\;G_n^\alpha}\sum_{\mathbf{O} \in Q_n^\alpha/G_n^\alpha} tr_1(H^\bullet(\mathbb{P}_{n\;|\mathbf{O}})) [\mathcal{F}_{\mathbf{O}}]
$$
where $v=-q^{-1/2}$ (the factor $v^{-\mathrm{dim}\;G_n^\alpha}$ guarantees that for any $m < n$, $Tr^{(n)}_1(\mathfrak{b}_{\mathbb{P}})$ and
$Tr^{(m)}_1(\mathfrak{b}_{\mathbb{P}})$ coincide on the set of objects of $Coh({X})$ strictly generated by $\mathcal{O}(n)$). By Grothendieck's trace formula, this map is a bialgebra homomorphism (see e.g. \cite{SLecturesII}, Section~3.6.). Replacing $1$ by $e \in \mathbb{N}$ throughout yields a similar homomorphism $Tr_e:\widehat{\mathfrak{U}}^+_{{X},e} \to \widehat{\mathbf{H}}_{{X}/\mathbb{F}_{q^e}}$, where ${X}/\mathbb{F}_{q^e}={X} \times_{Spec\;\boldsymbol{{k}}} Spec\; \mathbb{F}_{q^e}$. Note that in defining $\widehat{\mathbf{H}}_{{X}/\mathbb{F}_{q^e}}$ and $Tr_e$ we use the parameters $q^e$ and $v_e=-{q^{-e/2}}$ instead of $q,v$.
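The form of Grothendieck's trace formula invoked here is the standard compatibility of trace functions with $Rf_{!}$ (the ``faisceaux--fonctions'' dictionary): for a morphism $f: X_0 \to Y_0$ of $\mathbb{F}_q$-schemes, a complex $\mathbb{K} \in D^b_c(X_0,\overline{\mathbb{Q}_l})$ and any $y \in Y_0(\mathbb{F}_q)$,
$$\sum_i (-1)^i\, tr\big(F_y \mid H^i(Rf_{!}\mathbb{K})_{\overline{y}}\big)
=\sum_{\underset{f(x)=y}{x \in X_0(\mathbb{F}_q)}}\;\sum_i (-1)^i\, tr\big(F_x \mid H^i(\mathbb{K})_{\overline{x}}\big),$$
which is what makes $Tr_1$ intertwine the sheaf-theoretic functors $\mathrm{Ind}$, $\mathrm{Res}$ with the Hall algebra product and coproduct.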
\vspace{.1in}
For any $\alpha \in \mathbf{Z}^+$ let us set $\mathfrak{b}_{\alpha}=\mathfrak{b}_{\mathbbm{1}_{\alpha}} \in \widehat{\mathfrak{U}}^+_{{X},e}$. By construction, we have $Tr_e(\mathfrak{b}_{\alpha})=\mathbf{1}_{\alpha}$.
In the same vein, there exists a unique collection of elements $\{\mathfrak{b}^{ss}_{\alpha}\;|\alpha \in \mathbf{Z}^+\}$
of $\widehat{\mathfrak{U}}^+_{{X},e}$ satisfying
\begin{equation}\label{E:ssslip}
\mathfrak{b}_{\alpha}=\mathfrak{b}_{\alpha}^{ss} + \sum_{t > 1} \sum_{\underset{\mu(\alpha_1)< \cdots < \mu(\alpha_t)}{\alpha_1 + \cdots + \alpha_t=\alpha}} v_e^{\sum_{i<j}\langle \alpha_i,\alpha_j\rangle}\mathfrak{b}_{\alpha_1}^{ss} \cdots
\mathfrak{b}_{\alpha_t}^{ss},
\end{equation}
for any $\alpha$ and we have $Tr_e(\mathfrak{b}^{ss}_{\alpha})=\mathbf{1}^{ss}_{\alpha}$.
\vspace{.05in}
The next Theorem is proved in Section~5.2.
\begin{theo}\label{T:cano} The subalgebra of $\widehat{\mathfrak{U}}^{+}_{{X},e}$ generated by $\mathfrak{b}_{\alpha}$ for $\alpha\in \mathbf{Z}^+$ is dense (in the $h$-adic topology).\end{theo}
\begin{cor}\label{C:cano} The map $Tr_e$ restricts to an isomorphism $\widehat{\mathfrak{U}}^{+}_{{X},e} \stackrel{\sim}{\to} \widehat{\mathbf{U}}^+_{{X}/\mathbb{F}_{q^e}}$.\end{cor}
\noindent
\textit{Proof.} Since $Tr_e$ is continuous and $Tr_e(\mathfrak{b}_{\alpha}) \in \widehat{\mathbf{U}}^+_{{X}/\mathbb{F}_{q^e}}$, the image of $Tr_e$ belongs to
$\widehat{\mathbf{U}}^+_{{X}/\mathbb{F}_{q^e}}$ by Theorem~\ref{T:cano}, and in fact
$Tr_e(\widehat{\mathfrak{U}}^{+}_{{X},e})=\widehat{\mathbf{U}}^+_{{X}/\mathbb{F}_{q^e}}$ by Proposition~\ref{P:basisone}. It remains to show the injectivity of $Tr_e$. Observe that if $\alpha=(d,0)$ is a torsion class then by Proposition~\ref{P:descript-perv} and the definition of $\widehat{\mathbf{U}}^+_{{X}/\mathbb{F}_{q^e}}$, we have $\mathrm{dim}\; \widehat{\mathfrak{U}}^{+}_{{X},e} [\alpha]=p(d)=\mathrm{dim}\;\widehat{\mathbf{U}}^+_{{X}/\mathbb{F}_{q^e}}[\alpha]$, where $p(d)$ stands for the number of partitions of $d$. Thus the restriction of $Tr_e$ to
$\bigoplus_d \widehat{\mathfrak{U}}^{+}_{{X},e} [(d,0)]$ is injective. More generally,
let $\mathfrak{U}^{+,\geq n}_{{X},e} \subset \widehat{\mathfrak{U}}^{+}_{{X},e}$ be the subspace linearly spanned by elements $\mathfrak{b}_{\mathbb{P}}$ such that $\mathbb{P}=(\mathbb{P}_l)_l$ with $\mathbb{P}_n \neq 0$, and
put $\mathbf{U}^{+,\geq n}_{{X}/\mathbb{F}_{q^e}}=\mathbf{U}^{+}_{{X}/\mathbb{F}_{q^e}}/( \mathbf{U}^{+}_{{X}/\mathbb{F}_{q^e}} \cap
\mathbf{H}^{\not\geq n}_{{X}/\mathbb{F}_{q^e}})$ where
$$\mathbf{H}^{\not\geq n }_{{X}/\mathbb{F}_{q^e}}=\bigoplus_{\mathcal{F} \in \mathcal{I}_{\not\geq n}} \mathbb{C} [\mathcal{F}], $$
$$ \mathcal{I}_{\not\geq n}=\{\mathcal{F}\;|\mathcal{F} \text{\;is\;not\;strictly\;generated\;by\;} \mathcal{O}(n)\}.$$
Composition with $Tr_e$ gives rise to a map $\widehat{\mathfrak{U}}^{+,\geq n}_{{X},e} \to \mathbf{U}^{+,\geq n}_{{X}/\mathbb{F}_{q^e}}$, which is still surjective. On the other hand, by Proposition~\ref{P:descript-perv}, we have
$$\mathrm{dim}\; \widehat{\mathfrak{U}}^{+,\geq n}_{{X},e}[\alpha]=\sum_{\underset{\mu(\alpha_1)>\cdots > \mu(\alpha_r) > n}{\alpha_1+\alpha_2+\cdots+\alpha_r=\alpha}} p(deg(\alpha_1)) \cdots p(deg(\alpha_r))$$
and from \cite{BS}, Theorem~5.4. and Lemma~5.6, one sees that $\mathrm{dim}\;\mathbf{U}^{+,\geq n}_{{X}/\mathbb{F}_{q^e}}[\alpha]$ is given by the same formula. Hence the restriction of $Tr_e$ to
$\widehat{\mathfrak{U}}^{+,\geq n}_{{X},e}$ is injective. Passing to the limit as $n$ tends to $-\infty$, we obtain that
$Tr_e$ is injective.$\hfill \checkmark$
\vspace{.2in}
\paragraph{\textbf{5.2.}} In this Section we give the proof of Theorem~\ref{T:cano}. It is quite similar to the proof of Theorem~4.ii) in \cite{S1}, but we provide the details for the reader's convenience.
Recall that for $\nu \in \mathbb{Q} \cup \{\infty\}$ we put $\delta_{\nu}=(p,q) \in \mathbf{Z}^+$ where $deg((p,q))=1$ and $p/q=\nu$. We extend by bilinearity the notation $\mathfrak{b}_{\mathbb{P}}$ to an arbitrary (semisimple) complex $\mathbb{S}$ belonging to some $\mathcal{Q}^\alpha$.
\vspace{.1in}
By \cite{S1}, Proposition~5.1. and Corollary~5.2., we have
\begin{equation}\label{E:semismall}
\begin{split}
{\mathrm{Ind}}^{n_1\delta_{\infty}, \cdots
,n_r\delta_{\infty}}(\mathbf{IC}&(\underline{U}^{(n_1\delta_{\infty})},{\sigma_1})
\boxtimes \cdots \boxtimes
\mathbf{IC}(\underline{U}^{(n_r\delta_{\infty})},{\sigma_r}))\\
&=\mathbf{IC}(\underline{U}^{(l\delta_{\infty})},ind_{\mathfrak{S}_{n_1}\times
\cdots \times
\mathfrak{S}_{n_r}}^{\mathfrak{S}_l}(\sigma_1 \otimes \cdots \otimes
\sigma_r))
\end{split}
\end{equation}
where $l=\sum n_i$. By construction, $\mathfrak{b}_{l \delta_{\infty}}=\mathfrak{b}_{\mathbf{IC}(\underline{U}^{(l\delta_\infty)},Id)}$ belongs, for all $l \geq 1$, to the subalgebra of $\widehat{\mathfrak{U}}^{+}_{{X},e}$ generated by the elements $\mathfrak{b}_{\alpha}$. It is well-known that the Grothendieck group of $\mathfrak{S}_l$ is linearly spanned (over $\mathbb{Z}$) by the class of the trivial representation $Id$ and the classes of the induced representations $ind_{\mathfrak{S}_{n_1} \times \cdots \times \mathfrak{S}_{n_r}}^{\mathfrak{S}_l} (\sigma_1 \otimes \cdots\otimes \sigma_r)$ for all tuples $(n_i)_i$ such that $\sum_i n_i=l$ and $n_i <l$ for all $i$. It thus follows by induction from (\ref{E:semismall}) that $\mathfrak{b}_{\mathbf{IC}(\underline{U}^{(l\delta_{\infty})},\sigma)}$ belongs to this subalgebra for any $\sigma$. This proves Theorem~\ref{T:cano} for the weights of the form $l \delta_{\infty}$.
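As the simplest illustration of this spanning fact: for $l=2$, the representation induced from the trivial subgroup is the regular representation,
$$ind_{\mathfrak{S}_1 \times \mathfrak{S}_1}^{\mathfrak{S}_2}(Id \otimes Id)=\mathbb{C}[\mathfrak{S}_2]=Id \oplus \epsilon,$$
so that $[\epsilon]=[ind_{\mathfrak{S}_1 \times \mathfrak{S}_1}^{\mathfrak{S}_2}(Id \otimes Id)]-[Id]$ and the classes $[Id]$ and $[ind_{\mathfrak{S}_1 \times \mathfrak{S}_1}^{\mathfrak{S}_2}(Id \otimes Id)]$ indeed span the Grothendieck group of $\mathfrak{S}_2$.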
\vspace{.1in}
Let $\mathfrak{W}$ denote the subalgebra of $\mathfrak{U}^{+}_{{X},e}$ generated by $\{\mathfrak{b}_{\alpha}\;|\; \alpha \in \mathbf{Z}^+\}$. We also let $\widehat{\mathfrak{U}}^+_{<n}$ be the completion of ${\bigoplus}_{\mathbb{R},\; \mathbb{R}_{n}=0} \mathbb{C} \mathfrak{b}_{\mathbb{R}}$. Note that for any $\alpha$ and $n$, the space $\widehat{\mathfrak{U}}_{{X},e}^+[\alpha]/\widehat{\mathfrak{U}}^+_{<n}[\alpha]$ is finite-dimensional.
To prove Theorem~\ref{T:cano} for a general weight $\alpha$, we have to show the following.
\vspace{.05in}
\noindent
\textit{a)\;For any} $\alpha \in \mathbf{Z}^+$, $n \in \mathbb{Z}$ \textit{and for any}
$\mathbb{P} \in \mathcal{P}^{\alpha}$ for which $\mathbb{P}_n \neq 0$
\textit{there exists} $u \in \mathfrak{W}$ such that
$\mathfrak{b}_{\mathbb{P}} \equiv u\;(\text{mod}\; \widehat{\mathfrak{U}}^+_{<n})$.
\vspace{.05in}
We will prove $a)$ by induction on the rank of $\alpha$. The case of $\alpha$ of rank zero is treated above, so let us choose $\mathbb{P} \in \mathcal{P}^\alpha$ with $rank(\alpha) \geq 1$ and let us assume that $a)$ is proved for all $\beta$ with $rank(\beta) < rank(\alpha)$. Next, we fix $n \in \mathbb{Z}$, and argue by induction on the degree of $\alpha$, and finally on the HN type of $\mathbb{P}$. Hence we further assume that $a)$ is proved for all $\mathbb{P}'$ of weight $\beta$ with $rank(\beta)=rank(\alpha)$ and $deg(\beta) < deg(\alpha)$ and for all $\mathbb{P}' \in \mathcal{P}^\alpha$ such that $HN(\mathbb{P}') \prec HN(\mathbb{P})$. Note that for any given $n$ and $\mathbb{P}$ there exist only finitely many such $\mathbb{P}' $ as above for which $\mathbb{P}'_n \neq 0$ so that an induction argument is indeed justified.
\vspace{.1in}
Let us write $HN(\mathbb{P})=\underline{\alpha}=(\alpha_1, \ldots, \alpha_r)$, and let us first suppose that $\mathbb{P}$ is not generically semistable, i.e. that $r >1$.
Consider the complex
$$\mathbb{R}=\mathrm{Ind}^{\alpha_1, \ldots, \alpha_r}
\circ \mathrm{Res}^{\alpha_1, \ldots, \alpha_r}(\mathbb{P}) \in
\mathcal{Q}^{\alpha}.$$
As in \cite{S1}, Lemma~7.1., we have
\begin{equation}\label{E:proofcan1}
supp(\mathbb{R}) \subset
\bigcup_{(\underline{\beta}) \preceq (\underline{\alpha})} \underline{HN}^{-1}
(\underline{\beta})
\end{equation}
and if
$j: \underline{HN}^{-1}(\underline{\alpha}) \hookrightarrow \underline{Coh}^\alpha$
is the embedding then
\begin{equation}\label{E:proofcan2}
j^*(\mathbb{R}) \simeq j^*(\mathbb{P}).
\end{equation}
Let us first assume that $\mu(\alpha_2)\neq \infty$, so that for all $i$ we have $rank(\alpha_i) < rank(\alpha)$.
By the induction hypothesis we may find
for any $m \in \mathbb{Z}$ an element $w \in \mathfrak{W} ^{\otimes r}$ such that
$$\mathfrak{b}_{\text{Res}^{\alpha_1, \ldots, \alpha_r}(\mathbb{P})} \equiv w\; (\text{mod}\; (\widehat{\mathfrak{U}}^+[\alpha_1] \otimes \cdots \otimes \widehat{\mathfrak{U}}^+[\alpha_r])_{<m}),$$
where $$(\widehat{\mathfrak{U}}^+[\alpha_1] \otimes \cdots \otimes \widehat{\mathfrak{U}}^+[\alpha_r])_{<m}=\sum_{i=1}^r \widehat{\mathfrak{U}}_{{X},e}^+[\alpha_1] \otimes \cdots \otimes \widehat{\mathfrak{U}}^+[\alpha_i]_{<m} \otimes \cdots \otimes \widehat{\mathfrak{U}}_{{X},e}^+[\alpha_r].$$
Since the induction product is continuous, we can choose $m \ll 0$ so that $\mathfrak{b}_{\mathbb{R}} =\mathfrak{b}_{\text{Ind}\circ \text{Res}(\mathbb{P})} \equiv \text{Ind}(w)\; (\text{mod}\; \widehat{\mathfrak{U}}^+_{<n})$.
Next, as $\mathbb{P}$ is simple, there exists a semisimple complex $\mathbb{P}'$ with $supp(\mathbb{P}')\subset \bigcup_{\underline{\beta} \prec \underline{\alpha}}
\underline{HN}^{-1}(\underline{\beta})$ such that $\mathbb{P} \oplus \mathbb{P}'\simeq \mathbb{R}$. Using the induction hypothesis again there exists $w' \in \mathfrak{W}$ such that $\mathfrak{b}_{\mathbb{P}'} \equiv w'\;(\text{mod}\; \widehat{\mathfrak{U}}^+_{<n})$, and we finally obtain $\mathfrak{b}_{\mathbb{P}} = \mathfrak{b}_{\mathbb{R}}-\mathfrak{b}_{\mathbb{P}'} \equiv \text{Ind}(w)-w' \;(\text{mod}\; \widehat{\mathfrak{U}}^+_{<n})$ as desired.
This closes the induction step when $r>1$ and $\underline{\alpha} \neq (\alpha_1, \alpha_2)$ with $\mu(\alpha_2)=\infty$.
If we are in this last case then we have $rank(\alpha_1)=rank(\alpha)$ and we need a slightly different argument. Let us write
$\mathrm{Res}^{\alpha_1, \alpha_2}(\mathbb{P})=\sum_i \mathbb{T}_i \boxtimes \mathcal{Q}_i$. According to the first part of the proof, $\mathfrak{b}_{\mathcal{Q}_i} \in \mathfrak{W}$ for any $i$. Moreover, since $deg(\alpha_1) < deg(\alpha)$, there exist by the induction hypothesis elements $w_i \in \mathfrak{W}$ such that
$\mathfrak{b}_{\mathbb{T}_i} \equiv w_i \;(\text{mod}\; \widehat{\mathfrak{U}}^+_{<n})$. As $\widehat{\mathfrak{U}}^+_{<n}$ is stable under right multiplication by $\mathfrak{U}^+_{{X},e}[\beta]$ for any $\beta$ satisfying $\mu(\beta)=\infty$, we deduce that
$$\mathfrak{b}_{\text{Ind}\circ \text{Res}(\mathbb{P})} \equiv \sum_i w_i \mathfrak{b}_{\mathcal{Q}_i}\;(\text{mod}\; \widehat{\mathfrak{U}}^+_{<n})$$
and we may conclude the proof as in the case $\mu(\alpha_2)\neq \infty$ above.
\vspace{.15in}
\paragraph{} It remains to consider the situation of a generically semistable $\mathbb{P}$.
Write $\alpha=l\delta_{\mu}$ for some $l \geq 1$ and $\mu \in \mathbb{Q}$.
By Proposition~\ref{P:descript-perv} we have $\mathbb{P}=\mathbf{IC}(U^{(l\delta_{\mu})},\sigma)$ for some irreducible representation $\sigma$ of $\mathfrak{S}_l$.
Denote by $u_{\alpha}: \underline{Coh}^{(\alpha)} \hookrightarrow \underline{Coh}^\alpha$ the open embedding. From the existence of the maps $\rho_{\mu,\infty}$ (see Section~3.5.1.) and (\ref{E:semismall})
one deduces that
\begin{equation}\label{E:stablesemismall}
\begin{split}
u_{l\delta_{\mu}}^*{\mathrm{Ind}}^{n_1\delta_{\mu}, \cdots
,n_r\delta_{\mu}}(\mathbf{IC}&(\underline{U}^{n_1\delta_{\mu}},{\sigma_1})
\boxtimes \cdots \boxtimes
\mathbf{IC}(\underline{U}^{n_r\delta_{\mu}},{\sigma_r}))\\
&=u_{l\delta_{\mu}}^*\mathbf{IC}(\underline{U}^{l\delta_{\mu}},ind_{\mathfrak{S}_{n_1}\times
\cdots \times
\mathfrak{S}_{n_r}}^{\mathfrak{S}_l}(\sigma_1 \otimes \cdots \otimes
\sigma_r)).
\end{split}
\end{equation}
On the other hand the element $\mathfrak{b}_{l\delta_{\mu}}$ belongs to $\mathfrak{W}$ for all $l \geq 1$.
As the Grothendieck group of $\mathfrak{S}_l$ is linearly spanned by the class of the trivial representation together with the classes ${ind}^{\mathfrak{S}_l}_{\mathfrak{S}_{n_1} \times \cdots \times \mathfrak{S}_{n_r}}(\sigma_1 \otimes \cdots \otimes \sigma_r)$ with $n_i <l$ for all $i$,
we conclude, using the induction hypothesis and the continuity of the product that there exists a
semisimple complex $\mathbb{T}$ which is \textit{not} generically semistable and
such that $\mathfrak{b}_{\mathbb{T} \oplus \mathbb{P}} \in \mathfrak{W} + \widehat{\mathfrak{U}}^+_{<n}$. But then $supp(\mathbb{T}) \subset \bigcup_{\underline{\beta} \prec \underline{\alpha}}
\underline{HN}^{-1}(\underline{\beta})$ and by the induction hypothesis again we have $\mathfrak{b}_{\mathbb{T}} \in \mathfrak{W} + \widehat{\mathfrak{U}}^+_{<n}$. Thus in the end $\mathfrak{b}_{\mathbb{P}} \in \mathfrak{W} + \widehat{\mathfrak{U}}^+_{<n}$ and statement $a)$ is proved for $\mathbb{P}$. This closes the induction argument and concludes the proof of Theorem~\ref{T:cano}. $\hfill \checkmark$
\vspace{.2in}
\section{Geometric construction of $\mathbf{B}$}
\vspace{.1in}
In this section, we draw some of the consequences that Theorem~\ref{T:cano} and Corollary~\ref{C:cano} have regarding the canonical basis ${\mathbf{B}}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ considered in Section~2.
\vspace{.2in}
\paragraph{\textbf{6.1.}} We will need one technical result. By Theorem~\ref{P:purity}, any $\mathbb{P} \in \mathcal{P}$ is endowed with a distinguished isomorphism $h_{\mathbb{P}}: \mathbb{P} \stackrel{\sim}{\to} \tilde{F}^*\mathbb{P}$ with respect to which it is pointwise pure of weight zero. Recall from Section~4 that we have set $v=-{q^{-1/2}}$ and more generally $v_f=-q^{-f/2}$, and let us denote by $\alpha,\overline{\alpha}$ the Frobenius eigenvalues in $H^1(\overline{X}, \overline{\mathbb{Q}_l})$\footnote{In \cite{BS}, the Frobenius eigenvalues are denoted $\sigma, \overline{\sigma}$. In this paper we reserve the latter notation for the formal parameters in the ring $\mathbf{R}=\mathbb{C}[\sigma^{\pm 1/2}, \overline{\sigma}^{\pm 1/2}]$.}. We also put $\tau=\alpha q^{-1/2}$ and $\tau_f=\tau^f$. Note that $|\tau|=1$.
\vspace{.1in}
\begin{prop}\label{P:verypure} Assume that $\tau\neq 1$.
Let $\mathbb{P} \in \mathcal{P}^\alpha$ and let $\mathcal{F} \in Coh({X}/\mathbb{F}_{q^e})$. The eigenvalues of $\tilde{F}^e$ acting on $H^i(\mathbb{P})_{|\mathbf{O}_{\mathcal{F}}}$ all belong to $q^{ei/2}\tau^{e\mathbb{Z}}$.\end{prop}
\noindent
\textit{Proof.} We will prove this for $e=1$; the other cases can be deduced by base change.
Fix $n \ll 0$ and let $x \in \mathbf{O}_{\mathcal{F},n}$. Let $\{\lambda_{ij}\}_{j \in J_i}$ be the set of Frobenius eigenvalues (with multiplicity) of $H^i(\mathbb{P}_n)_{|x}$. By Theorem~\ref{P:purity}, we may write $\lambda_{ij}=q^{i/2}\gamma_{ij}$ for some $\gamma_{ij}$ satisfying $|\gamma_{ij}|=1$, and by definition the coefficient of $[\mathcal{F}]$ in $Tr_f(\mathfrak{b}_{\mathbb{P}})$ is equal to
$$Tr_f(\mathfrak{b}_{\mathbb{P}})_{|\mathbf{O}_\mathcal{F}}=\sum_{i,j}(-1)^i\lambda_{ij}^f=\sum_{i,j}(-1)^{i(f+1)}v^{-if}\gamma_{ij}^f.$$
Our approach is based on the following elementary lemma~:
\begin{lem} Let $a, b$ be two complex numbers satisfying $|a|<1$, $|b|=1$ and $b \neq 1$.
Let $P(x,y) \in \mathbb{Q}[x^{\pm 1}, y^{\pm 1}]$ and assume given finite sets $K_i$ and complex numbers $\omega_{ij}, j \in K_i$ satisfying $|\omega_{ij}|=1$ such that, for any $f \geq 1$
$$P((-1)^{f+1}a^f,b^f)=\sum_i (-1)^{i(f+1)}a^{-if} \sum_{j \in K_i} \omega_{ij}^f.$$
Then $P(x,y) \in \mathbb{N}[x^{\pm 1},y^{\pm 1}]$ and $\omega_{ij} \in b^{\mathbb{Z}}$ for all $i,j$.\end{lem}
\noindent
\textit{Proof.} We argue by induction on the number $N(P)$ of monomials $x^iy^j$ appearing in $P(x,y)$.
If $N(P)=0$, i.e. $P=0$, then $\sum_i \sum_{j \in K_i} (-1)^{i(f+1)}a^{-if}\omega_{ij}^f=0$ for all $f$. We will prove that this forces all the sets $K_i$ to be empty. Choose $i_0$ maximal such that $K_{i_0} \neq \emptyset$. Then
\begin{equation}\label{E:sublem}
\begin{split}
0&=\sum_{i \leq i_0} (-1)^{i(f+1)}a^{(i_0-i)f} \sum_{j \in K_i}\omega_{ij}^f\\
&=(-1)^{i_0(f+1)}\sum_{j \in K_{i_0}} \omega_{i_0j}^f + \sum_{i < i_0} (-1)^{i(f+1)}a^{(i_0-i)f} \sum_{j \in K_i}\omega_{ij}^f
\end{split}
\end{equation}
for all $f$. Letting $f \to \infty$ we get $\text{Lim}_{f \to \infty} \sum_{j \in K_{i_0}} \omega_{i_0j}^f=0$. But this contradicts the following well-known fact
\vspace{.05in}
\begin{sublem} Let $\{\omega_j\}$, $j \in K$ be a finite set of complex numbers satisfying $|\omega_j|=1$ for all $j$. If $K \neq \emptyset$ then $\sum_j \omega_j^N$ does not converge to zero as $N$ tends to infinity.
\end{sublem}
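Although this fact is classical, let us sketch a justification via Ces\`aro averages. If $\sum_j \omega_j^N$ tended to zero as $N \to \infty$, then so would the averages of $|\sum_j \omega_j^N|^2$; but expanding the square and using the geometric series estimate for each $\omega_j\overline{\omega_k} \neq 1$ gives
$$\frac{1}{L}\sum_{N=1}^L \Big|\sum_{j \in K} \omega_j^N\Big|^2=\sum_{j,k} \frac{1}{L}\sum_{N=1}^L (\omega_j\overline{\omega_k})^N \underset{L \to \infty}{\longrightarrow} \#\{(j,k)\;|\;\omega_j=\omega_k\} \geq \# K>0,$$
a contradiction.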
Now fix $P \neq 0$ and assume that the Lemma is proved for all $P'$ such that $N(P')<N(P)$. Let $x^{i_0}y^{j_0}$ be the monomial in $P$ of least degree, ordered lexicographically. Multiplying throughout by $(-1)^{(f+1)i_0}a^{-fi_0}b^{-fj_0}$ we may assume that $i_0=j_0=0$, i.e $P(x,y)=P_0(y) + \sum_{i>0} x^i P_i(y)$ and $P_0(y)=c_0 + \sum_{j>0}c_jy^j$. For any $l>0$ set
\begin{equation*}
\begin{split}
H_l&=\frac{1}{l} \sum_{f=1}^l P((-1)^{(f+1)}a^f, b^f)\\
&=c_0 + \sum_{j>0} c_j\big(\frac{1}{l}\sum_{f=1}^l b^{fj}\big)+\sum_{i>0} \frac{1}{l} \sum_{f=1}^l (-1)^{(f+1)i} a^{if}P_i(b^f).
\end{split}
\end{equation*}
As $|a|<1$ and $b \neq 1$ we have $\text{Lim}_{l \to \infty} H_l=c_0$. On the other hand $H_l =\sum_i \sum_{j \in K_i} \frac{1}{l} \sum_{f=1}^l (-1)^{(f+1)i}a^{-if}\omega_{ij}^f$. This converges if and only if $K_i=\emptyset$ for $i>0$ and, noting that
$$\underset{\;l \to \infty}{\text{Lim}}\; \frac{1}{l} \sum_{f=1}^l \omega_{ij}^f=\begin{cases} 1 & \text{if}\; \omega_{ij}=1\\
0 & \text{otherwise}
\end{cases}$$
we have $c_0= \text{Lim}_{l \to \infty} H_l=\#\{j \in K_0\;|\; \omega_{0j}=1\}$. In particular, $c_0 \in \mathbb{N}$. We may now use the induction hypothesis with $P'(x,y)=P(x,y)-c_0$ and
$$K'_i=\begin{cases} K_i & \text{if}\; i \neq 0\\ K_0 \setminus \{j \in K_0\;|\; \omega_{0j}=1\}& \text{if}\; i=0
\end{cases}$$
to conclude that $P'(x,y)$, hence also $P(x,y)$ belongs to $\mathbb{N}[x^{\pm 1}, y^{\pm 1}]$. The proof also gives that $\omega_{ij} \in b^{\mathbb{Z}}$ for all $i,j$. We are done. $\hfill \checkmark$
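For completeness, the limit of averages used in the proof above is the standard geometric series estimate: for $|\omega|=1$ and $\omega \neq 1$,
$$\Big|\frac{1}{l}\sum_{f=1}^l \omega^f\Big|=\frac{1}{l}\,\Big|\frac{\omega(1-\omega^l)}{1-\omega}\Big| \leq \frac{2}{l\,|1-\omega|} \underset{l \to \infty}{\longrightarrow} 0,$$
while for $\omega=1$ each average equals $1$.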
\vspace{.1in}
\noindent
\textit{End of Proof of Proposition~\ref{P:verypure}.} By the above lemma, if a semisimple complex $\mathbb{R} \in \mathcal{Q}^{\alpha}$ satisfies the property
\vspace{.05in}
\noindent
(a) \textit{for all $\mathcal{F} \in Coh({X})$ there exists $R\in \mathbb{Q}[x^{\pm 1},y^{\pm 1}]$ such that
$Tr_f(\mathfrak{b}_{\mathbb{R}})_{|\mathbf{O}_{\mathcal{F}}}=R(v_f,\tau_f)$ for all $f \geq 1$},
\vspace{.05in}
\noindent
then the Frobenius eigenvalues on all stalks of $H^i(\mathbb{R})$ belong to $q^{i/2}\tau^{\mathbb{Z}}$. The converse is also obviously true. We claim that if $\mathbb{R}$ and $\mathbb{R}'$ both satisfy (a) then so does $\mathrm{Ind}(\mathbb{R} \boxtimes \mathbb{R}')$. Indeed, if $\mathbb{R}$ and $\mathbb{R}'$ satisfy (a) then there exist
$R_i, R'_j \in \mathbb{Q}[x^{\pm 1}, y^{\pm 1}]$ and paths $\mathbf{p}_i, \mathbf{p}'_j \in \textbf{Conv}^+$ such that \begin{equation}\label{E:Trace}
Tr_f(\mathfrak{b}_{\mathbb{R}})=\sum_i R_i(v_f,\tau_f)\mathbf{1}^{\textbf{ss}}_{\mathbf{p}_i} , \qquad
Tr_f(\mathfrak{b}_{\mathbb{R}'})=\sum_j R'_j(v_f,\tau_f)\mathbf{1}^{\textbf{ss}}_{\mathbf{p}'_j}.
\end{equation}
Recall from \cite{BS}, Prop.~6.3, that $\widehat{\mathbf{U}}^+_{{X}}$ is isomorphic to the specialization at $\sigma^{1/2}=\alpha^{1/2}, \overline{\sigma}^{1/2}=\overline{\alpha}^{1/2}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$. Of course this holds for all the base field extensions $X/\mathbb{F}_{q^f}$ as well, with the corresponding specialization $\sigma^{1/2}=\alpha^{f/2}, \overline{\sigma}^{1/2}=\overline{\alpha}^{f/2}$. Set $\nu=-(\sigma \overline{\sigma})^{-1/2}, t=\sigma^{1/2}\overline{\sigma}^{-1/2}$. These specialize respectively to $v_f$ and $\tau_f$. We deduce from (\ref{E:Trace}) that
$Tr_f(\mathfrak{b}_{\mathbb{R}})$ and $Tr_f(\mathfrak{b}_{\mathbb{R}'})$ are the specialization at $\nu=v_f$ and $t=\tau_f$ of ${u}=\sum_i R_i(\nu,t) {\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}_i}$ and
${u}'=\sum_j R'_j(\nu,t) {\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}'_j}$ respectively. But then
$Tr_f(\mathfrak{b}_{\mathrm{Ind}(\mathbb{R} \boxtimes \mathbb{R}')})$ is the specialization of
${u}\cdot {u}'$, and it follows that (a) holds for $\mathrm{Ind}(\mathbb{R} \boxtimes \mathbb{R}')$ as well.
\vspace{.05in}
\noindent
We may now finish the proof of Proposition~\ref{P:verypure}. It is clear that $\mathbbm{1}_{\alpha}$ satisfies (a) for any $\alpha$, and hence the same is true for products $\text{Ind}(\mathbbm{1}_{\alpha_1} \boxtimes \cdots \boxtimes \mathbbm{1}_{\alpha_r})$. We deduce that the Frobenius eigenvalues of any stalk of $H^i(\text{Ind}(\mathbbm{1}_{\alpha_1} \boxtimes \cdots \boxtimes \mathbbm{1}_{\alpha_r}))$ belong to
$q^{i/2}\tau^{\mathbb{Z}}$.
Observe that for any slope $\mu$ the simple perverse sheaf $\mathbf{IC}(U^{(l\delta_{\mu})},\sigma_{\lambda}) \in \mathcal{P}^{(l\delta_{\mu})}$ appears with multiplicity \textit{one} in $\text{Ind}(\mathbbm{1}_{\lambda_1\delta_{\mu}} \boxtimes \cdots \boxtimes \mathbbm{1}_{\lambda_r\delta_{\mu}})$ if $\lambda=(\lambda_1, \ldots, \lambda_r)$ and $\sigma_{\lambda}$ is the irreducible $\mathfrak{S}_l$-module of type $\lambda$. From this it follows that the Frobenius eigenvalues of any stalk of $H^i(\mathbf{IC}(U^{(l\delta_{\mu})},\sigma_{\lambda}))$
belong to $q^{i/2}\tau^{\mathbb{Z}}$. Thus $\mathbf{IC}(U^{(l\delta_{\mu})},\sigma_{\lambda})$ satisfies (a) in turn. Finally, by Proposition~\ref{P:descript-perv} any $\mathbb{P} \in \mathcal{P}$ appears with multiplicity one in some product $\text{Ind}(\mathbb{P}_1 \boxtimes \cdots \boxtimes \mathbb{P}_s)$ for certain $\mathbb{P}_i=\mathbf{IC}(U^{(l_i\delta_{\mu_i})},\sigma_{\lambda_i})$. Arguing as above, we obtain that the Frobenius eigenvalues on $H^i(\mathbb{P})$ all belong to $q^{i/2}\tau^{\mathbb{Z}}$ as desired. We are done. $\hfill \checkmark$
\vspace{.2in}
\paragraph{\textbf{6.2.}} The proof of Proposition~\ref{P:verypure} gives us in fact the following property~:
\vspace{.05in}
\textit{For any $\mathbb{P} \in \mathcal{P}$ there exists a unique ${\mathbf{b}}_{\mathbb{P}} \in \widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ such that for any $f \geq 1$,}
\begin{equation}\label{E:star}
Tr_f(\mathfrak{b}_{\mathbb{P}})=(\mathbf{b}_{\mathbb{P}})_{\big|\substack{\sigma^{1/2}=\alpha^{f/2} \\ \overline{\sigma}^{1/2}=\overline{\alpha}^{f/2}}}.
\end{equation}
\vspace{.1in}
By Proposition~\ref{P:descript-perv}, the set $\mathcal{P}^{(\alpha)}$ is indexed by partitions of $deg(\alpha)$, and for $\mu(\alpha_1) < \cdots < \mu(\alpha_r)$, the correspondences
\begin{align*}
\mathcal{P}^{(\alpha_1)} \times \cdots \times \mathcal{P}^{(\alpha_r)} &\to \mathbf{Conv}^+\\
(\lambda_1^{(1)}, \ldots, \lambda_{s_1}^{(1)}; \cdots ; \lambda_1^{(r)}, \ldots, \lambda_{s_r}^{(r)}) &\mapsto \bigg(\lambda_1^{(1)}\frac{\alpha_1}{deg(\alpha_1)}, \ldots, \lambda_{s_r}^{(r)}\frac{\alpha_r}{deg(\alpha_r)}\bigg)
\end{align*}
all together set up a bijection $\omega: \mathcal{P} \stackrel{\sim}{\to} \mathbf{Conv}^+$.
\begin{lem}\label{L:Basis} The set $\{{\mathbf{b}}_{\mathbb{P}}\}$ is an $\mathbf{R}$-basis of
$\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$.\end{lem}
\noindent
\textit{Proof.} By construction we have, for $\mathbf{x} \in \mathbf{Z}^+$,
${\mathbf{b}}_{\mathbbm{1}_{\mathbf{x}}}={\mathbf{1}}_{\mathbf{x}}\in \beta_{\mathbf{x}} + \prod_{\mathbf{p} \prec \mathbf{x}} \mathbf{R} \beta_{\mathbf{p}}$. Moreover if $deg(\mathbf{x})=1$ and $\sigma_i$ are representations of the symmetric group $\mathfrak{S}_{l_i}$ then
\begin{equation*}
\begin{split}
\mathrm{Ind}^{l_1\mathbf{x}, \ldots, l_r\mathbf{x}}\bigg(\mathbf{IC}(U^{(l_1\mathbf{x})},\sigma_1)& \boxtimes \cdots
\boxtimes \mathbf{IC}(U^{(l_r\mathbf{x})},\sigma_r)\bigg)\\
=&\mathbf{IC}\bigg(U^{(l\mathbf{x})},ind_{\mathfrak{S}_{l_1} \times \cdots \times \mathfrak{S}_{l_r}}^{\mathfrak{S}_l} (\sigma_1 \times \cdots \times \sigma_r)\bigg) \oplus \mathbb{R},
\end{split}
\end{equation*}
where $l=\sum l_i$ and $Supp(\mathbb{R}) \subset \underline{Coh}^{l\mathbf{x}} \setminus \underline{Coh}^{(l\mathbf{x})}$.
From the above we deduce that for such an $\mathbf{x}$ and for any partition
$\lambda=(\lambda_1,\lambda_2,\ldots)$ of size $l$ we have
${\mathbf{b}}_{\mathbb{P}}\in \beta_{\mathbf{p}} + \prod_{\mathbf{q} \prec (l\mathbf{x})} \mathbf{R} \beta_{\mathbf{q}}$
where $\mathbb{P}=\mathbf{IC}(U^{(l\mathbf{x})}, \sigma_{\lambda}) \in \mathcal{P}^{(l\mathbf{x})}$, $\sigma_{\lambda}$ is the irreducible $\mathfrak{S}_l$-module of type $\lambda$, and $\mathbf{p}=\omega(\mathbb{P})=(\lambda_1\mathbf{x}, \lambda_2\mathbf{x},\ldots)$.
Finally, using this and Proposition~\ref{P:descript-perv} we obtain in turn that
${\mathbf{b}}_{\mathbb{P}}\in \beta_{\omega(\mathbb{P})} + \prod_{\mathbf{q} \prec \omega(\mathbb{P})} \mathbf{R} \beta_{\mathbf{q}}$ for an arbitrary $\mathbb{P}$. As a consequence, $\{{\mathbf{b}}_{\mathbb{P}}\}$ is an $\mathbf{R}$-basis of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ as wanted. $\hfill \checkmark$
\vspace{.1in}
\begin{cor} \label{C:verypure2} For any three simple complexes $\mathbb{P},\mathbb{P}',\mathbb{P}'' \in \mathcal{P}$ the eigenvalues of $\tilde{F}^e$ on $H^i(\text{Hom}(\mathbb{P}, \text{Ind}(\mathbb{P}' \boxtimes \mathbb{P}'')))$ all belong to $q^{ei/2}\tau^{e\mathbb{Z}}$. The same statement holds for
$H^i(\text{Hom}(\mathbb{P}' \boxtimes \mathbb{P}'', \text{Res}(\mathbb{P})))$.
\end{cor}
\noindent
\textit{Proof.} Let ${\mathbf{b}}_{\mathbb{P}}, {\mathbf{b}}_{\mathbb{P}'}, {\mathbf{b}}_{\mathbb{P}''}$ be the elements of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ associated to $\mathbb{P},\mathbb{P}',\mathbb{P}''$. The coefficient of ${\mathbf{b}}_{\mathbb{P}}$ in the product ${\mathbf{b}}_{\mathbb{P}'}{\mathbf{b}}_{\mathbb{P}''}$ is given by a certain polynomial $K \in \mathbb{Q}[\nu^{\pm 1},t^{\pm 1}]$ and we have, for any $e \geq 1$,
$tr_e(\text{Hom}(\mathbb{P}, \text{Ind}(\mathbb{P}' \boxtimes \mathbb{P}'')))=K(v_e,\tau_e)$. The Corollary can now be deduced by the same argument as in Proposition~\ref{P:verypure}.$\hfill \checkmark$
\vspace{.2in}
\paragraph{\textbf{6.3.}} We presently come to the main result of this Section.
\begin{prop}\label{C:penultim} For any $\mathbb{P} \in \mathcal{P}$ we have ${\mathbf{b}}_{\mathbb{P}}={\mathbf{b}}_{\omega(\mathbb{P})}$. In particular,
\begin{enumerate}
\item[i)] For any $\mathbf{p} \in \mathbf{Conv}^+$ we have ${\mathbf{b}}_{\mathbf{p}} \in \beta_{\mathbf{p}} + \prod_{\mathbf{p}' \prec \mathbf{p}}\nu \mathbb{N}[\nu,t^{\pm 1}] \beta_{\mathbf{p}'}$,
\item[ii)] For any $\mathbf{p},\mathbf{p}' \in \mathbf{Conv}^+$ we have
${\mathbf{b}}_{\mathbf{p}}{\mathbf{b}}_{\mathbf{p}'} \in \prod_{\mathbf{p}''} \mathbb{N}
[\nu^{\pm 1}, t^{\pm 1}] {\mathbf{b}}_{\mathbf{p}''}$.
\end{enumerate}
\end{prop}
\noindent
\textit{Proof.} Lemma~\ref{L:Basis} allows us to define in a unique way an involution $x \mapsto x^{\vee}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ by $\sigma^\vee=\sigma^{-1}, \overline{\sigma}^\vee=\overline{\sigma}^{-1}$ and ${\mathbf{b}}_{\mathbb{P}}^{\vee}={\mathbf{b}}_{\mathbb{P}}$. Note that $\nu^{\vee}=\nu^{-1}$, $t^{\vee}=t^{-1}$. We claim that this involution coincides with the bar involution $x \mapsto \overline{x}$ considered in Section~2. To see this, first observe that we may naturally extend the definition of ${\mathbf{b}}_{\mathbb{P}}$ to an arbitrary semisimple complex $\mathbb{P} \in \bigsqcup \mathcal{Q}^{\alpha}$ by bilinearity and by setting ${\mathbf{b}}_{\mathbb{P}[i](i/2)}=\nu^i{\mathbf{b}}_{\mathbb{P}}$. With this convention, (\ref{E:star}) holds for an arbitrary semisimple $\mathbb{P}$. Next, it is clear that ${\mathbf{b}}_{\mathrm{Ind}(\mathbb{P} \boxtimes \mathbb{P}')}={\mathbf{b}}_{\mathbb{P}}{\mathbf{b}}_{\mathbb{P}'}$ and ${\mathbf{b}}_{\mathbb{P}}^{\vee}={\mathbf{b}}_{D(\mathbb{P})}$ (recall that by Proposition~\ref{P:descript-perv} every simple $\mathbb{P} \in \mathcal{P}$ is self-dual).
It follows that $x \mapsto {x}^{\vee}$ is a ring homomorphism, and
in particular, we have ${\mathbf{1}}_{\alpha_1, \ldots, \alpha_r}^{\vee}={\mathbf{b}}_{\mathbbm{1}_{\alpha_1, \ldots, \alpha_r}}^{\vee}={\mathbf{b}}_{\mathbbm{1}_{\alpha_1}}^{\vee} \cdots {\mathbf{b}}_{\mathbbm{1}_{\alpha_r}}^{\vee}={\mathbf{b}}_{\mathbbm{1}_{\alpha_1}}\cdots {\mathbf{b}}_{\mathbbm{1}_{\alpha_r}}={\mathbf{1}}_{\alpha_1, \ldots, \alpha_r}$. We conclude that
$x \mapsto \overline{x}$ and $x \mapsto x^{\vee}$ coincide on the basis $\{{\mathbf{1}}_{\alpha_1, \ldots, \alpha_r}\}$, hence the two involutions are equal.
\vspace{.05in}
To obtain the equality ${\mathbf{b}}_{\mathbb{P}}={\mathbf{b}}_{\omega(\mathbb{P})}$ it remains to prove that
\begin{equation}\label{E:vbase}
{\mathbf{b}}_{\mathbb{P}} \in \beta_{\omega(\mathbb{P})} + \prod_{\mathbf{q} \prec \omega(\mathbb{P})} \nu \mathbb{Q}[\nu,t^{\pm 1}] \beta_{\mathbf{q}}.
\end{equation}
Fix $\mathbb{P} \in \mathcal{P}^{\alpha}$. In order to show (\ref{E:vbase}) for $\mathbb{P}$, we will study the value of $Tr_e^{(n)}(\mathfrak{b}_{\mathbb{P}})$ at all points $x$ in $Q_n^{\alpha}$. Consider a HN type $\underline{\gamma}=(\gamma_1, \ldots, \gamma_r)$ of weight $\alpha$ and choose $x=(\phi: \mathcal{L}_n^{\alpha} \to \mathcal{F}) \in Q_n^{\alpha}$ with $HN(\mathcal{F})=\underline{\gamma}$. As in \cite{S1}, Lemma~7.1, we have
$$\mathbb{P}_{|\underline{HN}^{-1}(\underline{\gamma})}\simeq \text{Ind}^{\gamma_1, \ldots, \gamma_r} \text{Res}^{\gamma_1, \ldots, \gamma_r}(\mathbb{P})_{|\underline{HN}^{-1}(\underline{\gamma})}.$$
Let us write $\text{Res}^{\gamma_1, \ldots, \gamma_r}(\mathbb{P})=\bigoplus_i V_i \otimes \big( \mathbb{R}_1^i \boxtimes \cdots \boxtimes \mathbb{R}^i_r\big)$, where $\mathbb{R}_j^i \in \mathcal{P}^{\gamma_j}$ and $V_i$ is a complex of vector spaces. As $\mu(\gamma_1) < \cdots < \mu(\gamma_r)$ we deduce that, up to some (explicit) power of $v$,
$$Tr_1^{(n)}(\mathfrak{b}_{\mathbb{P}})(x)=\sum_i tr_1(V_i) Tr_1^{(n)}(\mathfrak{b}_{\mathbb{R}_1^i})(x_1) \times \cdots \times Tr_1^{(n)}(\mathfrak{b}_{\mathbb{R}_r^i})(x_r)$$
for any collection of semistable points $x_j=(\phi_j: \mathcal{L}_n^{\beta_j} \twoheadrightarrow \mathcal{G}_j) \in Q_n^{(\gamma_j)}$ such that $\mathcal{F}\simeq \bigoplus_j \mathcal{G}_j$. Recall that by construction
$Tr_1^{(n)}(\mathfrak{b}_{\mathbb{R}_j^i})(x_j)=0$ unless $\mathbb{R}_j^i \in \mathcal{P}^{(\gamma_j)}$ is generically semistable, in which case $Tr_1^{(n)}(\mathfrak{b}_{\mathbb{R}_j^i})(x_j)={\beta}_{\omega(\mathbb{R}_j^i)}(x_j)$.
Using Proposition~\ref{P:verypure} and Corollary~\ref{C:verypure2} we deduce that
$$Tr_1^{(n)}(\mathfrak{b}_{\mathbb{P}})(x)\in\big(\beta_{\omega(\mathbb{P})} + \bigoplus_{\mathbf{q} \prec \omega(\mathbb{P})} \mathbb{N}[v^{\pm 1}, \tau^{\pm 1}] \beta_{\mathbf{q}}\big)(x).$$
A similar result holds if we replace $1$ by any $e \geq 1$ and $(v,\tau)$ by $(v_e,\tau_e)$. This implies that ${\mathbf{b}}_{\mathbb{P}}(x) \in \big(\beta_{\omega(\mathbb{P})} + \bigoplus_{\mathbf{q} \prec \omega(\mathbb{P})} \mathbb{N}[\nu^{\pm 1}, t^{\pm 1}] \beta_{\mathbf{q}}\big)(x)$. Since this is true for all $x \in Q_n^{\alpha}$ and all $n$, we finally obtain that ${\mathbf{b}}_{\mathbb{P}} \in \beta_{\omega(\mathbb{P})} + \bigoplus_{\mathbf{q} \prec \omega(\mathbb{P})} \mathbb{N}[\nu^{\pm 1}, t^{\pm 1}] \beta_{\mathbf{q}}$.
\vspace{.05in}
Finally, recall that by Proposition~\ref{P:descript-perv} we have $\mathbb{P}=\mathbf{IC}(Z_{\mathbb{P}}, \mathfrak{L}_{\mathbb{P}})$ where if $\mathbb{P} \in \mathcal{P}^{\alpha_1} \times \cdots \times \mathcal{P}^{\alpha_l}$ then
$Z_{\mathbb{P}}$ is the substack of $\underline{Coh}^{\alpha}$ classifying coherent sheaves $\mathcal{H}$ isomorphic to a direct sum $\bigoplus_{i=1}^l \bigoplus_{j=1}^{deg(\alpha_i)} \mathcal{H}_{ij}$ where $\mathcal{H}_{i,j}$ is a stable sheaf of class $ \alpha_i/deg(\alpha_i)$, and $\mathfrak{L}_{\mathbb{P}}$ is a certain local system on $Z_{\mathbb{P}}$. This substack is also locally of finite type, and is equal to the increasing union of quotient stacks $Z_{\mathbb{P},n}/G_n^{\alpha}$ where
\begin{equation}\label{E:XP}
\begin{split}
Z_{\mathbb{P},n}= \{\phi: \mathcal{L}_n^{\alpha} \twoheadrightarrow \mathcal{H}\;|\; \mathcal{H} \simeq \bigoplus_{i=1}^l \bigoplus_{j=1}^{deg(\alpha_i)} \mathcal{H}_{ij},\;\text{with}\;
\mathcal{H}_{ij}\;\text{a\;stable\;sheaf\;of\;class\;} \alpha_i/deg(\alpha_i)\}
\end{split}
\end{equation}
and $\mathbb{P}_n=\mathbf{IC}(Z_{\mathbb{P},n}, \mathfrak{L}_{\mathbb{P}})$.
By Proposition~\ref{P:verypure} there exist, for any fixed $y \in Q_n^{\alpha}$, Laurent polynomials $P_{i,y}(t) \in \mathbb{N}[t^{\pm 1}]$ such that
$$Tr_1^{(n)}(\mathfrak{b}_{\mathbb{P}})(y)=v^{-\mathrm{dim}\;G_n^{\alpha}}\sum_i (-1)^i tr_1(H^i(\mathbb{P}_n)_{|y})=v^{-\mathrm{dim}\;G_n^{\alpha}}\sum_i v^{-i}P_{i,y}(t).$$
By definition of the intersection cohomology complex, we have
$$(\mathbb{P}_n)_{|Z_{\mathbb{P},n}}=\mathfrak{L}_{\mathbb{P}}[\mathrm{dim}\;Z_{\mathbb{P},n}], \qquad
\mathrm{dim}\;supp(H^{-i}(\mathbb{P}_n)) <i\qquad\;\text{for\;} i < \mathrm{dim}\; Z_{\mathbb{P},n}.$$
In particular, as $\mathbb{P}_n$ is locally constant on each of the subvarieties $Z_{\mathbb{Q},n}$ we have
$P_{j,y}=0$ for any pair $(j,y)$ such that $y \in Z_{\mathbb{Q},n}$ with $\mathbb{Q} \neq \mathbb{P}$ and $j \geq -\mathrm{dim}\;Z_{\mathbb{Q},n}$. Hence the value of $Tr_1^{(n)}(\mathfrak{b}_{\mathbb{P}})$ at any $y \in Z_{\mathbb{Q},n}$ with $\mathbb{Q} \neq \mathbb{P}$ is the specialization at $\nu=v, t=\tau$ of an element in $\nu^{\mathrm{dim}\;Z_{\mathbb{Q},n}+1-\mathrm{dim}\;G_n^{\alpha}} \mathbb{N}[\nu,t^{\pm 1}]$. On the other hand, one checks that for any such $y$
$$\beta_{\omega(\mathbb{Q})}(y) \in \nu^{\mathrm{dim}\;Z_{\mathbb{Q},n}-\mathrm{dim}\;G_n^{\alpha}}\mathbb{N}[\nu], \qquad
\beta_{\omega(\mathbb{R})}(y)=0\;\text{if}\; \mathbb{R} \neq \mathbb{Q}.$$
We conclude that $\beta_{\omega(\mathbb{Q})}$ appears in ${\mathbf{b}}_{\mathbb{P}}$ with a coefficient in $\nu\mathbb{N}[\nu,t^{\pm 1}]$. Since this is true for any $\mathbb{Q}$, (\ref{E:vbase}) holds, and ${\mathbf{b}}_{\mathbb{P}}={\mathbf{b}}_{\omega(\mathbb{P})}$. Finally, statement i) was shown in the course of the above proof, and ii) is a consequence of Corollary~\ref{C:verypure2}.
$\hfill \checkmark$
\vspace{.1in}
\noindent
\textbf{Remark.} From the above proof it follows that the involution $x \mapsto \overline{x}$ of $\widehat{\mathbf{A}}^+_{\mathcal{A}}$ considered in Section~2 is a ring homomorphism.
\vspace{.2in}
\paragraph{\textbf{6.4.}}
It seems natural at this point to introduce a family of polynomials $\tilde{\daleth}_{\mathbf{p},\mathbf{q}}(\nu,t) \in \mathbb{N}[\nu,t^{\pm 1}]$ indexed by pairs of convex paths by the formula
${\mathbf{b}}_{\mathbf{p}}=\sum_{\mathbf{q}} \tilde{\daleth}_{\mathbf{p},\mathbf{q}}(\nu,t) \beta_{\mathbf{q}}$. By construction we have $\tilde{\daleth}_{\mathbf{p},\mathbf{p}}(\nu,t)=1$ and $\tilde{\daleth}_{\mathbf{p},\mathbf{q}}(\nu,t)=0$ if $\mathbf{q} \not\prec \mathbf{p}$ and $\mathbf{q}\neq \mathbf{p}$.
In order to get something more reminiscent of Kostka polynomials, we slightly renormalize these polynomials as follows. Define a new basis $(\rho_{\mathbf{p}})_{\mathbf{p} \in \mathbf{Conv}^+}$ of $\widehat{\boldsymbol{\mathcal{E}}}^+_{\mathbf{R}}$ in exactly the same way as $(\beta_{\mathbf{p}})$ was defined, but using the Hall-Littlewood polynomials $P_{\lambda}(\nu)$ instead of the Schur functions $s_{\lambda}$ (see Section~2.3). We define the \textit{elliptic Kostka polynomial} $\daleth_{\mathbf{p},\mathbf{q}}(\nu,t)$ by the relations ${\mathbf{b}}_{\mathbf{p}}=\sum_{\mathbf{q}} {\daleth}_{\mathbf{p},\mathbf{q}}(\nu,t) \rho_{\mathbf{q}}$. Again, we have ${\daleth}_{\mathbf{p},\mathbf{p}}(\nu,t)=1$ and now ${\daleth}_{\mathbf{p},\mathbf{q}}(\nu,t)=0$ if $\mathbf{q} \not\preceq \mathbf{p}$. In addition, if
$\lambda=(\lambda_1, \ldots, \lambda_r)$ and $\sigma=(\sigma_1, \ldots, \sigma_s)$ are partitions and if $deg(\mathbf{x})=1$ then setting
$\mathbf{p}=(\lambda_1\mathbf{x}, \ldots, \lambda_r\mathbf{x}), \mathbf{q}=(\sigma_1\mathbf{x}, \ldots, \sigma_s \mathbf{x})$ we have $\daleth_{\mathbf{p},\mathbf{q}}(\nu,t)=
K_{\lambda,\sigma}(\nu)$, where $K_{\lambda,\sigma}(\nu)$ denotes the usual $q$-Kostka polynomial.
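For instance, if $deg(\mathbf{x})=1$, $\mathbf{p}=(2\mathbf{x})$ and $\mathbf{q}=(\mathbf{x},\mathbf{x})$, then (assuming Macdonald's normalization of the Hall-Littlewood polynomials) the expansion $s_{(2)}=P_{(2)}+\nu P_{(1,1)}$ gives
$$\daleth_{\mathbf{p},\mathbf{p}}(\nu,t)=1, \qquad \daleth_{\mathbf{p},\mathbf{q}}(\nu,t)=K_{(2),(1,1)}(\nu)=\nu.$$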
\vspace{.1in}
We finish by describing a symmetry property which the polynomials $\tilde{\daleth}_{\mathbf{p},\mathbf{q}}(\nu,t)$ and $\daleth_{\mathbf{p},\mathbf{q}}(\nu,t)$ enjoy.
\begin{prop} Let $\gamma \in SL(2,\mathbb{Z})$, and assume that $\mathbf{p},\mathbf{q} \in \mathbf{Conv}^+$ are such that $\gamma(\mathbf{p}), \gamma(\mathbf{q}) \in \mathbf{Conv}^+$ (that is, $\gamma(\mathbf{p})$ and $\gamma(\mathbf{q})$ are still entirely contained in $\mathbf{Z}^+$). Then
$$\daleth_{\gamma(\mathbf{p}),\gamma(\mathbf{q})}(\nu,t)=\daleth_{\mathbf{p},\mathbf{q}}(\nu,t), \qquad
\tilde{\daleth}_{\gamma(\mathbf{p}),\gamma(\mathbf{q})}(\nu,t)=\tilde{\daleth}_{\mathbf{p},\mathbf{q}}(\nu,t).$$
\end{prop}
\noindent
\textit{Proof.} By \cite{BS} Corollary~3.2 the group $SL(2,\mathbb{Z})$ naturally acts on the Drinfeld double (with trivial center) of $\boldsymbol{\mathcal{E}}_{\mathbf{R}}$. This action is compatible with the elements ${\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}$ and $\beta_{\mathbf{p}}$; if $\gamma \in SL(2,\mathbb{Z})$ is such that $\gamma(\mathbf{p}) \in \mathbf{Conv}^+$ then $\gamma \cdot {\mathbf{1}}^{\textbf{ss}}_{\mathbf{p}}={\mathbf{1}}^{\textbf{ss}}_{\gamma(\mathbf{p})}$ and $\gamma \cdot \beta_{\mathbf{p}}=\beta_{\gamma(\mathbf{p})}$. Now fix $\gamma,\mathbf{p},\mathbf{q}$ as in the
Proposition. Recall from Section~2.2 that for a convex path $\mathbf{r}$ for which $\omega^{-1}(\mathbf{r}) \in \mathcal{P}^{\alpha_1} \times \cdots \times \mathcal{P}^{\alpha_r}$ we set $HN(\mathbf{r})=(\alpha_1, \ldots, \alpha_r)$ and that $\mathbf{r} \prec \mathbf{r}'$ is equivalent to $HN(\mathbf{r}) \prec HN(\mathbf{r}')$.
The action of $\gamma$ on $\boldsymbol{\mathcal{E}}_{\mathbf{R}}$ lifts at the geometric level to an isomorphism between open substacks of $\underline{Coh}^{|\mathbf{p}|}$ and $\underline{Coh}^{|\gamma(\mathbf{p})|}$~:
$$i_{\gamma}: \bigsqcup_{HN(\underline{\sigma}) \succ HN(\mathbf{q})} \underline{HN}^{-1}(\underline{\sigma}) \stackrel{\sim}{\longrightarrow} \bigsqcup_{HN(\underline{\sigma}') \succ HN(\gamma(\mathbf{q}))} \underline{HN}^{-1}(\underline{\sigma}')$$
which maps isomorphically the substacks $Z_{\omega^{-1}(\mathbf{p})}$ onto $Z_{\omega^{-1}(\gamma(\mathbf{p}))}$ (see \ref{E:XP}).
In particular, this allows us to identify the Frobenius traces of stalks of $\omega^{-1}(\mathbf{p})$ on
$Z_{\omega^{-1}(\mathbf{q})}$ and of $\omega^{-1}(\gamma(\mathbf{p}))$ on
$Z_{\omega^{-1}(\gamma(\mathbf{q}))}$. The Proposition follows.$\hfill \checkmark$
\vspace{.2in}
\centerline{\textbf{Acknowledgments}}
\vspace{.1in}
I would like to thank I. Burban, D. Harari, D. Madore and E. Vasserot for many helpful discussions.
\vspace{.2in}
\section{Introduction}
\subsection{Monotone orbifold Hurwitz numbers} A sequence of transpositions $\tau_1,\dots,\tau_m\in S_d$, $\tau_i=(a_i,b_i)$, $a_i<b_i$, $i=1,\dots,m$, is called monotone if $b_1\leq b_2\leq \cdots\leq b_m$.
For the entire paper, fix a positive integer $q$. The disconnected monotone $q$-orbifold Hurwitz numbers $h_{g,\mu}^\bullet$, $\mu=(\mu_1,\dots,\mu_\ell)$ are defined as
\begin{equation}
h_{g,\mu}^\bullet \coloneqq \frac{|\mathrm{Aut}(\mu)|}{|\mu|!} \left|
\left\{
(\tau_0,\tau_1,\dots,\tau_m) \, \Bigg|\,
\begin{array}{c}
\tau_i\in S_{|\mu|}, \tau_0\tau_1\cdots\tau_m \in C_\mu, \tau_0\in C_{(q,\dots,q)}, \\
m=2g-2+\ell+\frac{|\mu|}{q}, \text{ and} \\
\tau_1,\dots,\tau_m \text{ is a monotone sequence of transpositions}
\end{array}
\right\}
\right|
\,.
\end{equation}
Here, $ |\mu | = \sum_{i=1}^\ell \mu_i$ and $ \mathrm{Aut}(\mu) = \{ \sigma \in S_\ell \mid \mu_j = \mu_{\sigma (j)}\ \forall j\} $.\par
The connected monotone Hurwitz numbers $h_{g,\mu}^\circ$ are defined by the same formula, but with the additional condition that $\tau_0,\tau_1,\dots,\tau_m$ generate a transitive subgroup of $S_{|\mu|}$.
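As a sanity check on this definition, for very small $|\mu|$ the numbers $h^\bullet_{g,\mu}$ can be computed by direct enumeration over the symmetric group. The following is a minimal brute-force sketch (all function names are ours, purely for illustration):

```python
from itertools import product, permutations
from collections import Counter
from fractions import Fraction
from math import factorial

def cycle_type(p):
    # cycle type of a permutation given as a tuple of images of 0..d-1
    seen, ct = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j); j = p[j]; length += 1
            ct.append(length)
    return tuple(sorted(ct, reverse=True))

def monotone_hurwitz(g, mu, q=1):
    # disconnected monotone q-orbifold Hurwitz number h_{g,mu}, by brute force
    d, ell = sum(mu), len(mu)
    m = 2 * g - 2 + ell + d // q            # number of transpositions
    target = tuple(sorted(mu, reverse=True))
    # tau_0 ranges over the conjugacy class C_{(q,...,q)}
    taus0 = [p for p in permutations(range(d))
             if cycle_type(p) == (q,) * (d // q)]
    # transpositions (a,b) with a<b, stored as (b, permutation tuple)
    transpos = []
    for b in range(1, d):
        for a in range(b):
            t = list(range(d)); t[a], t[b] = t[b], t[a]
            transpos.append((b, tuple(t)))
    count = 0
    for seq in product(transpos, repeat=m):
        if any(seq[i][0] > seq[i + 1][0] for i in range(m - 1)):
            continue                        # not a monotone sequence
        for t0 in taus0:
            p = t0
            for _, t in seq:                # p = tau_0 tau_1 ... tau_m
                p = tuple(p[t[i]] for i in range(d))
            if cycle_type(p) == target:
                count += 1
    aut = 1
    for c in Counter(mu).values():
        aut *= factorial(c)
    return Fraction(aut * count, factorial(d))
```

For example, for $q=1$ this gives $h^\bullet_{0,(3)}=2/3$: exactly four monotone pairs of transpositions in $S_3$ multiply to a $3$-cycle.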
The double monotone Hurwitz numbers, where `double' refers to allowing $\tau_0$ to be an arbitrary permutation, were first introduced by Goulden, Guay-Paquet, and Novak in~\cite{GouldenHCIZ} in their study of the HCIZ integral; the orbifold version with $ \tau_0 \in C_{(q,\dotsc,q)}$ that we study in this paper was first considered explicitly as an object of research by Do and Karev in~\cite{DoKarev}. These numbers have been studied intensively in recent years due to their rich system of connections to integrability, combinatorics, representation theory, and geometry, see e.g.~\cite{GouldenPolynomiality,GouldenetalGenus0,GuayHarnad,HarnadOrlov,ALS,HahnKramerLewanski,Hahn,ACEH}.
\subsection{Topological recursion} The topological recursion of Chekhov, Eynard, and Orantin~\cite{EynardOrantin} is a recursive procedure that associates to some initial data on a Riemann surface $\Sigma$ a sequence of meromorphic differentials $\omega_{g,n}$ on $\Sigma^{\times n}$. The initial data consist of $\Sigma$ itself, two non-constant meromorphic functions $x$ and $y$ on $\Sigma$, and a choice of a symmetric bi-differential $B$ on $\Sigma^{\times 2}$ with a double pole with bi-residue 1 on the diagonal.
We assume that $x$ has simple critical points $p_1,\dots,p_s\in\Sigma$, and by $\sigma_i$ we denote the local deck transformation for $x$ near the point $p_i$. We also assume that the $ p_i$ are not critical points of $y$. We use the variables $z_i$ as the placeholders for the arguments of the differential forms to stress dependence on the point of the curve, and we denote by $z_I$ the set of variables with indices in the set $I$. Finally, $\llbracket n \rrbracket$ denotes the set $\{1,\dots, n\}$.
The topological recursion works as follows: first define $\omega_{0,1}\coloneqq ydx$, $\omega_{0,2}\coloneqq B$, and for $2g-2+n+1>0$
\begin{align}
\omega_{g,n+1}(z_0,z_{\llbracket n\rrbracket}) \coloneqq \frac 12\sum_{i=1}^s \mathop{\Res}\limits_{z\to p_i} \frac{\int_{z}^{\sigma_iz} B(\cdot, z_0)}{ydx(\sigma_iz)-ydx(z)}\Bigg[ \omega_{g-1,n+2}(z,\sigma_iz,z_{\llbracket n\rrbracket}) \\ \notag
+ \sum_{\substack{g_1+g_2=g,\ I_1\sqcup I_2 = \llbracket n \rrbracket \\ (g_1,|I_1|)\neq (0,0)\neq (g_2,|I_2|)}} \omega_{g_1,1+|I_1|}(z,z_{I_1})\omega_{g_2,1+|I_2|}(\sigma_iz, z_{I_2}) \Bigg]\,.
\end{align}
Originally, this procedure was designed to compute the cumulants of some class of matrix models~\cite{ChekhovEynard}, but since then it has evolved significantly, and nowadays it is intensively studied at the crossroads of enumerative geometry, integrable systems, and mirror symmetry, see e.g.~\cite{EynardBook,LiuMulase} for a survey of applications. In particular, it is the key ingredient of the so-called remodeling of the B-model conjecture proposed in~\cite{BKMP}, which suggests that topological recursion is the right version of the B-model for a class of enumerative problems in the context of mirror symmetry.
\subsection{The Do-Karev conjecture}
Denote by $H_{g,n}$ the $n$-point generating function for the connected $q$-orbifold monotone Hurwitz numbers:
\begin{equation}
H_{g,n}(x_1,\dots,x_n)\coloneqq \sum_{\mu_1,\dots,\mu_n=1}^\infty h^\circ_{g,\mu_1,\dots,\mu_n} \prod_{i=1}^n x_i^{\mu_i}\,.
\end{equation}
Consider the spectral curve data given by $\Sigma=\mathbb{C}$, $x(z)=z(1-z^{q})$ and $y(z)= z^{q-1}/(1-z^q)$, $B(z_1,z_2) = dz_1dz_2/(z_1-z_2)^2$ (our definition of $y$ differs by a sign from the one in~\cite{DoKarev} since we use a different sign in the definition of the recursion kernel than \emph{op.~cit.}). The critical points of $x(z)$ are $p_j = (q+1)^{-1/q} \exp(2\pi\sqrt{-1} j/q)$, $j=1,\dots,q$.
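As a quick numerical sanity check of the stated critical points (a sketch, not needed for the proof): since $x'(z)=1-(q+1)z^q$, the $p_j$ are exactly the $q$-th roots of $1/(q+1)$.

```python
import cmath

# x(z) = z(1 - z^q) has x'(z) = 1 - (q+1) z^q, which vanishes at
# p_j = (q+1)^{-1/q} exp(2*pi*i*j/q), j = 1, ..., q.
for q in (1, 2, 3):
    for j in range(1, q + 1):
        p_j = (q + 1) ** (-1.0 / q) * cmath.exp(2j * cmath.pi * j / q)
        assert abs(1 - (q + 1) * p_j ** q) < 1e-12
```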
Consider the symmetric multi-differentials $\omega_{g,n}(z_1,\dots,z_n)$, $g\geq 0$, $n\geq 1$, defined on $\mathbb{C}^n$ by the Chekhov-Eynard-Orantin topological recursion. The conjecture of Do-Karev claims that
\begin{equation}
\omega_{g,n}(z_1,\dots,z_n) = d_1\otimes\cdots \otimes d_n H_{g,n}(x_1,\dots,x_n)\,,
\end{equation}
where we consider the Taylor series expansion near $x_1=\cdots=x_n=0$ and substitute $ x_i \to x(z_i)$. This conjecture is proved for $(g,n)=(0,1)$ in~\cite{DoKarev} and for $(g,n)=(0,2)$ in~\cite{KLS} and in an unpublished work of Karev. It is also proved in~\cite{DoDyerMathews,DBKraPS} for all $(g,n)$ in the case $q=1$. In this paper we prove it in the general case:
\begin{theorem} The conjecture of Do-Karev holds.
\end{theorem}
In addition to settling an explicitly posed open conjecture, this theorem is interesting in several different contexts. Firstly, it can be considered as a mirror symmetry statement in the context of the remodeling of the B-model principle of~\cite{BKMP}. Secondly, it is a part of a more general conjecture for weighted double Hurwitz numbers proposed in~\cite{ACEH} and its proof might be useful for the analysis of this more general conjecture. Thirdly, once the Do-Karev conjecture is proved, one can use the results of~\cite{Eynard,DOSS} to express the monotone orbifold Hurwitz numbers as the intersection numbers of the tautological classes on the moduli spaces of curves (for $q=1$, this is done in~\cite{ALS,DoKarev}).
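For orientation, the already-established $(g,n)=(0,1)$, $q=1$ case can be verified numerically: on the curve $x=z(1-z)$, $y=1/(1-z)$, the expansion of $y$ in $x$ must reproduce the coefficients $d\,h^\circ_{0,(d)}$ of $dH_{0,1}/dx$; the first values $h^\circ_{0,(1)}=1$, $h^\circ_{0,(2)}=1/2$, $h^\circ_{0,(3)}=2/3$ follow from the definition by direct enumeration. A sketch over exact rationals:

```python
from fractions import Fraction

N = 4  # truncation order in x

def mul(a, b):
    # product of truncated power series (lists of coefficients)
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# invert x = z - z^2 by iterating z = x + z^2 (converges modulo x^N)
z = [Fraction(0)] * N
for _ in range(N):
    z2 = mul(z, z)
    z = [z2[0], z2[1] + 1] + z2[2:N]

# y = 1/(1 - z) as a series in x, via y_k = sum_{j>=1} z_j y_{k-j}
y = [Fraction(0)] * N
y[0] = Fraction(1)
for k in range(1, N):
    y[k] = sum(z[j] * y[k - j] for j in range(1, k + 1))

# d/dx H_{0,1} = sum_d d * h_{0,(d)} x^{d-1} must match the expansion of y
h = {1: Fraction(1), 2: Fraction(1, 2), 3: Fraction(2, 3)}
for d in (1, 2, 3):
    assert y[d - 1] == d * h[d]
```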
\subsection{Proof} For the proof we use a corollary of~\cite[Theorem 2.2]{BorotShadrin} (see also~\cite{BorotEynardOrantin}). Namely, in order to prove that the differentials $d_1\otimes\cdots \otimes d_n H_{g,n}(x_1,\dots,x_n)$ satisfy the topological recursion on a given \emph{rational} spectral curve, it is sufficient to show that
\begin{enumerate}
\item The conjecture holds for $(g,n)=(0,1)$ and $(0,2)$.
\item $H_{g,n}(x_1,\dots,x_n)$, $2g-2+n>0$, are the expansion at the point $x_1=\cdots=x_n=0$ of a finite linear combination of the products of finite order $d/dx_i$-derivatives of the functions $\xi_j(z_i)\coloneqq 1/(z_i-p_j)$, $x_i=x(z_i)$, $i=1,\dots,n$, $j=1,\dots,q$.
\item The differential forms $d_1\otimes\cdots \otimes d_n H_{g,n}(x_1,\dots,x_n)$, considered as globally defined differentials on the spectral curve rather than formal power series expansions, satisfy the so-called \emph{quadratic loop equations}. For a collection of symmetric differentials $ (\omega_{g,n})_{g\geq 0, n \geq 1}$ on a spectral curve, the quadratic loop equations state that for all $ g \geq 0$ and $n \geq 1$
\begin{equation}
\label{QLE}
\omega_{g-1,n+1}(z, \sigma_i(z), z_{\llbracket n-1 \rrbracket} ) + \sum_{\substack{g = g_1 + g_2\\ \llbracket n-1 \rrbracket = I \sqcup J}} \omega_{g_1,|I|+1}(z, z_I) \omega_{g_2,|J|+1}(\sigma_i(z),z_J)
\end{equation}
is holomorphic in $z$ near $p_i$, with a double zero at $ p_i$ itself, cf. \cite[(2.2)]{BorotShadrin}.
\end{enumerate}
The relation between \cite[Theorem 2.2]{BorotShadrin} and the list above is given by lemma~\ref{lem:lle}.\par
As we mentioned above, the unstable cases are proved in~\cite{DoKarev,KLS}, and in an unpublished work of Karev. The second property is proved in~\cite{KLS}. Thus, the only remaining step to complete the proof is to formulate and prove the quadratic loop equations; this is done in proposition~\ref{prop:QLE} below.\hfill $\Box$
\begin{remark}
This approach to proving the topological recursion was used before in~\cite{DLPS,DBKraPS} (where the quadratic loop equations followed directly from the cut-and-join equation) and in~\cite{BKLPS,r-spinFullProof}, where a system of formal corollaries of the quadratic loop equations was related to the cut-and-join operators of completed $r$-cycles. In this paper we combine the latter result with the formula in~\cite[Example 5.8]{ALS} that expresses the partition function of the monotone orbifold Hurwitz numbers in terms of an infinite series of the operators of completed $r$-cycles.
\end{remark}
\subsection{Organization of the paper} This paper is essentially based on the results of~\cite{r-spinFullProof} and~\cite{ALS}. However, in this paper we work exclusively in the so-called bosonic Fock space, i.e.~the space of symmetric functions, instead of the fermionic Fock space (the semi-infinite wedge formalism) used in \emph{op.~cit.}. By the classical boson-fermion correspondence~\cite{KacBook,MiwaJimboDate}, we can translate the necessary results from the fermionic Fock space into the language of differential operators in the ring of symmetric functions.
In section~\ref{sec:CJ} we derive the so-called ``cut-and-join'' evolutionary equation for the exponential partition function of monotone orbifold Hurwitz numbers and discuss its convergence issues. In section~\ref{sec:Holom} we use the cut-and-join operator to construct a particular expression holomorphic at the critical points of the spectral curve, which is needed for the proof of the quadratic loop equations. In section~\ref{sec:QLE} we formulate and prove the quadratic loop equations.
\subsection{Acknowledgments} We thank A.~Alexandrov, P.~Dunin-Barkowski, M.~Karev, and D.~Lewa\'nski for useful discussions. We also thank an anonymous referee for useful suggestions.
A.P. would like to thank the Korteweg-de Vries Institute for its hospitality and flourishing scientific atmosphere.
R.K. and S.S. were supported partially by the Netherlands Organization for Scientific Research.
A.P. was supported in part by the grant of the Foundation for the Advancement of Theoretical Physics ``BASIS'',
by RFBR grants 16-01-00291, 18-31-20046 mol\_a\_ved and 19-01-00680 A.
\section{The cut-and-join operator}
\label{sec:CJ}
Define the function $ \zeta (z) = e^{z/2} - e^{-z/2}$ and for a partition $ \lambda$ (viewed as its Young diagram), and a box $ \square = (i,j) \in \lambda$, let $ \mathsf{cr}^\lambda_\square = i-j$ be its content. The partition function of the monotone $q$-orbifold Hurwitz numbers can be defined as \cite{HarnadOrlov}
\begin{equation}\label{partitionfunction}
Z\coloneqq \sum\limits_{g=0}^\infty\sum\limits_{\mu} \frac{\hbar^{2g-2+l(\mu)+|\mu|/q}}{|\mathrm{Aut}(\mu)|}h^\bullet_{g,\mu}\prod\limits_{i=1}^{l(\mu)}p_{\mu_i}= \sum_\lambda s_\lambda (\delta_q ) \Big( \prod_{\square \in \lambda} (1-\hbar \mathsf{cr}^\lambda_\square )^{-1}\Big) s_\lambda (p)\,,
\end{equation}
where the $s_\lambda$ are Schur functions expressed as polynomials in the power sums $ p_i$, and the left Schur function is evaluated at the point $ p_j = \delta_{j,q}$.
Define the series of operators $Q(z)=\sum_{r=1}^\infty Q_rz^r$ as
\begin{equation}
Q(z)\coloneqq\frac{1}{\zeta(z)} \sum_{s=1}^\infty \left(
\sum_{\substack{
n\geq 1 \\
k_1,\dots,k_n\geq 1 \\
k_1+\cdots+k_n=s
}}
\frac 1{n!}
\prod_{i=1}^n
\frac{\zeta(k_i z) p_{k_i}}{k_i}
\right)
\left(
\sum_{\substack{
m\geq 1 \\
\ell_1,\dots,\ell_m\geq 1 \\
\ell_1+\cdots+\ell_m=s
}}
\frac 1{m!}
\prod_{j=1}^m
\zeta(\ell_j z) \frac{\partial}{\partial p_{\ell_j}}
\right).
\end{equation}
Define the operator $J$ as
\begin{align}
J &\coloneqq \frac{\frac{\partial}{\partial \hbar}}{\zeta\left(\hbar^2 \frac{\partial}{\partial \hbar}\right)} \sum_{r=1}^\infty \hbar^r Q_r (r-1)!
-
\frac{1}{\hbar} Q_1
\\ \notag
& = \sum_{r=2}^\infty \hbar^{r-2} Q_r (r-1)! +\sum_{\alpha=1}^\infty c_\alpha \sum_{r=1}^\infty \hbar^{r-2+2\alpha} Q_r (r-1+2\alpha)!.
\end{align}
Here $c_\alpha$ are the coefficients of the expansion $\frac{z}{\zeta(z)}= \sum_{\alpha=0}^\infty c_\alpha z^{2\alpha}$, that is, $c_1=-\frac{1}{24}$, $c_2=\frac 7{5760}$, and in general $c_\alpha$ can be expressed in terms of the Bernoulli numbers as $c_\alpha = \frac{(2^{1-2\alpha}-1) B_{2\alpha}}{(2\alpha)!}$.
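These coefficients are straightforward to verify with exact rational arithmetic: since $\zeta(z)=2\sinh(z/2)$, we have $\zeta(z)/z=\sum_{k\geq 0} z^{2k}/(4^k(2k+1)!)$, and inverting this even power series recovers the $c_\alpha$. A small self-contained check:

```python
from fractions import Fraction
from math import factorial

N = 4  # number of even-order coefficients to compute
# zeta(z)/z = sum_k z^{2k} / (4^k (2k+1)!), since zeta(z) = 2 sinh(z/2)
a = [Fraction(1, 4 ** k * factorial(2 * k + 1)) for k in range(N)]
# z/zeta(z) = sum_alpha c_alpha z^{2alpha}: invert the series a,
# using c_k = -sum_{j<k} c_j a_{k-j} (note a_0 = 1)
c = [Fraction(1)]
for k in range(1, N):
    c.append(-sum(c[j] * a[k - j] for j in range(k)))

assert c[1] == Fraction(-1, 24)
assert c[2] == Fraction(7, 5760)
# check against (2^{1-2a} - 1) B_{2a} / (2a)! for the first Bernoulli numbers
B = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42)}
for al in (1, 2, 3):
    assert c[al] == (Fraction(2) ** (1 - 2 * al) - 1) * B[2 * al] / factorial(2 * al)
```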
\begin{proposition}
\label{DifEqPartFun}
We have:
$
\frac{\partial }{\partial \hbar} Z= JZ.
$
\end{proposition}
\begin{proof} Recall~\cite[proposition 5.2]{ALS}, which states that the operator $\mathcal{D}(\hbar)$ acting on the space of symmetric functions as $\mc{D} (\hbar)s_\lambda \coloneqq \left[\prod_{\square \in \lambda} (1-\hbar \mathsf{cr}^\lambda_\square )^{-1}\right]s_\lambda$ as in equation~\eqref{partitionfunction} can be expressed by the formula
\begin{equation}
\mathcal{D}(\hbar)=\exp\left(\left[
\tilde{\mathcal{E}}_0(\hbar^2 \frac{\partial}{\partial \hbar}) / \zeta(\hbar^2 \frac{\partial}{\partial \hbar})
-\mathcal{F}_1
\right]\log(\hbar) \right) \,,
\end{equation}
where $\tilde{\mathcal{E}}_0(z) = \sum_{r=1}^\infty \mathcal{F}_r\frac{z^r}{r!}= z \sum_{r=1}^\infty \mathcal{F}_{r}\frac{z^{r-1}}{r!}$, and $\mathcal{F}_r$ is the operator whose action in the basis of Schur polynomials is diagonal and is given by
\begin{equation}
\mathcal{F}_{r} s_\lambda = \sum_{i=1}^\ell \left( (\lambda_i -i +\frac 12)^r - (-i+\frac 12)^r\right)s_\lambda
\end{equation}
for $\lambda = (\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_\ell)$ \cite[equation (2.4)]{ALS}. The operators $\mathcal{F}_r$ can be expressed as differential operators in the variables $p$ as $\mathcal{F}_r s_\lambda = r! Q_r s_\lambda $, $r\geq 1$ (\cite[theorem 5.2]{SSZ-LMS}, see also~\cite{Alexandrov-CJ,Rossi-CJ}).
Note that
\begin{align}
\frac{\partial}{\partial \hbar}\mathcal{D}(\hbar)
&=\frac{1}{\hbar^2}\cdot \hbar^2\frac{\partial}{\partial \hbar}\mathcal{D}(\hbar) = \mathcal{D}(\hbar) \cdot \frac{1}{\hbar^2}\left(\left[
\tilde{\mathcal{E}}_0(\hbar^2 \frac{\partial}{\partial \hbar}) / \zeta(\hbar^2 \frac{\partial}{\partial \hbar}) - \mathcal{F}_1
\right]\hbar \right)\\ \notag
&=\mathcal{D}(\hbar) \cdot \left(\frac{1}{\hbar^2} \left(
\frac{\hbar^2 \frac{\partial}{\partial \hbar}}{\zeta(\hbar^2 \frac{\partial}{\partial \hbar})}
\sum_{r=1}^\infty \mathcal{F}_{r}\frac{\hbar^{r}}{r}
\right)
-\frac{1}{\hbar}\mathcal{F}_1 \right)
\end{align}
and, therefore,
\begin{align}
\frac{\partial}{\partial \hbar}Z & = \sum_\lambda s_\lambda( \delta_q) \mathcal{D}(\hbar) \left[
\frac{\frac{\partial}{\partial \hbar}}{\zeta(\hbar^2 \frac{\partial}{\partial \hbar})}
\sum_{r=1}^\infty \mathcal{F}_{r}\frac{\hbar^{r}}{r}
-\frac{1}{\hbar}\mathcal{F}_1
\right]
s_\lambda (p)
\\ \notag
& = \sum_\lambda s_\lambda( \delta_q) \mathcal{D}(\hbar) \left[
\frac{\frac{\partial}{\partial \hbar}}{\zeta(\hbar^2 \frac{\partial}{\partial \hbar})}
\sum_{r=1}^\infty \hbar^r Q_{r} (r-1)!
-\frac{1}{\hbar}Q_1
\right]
s_\lambda (p)
=JZ.
\end{align}
\end{proof}
\begin{corollary}\label{cor:CJ}
For $2g-2+n>0$ we have:
\begin{align} \label{eq:CutAndJoinNPoint}
&
\bigg(2g-2+n+\frac{1}{q}\sum_{i=1}^n D_{x_i}\bigg) \tilde H_{g,n} =
\\ \notag
&
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{(m+2d-1)!}{m!}
\sum_{\ell =1}^m
\frac{1}{\ell!}
\sum_{\substack{
\{ k\} \sqcup \bigsqcup_{j=1}^\ell K_j = \llbracket n \rrbracket \\
\bigsqcup_{j=1}^\ell M_j = \llbracket m \rrbracket\\
M_j \neq \emptyset \\
g-d = \sum_{j=1}^\ell g_j + m - \ell \\
g_1,\ldots,g_{\ell} \geq 0
}}
Q_{d,\emptyset,m}^{(k)} \bigg[\prod_{j = 1}^{\ell} \tilde{H}_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j})\bigg]
\\ \notag
&
+
\sum_{\alpha=1}^g c_\alpha
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 1}}
\frac{(m+2d-1+2\alpha)!}{m!}
\sum_{\ell =1}^m
\frac{1}{\ell!}
\sum_{\substack{
\{ k\} \sqcup \bigsqcup_{j=1}^\ell K_j = \llbracket n \rrbracket \\
\bigsqcup_{j=1}^\ell M_j = \llbracket m \rrbracket\\
M_j \neq \emptyset \\
g-d-\alpha = \sum_{j=1}^\ell g_j + m - \ell \\
g_1,\ldots,g_{\ell} \geq 0
}}
Q_{d,\emptyset,m}^{(k)} \bigg[\prod_{j = 1}^{\ell} \tilde{H}_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j})\bigg],
\end{align}
where $D_{x_i} = x_i \frac{\partial}{\partial x_i}$,
\begin{align}
\label{Qrdef}
\sum_{d \geq 0} Q_{d,K_0,m}^{(k)}\,z^{2d} & = \frac{z}{\zeta(z)} \prod_{i \in \{k\} \sqcup K_0} \frac{\zeta(zD_{x_i})}{zD_{x_i}} \circ \prod_{j = 1}^m \frac{\zeta(zD_{\xi_j})}{z}\bigg|_{\xi_j = x_k}, & D_{\xi_j} &= \xi_j \frac{\partial}{\partial \xi_j}\,,
\\
\tilde H_{0,1}(\xi) & = H_{0,1}(\xi)
\\
\tilde{H}_{0,2}(\xi,x) & = H_{0,2}(\xi,x) + H^{\textup{sing}}_{0,2}(\xi, x) \,, & H^{\textup{sing}}_{0,2}(\xi, x)& =\log \Big(\frac{\xi - x}{\xi x}\Big),\label{TildeH02}
\\
\tilde{H}_{0,2}(\xi_1,\xi_2) & = H_{0,2}(\xi_1,\xi_2),
\\
\tilde{H}_{g,n} & = H_{g,n} + \sum_{\alpha = 0}^g c_\alpha \frac{(2g-2+n+2\alpha )!}{2g-2+n}\,, & 2g-2+n&>0.\label{TildeHgn}
\end{align}
\end{corollary}
The contribution $H^{\textup{sing}}_{0,2}(\xi, x)$ is called the \emph{singular part}. Note that we introduce more general operators $Q_{d,K_0,m}^{(k)}$ than the ones used in the statement of the corollary (where we only have $K_0=\emptyset$), since we need them below in the proof.
\begin{proof} The proof repeats \emph{mutatis mutandis} the proof of~\cite[proposition 10]{BKLPS}, so we only give a sketch of the idea, with the analogy explained. The operator $J$ is a linear combination of the $ Q_r$. Hence, comparing proposition~\ref{DifEqPartFun} to \cite[equation (3)]{BKLPS}: $ \frac{1}{r!} \frac{\partial}{\partial \beta} Z^{r,q} = Q_{r+1}Z^{r,q}$, we can manipulate the first equation in the same way as the second. So, we map $ p_\mu$ to the monomial symmetric functions $ \textup{M}_\mu(x_1, \dotsc, x_n)$, using \cite[equation~(5)]{BKLPS} for the effect of this map on the operators $Q_r$ acting on a partition function $Z$. The next step is incorporating the factors $ \frac{x_i}{x_k-x_i}$ as part of $ \tilde{H}_{0,2}$, which is given by \eqref{TildeH02} and explained in the proof of \cite[proposition~10]{BKLPS}. As in that proof, this adds a term on the right-hand side of equation~\eqref{eq:CutAndJoinNPoint} in which all factors are singular parts, and this is the extra term in \eqref{TildeHgn}. This corresponds to the case $m=\ell = n-1$, and can equivalently be written in the shape of~\cite[proposition 6]{BKLPS}, with $m=\ell = 0$. This gives
\begin{equation}
\begin{split}
\sum_{d \geq \min \{ 0,3-n\}} (n+2d-2)! \sum_{\{k\} \sqcup K_0 = \llbracket n\rrbracket} \delta_{g,d} Q^{(k)}_{d,K_0,0} \prod_{j \in K_0} \frac{x_j}{x_k-x_j} \\
+\sum_{\alpha =1}^g c_\alpha \sum_{d \geq 0} (n+2d+2\alpha -2)! \sum_{\{k\} \sqcup K_0 = \llbracket n\rrbracket} \delta_{g,d} Q^{(k)}_{d,K_0,0} \prod_{j \in K_0} \frac{x_j}{x_k-x_j} \,.
\end{split}
\end{equation}
The condition $ d \geq \min \{ 0,3-n\}$ excludes the unstable cases $ (g,n) = (0,1), (0,2)$, and for $2g-2+n >0$ it simplifies to
\begin{align}
\sum_{\alpha =0}^g &c_\alpha(n+2g+2\alpha -2)! \sum_{\{k\} \sqcup K_0 = \llbracket n\rrbracket} \prod_{i =1}^n \frac{\zeta(zD_{x_i})}{zD_{x_i}} \prod_{j \in K_0} \frac{x_j}{x_k-x_j} \\
&= \sum_{\alpha =0}^g c_\alpha (n+2g+2\alpha -2)! \prod_{i =1}^n \frac{\zeta(zD_{x_i})}{zD_{x_i}} \sum_{k=1}^n \prod_{\substack{j=1\\j\neq k}}^n \frac{x_j}{x_k-x_j} \,.
\end{align}
As calculated in~\cite[proposition 10]{BKLPS},
\begin{equation}
\sum_{k=1}^n \prod_{\substack{j=1\\j\neq k}}^n \frac{x_j}{x_k-x_j} =-1
\end{equation}
and therefore, as this is a constant, the operators $\frac{\zeta(zD_{x_i})}{zD_{x_i}}$ act on it as the identity.\par
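For instance, for $n=2$ this identity reads
\begin{equation*}
\frac{x_2}{x_1-x_2} + \frac{x_1}{x_2-x_1} = \frac{x_2-x_1}{x_1-x_2} = -1\,.
\end{equation*}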
Now we use induction on $ 2g-2+n$, with the induction hypothesis being that $ H_{g,n} - \tilde{H}_{g,n} $ is a constant. This holds for the $(0,1)$ case, while the $(0,2)$ case is taken care of by the previous argument. Using the induction hypothesis, we get from the previous calculation that
\begin{equation}
\bigg(2g-2+n+\frac{1}{q}\sum_{i=1}^n D_{x_i}\bigg)(H_{g,n}- \tilde H_{g,n}) = \sum_{\alpha =0}^g c_\alpha (n+2g+2\alpha -2)!\,,
\end{equation}
as all constants from previous $H-\tilde{H}$ are annihilated on the right-hand side by derivatives.\par
As both $H_{g,n} $ and $ \tilde{H}_{g,n}$ are power series in the $x_i$ and the $D_{x_i}$ preserve degree and vanish on constants, this shows that
\begin{equation}
H_{g,n}- \tilde H_{g,n} = \sum_{\alpha =0}^g c_\alpha \frac{(n+2g+2\alpha -2)!}{2g-2+n}\,.
\end{equation}
\end{proof}
\begin{remark}\label{rem:globalfunction} It is proved in~\cite{KLS} that each $H_{g,n}$ is an expansion of a globally defined meromorphic function on $\mathbb{C}^n$ with known positions of poles and bounds on their order. More precisely, for $2g-2+n>0$, $H_{g,n}(x_ {\llbracket n \rrbracket})$ is the expansion of a function of $z_{\llbracket n \rrbracket}$, $x_i=x(z_i)$, which, by a slight abuse of notation, we also denote by $H_{g,n}(z_ {\llbracket n \rrbracket})$, with the poles in each variable only at the points $p_1,\dots,p_q$, where the order of poles is bounded by some constants that depend only on $g$ and $n$.
\end{remark}
Remark~\ref{rem:globalfunction} implies, in particular, that the right hand side of equation~\eqref{eq:CutAndJoinNPoint} is an infinite sum of meromorphic functions on $\mathbb{C}^{n}$ with the natural coordinates $z_1,\dots,z_n$, $x_i=x(z_i)$, with the poles in each variable only at the points $p_1,\dots,p_q$ and on the diagonals, where the order of poles is bounded by some constants that depend only on $g$ and $n$. Let us prove that this infinite sum converges absolutely and uniformly on every compact subset of $(D\setminus \{p_1,\dots,p_q\})^{n}\setminus\mathrm{Diag}$ to a meromorphic function with the same restriction on poles (and, therefore, equation~\eqref{eq:CutAndJoinNPoint} makes sense). Here, $D$ is the unit disc.
\begin{lemma}\label{lem:convergence}
Corollary~\ref{cor:CJ} holds on the level of meromorphic functions on the unit disc $D$ in the variables $z_i$, $i=1,\dots,n$: the right hand side converges absolutely and uniformly on every compact subset of $(D\setminus \{p_1,\dots,p_q\})^{n}\setminus\mathrm{Diag}$ to a meromorphic function with the poles in each variable only at the points $p_1,\dots,p_q$ and on the big diagonal, the locus where at least two coordinates are equal. The order of poles is bounded by some constants that depend only on $g$ and $n$.
\end{lemma}
\begin{proof}
In order to see the convergence, we have to rewrite each of the summands on the right hand side (the first summand and the coefficients of $c_\alpha$) in a way that collects all but finitely many terms in a series that can be analysed well. We claim that the only source of infinite summation is the factors $D_\xi \tilde H_{0,1}(\xi)$. To see this, let us first analyse the summation range of \eqref{eq:CutAndJoinNPoint} for a given $(g,n)$. Let us work it out for the first summand; the computation for all other summands is exactly the same. One summation condition is $g-d = \sum_{j=1}^\ell g_j + m - \ell$, which can be rewritten as $g = d+ \sum_{j=1}^\ell (g_j + |M_j| - 1)$. As $ g_j + |M_j| -1 >0$ unless $ (g_j,|M_j| ) = (0,1)$, and furthermore there are only finitely many $x_i$ to distribute, this shows that the sum over $m$, $d$ (which bounds the number of derivatives $ D$), decompositions of $ \llbracket n \rrbracket $, and $ g_j$ is finite if we exclude the factors $ D_{x_k} \tilde{H}_{0,1}$. Furthermore, each such term obtains an infinite `tail' of $D_{x_k} \tilde{H}_{0,1}$, as follows, where the variable $ m$ on the first line is split into $ m$ and $ t$ on the second and third lines:
\begin{align}
&
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{(m+2d-1)!}{m!}
\sum_{\ell =1}^m
\frac{1}{\ell!}
\sum_{\substack{
\{ k\} \sqcup \bigsqcup_{j=1}^\ell K_j = \llbracket n \rrbracket \\
\bigsqcup_{j=1}^\ell M_j = \llbracket m \rrbracket\\
M_j \neq \emptyset \\
g-d = \sum_{j=1}^\ell g_j + m - \ell \\
g_1,\ldots,g_{\ell} \geq 0
}}
Q_{d,\emptyset,m}^{(k)} \bigg[\prod_{j = 1}^{\ell} \tilde{H}_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j})\bigg] =
\\ \notag
& \sum_{m, d \geq 0}
\frac{1}{m!}
\sum_{\ell =0}^m
\frac{1}{\ell!}
\sum_{\substack{
\{ k\} \sqcup \bigsqcup_{j=1}^\ell K_j = \llbracket n \rrbracket \\
\bigsqcup_{j=1}^\ell M_j = \llbracket m \rrbracket\\
M_j \neq \emptyset \\
g-d = \sum_{j=1}^\ell g_j + m - \ell \\
g_1,\ldots,g_{\ell} \geq 0
}}
\left[Q_{d,\emptyset,m}^{(k)} \bigg[\prod_{j = 1}^{\ell} \tilde{H}_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j})\bigg] \right]_{\text{no}\, D_{x_k} \tilde H_{0,1}(x_k)}
\\ \notag
&
\phantom{
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{1}{m!}
\sum_{\ell =1}^m
\frac{1}{\ell!}
\sum_{
g-d = \sum_{j=1}^\ell g_j + m - \ell }
}
\times
\sum_{\substack{t=0\\t+m \geq 1\\t+m+2d \geq 2}}^\infty \frac{(m+t+2d-1)!}{t!} \left(D_{x_k} \tilde H_{0,1}(x_k)\right)^t.
\end{align}
Here we mean that in the second line, we exclude any factors $D_{x_k} \tilde{H}_{0,1}(x_k)$ left after the action of $Q$, and we collect these in the third line.
Now the first two summations are finite, the coefficients
\begin{equation}
\left[Q_{d,\emptyset,m}^{(k)} \bigg[\prod_{j = 1}^{\ell} \tilde{H}_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j})\bigg] \right]_{\text{no}\, D_{x_k} \tilde H_{0,1}(x_k)}
\end{equation}
are meromorphic functions with the desired restriction on poles, and the sum over $t$ determines the explicit function $ \sum_{t=0}^\infty \frac{(m+t+2d-1)!}{t!} u^t = (\frac{d}{du})^{m+2d-1} \frac{u^{m+2d-1}}{1-u}$ of the argument $u = z^q =xy = D_{x_k} \tilde H_{0,1}(x_k)$, which converges on the unit disc.
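For instance, for $m+2d-1=1$ (e.g.\ $m=2$, $d=0$) this identity specializes to
\begin{equation*}
\sum_{t=0}^\infty \frac{(t+1)!}{t!}\, u^t = \sum_{t=0}^\infty (t+1)\, u^t = \frac{d}{du}\,\frac{u}{1-u} = \frac{1}{(1-u)^2}\,,
\end{equation*}
a series with radius of convergence $1$.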
\end{proof}
\section{Holomorphic expression}
\label{sec:Holom}
In this section we analyse a symmetrization of equation~\eqref{eq:CutAndJoinNPoint} near one of the critical points of the function $x(z)$. For the rest of this paper we fix $p=p_j$ for some $j\in\{1,\dots, q\}$, and by $z\mapsto \bar{z}$ we denote the deck transformation near $p$.
We define the \emph{symmetrizing operator} $\mathsf{S}_z$ and the \emph{anti-symmetrizing operator} $\Delta_z$ by
\begin{align}
\mathsf{S}_zf(z) &\coloneqq f(z) + f(\bar{z})\,;\\ \notag
\Delta_zf(z) &\coloneqq f(z) - f(\bar{z})\,,
\end{align}
and use the identity~\cite{BKLPS,r-spinFullProof}
\begin{equation}\label{Sondiagonal}
\mathsf{S}_z \Big(f(z_1,\dots,z_r) \Big|_{z_i = z} \Big) = 2^{1-r} \Big( \sum_{\substack{I \sqcup J = \llbracket r \rrbracket \\ |J|\,\,\text{even}}} \Big( \prod_{i \in I} \mathsf{S}_{z_i} \Big) \Big( \prod_{j \in J} \Delta_{z_j} \Big)f(z_1,\dotsc, z_r) \Big)\Big|_{z_i = z}\,, \qquad r\geq 1.
\end{equation}
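For instance, for $r=2$ identity~\eqref{Sondiagonal} reads
\begin{equation*}
\mathsf{S}_z \Big(f(z_1,z_2) \Big|_{z_i = z}\Big) = \tfrac{1}{2}\Big( \mathsf{S}_{z_1}\mathsf{S}_{z_2} + \Delta_{z_1}\Delta_{z_2}\Big) f(z_1,z_2)\Big|_{z_i=z}\,,
\end{equation*}
which is immediate to check: on the right-hand side the mixed terms $f(\bar{z},z)$ and $f(z,\bar{z})$ cancel, leaving $f(z,z)+f(\bar{z},\bar{z})$.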
Recall remark~\ref{rem:globalfunction}. Another direct corollary of the results of~\cite{KLS} is the linear loop equations for the $n$-point functions that can be formulated as the following lemma:
\begin{lemma}
\label{lem:lle}
For any $g\geq0$ and $n\geq 1$ we have: $\mathsf{S}_{z_i} H_{g,n}(z_{\llbracket n \rrbracket})$ is holomorphic at $z_i\to p$.
\end{lemma}
\begin{proof}
By \cite[theorem~5.2~\&~proposition~6.2]{KLS}, cf. the statement after the proof of that proposition, the $H_{g,n}$ are linear combinations of polynomials in $ \big\{ \frac{d}{d x_i} \big\} $ acting on $ \prod_{i=1}^n \xi_{\alpha_i} (x_i)$, where, up to a linear change of basis given in \cite[Section~6.1]{KLS}, $ \xi_\alpha (z) = \frac{1}{z - p_\alpha}$. In particular, $ \mathsf{S}_{z_i} \xi_\alpha (z_i) $ is holomorphic at $ z_i \to p$ (trivially if $ p \neq p_\alpha$ and because the pole is odd if $ p = p_\alpha$). Because $x$ itself is invariant under the involution by definition, this holomorphicity is preserved under any number of applications of $ \frac{d}{d x_i}$.
\end{proof}
In order to simplify the notation, consider equation~\eqref{eq:CutAndJoinNPoint} for $H_{g,n+1}=H_{g,n+1}(x_0,\dots,x_n)$, and substitute $ x_i = x(z_i)$. We apply the operator $\mathsf{S}_{z_0}$ to both sides of this equation. From the linear loop equations we immediately see that the left-hand side of this equation is holomorphic at $z_0\to p$, as are all summands on the right-hand side of this equation with $k\neq 0$ (it is an infinite sum that converges in the sense of lemma~\ref{lem:convergence}). Thus we obtain the following lemma:
\begin{lemma} \label{lem:holomorphicity-1} The expression
\begin{align} \label{eq:holomorphicexpression}
& \mathsf{S}_{z_0} \Bigg[\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{(m+2d-1)!}{m!}
\sum_{\ell =1}^m
\frac{1}{\ell!}
\!\!\!\!\!
\sum_{\substack{
\bigsqcup_{j=1}^\ell K_j = \llbracket n \rrbracket \\
\bigsqcup_{j=1}^\ell M_j = \llbracket m \rrbracket\\
M_j \neq \emptyset \\
g-d = \sum_{j=1}^\ell g_j + m - \ell \\
g_1,\ldots,g_{\ell} \geq 0
}} \!\!\!\!\!
Q_{d,\emptyset,m}^{(0)} \bigg[\prod_{j = 1}^{\ell} \tilde{H}_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j})\bigg]
\\ \notag
&
+
\sum_{\alpha=1}^g c_\alpha
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 1}}
\frac{(m+2d-1+2\alpha)!}{m!}
\sum_{\ell =1}^m
\frac{1}{\ell!}
\!\!\!\!\!\!\!\sum_{\substack{
\bigsqcup_{j=1}^\ell K_j = \llbracket n \rrbracket \\
\bigsqcup_{j=1}^\ell M_j = \llbracket m \rrbracket\\
M_j \neq \emptyset \\
g-d-\alpha = \sum_{j=1}^\ell g_j + m - \ell \\
g_1,\ldots,g_{\ell} \geq 0
}} \!\!\!\!\!\!\!
Q_{d,\emptyset,m}^{(0)} \bigg[\prod_{j = 1}^{\ell} \tilde{H}_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j})\bigg]
\Bigg]
\end{align}
is holomorphic at $z_0\to p$.
\end{lemma}
It is convenient to introduce the notation
\begin{align}
& W_{g,n+m} (\xi_{\llbracket m \rrbracket} , x_{\llbracket n \rrbracket})\coloneqq D_{\xi_1}\cdots D_{\xi_m}D_{x_1}\cdots D_{x_n} \tilde{H}_{g,m+n}(\xi_{\llbracket m \rrbracket},x_{\llbracket n \rrbracket}); \\
& \mathcal{W}_{g,m,n}(\xi_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket}) \coloneqq
\sum_{\ell =1}^m
\frac{1}{\ell!}
\!\!
\sum_{\substack{
\bigsqcup_{j=1}^\ell K_j = \llbracket n \rrbracket \\
\bigsqcup_{j=1}^\ell M_j = \llbracket m \rrbracket\\
M_j \neq \emptyset \\
g= \sum_{j=1}^\ell g_j + m - \ell \\
g_1,\ldots,g_{\ell} \geq 0
}} \!\!
\prod_{j = 1}^{\ell} W_{g_j,|M_j|+|K_j|}(\xi_{M_j},x_{K_j}),
\end{align}
where we assume that $\xi_i\coloneqq x(w_i)$, $i=1,\dots,m$, and $x_j\coloneqq x(z_j)$, $j=1,\dots,n$. Denote also
\begin{align}
\sum_{d \geq 0} \mathcal{Q}_{d,m}(z_0)t^{2d} & = \frac{t}{\zeta(t)} \frac{\zeta(tD_{x(z_0)})}{tD_{x(z_0)}} \circ
\prod_{j = 1}^m \bigg( \left[\vert_{w_j = z_0}\right] \circ \frac{\zeta(tD_{x(w_j)})}{tD_{x(w_j)}}\bigg),
\end{align}
where
\begin{align}
\left[\vert_{w_j = z_0}\right] F(w) &\coloneqq \Res_{w=z_0} F(w)\frac{dx(w)}{x(w)-x(z_0)}
\,.
\end{align}
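For instance, for $m=1$ the definition of $\mathcal{W}$ forces $\ell=1$, $M_1=\{1\}$, $K_1=\llbracket n \rrbracket$, and $g_1=g$, so that it reduces to
\begin{equation*}
\mathcal{W}_{g,1,n}(\xi_1 \mid z_{\llbracket n \rrbracket}) = W_{g,n+1}(\xi_1, x_{\llbracket n \rrbracket})\,.
\end{equation*}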
These notations allow us to rewrite expression~\eqref{eq:holomorphicexpression} and to reformulate lemma~\ref{lem:holomorphicity-1} as
\begin{corollary}\label{cor:holomorphicinput} The expression
\begin{align}\label{eq:holoinput}
&\mathsf{S}_{z_0} \left[\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{(m+2d-1)!}{m!} \mathcal{Q}_{d,m}(z_0) \mathcal{W}_{g-d,m,n} (w_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket}) \right.
\\ \notag &
\left.
+
\sum_{\alpha=1}^g c_\alpha
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 1}}
\frac{(m+2d-1+2\alpha)!}{m!} \mathcal{Q}_{d,m}(z_0) \mathcal{W}_{g-d-\alpha,m,n} (w_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket}) \right]
\end{align}
is holomorphic at $z_0\to p$.
\end{corollary}
\section{Quadratic loop equations}
\label{sec:QLE}
In this section we use corollary~\ref{cor:holomorphicinput} and results of~\cite{r-spinFullProof} for the proof of the quadratic loop equations that can be formulated as the following proposition:
\begin{proposition}\label{prop:QLE} For any $g\geq 0$, $n\geq 0$ we have: $\mathcal{W}_{g,2,n}(w,\bar{w}\mid z_{\llbracket n \rrbracket})$ is holomorphic at $w\to p$.
\end{proposition}
\begin{remark}
In order to see that this really gives the quadratic loop equation, as given in \eqref{QLE}, note that
\begin{equation}
\mathcal{W}_{g,2,n}(w,\bar{w} \mid z_{\llbracket n\rrbracket}) = W_{g-1,n+2}(w, \bar{w}, z_{\llbracket n \rrbracket}) + \sum_{\substack{K_1 \sqcup K_2 = \llbracket n \rrbracket \\ g = g_1 + g_2}} W_{g_1,1+|K_1|}(w,z_{K_1}) W_{g_2,1+|K_2|}(\bar{w},z_{K_2})\,.
\end{equation}
Furthermore, $ d_1 \otimes \dotsb \otimes d_n H_{g,n}(z_{\llbracket n\rrbracket} ) = W_{g,n}(z_{\llbracket n \rrbracket} ) \prod_{i=1}^n \frac{dx(z_i)}{x(z_i)}$. Therefore, $\mathcal{W}_{g,2,n}(w,\bar{w}\mid z_{\llbracket n \rrbracket})$ is holomorphic at $ w \to p$ if and only if \eqref{QLE} is holomorphic with double zero there.
\end{remark}
Let us explain the strategy of the proof. We prove this proposition by induction on the negative Euler characteristic, that is, on $2g-2+(n+1)$. We split expression~\eqref{eq:holoinput}, which is known to be holomorphic at $z_0\to p$ and which is an infinite sum of meromorphic functions converging in the sense of lemma~\ref{lem:convergence}, into a sum of two converging infinite sums, where one sum is holomorphic once the quadratic loop equations hold for all $(g',n')$ with $2g'-2+(n'+1)<2g-2+(n+1)$, and the other sum is holomorphic if and only if the quadratic loop equation holds for $(g,n)$.
To this end, we have to recall some of the results of~\cite{r-spinFullProof}. First of all, we need a change of notation in the case when we apply the $\Delta_{w_i}\Delta_{w_j}$ and $\mathsf{S}_{w_i}\mathsf{S}_{w_j}$ operators to $W_{0,2}(\xi_i,\xi_j)$, $\xi_i=x(w_i)$, $\xi_j=x(w_j)$ (which is a possible factor in $\mathcal{W}$)---see~\cite[section 3.1]{r-spinFullProof} for a motivation of this change of notation. So, we redefine
\begin{align}\label{eq:redefinition02}
\widetilde{\Delta_{w_i}\Delta_{w_j} } W_{0,2}(\xi_i,\xi_j) & \coloneqq \Delta_{w_i}\Delta_{w_j} W_{0,2}(\xi_i,\xi_j)-\frac{2}{(\log\xi_i-\log\xi_j)^2}\,;
\\ \notag
\widetilde{\mathsf{S}_{w_i}\mathsf{S}_{w_j} } W_{0,2}(\xi_i,\xi_j) & \coloneqq \mathsf{S}_{w_i}\mathsf{S}_{w_j} W_{0,2}(\xi_i,\xi_j) +\frac{2}{(\log\xi_i-\log\xi_j)^2}\,.
\end{align}
From now on, we use this modified definition, and abusing notation we always omit the tildes.
Recall that all $H_{g,n}$'s satisfy the linear loop equations (lemma~\ref{lem:lle}). Under the assumption that the quadratic loop equations hold for all $(g',n')$ with $2g'-2+(n'+1)<2g-2+(n+1)$ the following two lemmas hold:
\begin{lemma} \label{lem:corollaryDKPS}
For any $r \geq 0$ and any $h, k \geq 0$ such that $ 2h-1+k -r\leq 2g-2+n$, the expression
\begin{equation}\label{eq:corollaryDKPS}
\sum_{m=1}^{r+1}\frac{1}{m!} \sum_{\substack{2\alpha_1 + \dotsb + 2\alpha_m \\ + m = r+1}} \prod_{j=1}^m \bigg( \left[\vert_{w_j = z_0}\right] \frac{D_{x(w_j)}^{2\alpha_j}}{(2\alpha_j+1)!} \bigg) \sum_{\substack{I\sqcup J = \llbracket m \rrbracket \\ |I|\in 2\mathbb{Z} }} \prod_{i\in I} \Delta_{w_i} \prod_{j\in J} \mathsf{S}_{w_j} \mathcal{W}_{h- \alpha_1 - \dotsc - \alpha_m,m,k} (w_{\llbracket m\rrbracket} \mid z_{\llbracket k\rrbracket})
\end{equation}
as well as its arbitrary $D_{x(z_0)}$-derivatives, is holomorphic at $z_0\to p$.
\end{lemma}
\begin{proof} This is a direct corollary of~\cite[corollary 3.4]{r-spinFullProof}.
\end{proof}
\begin{lemma}\label{lem:secondRSpinLemma} For any $r\geq 1$
\begin{align}\label{eq:expr-r}
& \sum_{\substack{k,\alpha_1,\dots,\alpha_{2k}\\ \ell, \beta_1,\dots,\beta_\ell \\ 2k+2\alpha_1+\cdots+2\alpha_{2k} \\ +\ell + 2\beta_1+\cdots+2\beta_{\ell} = r+1}}\!\!\!\!\!\!\!\!\! \frac{1}{\ell! (2k)!}\prod_{i=1}^\ell \left[\vert_{w'_i = z_0}\right] \frac{D_{x(w'_i)}^{2\beta_i}}{(2\beta_i+1)!} \mathsf{S}_{w'_i}
\prod_{i=1}^{2k} \left[\vert_{w_i = z_0}\right] \frac{D_{x(w_i)}^{2\alpha_i}}{(2\alpha_i+1)!} \Delta_{w_i} \mathcal{W}_{g+(2k+\ell-r-1)/2,\ell+2k,n} (w'_{\llbracket \ell\rrbracket},w_{\llbracket 2k\rrbracket} \mid z_{\llbracket n\rrbracket})
\\ \notag &
-\sum_{2k+\ell = r+1} \frac{1}{\ell! (2k)!} \binom{k}{1} \left(\mathsf{S}_{z_0} W_{0,1}(x(z_0))\right)^\ell \left(\Delta_{z_0} W_{0,1}(x(z_0))\right)^{2k-2} \left[\vert_{w_1 = z_0}\right] \left[\vert_{w_2 = z_0}\right] \Delta_{w_1} \Delta_{w_2} \mathcal{W}_{g,2,n}(w_1,w_2 \mid z_{\llbracket n \rrbracket})
\end{align}
is holomorphic at $ z_0 \to p$.
\end{lemma}
\begin{proof} This is a direct corollary of~\cite[corollary 3.4 and remark 3.3]{r-spinFullProof}. Note that the sum over $\alpha$s and $ \beta$s in the first line of \eqref{eq:expr-r} is the same as the sum over $\alpha$s in \eqref{eq:corollaryDKPS}, but split depending on whether $ \mathsf{S}$ or $\Delta$ acts on the corresponding variable.
\end{proof}
Another statement that we need is the following. Let $f_i(z)$, $i\in \mathbb{Z}_{\geq 0}$, be a sequence of meromorphic functions defined on an open neighborhood $U$ of the point $p$, with poles only at $p$, of order bounded by some constant. Assume $\sum_{i=0}^\infty f_i(z)$ converges absolutely and uniformly on every compact subset of $U\setminus \{p\}$ to a function of the same type, that is, to a meromorphic function $f(z)$ on $U$ with a possible pole only at the point $p$, with the order of the pole bounded by the same constant. Assume that we can split $\mathbb{Z}_{\geq 0}$ into a sequence of pairwise disjoint finite subsets $I_k$, $k=1,2,3,\dots$, $\mathbb{Z}_{\geq 0}=\bigsqcup_{k=1}^\infty I_k$, such that $\sum_{i\in I_k} f_i(z)$ is holomorphic at $z\to p$ for every $k$. Then we have:
\begin{lemma} \label{lem:cxanalysis} The sum $f(z)=\sum_{i=0}^\infty f_i(z)$ is holomorphic at $z\to p$.
\end{lemma}
Now we are ready to prove proposition~\ref{prop:QLE}.
\begin{proof}[Proof of proposition~\ref{prop:QLE}] As stated before, the proof works by induction on $2g-2+(n+1)$. First, equation~\eqref{Sondiagonal} and the holomorphicity at $z_0\to p$ of the expression~\eqref{eq:holoinput} imply that the following expression is holomorphic at $z_0\to p$:
\begin{align}\label{eq:holoinput-2}
&\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{(m+2d-1)!}{m!2^m} \mathcal{Q}_{d,m}(z_0) \sum_{\substack{I\sqcup J = \llbracket m \rrbracket \\ |I|\in 2\mathbb{Z} }} \prod_{i\in I} \Delta_{w_i} \prod_{j\in J} \mathsf{S}_{w_j} \mathcal{W}_{g-d,m,n} (w_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket})
\\ \notag &
+
\sum_{\alpha=1}^g c_\alpha
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 1}}
\frac{(m+2d-1+2\alpha)!}{m!2^m} \mathcal{Q}_{d,m}(z_0)
\sum_{\substack{I\sqcup J = \llbracket m \rrbracket \\ |I|\in 2\mathbb{Z} }} \prod_{i\in I} \Delta_{w_i} \prod_{j\in J} \mathsf{S}_{w_j}
\mathcal{W}_{g-d-\alpha,m,n} (w_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket}) .
\end{align}
We split this expression into three parts and analyse them separately.
The first part is the second summand. We consider
\begin{align}\label{eq:hologenusdefect}
&\sum_{\alpha=1}^g c_\alpha
\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 1}}
\frac{(m+2d-1+2\alpha)!}{m!2^m} \mathcal{Q}_{d,m}(z_0)
\sum_{\substack{I\sqcup J = \llbracket m \rrbracket \\ |I|\in 2\mathbb{Z} }} \prod_{i\in I} \Delta_{w_i} \prod_{j\in J} \mathsf{S}_{w_j}
\mathcal{W}_{g-d-\alpha,m,n} (w_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket}) .
\end{align}
Note that this expression is an infinite sum of products of derivatives of the functions $\{\tilde H_{g,n}\}_{g,n}$, and it converges absolutely and uniformly to a meromorphic function on $U\setminus \{p\}$ in the variable $z_0$, where $U$ is an open neighborhood of the point $p$. The proof is exactly the same as the proof of lemma~\ref{lem:convergence}: we have a finite number of terms with no factors of $\mathsf{S}_{w} D_{\xi} \tilde H_{0,1}(\xi(w))$ and $\Delta_{w} D_{\xi} \tilde H_{0,1}(\xi(w))$ multiplied by a geometrically converging series in $\mathsf{S}_{w} D_{\xi} \tilde H_{0,1}(\xi(w))$ and $\Delta_{w} D_{\xi} \tilde H_{0,1}(\xi(w))$. On the other hand, we can rewrite this expression as a sum over $r+1=m+2d$, and then for each fixed $r+1$ we have a finite expression, which is holomorphic at $z_0\to p$ according to lemma~\ref{lem:corollaryDKPS}, using the induction hypothesis along with the fact that $ \alpha \geq 1$. Thus expression~\eqref{eq:hologenusdefect} satisfies the conditions of lemma~\ref{lem:cxanalysis}, and therefore \eqref{eq:hologenusdefect} converges to a holomorphic function on $U$.
Introduce a new notation:
\begin{align}
\sum_{d \geq 0} \mathcal{Q}^{\mathsf{red}}_{d,m}(z_0)t^{2d} & \coloneqq
\prod_{j = 1}^m \bigg( \left[\vert_{w_j = z_0}\right] \circ \frac{\zeta(tD_{x(w_j)})}{tD_{x(w_j)}}\bigg)\,.
\end{align}
This is the `leading order' part of $\mathcal{Q}_{d,m}$ in the sense that it does not include the operator $\frac{\zeta(tD_{x(z_0)})}{tD_{x(z_0)}}$ acting in the variable $z_0$ or the extra factor $\frac{t}{\zeta (t)}$; these would lead to terms that have been shown to be holomorphic in earlier steps of the induction.
The second part is then all these extra terms, the ``genus defect'' part of the first summand in~\eqref{eq:holoinput-2}. We consider
\begin{align}\label{eq:holoinput-2-fisrtgendef}
&\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{(m+2d-1)!}{m!2^m} \left(\mathcal{Q}_{d,m}(z_0)- \mathcal{Q}^{\mathsf{red}}_{d,m}(z_0)\right)\sum_{\substack{I\sqcup J = \llbracket m \rrbracket \\ |I|\in 2\mathbb{Z} }} \prod_{i\in I} \Delta_{w_i} \prod_{j\in J} \mathsf{S}_{w_j} \mathcal{W}_{g-d,m,n} (w_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket}).
\end{align}
Literally the same argument as in the case of expression~\eqref{eq:hologenusdefect} proves that~\eqref{eq:holoinput-2-fisrtgendef} converges to a holomorphic function on $U$.
The third part is equal to
\begin{align}\label{eq:finalpart}
&\sum_{\substack{m \geq 1, d \geq 0 \\ m + 2d \geq 2}}
\frac{(m+2d-1)!}{m!2^m} \mathcal{Q}^{\mathsf{red}}_{d,m}(z_0)\sum_{\substack{I\sqcup J = \llbracket m \rrbracket \\ |I|\in 2\mathbb{Z} }} \prod_{i\in I} \Delta_{w_i} \prod_{j\in J} \mathsf{S}_{w_j} \mathcal{W}_{g-d,m,n} (w_{\llbracket m \rrbracket} \mid z_{\llbracket n \rrbracket})
\\ \notag
& = \sum_{r=1}^\infty \frac{r!}{2^{r+1}} \!\!\!\!\!\!\!\!\sum_{\substack{k,\alpha_1,\dots,\alpha_{2k}\\ \ell, \beta_1,\dots,\beta_\ell \\ 2k+2\alpha_1+\cdots+2\alpha_{2k} \\ +\ell + 2\beta_1+\cdots+2\beta_{\ell} = r+1}} \!\!\!\!\!\!\!\! \frac{1}{\ell! (2k)!}
\\ \notag
& \phantom{=\ }
\prod_{i=1}^\ell \left[\vert_{w'_i = z_0}\right] \frac{D_{x(w'_i)}^{2\beta_i}}{(2\beta_i+1)!} \mathsf{S}_{w'_i}
\prod_{i=1}^{2k} \left[\vert_{w_i = z_0}\right] \frac{D_{x(w_i)}^{2\alpha_i}}{(2\alpha_i+1)!} \Delta_{w_i}
\mathcal{W}_{g+(2k+\ell-r-1)/2,\ell+2k,n} (w'_{\llbracket \ell\rrbracket},w_{\llbracket 2k\rrbracket} \mid z_{\llbracket n\rrbracket}) \,,
\end{align}
and it must be holomorphic as $ z_0 \to p$, as it is obtained from expression~\eqref{eq:holoinput-2} by subtracting expressions~\eqref{eq:hologenusdefect} and~\eqref{eq:holoinput-2-fisrtgendef}.
Each of the $r$-summands of equation~\eqref{eq:finalpart} corresponds to the first part of the equation in lemma~\ref{lem:secondRSpinLemma}. By the same arguments as before, the sum over all $r$ of the expression in lemma~\ref{lem:secondRSpinLemma},
\begin{align}\label{eq:secondRSpinLemma2}
&\sum_{r= 1}^\infty
\frac{r!}{2^{r+1}} \mathsf{Expr}_r,
\end{align}
where $\mathsf{Expr}_r $ is equal to~\eqref{eq:expr-r},
still converges absolutely and uniformly on $U\setminus \{p\}$ in the variable $z_0$, and is holomorphic as $ z_0 \to p$.
Because of this, the difference between equations~\eqref{eq:finalpart} and~\eqref{eq:secondRSpinLemma2} must also be holomorphic. Explicitly, this is
\begin{align}\label{eq:QLEtimesInv}
& \sum_{\substack{k\geq 1\\ \ell \geq 0}}\frac{(2k+\ell -1)!}{2^{2k+\ell}\ell! (2k)!} \binom{k}{1} \left(\mathsf{S}_{z_0} W_{0,1}(x(z_0))\right)^\ell \left(\Delta_{z_0} W_{0,1}(x(z_0))\right)^{2k-2}
\\ \notag &
\times\left[\vert_{w_1 = z_0}\right] \left[\vert_{w_2 = z_0}\right] \Delta_{w_1} \Delta_{w_2} \mathcal{W}_{g,2,n}(w_1,w_2 \mid z_{\llbracket n \rrbracket}) \,.
\end{align}
To analyse this expression, let us first consider the sum
\begin{align}
\sum_{\substack{k\geq 1\\ \ell \geq 0}}\frac{(2k+\ell -1)!}{2^{2k+\ell}\ell! (2k)!} k \, s^\ell \delta^{2k-2} = \frac{1}{2\big((2-s)^2-\delta^2\big)}.
\end{align}
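As a consistency check, specializing to $\delta=0$ (so that only $k=1$ contributes) gives
\begin{equation*}
\sum_{\ell \geq 0}\frac{(\ell+1)!}{2^{\ell+2}\,\ell!\,2!}\, s^\ell = \frac{1}{8}\sum_{\ell\geq 0}(\ell+1)\Big(\frac{s}{2}\Big)^\ell = \frac{1}{8\,(1-s/2)^2} = \frac{1}{2(2-s)^2}\,,
\end{equation*}
in agreement with the right-hand side.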
For $s=\mathsf{S}_{z_0} W_{0,1}(x(z_0))$ and $\delta=\Delta_{z_0} W_{0,1}(x(z_0))$ both $(s+\delta)/2$ and $(s-\delta)/2$ lie in the unit disc for $z_0$ near $p$, as $ W_{0,1}(x(p)) = \frac{1}{q+1}$. Therefore, this expression defines a holomorphic function on $U$ in the variable $z_0$, non-vanishing at $z_0\to p$. This implies that
\begin{equation}
\left[\vert_{w_1 = z_0}\right] \left[\vert_{w_2 = z_0}\right] \Delta_{w_1} \Delta_{w_2} \mathcal{W}_{g,2,n}(w_1,w_2 \mid z_{\llbracket n \rrbracket})
\end{equation} is holomorphic at $z_0\to p$.
Then equation~\eqref{Sondiagonal} and lemma~\ref{lem:lle} (in the case $g=0$, $n=0$ one also has to recall equation~\eqref{eq:redefinition02}) imply that $\mathcal{W}_{g,2,n}(z_0,\bar{z}_0 \mid z_{\llbracket n \rrbracket})$ is holomorphic at $z_0\to p$ (cf.~also the arguments in~\cite[section 2.4]{BorotShadrin} and~\cite[section 3.2]{r-spinFullProof}).
\end{proof}
\bibliographystyle{alpha}
\usetikzlibrary{arrows}
\newtheorem{theorem}{Theorem}
\newtheorem*{theorem17}{Theorem 17}
\newtheorem*{theorem18}{Theorem 18}
\newtheorem*{theorem28}{Theorem 28}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{question}[theorem]{Question}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem*{claim}{Claim}
\newcommand{\invlim}[2]{\lim\limits_{\longleftarrow}\{#1,#2\}}
\newcommand{\ols}[1]{\omega_#1}
\newcommand{\seq}[1]{\langle #1\rangle}
\newcommand{\component}[1]{
\textrm{comp}(#1)}
\newcommand{\chainacc}[1]{\mathcal{CA}(#1)}
\newcommand{\comment}[1]{{\textcolor{red}{#1}}}
\makeatletter
\@namedef{subjclassname@2020}{%
\textup{2020} Mathematics Subject Classification}
\makeatother
\begin{document}
\title{Shadowing, recurrence, and rigidity in dynamical systems}
\author[J. Meddaugh]{Jonathan Meddaugh}
\address[J. Meddaugh]{Baylor University, Waco TX, 76798}
\email[J. Meddaugh]{Jonathan\[email protected]}
\subjclass[2020]{37B65, 37B45}
\keywords{shadowing, pseudo-orbit tracing property, topological dynamics}
\begin{abstract} In this paper we examine the interplay between recurrence properties and the shadowing property in dynamical systems on compact metric spaces. In particular, we demonstrate that if the dynamical system $(X,f)$ has shadowing, then it is recurrent if and only if it is minimal. Furthermore, we show that a uniformly rigid system $(X,f)$ has shadowing if and only if $X$ is totally disconnected and use this to demonstrate the existence of a space $X$ for which no surjective system $(X,f)$ has shadowing. We further refine these results to discuss the dynamics that can occur in spaces with a compact space of self-maps.
\end{abstract}
\maketitle
\section{Introduction}
A dynamical system is a pair $(X,f)$ where $X$ is a compact metric space and $f:X\to X$ is a continuous function. It is well-known that every dynamical system has at least one \emph{recurrent point}---a point $z$ such that for every neighborhood $U$ of $z$, the orbit of $z$ meets $U$ infinitely often. Recurrent points are well-studied objects, with connections to many other dynamical notions.
Katznelson and Weiss began the study of systems in which \emph{every} point is recurrent (called recurrent systems) and demonstrated that there exist transitive non-minimal recurrent systems \cite{KatznelsonWeiss}. Informed by analogous measure-theoretic notions, Glasner and Maon introduced stronger notions of recurrence---$n$-rigidity, weak rigidity, rigidity, and uniform rigidity (in increasing order of strength)---and showed that rigid systems have zero topological entropy \cite{GlasnerMaon}. Korner shows that rigidity and uniform rigidity are distinct notions, even in the restricted context of minimal systems \cite{Korner}. However, in \cite{DonosoShao:uniformlyrigidmodels}, Donoso and Shao demonstrate that the class of uniformly rigid systems is rich enough to model all non-periodic ergodic rigid systems.
In the following sections, we will explore the implications of the various forms of recurrence in systems with shadowing. Informally, a dynamical system $(X,f)$ has the \emph{shadowing property} provided that approximate orbits (pseudo-orbits) are well-approximated by true orbits of the system. Shadowing (also known as the \emph{pseudo-orbit tracing property}) is naturally important in computation \cite{Corless, Pearson} but has also been the subject of significant research in its own right, beginning with Bowen's analysis of non-wandering sets for Axiom A diffeomorphisms \cite{Bowen}. The shadowing property has wide-reaching applications in the analysis of dynamical systems, especially in analyzing stability \cite{Pil, robinson-stability, walters} and in characterizing $\omega$-limit sets \cite{Bowen, MR}. Despite being a relatively strong property, shadowing is a generic property in the space of dynamical systems on many spaces \cite{BMR-Dendrites, Mazur-Oprocha, Meddaugh-Genericity, Pilyugin-Plam}.
The main results of this paper are the following theorems.
{
\renewcommand*{\thetheorem}{\ref{recurrentminimal}}
\addtocounter{theorem}{-1}
\begin{corollary}
Let $X$ be connected and let $(X,f)$ be a system with shadowing. Then $(X,f)$ is minimal if and only if it is recurrent.
\end{corollary}
}
{
\renewcommand*{\thetheorem}{\ref{rigidsystem}}
\addtocounter{theorem}{-1}
\begin{theorem}
A uniformly rigid system $(X,f)$ has shadowing if and only if $X$ is totally disconnected.
\end{theorem}
}
{
\renewcommand*{\thetheorem}{\ref{HnNondense}}
\addtocounter{theorem}{-1}
\begin{theorem}
There exist continua $X$ with the property that shadowing is not dense in $\mathcal C(X)$. In particular, for each $n\in\mathbb{N}$, the continuum $H_n$ has the property that shadowing is not dense in $\mathcal C(H_n)$.
\end{theorem}
}
{
\renewcommand*{\thetheorem}{\ref{compactC}}
\addtocounter{theorem}{-1}
\begin{corollary}
Let $X$ be a compact, connected metric space with $\mathcal C(X)$ compact. Then $(X,f)$ has shadowing if and only if $\seq{f^n}$ converges to a constant map.
\end{corollary}
}
The organization of this paper is as follows. In Section \ref{Prelim}, we define notation and relevant terminology. In Section \ref{Rigid systems}, we demonstrate that, for systems with the shadowing property, recurrence properties have strong implications on the dynamics of the system and the topological structure of the underlying space. In Section \ref{Rigid Spaces}, we use these implications to demonstrate that there is a large class of spaces on which no surjective map has shadowing. Finally, in Section \ref{non-surjective}, we expand on these ideas to demonstrate that there are spaces for which only the constant maps have shadowing.
\section{Preliminaries} \label{Prelim}
In the following material, we will most frequently be indexing sets and sequences with the non-negative integers ($\omega$) or subsets thereof. It will, however, at times be convenient to index with the positive integers ($\mathbb{N}$) instead.
For the purposes of this paper, a \emph{dynamical system} is a pair $(X,f)$ consisting of a compact metric space $(X,d)$ and a continuous function $f:X\to X$.
For a dynamical system $(X,f)$ and a point $x\in X$, the \emph{orbit of $x$ under $f$} (or, the $f$-orbit of $x$) is the sequence $\seq{f^i(x)}_{i\in\omega}$ where $f^0$ denotes the identity. For $\delta>0$, a \emph{$\delta$-pseudo-orbit for $f$} is a sequence $\seq{x_i}_{i\in\omega}$ such that for all $i\in\omega$, $d(f(x_i),x_{i+1})<\delta$. For $\epsilon>0$, we say that a sequence $\seq{x_i}_{i\in\omega}$ in $X$ \emph{$\epsilon$-shadows} a sequence $\seq{y_i}_{i\in\omega}$ in $X$ provided that $d(x_i,y_i)<\epsilon$ for all $i\in\omega$.
\begin{definition}
A dynamical system $(X,f)$ has \emph{shadowing} (or the \emph{pseudo-orbit tracing property}, sometimes denoted \emph{POTP}) provided that for all $\epsilon>0$, there exists $\delta>0$ such that if $\seq{x_i}$ is a $\delta$-pseudo-orbit for $f$, then there exists $z\in X$ such that for all $i$, $d(x_i,f^i(z))<\epsilon$, i.e. the orbit of $z$ {$\epsilon$-shadows} the pseudo-orbit $\seq{ x_i}$.
\end{definition}
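The two inequalities in this definition are straightforward to check over a finite window. The following Python sketch (an informal numerical illustration, not part of the formal development; the doubling map on the circle is an arbitrary choice of system) verifies them for a true orbit, which is trivially a $\delta$-pseudo-orbit that shadows itself.

```python
def is_delta_pseudo_orbit(xs, f, d, delta):
    """Check d(f(x_i), x_{i+1}) < delta for all consecutive pairs of xs."""
    return all(d(f(a), b) < delta for a, b in zip(xs, xs[1:]))

def epsilon_shadows(z, xs, f, d, eps):
    """Check that the orbit of z eps-shadows the finite sequence xs."""
    x = z
    for target in xs:
        if d(x, target) >= eps:
            return False
        x = f(x)
    return True

# Illustrative system: the doubling map on the circle [0, 1).
f = lambda x: (2 * x) % 1.0
d = lambda a, b: min(abs(a - b), 1 - abs(a - b))  # circle metric

# A true orbit is trivially a delta-pseudo-orbit and 0.1-shadows itself.
orbit = [0.1]
for _ in range(10):
    orbit.append(f(orbit[-1]))
assert is_delta_pseudo_orbit(orbit, f, d, 1e-9)
assert epsilon_shadows(orbit[0], orbit, f, d, 0.1)
```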
It is often useful to consider finite versions of pseudo-orbits. For a dynamical system $(X,f)$, $\delta>0$, and $a,b\in X$, a \emph{$\delta$-chain from $a$ to $b$} is a finite sequence $\seq{x_i}_{i\leq n}$ with $x_0=a$, $x_n=b$ and such that for all $i<n$, $d(f(x_i),x_{i+1})<\delta$. Note that any truncation of a $\delta$-pseudo-orbit is a $\delta$-chain and that any $\delta$-chain can be extended to a $\delta$-pseudo-orbit by defining $x_{n+j}=f^j(x_n)$ for all $j\in\omega$. For $a\in X$, the \emph{chain accessible set of $a$} is the set $\chainacc{a}$ consisting of all $b\in X$ such that for all $\delta>0$, there exists a $\delta$-chain from $a$ to $b$.
In a dynamical system $(X,f)$ a point $x\in X$ is \emph{recurrent} provided that for all $\epsilon>0$, there exists $n>0$ such that $f^n(x)\in B_\epsilon(x)$. The system $(X,f)$ is a \emph{recurrent system} provided that each $x\in X$ is recurrent.
Stronger still, $(X,f)$ is \emph{uniformly rigid} provided that for each $\epsilon>0$, there exists $n>0$ such that for each $x\in X$, $f^n(x)\in B_\epsilon(x)$ (note that this property could sensibly be called uniform recurrence, but there is an extant unrelated pointwise notion of uniform recurrence in the literature). Finally, a system $(X,f)$ is \emph{periodic} if there exists $n>0$ such that $f^n=id_X$.
It is worth noting that the notions of recurrence and uniform rigidity can be equivalently written in terms of convergence in $X$ and the space $\mathcal C(X)$ of continuous self-maps on $X$ with topology generated by the supremum metric
\[\rho(f,g)=\sup_{x\in X}d(f(x),g(x)).\]
It is clear that convergence with respect to $\rho$ is the same as uniform convergence---this topology is sometimes called the topology of uniform convergence.
\begin{remark}
Let $(X,f)$ be a dynamical system.
\begin{enumerate}
\item $x\in X$ is recurrent if and only if there is a subsequence of $\seq{f^i(x)}$ which converges to $x$.
\item $(X,f)$ is uniformly rigid if and only if there is a subsequence of $\seq{f^i}$ which converges uniformly to $id_X$.
\end{enumerate}
\end{remark}
\section{Recurrence and Rigidity in Systems with Shadowing} \label{Rigid systems}
In this section, we explore the connections between recurrence, rigidity, shadowing, and the topology of the space $X$. Much of this is inspired by the following observation about the identity map.
\begin{theorem} \label{identity}
The system $(X,id_X)$ has shadowing if and only if $X$ is totally disconnected.
\end{theorem}
\begin{proof}
Suppose that $(X,id_X)$ has shadowing but that $X$ has a non-trivial component $C$. Fix $\epsilon>0$ less than the diameter of $C$ and let $a,b\in C$ with $d(a,b)>\epsilon$. Since the system has shadowing, choose $\delta>0$ such that every $\delta$-pseudo-orbit is $\epsilon/2$-shadowed.
Since $C$ is connected, we can find a sequence $\seq{x_i}_{i\in\omega}$ in $C$ and $n>0$ with $x_0=a$, $x_k=b$ for all $k\geq n$ and $d(x_i,x_{i+1})<\delta$ for all $i\in\omega$ (See 4.23 of \cite{Nadler}). As this is a $\delta$-chain, choose $z\in X$ whose orbit shadows $\seq{x_i}$. But then $d(a,b)\leq d(a,z)+d(z,b)=d(a,z)+d(f^n(z),b)<\epsilon$, a contradiction.
Conversely, suppose that $X$ is totally disconnected and fix $\epsilon>0$. Since $X$ is compact, metric and totally disconnected, we can find a finite cover $\mathcal U=\{U_j:j\leq n\}$ consisting of pairwise disjoint clopen sets of diameter less than $\epsilon$. Fix $0<\delta<\min\{d(U_j,U_k):j\neq k\}$ and let $\seq{x_i}_{i\in\omega}$ be a $\delta$-pseudo-orbit for $id_X$. By choice of $\delta$, if $x_i\in U_j$, then $d(x_i,x_{i+1})=d(f(x_i),x_{i+1})<\delta$. Thus $d(x_{i+1},U_j)<\delta$, and hence $x_{i+1}\in U_j$ as well. Thus there exists $j\leq n$ with $\{x_i:i\in\omega\}\subseteq U_j$. Fix $z\in U_j$ and observe that for each $i\in\omega$, $d(x_i,z)$ is less than the diameter of $U_j$, and thus $d(f^i(z),x_i)=d(z,x_i)<\epsilon$, i.e. the orbit of $z$ $\epsilon$-shadows $\seq{x_i}$. Thus $(X,id_X)$ has shadowing.
\end{proof}
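The trapping argument in the second half of the proof is easy to see numerically. In the following Python sketch (an informal illustration; the finite point set, tolerances, and cluster structure are our own choices), a $\delta$-pseudo-orbit for the identity with $\delta$ smaller than the gap between two clopen clusters is confined to one cluster, and the constant orbit of any point of that cluster $\epsilon$-shadows it.

```python
# Two clusters of diameter 0.1 separated by a gap of 0.8, with the identity map.
X = [0.0, 0.05, 0.1, 0.9, 0.95, 1.0]
f = lambda x: x                          # the identity map
d = lambda a, b: abs(a - b)

eps = 0.15                               # cluster diameter < eps
delta = 0.5                              # < distance 0.8 between clusters

# Any delta-pseudo-orbit for the identity must stay inside one cluster...
pseudo = [0.0, 0.05, 0.1, 0.05, 0.0, 0.1]
assert all(d(f(a), b) < delta for a, b in zip(pseudo, pseudo[1:]))
assert all(x <= 0.1 for x in pseudo)     # trapped in the left cluster

# ...and is then eps-shadowed by the (constant) orbit of any of its points.
z = 0.05
assert all(d(z, x) < eps for x in pseudo)
```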
This result is easily extended to systems which are periodic. First let us observe the following general result.
\begin{lemma} \label{iterates}
If $(X,f)$ has shadowing, then for each $n>0$, $(X, f^n)$ also has shadowing.
\end{lemma}
\begin{proof}
Suppose that $(X,f)$ has shadowing and fix $\epsilon>0$. Choose $\delta>0$ such that every $\delta$-pseudo-orbit for $f$ is $\epsilon$-shadowed by an $f$-orbit.
Now, let $\seq{x_i}_{i\in\omega}$ be a $\delta$-pseudo-orbit for $f^n$. Define a sequence $\seq{y_j}_{j\in\omega}$ as follows. For $j\in\omega$, let $q_j, r_j\in\omega$ be the unique choices with $r_j<n$ and $j=q_jn+r_j$, and define $y_j=f^{r_j}(x_{q_j})$. Observe that for $j\in\omega$ such that $r_j\neq n-1$, we have $q_{j+1}=q_j$ and $y_{j+1}=f^{r_{j+1}}(x_{q_j})=f^{r_{j}+1}(x_{q_j})=f(y_j)$ and hence $d(y_{j+1},f(y_j))=0<\delta$. For $j$ such that $r_j=n-1$, we have $q_{j+1}=1+q_j$ and $r_{j+1}=0$, and hence $d(y_{j+1},f(y_{j}))=d(x_{1+q_j},f(f^{n-1}(x_{q_j})))=d(x_{1+q_j},f^{n}(x_{q_j}))<\delta$ since $\seq{x_i}$ is a $\delta$-pseudo-orbit for $f^n$.
In either case, we see that $d(y_{j+1},f(y_j))<\delta$, so we can choose $z\in X$ such that $d(f^j(z),y_j)<\epsilon$ for all $j\in\omega$. But then $d(f^{in}(z),y_{in})<\epsilon$ for all $i$ and, since $y_{in}=x_{q_{in}}=x_i$, we have $d((f^n)^i(z),x_i)<\epsilon$, i.e. the $f^n$-orbit of $z$ $\epsilon$-shadows $\seq{x_i}$.
\end{proof}
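The index bookkeeping $j=q_jn+r_j$ in this proof can be mirrored in a short Python sketch (an informal illustration of ours; the doubling map and window length are arbitrary choices). It expands a pseudo-orbit for $f^n$ into the interleaved sequence $y$ and checks that every $n$-th term recovers the original sequence, as used at the end of the proof.

```python
def interleave(xs, f, n):
    """Expand a pseudo-orbit for f^n into y_j = f^r(x_q), where j = q*n + r."""
    ys = []
    for x in xs:
        y = x
        for _ in range(n):
            ys.append(y)
            y = f(y)
    return ys

f = lambda x: (2 * x) % 1.0                      # doubling map on the circle
d = lambda a, b: min(abs(a - b), 1 - abs(a - b))
n = 3
fn = lambda x: f(f(f(x)))

# A true f^n-orbit is in particular a pseudo-orbit for f^n.
xs = [0.1]
for _ in range(4):
    xs.append(fn(xs[-1]))

ys = interleave(xs, f, n)

# The filled-in sequence is a pseudo-orbit for f (here exactly an orbit)...
assert all(d(f(a), b) < 1e-9 for a, b in zip(ys, ys[1:]))
# ...and every n-th term recovers the original sequence, as in the proof.
assert all(ys[i * n] == xs[i] for i in range(len(xs)))
```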
\begin{theorem} \label{periodic}
A periodic system $(X,f)$ has shadowing if and only if $X$ is totally disconnected.
\end{theorem}
\begin{proof}
Let $(X,f)$ be a periodic dynamical system and let $n>0$ with $f^n=id_X$.
By the above lemma, if $(X,f)$ has shadowing, then so does $(X,f^n)$, i.e. $(X, id_X)$ has shadowing, and hence $X$ is totally disconnected by Theorem \ref{identity}.
Conversely, suppose that $X$ is totally disconnected and fix $\epsilon>0$. Since $X$ is compact, metric and totally disconnected, we can find a finite cover $\mathcal U=\{U_j:j\leq N\}$ consisting of pairwise disjoint clopen sets of diameter less than $\epsilon$.
Fix $0<\eta<\min\{d(U_j,U_k):j\neq k\}$ and by uniform continuity of $f$ and its iterates, choose $0<\delta<\eta$ such that if $d(a,b)<\delta$, then $d(f^i(a),f^i(b))<\eta$ for $i\leq n$. However, since $f^n=id_X$, it follows that $d(f^i(a),f^i(b))<\eta$ for all $i\in\omega$.
Now, let $\seq{x_i}_{i\in\omega}$ be a $\delta$-pseudo-orbit for $f$. We claim that the point $x_0$ $\epsilon$-shadows this pseudo-orbit. To observe this, notice that for each $j,k\in\omega$, we have $d(f^k(f(x_j)),f^k(x_{j+1}))<\eta$ since $d(f(x_j),x_{j+1})<\delta$. In particular, for each $i$, the sequence $f^i(x_0),f^{i-1}(x_1),\ldots, f(x_{i-1}),x_i$ has the property that adjacent terms are within $\eta$ of one another, and by choice of $\eta$, there exists an index $j(i)$ such that each term belongs to $U_{j(i)}$, and hence $d(f^i(x_0),x_i)<\epsilon$. This establishes that $x_0$ $\epsilon$-shadows the pseudo-orbit as claimed and therefore $(X,f)$ has shadowing.
\end{proof}
Of course, periodicity in the sense that $f^n=id_X$ is a very strong form of recurrence. It is then natural to consider whether weaker forms of recurrence have similar implications. We begin by considering recurrent systems, in which each point in the system is recurrent, but there is no guaranteed synchronization of return times.
\begin{lemma} \label{reversingchains}
Let $(X,f)$ be a recurrent system. Then for all $\delta>0$, there exists $\eta>0$ such that if $x,y\in X$ with $d(f(x),y)<\eta$, then there exists a $\delta$-chain from $y$ to $x$.
\end{lemma}
\begin{proof}
Suppose that $(X,f)$ is recurrent and let $\delta>0$. By uniform continuity of $f$, fix $\eta>0$ such that if $d(a,b)<\eta$, then $d(f(a),f(b))<\delta$.
Now, let $x,y\in X$ such that $d(f(x),y)<\eta$. Since $(X,f)$ is recurrent, we can choose $M>2$ such that $d(f^M(x),x)<\delta$. Now, observe that $d(f(y),f^2(x))<\delta$ by choice of $\eta$, and thus, if we define $a_i$ for $i\leq M-1$ as follows:
\[a_i=\begin{cases}
y & i = 0 \\
f^{i+1}(x) & 0<i<M-1 \\
x & i=M-1
\end{cases},\]
we see that $\seq{a_i}_{i\leq M-1}$ is a $\delta$-chain from $y$ to $x$.
\end{proof}
\begin{lemma} \label{equiv}
Let $(X,f)$ be a recurrent system. Then chain accessibility is an equivalence relation on $X$. Furthermore, for all $p\in X$, the component of $p$ in $X$ is contained in $\chainacc{p}$.
\end{lemma}
\begin{proof}
First, note that chain accessibility is transitive in all dynamical systems. Indeed, if $r\in\chainacc{q}$ and $q\in\chainacc{p}$, then for $\delta>0$ there exist $\delta$-chains $\seq{a_i}_{i\leq n}$ from $p$ to $q$ and $\seq{b_i}_{i\leq m}$ from $q$ to $r$. By concatenating these chains, i.e. defining a chain $\seq{c_i}_{i\leq n+m}$ by $c_i=a_i$ for $i\leq n$ and $c_i=b_{i-n}$ for $i\geq n$, we see that there is a $\delta$-chain from $p$ to $r$.
Now, suppose that $(X,f)$ is a recurrent system and let $p\in X$. It is straightforward to see that $p\in\chainacc{p}$: fix $\delta>0$ and, by recurrence, find $M>0$ such that $d(f^M(p),p)<\delta$. It is easy to see that the sequence $p,f(p),\ldots f^{M-1}(p),p$ is the desired $\delta$-chain.
Now, suppose that $q\in\chainacc{p}$. Fix $\delta>0$ and choose $\eta>0$ as in Lemma \ref{reversingchains}. Since $q\in\chainacc{p}$, let $\seq{a_i}_{i\leq n}$ be an $\eta$-chain with $a_0=p$ and $a_n=q$. We wish to demonstrate that there is a $\delta$-chain from $q$ to $p$. By our choice of $\eta$, there exists, for each $0<i\leq n$, a $\delta$-chain from $a_i$ to $a_{i-1}$. By concatenating these chains, we have a $\delta$-chain from $a_n$ to $a_0$, i.e. from $q$ to $p$. Since we can do this for all $\delta>0$, we see that $p\in\chainacc{q}$.
Thus, we have established that chain accessibility is an equivalence relation on $X$. Now, fix $p\in X$ and suppose that $q$ is in the component of $p$. Let $\delta>0$. As in the proof of Theorem \ref{identity}, we can find a sequence $\seq{x_j}_{j\leq n}$ with $x_0=p$, $x_n=q$ and $d(x_j,x_{j+1})<\delta/2$ for all $j<n$. Additionally, since $(X,f)$ is recurrent, for each $j<n$, we can find $M_j>0$ with $d(f^{M_j}(x_j),x_j)<\delta/2$.
Now, let $M=\sum_{j<n}M_j$ and define the sequence $\seq{a_i}_{i\leq M}$ as follows. For each $i\leq M$, choose $k\leq n$ such that $\sum_{j< k}M_j\leq i<\sum_{j\leq k}M_j$ and define $a_i=f^{i-\sum_{j< k}M_j}(x_k)$. It is straightforward to verify that $\seq{a_i}_{i\leq M}$ is a $\delta$-chain from $p$ to $q$. Indeed, for each $i<M$, there exists $j$ such that either $a_i=f^l(x_j)$ for some $l<M_j-1$ or $a_i=f^{M_j-1}(x_j)$. In the former case, $a_{i+1}=f^{l+1}(x_j)$ and hence $d(f(a_i),a_{i+1})=0<\delta$ and in the latter case $a_{i+1}=x_{j+1}$ and $d(f(a_i),a_{i+1})=d(f(f^{M_j-1}(x_j)),x_{j+1})\leq d(f^{M_j}(x_j),x_{j})+d(x_j,x_{j+1})<\delta$, thus verifying the claim that $\seq{a_i}_{i\leq M}$ is a $\delta$-chain.
\end{proof}
Interestingly, this establishes that for each $p\in X$, the set $\chainacc{p}$ is \emph{chain transitive} in the sense that for any pair $x,y\in\chainacc{p}$ and any $\delta>0$, there is a $\delta$-chain from $x$ to $y$. In particular, this means that $\chainacc{p}$ is closed and completely invariant \cite{deVries}. In addition, this also means that every point belongs to a \emph{basic set} of $(X,f)$, i.e. a maximal chain transitive subset of $X$. As we shall see in the following theorem, if $(X,f)$ also has shadowing, then $\chainacc{p}$ is also \emph{minimal} in the sense that if $A\subseteq\chainacc{p}$ is nonempty, closed, and invariant under $f$ (i.e. $f(A)\subseteq A$), then $A=\chainacc{p}$.
\begin{theorem} \label{chainacc}
Let $(X,f)$ be a recurrent system with shadowing. Then, for all $p\in X$, $(\chainacc{p},f|_{\chainacc{p}})$ is minimal.
\end{theorem}
\begin{proof}
Suppose that $(X,f)$ is recurrent and has shadowing. Fix $p\in X$ and let $C\subseteq \chainacc{p}$ be nonempty, closed, and invariant. We will show that $p\in C$. Once we have done so, since Lemma \ref{equiv} establishes that $\chainacc{q}=\chainacc{p}$ for all $q\in\chainacc{p}$, the same argument shows that every $q\in\chainacc{p}$ belongs to $C$, establishing that $C=\chainacc{p}$.
Now, let $c\in C$ and $\epsilon>0$. Choose $\delta>0$ such that any $\delta$-pseudo-orbit for $f$ is $\epsilon/3$-shadowed. Since $c\in\chainacc{p}$, we can fix a $\delta$-chain $\seq{a_i}_{i\leq n}$ with $a_0=p$ and $a_n=c$. We can extend this $\delta$-chain to a $\delta$-pseudo-orbit for $f$ by defining, for $i>n$, $a_i=f^{i-n}(c)$.
By choice of $\delta$, fix $z\in X$ such that the orbit of $z$ $\epsilon/3$-shadows $\seq{a_i}_{i\in\omega}$. In particular, $d(z,p)<\epsilon/3$ and for all $i\geq n$, $d(f^i(z),f^{i-n}(c))<\epsilon/3$. Since $(X,f)$ is recurrent, there exists $M\geq n$ such that $d(f^M(z),z)<\epsilon/3$. Then $d(f^{M-n}(c),p)\leq d(f^{M-n}(c),f^M(z))+d(f^{M}(z),z)+d(z,p)<\epsilon$. Since $C$ is invariant by hypothesis, $f^{M-n}(c)\in C$ and thus we see that $d(p,C)<\epsilon$. But this holds for all $\epsilon>0$, and $C$ is closed, so $p\in C$ as claimed.
\end{proof}
Interestingly, by applying a theorem of Birkhoff (Theorem 4.2.2 in \cite{deVries}), we can establish that recurrent systems with shadowing actually exhibit a much stronger form of recurrence. In particular, by this theorem, since every point of $X$ belongs to a minimal set, they are also \emph{almost periodic} (a point $x$ is almost periodic provided that for all $\epsilon>0$ there exists $l>0$ such that for any $n\in \omega$, the intersection $\{f^i(x):n\leq i<n+l\}\cap B_\epsilon(x)$ is nonempty).
In Theorems \ref{identity} and \ref{periodic}, we established that shadowing implies total disconnectedness in periodic systems. As it happens, since periodic orbits are minimal, these results are also consequences of the following more general result, which is a direct application of Theorem \ref{chainacc} and Lemma \ref{equiv}.
\begin{corollary} \label{recurrentanalogue}
Let $(X,f)$ be a recurrent system with shadowing. Then $X$ is the disjoint union of its minimal subsets and no component of $X$ meets more than one such subset.
\end{corollary}
This has particularly interesting implications when $X$ is connected.
\begin{corollary} \label{recurrentminimal}
Let $X$ be connected and let $(X,f)$ be a system with shadowing. Then $(X,f)$ is minimal if and only if it is recurrent.
\end{corollary}
\begin{proof}
Let $(X,f)$ be a system with shadowing with $X$ connected.
Suppose that $(X,f)$ is recurrent. Then, since $X$ has only one component, Corollary \ref{recurrentanalogue} immediately implies that $(X,f)$ is minimal.
Conversely, it is well known that a minimal system is recurrent \cite{deVries}, and thus if $(X,f)$ is minimal, then it is recurrent.
\end{proof}
We now turn our attention to the stronger recurrence-type notion of uniform rigidity. In uniformly rigid systems, the `return times' of each point coincide, allowing for much stronger results paralleling those for periodic systems.
\begin{theorem} \label{rigidsystem}
A uniformly rigid system $(X,f)$ has shadowing if and only if $X$ is totally disconnected.
\end{theorem}
\begin{proof}
First, suppose that $(X,f)$ is uniformly rigid and has shadowing. Fix $p\in X$ and let $C$ be the component of $p$ in $X$.
Fix $q\in C$ and let $\epsilon>0$. Since $(X,f)$ has shadowing, fix $\delta>0$ such that every $\delta$-pseudo-orbit is $\epsilon/3$-shadowed. Since $(X,f)$ is uniformly rigid, choose $N$ such that $\rho(f^N,id_X)<\delta/2.$
Now, since $p$ and $q$ belong to the same component of $X$, we can find $K\geq 0$ and a sequence $\seq{a_i}_{i\leq K}$ in $X$ such that $a_0=p$, $a_K=q$ and for all $i<K$, $d(a_i,a_{i+1})<\delta/2$. We can extend this to an infinite sequence $\seq{a_i}_{i\in\omega}$ by defining $a_i=q$ for all $i>K$. From this sequence, we construct the sequence $\seq{c_i}_{i\in\omega}$ by defining $c_i=f^{i-jN}(a_j)$ where $jN\leq i<(j+1)N$. It is not difficult to see that this is a $\delta$-pseudo-orbit for $f$: for $i$ not congruent to $N-1$ modulo $N$, we have $d(f(c_i),c_{i+1})=d(c_{i+1},c_{i+1})=0<\delta$, and otherwise $d(f(c_i),c_{i+1})=d(f(f^{N-1}(a_j)),a_{j+1})=d(f^N(a_j),a_{j+1})\leq d(f^N(a_j),a_{j})+ d(a_j,a_{j+1})<\delta$.
By choice of $\delta$, we can find $z\in X$ which $\epsilon/3$-shadows $\seq{c_i}$. In particular, $d(z,p)<\epsilon/3$, and for all $k\geq K$, $d(q,f^{kN}(z))=d(c_{kN},f^{kN}(z))<\epsilon/3$.
To complete the proof, choose $t\geq K$ such that $\rho(f^t,id_X)<\epsilon/(3N)$. It is easy to verify then, that $\rho(f^{tN},id_X)<\epsilon/3$ and thus
\[d(p,q)\leq d(p,z)+d(z,f^{tN}(z))+d(f^{tN}(z),q)<\epsilon.\]
Since this holds for all $\epsilon>0$, we see that $d(p,q)=0$, i.e. $q=p$ and therefore $C=\{p\}$. Thus, $X$ has no non-degenerate components and is totally disconnected.
Conversely, suppose that $(X,f)$ is uniformly rigid and that $X$ is totally disconnected. Fix $\epsilon>0$. Since $X$ is compact and metric, we can find a finite cover $\mathcal U=\{U_j:j\leq n \}$ of $X$ consisting of pairwise disjoint open sets of diameter less than $\epsilon$.
Fix $0<\eta<\min\{d(U_j,U_k):j\neq k\}$. Now, since $(X,f)$ is uniformly rigid, choose $N>0$ with $\rho(f^N,id_X)<\eta$ and, by uniform continuity, choose $\delta>0$ such that if $d(a,b)<\delta$, then $d(f^i(a),f^i(b))<\eta$ for all $i\leq N$.
Let $\seq{x_i}$ be a $\delta$-pseudo-orbit. We claim that the orbit of $x_0$ $\epsilon$-shadows $\seq{x_i}$. Indeed, for each $j\in\omega$ and $k\leq N$, we have $d(f^k(f(x_j)),f^k(x_{j+1}))<\eta$ since $d(f(x_j),x_{j+1})<\delta$. It follows that for each $i\in\omega$ and $k\leq\min\{i,N\}$, the sequence $f^k(x_{i-k}),f^{k-1}(x_{i-k+1}),\ldots, x_{i}$ has the property that consecutive terms are within $\eta$ of one another, and hence there exists a unique index $j(i)$ such that each of these terms belongs to $U_{j(i)}$.
We claim that $f^i(x_0)\in U_{j(i)}$ for all $i\in\omega$. Indeed, by taking $k=i$ in the above observation, we see that the claim holds for $i\leq N$. Now, suppose that the claim fails and let $L$ be the least index at which it fails. Since $L>N$, $L-N\geq 0$ and each of $ d\left(x_L,f^N(x_{L-N})\right)$, $d\left(f^N(x_{L-N}),x_{L-N}\right)$, $d\left(x_{L-N},f^{L-N}(x_0)\right)$, and $d\left(f^{L-N}(x_0),f^{L}(x_0)\right)$ is less than $\eta$ (the first by the observation in the previous paragraph, the second and fourth by choice of $N$ and uniform rigidity, and the third by minimality of $L$). Therefore, $x_L$, $f^N(x_{L-N})$, $x_{L-N}$, $f^{L-N}(x_0)$, and $f^L(x_0)$ all belong to the same element of $\mathcal U$. In particular, $f^L(x_0)\in U_{j(L)}$, a contradiction.
Thus, for all $i\in\omega$, both $f^i(x_0)$ and $x_i$ belong to $U_{j(i)}$ and therefore $d(f^i(x_0),x_i)<\epsilon$. This establishes that the orbit of $x_0$ $\epsilon$-shadows $\seq{x_i}$ and thus $(X,f)$ has shadowing.
\end{proof}
\section{Rigid Spaces and Shadowing} \label{Rigid Spaces}
As mentioned in the introduction, there is a significant body of work devoted to determining the prevalence of shadowing in $\mathcal C(X)$ for various categories of topological spaces $X$. In particular, shadowing has been shown to be a generic property of dynamical systems on manifolds (\cite{Yano, Odani, Pilyugin-Plam, Mizera, Mazur-Oprocha}), dendrites (\cite{BMR-Dendrites}), and more exotic locally connected continua (\cite{Meddaugh-Genericity}). In addition, shadowing has been shown to be generic in the space of \emph{surjections} ($\mathcal S(X)$) on spaces in those classes.
In this section, we use the results of Section \ref{Rigid systems} to classify those spaces $X$ which have the property that \emph{every} map (in $\mathcal C(X)$ and in $\mathcal S(X)$) has shadowing, as well as those in which \emph{essentially no} map has shadowing. Of particular note is the demonstration of a compact metric space having the property that shadowing is not a dense property in $\mathcal C(X)$.
We begin by describing a system on the Cantor set which does not have shadowing.
\begin{lemma} \label{cantorexample}
Let $C$ denote the standard middle-third Cantor set in the interval $[0,1]$ and define $t:C\to C$ by
\[t(x)=\begin{cases}
3x & x\leq1/3\\
0 & x\geq2/3.
\end{cases}\]
Then $t$ is a continuous surjection and $(C,t)$ does not have shadowing.
\end{lemma}
\begin{proof}
It is trivial to see that $t\in\mathcal S(C)$. To see that $(C,t)$ does not have shadowing we proceed as follows.
Fix $1>\epsilon>0$ and let $\delta>0$ be arbitrary. We will construct a $\delta$-pseudo-orbit for $t$ which cannot be $\epsilon$-shadowed by the orbit of any point. Choose $M\in\omega$ such that $1/3^{M}<\delta$ and define the sequence $\seq{p_i}_{i\in\omega}$ by $p_i=\frac{3^{(i\mod{M+1})}}{3^M}$. Observe that if $i\mod{M+1}\neq M$, then $t(p_i)=p_{i+1}$, and if $i\mod{M+1}=M$, then $t(p_i)=0$ and thus $d(p_{i+1},t(p_i))=1/3^M<\delta$, so that $\seq{p_i}$ is a $\delta$-pseudo-orbit for $t$.
To see that no orbit $\epsilon$-shadows this pseudo-orbit, note that for any $z\in C$, there exists $N\in\omega$ with $t^n(z)=0$ for all $n\geq N$. However, if we choose $i\geq N$ with $i\mod{M+1}=M$, then $d(p_i,t^i(z))=d(1,0)=1>\epsilon$.
Thus, there is no $\delta>0$ such that every $\delta$-pseudo-orbit is $\epsilon$-shadowed, and $(C,t)$ does not have shadowing.
\end{proof}
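The construction in this proof is easy to reproduce numerically. The following Python sketch (an informal companion to the lemma; the choice $M=4$, the window length, and the floating-point tolerance on $\delta$ are ours) builds the pseudo-orbit $\seq{p_i}$, checks the defining inequality, and exhibits the index at which any orbit, having collapsed to $0$, sits at distance $1$ from $p_i$.

```python
def t(x):
    """The map of the lemma; on the Cantor set the points not <= 1/3 satisfy x >= 2/3."""
    if x <= 1/3:
        return 3 * x
    return 0.0

M = 4
delta = 3.0 ** (-M) + 1e-12              # just above 1/3^M, padded for rounding
p = [3.0 ** (i % (M + 1)) / 3 ** M for i in range(30)]

# p is a delta-pseudo-orbit for t:
assert all(abs(t(a) - b) < delta for a, b in zip(p, p[1:]))

# The orbit of, e.g., z = 1/3^M reaches 0 and stays there:
z = 3.0 ** (-M)
orbit = [z]
for _ in range(29):
    orbit.append(t(orbit[-1]))
assert all(x == 0.0 for x in orbit[M + 1:])

# At a later index i with i % (M+1) == M we have p_i = 1 while the orbit
# sits at 0, so no eps < 1 permits shadowing:
i = 2 * (M + 1) - 1                      # i = 9
assert p[i] == 1.0 and abs(p[i] - orbit[i]) >= 1.0
```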
We are now prepared to prove the following.
\begin{theorem} \label{finitesurjcont}
For a compact metric space $X$, the following are equivalent:
\begin{enumerate}
\item every map in $\mathcal C(X)$ has shadowing.
\item every map in $\mathcal S(X)$ has shadowing
\item $X$ is finite.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $X$ be a compact metric space.
Clearly, if every map in $\mathcal C(X)$ has shadowing, then so does every map in $\mathcal S(X)$.
Now, suppose that each map in $\mathcal S(X)$ has shadowing. In particular, the identity map on $X$ has shadowing, and therefore by Theorem \ref{identity}, $X$ is totally disconnected. We now have two cases to consider---either there are finitely many isolated points in $X$ or there are infinitely many.
First, suppose that there are finitely many isolated points in $X$ and let $F$ be the set of these points. Then $X\setminus F$ is compact, totally disconnected and has no isolated points, and hence is a Cantor set \cite{Nadler}. Let $(C,t)$ be the system from Lemma \ref{cantorexample} and let $h:X\setminus F\to C$ be a homeomorphism. Define $f:X\to X$ by $f(x)=x$ if $x\in F$ and $f(x)=h^{-1}(t(h(x)))$ if $x\in X\setminus F$. It is easily seen that, since $(C,t)$ does not have shadowing, $(X,f)$ does not have shadowing either, contradicting our assumption.
Thus, $X$ must have infinitely many isolated points. Since $X$ is compact metric, we can enumerate a sequence $\seq{p_i}_{i\in\omega}$ of distinct isolated points which converges to a point $p_\infty\in X$. We then define $f:X\to X$ by taking $f(x)=x$ if $x\notin\{p_i:i\in\omega\}$, $f(p_0)=p_\infty$, and $f(p_i)=p_{i-1}$ for $i>0$. It is easy to see that this is a continuous surjection on $X$ and, by an argument similar to that in Lemma \ref{cantorexample}, $(X,f)$ does not have shadowing, again contradicting our assumption, thus establishing that $X$ is finite.
Finally, assume that $X$ is finite. Let $f\in \mathcal C(X)$ and fix $\epsilon>0$. Let $\delta>0$ be such that if $a\neq b\in X$, then $d(a,b)\geq\delta$. Then every $\delta$-pseudo-orbit is a true orbit and is trivially shadowed by its initial point.
\end{proof}
It is worth pointing out that the fact that $X$ being finite implies that every map in $\mathcal S(X)$ has shadowing can also be proven by appealing to the results of the previous section. In particular, if $X$ is finite and $f\in\mathcal S(X)$, then $f$ is a permutation and therefore there exists $N>0$ such that $f^N=id_X$. Since a finite space is totally disconnected, Theorem \ref{periodic} then implies that $(X,f)$ has shadowing.
We now turn our attention to the analysis of those spaces in which essentially no map has shadowing. We begin by noting that if $X$ is a compact metric space and $c\in X$, then the constant map $x\mapsto c$ has shadowing, and therefore $\mathcal C(X)$ will always contain at least some maps with shadowing.
The following examples of Cook will be useful. Through an intricate process using inverse limits to `blow-up' points into solenoids, Cook developed a family of continua with very few non-constant self-maps. We list those properties of these continua which are relevant to our discussion in the following remark.
\begin{remark}[Cook, \cite{Cook}] \label{Cook}
For each $n\in \mathbb{N}$, there exists a non-degenerate continuum $H_n$ such that there exist $n$ and only $n$ elements of $\mathcal S(H_n)$, each of which is a homeomorphism. Furthermore, if $f\in\mathcal C(H_n)\setminus\mathcal S(H_n)$, then $f(H_n)$ is a singleton.
Additionally, there is a continuum $H_\infty$ such that $\mathcal S(H_\infty)$ is homeomorphic to the Cantor set and each element is a homeomorphism. Furthermore, if $f\in\mathcal C(H_\infty)\setminus\mathcal S(H_\infty)$, then $f(H_\infty)$ is a singleton.
\end{remark}
These examples allow us to answer a few open questions concerning shadowing. In particular, in \cite{Meddaugh-Genericity} we asked whether there were, in fact, any continua $X$ such that shadowing is not generic in $\mathcal C(X)$.
\begin{theorem} \label{HnNondense}
There exist continua $X$ with the property that shadowing is not dense in $\mathcal C(X)$. In particular, for each $n\in\mathbb{N}$, the continuum $H_n$ has the property that shadowing is not dense in $\mathcal C(H_n)$.
\end{theorem}
\begin{proof}
Fix $n\in \mathbb{N}$ and note that $\mathcal S(H_n)$ has exactly $n$ elements, each of which is isolated in $\mathcal C(H_n)$ since $\mathcal C(H_n)\setminus\mathcal S(H_n)$ contains only the constant maps. In particular, $id_{H_n}$ is one of these maps. Since $H_n$ is not totally disconnected, by Theorem \ref{identity}, $(H_n,id_{H_n})$ does not have shadowing. Since $id_{H_n}$ is isolated in $\mathcal C(H_n)$, we see that shadowing is not dense in $\mathcal C(H_n)$.
\end{proof}
In \cite{meddaugh2021shadowing} we demonstrated that for a large class of spaces, the shadowing property is equivalent to the \emph{continuously generated pseudo-orbit tracing property (CGPOTP)}. Briefly, a system $(X,f)$ has CGPOTP if for all $\epsilon>0$ there exists $\delta>0$ such that if $g\in\mathcal C(X)$ with $\rho(f,g)<\delta$, then every $g$-orbit is $\epsilon$-shadowed by an $f$-orbit. It was left open in that paper whether there were any systems with CGPOTP but not shadowing.
\begin{theorem} \label{HnNoShadowingInSurj}
For each $n\in\mathbb{N}$, and $f\in\mathcal S(H_n)$, the system $(H_n,f)$ has CGPOTP but does not have shadowing.
\end{theorem}
\begin{proof}
Fix $n\in \mathbb{N}$ and $f\in\mathcal S(H_n)$.
To observe that $(H_n,f)$ does not have shadowing, note that $\mathcal S(H_n)$ consists only of homeomorphisms and is therefore a group under composition. Since it is finite, every element is of finite order and, in particular, there exists $k>0$ with $f^k=id_{H_n}$, and thus $(H_n,f)$ is a periodic system. Since $H_n$ is not totally disconnected, by Theorem \ref{periodic}, $(H_n,f)$ does not have shadowing.
To see that $(H_n,f)$ has CGPOTP, note that since $f$ is isolated in $\mathcal C(H_n)$, we can choose $\delta>0$ such that $\{g\in\mathcal C(H_n):\rho(f,g)<\delta\}=\{f\}$. Now, if we fix $\epsilon>0$, then any continuously generated $\delta$-pseudo-orbit for $f$ is a true orbit for $f$, and is therefore trivially shadowed by the orbit of its initial term.
\end{proof}
The observant reader may have noticed that we have made no claims regarding the properties of $H_\infty$. In Theorems \ref{HnNondense} and \ref{HnNoShadowingInSurj}, since $\mathcal S(H_n)$ is finite, we were able to demonstrate that the surjections in $\mathcal C(H_n)$ were periodic and apply Theorems \ref{identity} and \ref{periodic}. Since $\mathcal S(H_\infty)$ is not finite, this technique cannot work. In order to generalize this, observe that for each $n\in\mathbb{N}$, each map in $\mathcal S(H_n)$ is periodic and hence uniformly rigid. As it happens, this is the relevant property to generalize the preceding results.
\begin{definition}
A compact metric space $X$ is \emph{rigid} provided that for each map $f\in\mathcal S(X)$, the system $(X,f)$ is uniformly rigid.
\end{definition}
Examples of rigid spaces include finite spaces as well as the $H_n$ continua of Cook. As we shall see later, the $H_\infty$ continuum is also rigid.
\begin{lemma} \label{totallydisconnectedrigid}
A totally disconnected space is rigid if and only if it is finite.
\end{lemma}
\begin{proof}
Let $X$ be a finite space. Since $\mathcal S(X)$ is finite and consists only of homeomorphisms, as in the proof of Theorem \ref{HnNoShadowingInSurj}, each map in $\mathcal S(X)$ is periodic and hence uniformly rigid. Thus, $X$ is rigid.
Now, assume that $X$ is totally disconnected and infinite. It is easy to verify that a continuous surjection constructed in the fashion of the maps constructed in the proofs of Lemma \ref{cantorexample} and Theorem \ref{finitesurjcont} is not uniformly rigid, as (in both constructions) there exist points $x\neq y\in X$ and $N\in\mathbb{N}$ such that $\{f^n(x): n\geq N\}=\{y\}$, so that no subsequence of $\seq{f^n}$ can converge to $id_X$.
\end{proof}
By Theorem \ref{finitesurjcont}, then, finite rigid spaces have the property that \emph{every} system has shadowing. This stands in stark contrast to the infinite case in which \emph{no} surjective system has shadowing.
\begin{theorem} \label{rigidshadowing}
Let $X$ be an infinite rigid space and $f\in\mathcal S(X)$. Then $(X,f)$ does not have shadowing.
\end{theorem}
\begin{proof}
Let $X$ be an infinite rigid space and $f\in\mathcal S(X)$. By Lemma \ref{totallydisconnectedrigid}, $X$ is not totally disconnected. Since $X$ is rigid, $(X,f)$ is uniformly rigid, and since $X$ is not totally disconnected, Theorem \ref{rigidsystem} gives us that $(X,f)$ does not have shadowing.
\end{proof}
Of course, it is not immediately apparent how to determine whether a space $X$ is rigid. Fortunately, compactness of $\mathcal S(X)$ is sufficient.
\begin{theorem} \label{rigidcompact}
Let $X$ be a compact metric space. If $\mathcal S(X)$ is compact, then $X$ is rigid.
\end{theorem}
\begin{proof}
Suppose $X$ is a compact metric space with $\mathcal S(X)$ compact and fix $f\in\mathcal S(X)$.
For each $g\in\mathcal S(X)$, define $I(g)=\overline{\{g^n:n\in\mathbb{N}\}}$ and note that this is a compact subset of $\mathcal S(X)$. We claim that $I(g)$ is closed under composition in the sense that if $p,q\in I(g)$, then $p\circ q\in I(g)$. To observe this, let $p,q\in I(g)$ and fix $\epsilon>0$. Choose $\seq{n_i}_{i\in\omega}$ and $\seq{m_i}_{i\in\omega}$ so that $\seq{g^{n_i}}$ converges to $p$ and $\seq{g^{m_i}}$ converges to $q$ (note that one or both of these may be constant sequences).
Now, choose $\delta>0$ (by continuity of $p$) so that if $d(a,b)<\delta$, $d(p(a),p(b))<\epsilon/2$. Finally, choose $K\in\omega$ such that for all $i\geq K$, we have both $\rho(g^{n_i},p)<\epsilon/2$ and $\rho(g^{m_i},q)<\delta$. Then, for all $i\geq K$ and $x\in X$, we have
\[d(g^{n_i+m_i}(x),p(q(x)))\leq d(g^{n_i}(g^{m_i}(x)), p(g^{m_i}(x)))+d(p(g^{m_i}(x)), p(q(x)))<\epsilon,\]
and therefore $\rho(g^{n_i+m_i},p\circ q)\leq\epsilon$. It follows that $p\circ q\in \overline{\{g^n:n\in\mathbb{N}\}} = I(g)$.
Since $I(g)$ is closed under composition in this sense, if $h\in I(g)$, then $\{h^n:n\in\mathbb{N}\}\subseteq I(g)$, and therefore $I(h)\subseteq I(g)$ since $I(g)$ is closed.
We will now show $id_X\in I(f)$. As a consequence of the above, the set $\mathcal I(f)=\{I(h):h\in I(f)\}$ is partially ordered by inclusion. Note that, by compactness of $\mathcal S(X)$ and hence of $I(f)$, if $\{I(h_\alpha)\}_{\alpha\in\Lambda}$ is a chain in $\mathcal I(f)$, then its members are compact sets with the finite intersection property, and so $\bigcap_\alpha I(h_\alpha)$ is nonempty. In particular, there exists $g\in \bigcap_\alpha I(h_\alpha)$, and $I(g)$ is a subset of $I(h_\alpha)$ for each $\alpha\in\Lambda$, i.e., $I(g)$ is a lower bound for the chain. By Zorn's lemma, since every chain has a lower bound, there exists a minimal element of $\mathcal I(f)$.
Fix $g\in I(f)$ such that $I(g)$ is minimal in $\mathcal I(f)$. Observe that $I(g^2)\subseteq I(g)$ and $I(g^2)\in\mathcal I(f)$, so $I(g^2)=I(g)$ and, in particular, $g\in I(g^2)$, and therefore $g\in\overline{\{g^{1+n}:n\in\mathbb{N}\}}$. It follows that there exists a sequence $\seq{n_i}_{i\in\omega}$ in $\mathbb{N}$ with $g^{1+n_i}$ converging to $g$ (again, this may be a constant sequence).
Now, since $\mathcal S(X)$ is compact, there exists an increasing sequence $\seq{i_j}_{j\in\omega}$ such that $\seq{g^{n_{i_j}}}_{j\in\omega}$ converges to a function $h$. But this function must satisfy $h\circ g=g$, since the sequence $\seq{g^{n_{i_j}}\circ g}=\seq{g^{1+n_{i_j}}}$ converges to $g$. Since $g$ is a surjection, it follows that $h=id_X$. This establishes that $id_X\in I(f)$ and therefore $(X,f)$ is uniformly rigid.
\end{proof}
Since $\mathcal S(H_\infty)$ is a Cantor set, it is compact and thus $H_\infty$ is rigid (and infinite). Notice that $\mathcal C(H_\infty)\setminus\mathcal S(H_\infty)$ contains only the constant maps. Since the collection of constant maps is homeomorphic to $H_\infty$, it is compact, and hence closed. Thus, we have that $\mathcal S(H_\infty)$ is open, which yields the following corollary.
\begin{corollary}
Shadowing is not dense in $\mathcal C(H_\infty)$ and is not present in $\mathcal S(H_\infty)$.
\end{corollary}
\section{Rigidity and Shadowing for Non-Surjections} \label{non-surjective}
As an interesting consequence of Theorem \ref{rigidcompact}, if $X$ is an infinite space with $\mathcal C(X)$ compact, then $X$ is rigid (since $\mathcal S(X)$ is closed in $\mathcal C(X)$, and therefore compact) and thus, by Theorem \ref{rigidshadowing}, shadowing is not present in $\mathcal S(X)$. A priori, however, $\mathcal S(X)$ may be nowhere dense in $\mathcal C(X)$ and therefore the question of whether shadowing is dense in $\mathcal C(X)$ is still open. In order to better understand what can happen when $\mathcal C(X)$ is compact, we first generalize the notion of uniform rigidity.
\begin{definition}
A dynamical system $(X,f)$ is \emph{uniformly retract-rigid} provided that there exist a subset $R\subseteq X$, a retraction $r:X\to R$, and a subsequence of $\seq{f^i}$ which converges uniformly to $r$.
\end{definition}
It is not difficult to show that a map $r$ is a retraction if and only if $r$ is \emph{idempotent}, i.e., $r\circ r=r$, and thus uniformly retract-rigid systems can be alternatively characterized as those systems which have a subsequence of iterates converging to an idempotent map. It is also worth pointing out that the set $R$ is $f$-invariant, in the sense that $f(R)=R$, and that $\bigcap f^n(X)= R$. Indeed, fix $y\in R$ and $n\in\omega$. Notice that the sequence $\seq{f^{n_i}(y)}$ (where $\seq{f^{n_i}}$ converges uniformly to $r$) converges to $y$, and since $f^n(X)$ is a compact subset of $X$ which contains a tail of $\seq{f^{n_i}(y)}$, we have $y\in f^n(X)$, establishing that $R\subseteq\bigcap f^n(X)$. Conversely, fix $\epsilon>0$ and choose $N\in\omega$ such that for $i\geq N$ and $x\in X$ we have $f^{n_i}(x)\in B_\epsilon(r(x))\subseteq B_\epsilon(R)$, so that $f^{n_i}(X)\subseteq B_\epsilon(R)$. In particular, $\bigcap f^n(X)\subseteq f^{n_i}(X)\subseteq B_\epsilon(R)$, and since this holds for every $\epsilon>0$ and $R$ is closed, we have $\bigcap f^n(X)\subseteq R$.
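The equivalence between retractions and idempotent maps asserted above is routine; for completeness, here is a sketch of the argument:

```latex
\begin{proof}[Sketch: $r$ is a retraction if and only if $r\circ r=r$]
If $r:X\to R$ is a retraction, then $r(x)\in R$ for every $x\in X$, and
since $r$ fixes $R$ pointwise, $r(r(x))=r(x)$; hence $r\circ r=r$.
Conversely, suppose $r\circ r=r$ and set $R=r(X)$. Each $y\in R$ has the
form $y=r(x)$ for some $x\in X$, so $r(y)=r(r(x))=r(x)=y$, and thus $r$
restricts to the identity on $R$, i.e., $r$ is a retraction onto $R$.
\end{proof}
```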
The remainder of this paper focuses on developing results for uniform retract-rigidity which parallel those we have developed for uniform rigidity.
\begin{theorem} \label{retractrigid}
If the uniformly retract-rigid system $(X,f)$ has shadowing, then $\bigcap f^n(X)$ is totally disconnected.
\end{theorem}
\begin{proof}
Since $(X,f)$ is uniformly retract-rigid, we can find a compact set $R\subseteq X$, a retraction $r:X\to R$, and a subsequence $\seq{n_i}$ such that $\seq{f^{n_i}}$ converges to $r$.
Now, suppose that $(X,f)$ has shadowing. Fix $\epsilon>0$ and choose $\delta>0$ such that every $\delta$-pseudo-orbit is $\epsilon/4$-shadowed. Now, let $p\in R$ and let $C$ be the component of $p$ in $R$.
Fix $q\in C$. We will show that $q=p$ in a manner similar to that of Theorem \ref{rigidsystem}. Towards this end, choose $N$ such that $\rho(f^N,r)<\delta/2$.
Now, since $p$ and $q$ belong to the same component $C$ of the compact set $R$, we can find $K\geq 0$ and a sequence $\seq{a_i}_{i\leq K}$ in $R$ such that $a_0=p$, $a_K=q$ and for all $i<K$, $d(a_i,a_{i+1})<\delta/2$. We extend this to an infinite sequence $\seq{a_i}_{i\in\omega}$ by defining $a_i=q$ for all $i>K$. From this sequence, we construct the sequence $\seq{c_i}_{i\in\omega}$ by defining $c_i=f^{i-jN}(a_j)$ where $jN\leq i<(j+1)N$. As in Theorem \ref{rigidsystem}, this is a $\delta$-pseudo-orbit for $f$.
Moreover, since $f(R)=R$, we can, for each $k\in\omega$, find $c_{-n_k}\in R$ with $f^{n_k}(c_{-n_k})=c_0$. By defining $c_{j-n_k}=f^j(c_{-n_k})$ for $j\leq n_k$, we can construct a family of $\delta$-pseudo-orbits $\seq{c_i}_{i\geq-n_k}$ in $R$.
By choice of $\delta$, the pseudo-orbit $\seq{c_i}_{i\geq-n_k}$ is $\epsilon/4$-shadowed by some point $z_k\in X$, and therefore the pseudo-orbit $\seq{c_i}_{i\in\omega}$ is $\epsilon/4$-shadowed by $f^{n_k}(z_k)$. By passing to a subsequence if necessary, we can assume that the points $f^{n_k}(z_k)$ converge to some point $z\in X$. This point is easily seen to $\epsilon/3$-shadow $\seq{c_i}_{i\in\omega}$, since $d(f^i(f^{n_k}(z_k)),c_i)$ converges to $d(f^i(z),c_i)$. Furthermore, this point $z$ belongs to $R$ since $R$ is closed and $d(z,R)$ is the limit of $d(f^{n_k}(z_k),R)$, which is clearly zero.
To complete the proof, choose $t\geq K$ such that $\rho(f^t,r)<\epsilon/(3N)$. It is easy to verify then, that $\rho(f^{tN},r)<\epsilon/3$ and thus
\[d(p,q)\leq d(p,z)+d(z,f^{tN}(z))+d(f^{tN}(z),q)<\epsilon.\]
As in Theorem \ref{rigidsystem}, it quickly follows that $C=\{p\}$, i.e., $R$ is totally disconnected.
\end{proof}
Note that, in contrast with the result of Theorem \ref{rigidcompact}, this is not a complete characterization. However, it is possible that more can be said. In particular, if $R=\bigcap f^n(X)$ is totally disconnected, then $(R,f|_R)$ has shadowing since the restriction of $f$ to $R$ is a uniformly rigid system. However, the action of $f$ on the complement of $R$ may not be sufficiently well structured to guarantee shadowing of $(X,f)$. Indeed, there are examples of uniformly retract-rigid systems in which $(X,f)$ does not have shadowing. This can be observed by letting $(Y,g)$ be a system without shadowing and letting $(X,f)$ be the `cone' of $(Y,g)$ with the system $([0,1],t\mapsto \frac{1}{2}t)$, i.e., the space $X$ is the quotient of $Y\times[0,1]$ obtained by identifying all points of the form $(y,0)$, and $f((y,t))$ is defined to be $(g(y),\frac{1}{2}t)$. Still, this system exhibits the property of \emph{eventual shadowing} and it seems probable that every uniformly retract-rigid system with $R$ totally disconnected exhibits this property---see \cite{GoodMeddaugh-ICT} for a detailed discussion of this concept.
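Explicitly (a sketch, with $[y,t]$ denoting the class of $(y,t)$ in the quotient), the cone construction just described takes the form:

```latex
X=\bigl(Y\times[0,1]\bigr)\big/\bigl(Y\times\{0\}\bigr),
\qquad f\bigl([y,t]\bigr)=\Bigl[g(y),\tfrac{1}{2}t\Bigr].
```

Here $f^n(X)\subseteq\{[y,t]:t\leq 2^{-n}\}$, so $\bigcap f^n(X)$ is the cone point; with respect to a natural metric on the cone, the iterates converge uniformly to the constant map at that point, so such a system is uniformly retract-rigid with $R$ a singleton.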
As is the case with uniform rigidity, it is useful to consider the class of spaces on which every dynamical system is uniformly retract-rigid.
\begin{definition}
A compact metric space $X$ is \emph{retract-rigid} provided that for every $f\in\mathcal C(X)$, the system $(X,f)$ is uniformly retract-rigid.
\end{definition}
Note that a retract-rigid space is necessarily rigid, but a rigid space need not be retract-rigid.
We will now show that in systems on retract-rigid spaces, shadowing is an uncommon property in $\mathcal C(X)$. We remark that since each constant map in $\mathcal C(X)$ has shadowing, shadowing can never be absent in $\mathcal C(X)$ like it can be in $\mathcal S(X)$. In order to proceed, we first need the following lemmas.
\begin{lemma} \label{retractions}
If $X$ is retract-rigid and $r:X\to R$ is a retraction, then $R$ is rigid.
\end{lemma}
\begin{proof}
Suppose that $X$ is retract-rigid and let $r:X\to R$ be a retraction.
Suppose that $f\in\mathcal S(R)$. Then $g=f\circ r\in\mathcal C(X)$ and therefore $(X,g)$ is uniformly retract-rigid. Fix $\seq{n_i}$ such that $\seq{g^{n_i}}$ converges to a retraction $r'$. Since $g^{n_i}(X)=f^{n_i}(r(X))=f^{n_i}(R)=R$ for all $i\in\omega$, it follows that $r'(X)=R$.
But then, $\seq{f^{n_i}}=\seq{g^{n_i}|_{R}}$ converges to $r'|_R=id_R$, and therefore $(R,f)$ is uniformly rigid. Since this holds for all $f\in\mathcal S(R)$, $R$ is a rigid space.
\end{proof}
\begin{lemma} \label{finitelymanycomponents}
If $X$ is retract-rigid, then $X$ has finitely many components.
\end{lemma}
\begin{proof}
Suppose that $X$ is retract-rigid and that $X$ has infinitely many components. We will show that there is a retraction of $X$ onto an infinite, totally disconnected space. By Lemma \ref{totallydisconnectedrigid}, such a space cannot be rigid, which by Lemma \ref{retractions} contradicts the assumption that $X$ is retract-rigid, completing the proof.
The claimed retraction is constructed as follows. Since $X$ has infinitely many components, we can find a separation $A_1, B_1$ of $X$, at least one of which contains infinitely many components--without loss of generality, $B_1$. We now proceed inductively: having defined $A_n$ and $B_n$ to be a separation of $B_{n-1}$ with $B_n$ containing infinitely many components, we choose a separation $A_{n+1}$, $B_{n+1}$ of $B_n$ such that $B_{n+1}$ contains infinitely many components. Finally, we define $A_\infty=\bigcap B_{n}$.
Note that the collection $\{A_n:n\in\omega\cup\{\infty\}\}$ is a pairwise disjoint collection of nonempty closed subsets of $X$, and that each element other than $A_\infty$ is also open. Now, choose a sequence $\seq{a_n}_{n\in\omega}$ with $a_n\in A_n$ for each $n\in\omega$. Since $X$ is compact, we may choose a subsequence $\seq{a_{n_i}}_{i\in\omega}$ which converges to a point $a_\infty$, which is necessarily in $A_\infty$. It is easy to see that $\{a_{n_i}:i\in\omega\}\cup\{a_\infty\}$ is infinite, compact and totally disconnected.
Finally, we define $r:X\to\{a_{n_i}:i\in\omega\}\cup\{a_\infty\}$ as follows. For $x\in A_\infty$, $r(x)=a_\infty$. For $x\in A_k$, choose $i\in\omega$ minimal such that $k\leq n_i$ and define $r(x)=a_{n_i}$. All that remains is to verify that $r$ is continuous. To see this, we observe that the subspace topology on $\{a_{n_i}:i\in\omega\}\cup\{a_\infty\}$ has basis consisting of all sets of the form $\{a_{n_i}\}$ or $\{a_{n_i}:i\geq L\}\cup\{a_\infty\}$. Clearly $r^{-1}(\{a_{n_i}\})=\bigcup_{n_{i-1}<k\leq n_i} A_k$ and $r^{-1}(\{a_{n_i}:i\geq L\}\cup\{a_\infty\})=B_{n_{L-1}}$ are both open, and thus $r$ is continuous.
\end{proof}
\begin{theorem} \label{infiniteequalsnoshadowing}
Let $X$ be a retract-rigid space and $f\in\mathcal C(X)$ such that $\bigcap f^n(X)$ is infinite. Then $(X,f)$ does not have shadowing.
\end{theorem}
\begin{proof}
Suppose $X$ is retract-rigid and that $(X,f)$ has shadowing, but that $\bigcap f^n(X)$ is infinite. Since $X$ is retract-rigid, $(X,f)$ is uniformly retract-rigid and therefore $\bigcap f^n(X)$ is totally disconnected by Theorem \ref{retractrigid}.
However, by Lemma \ref{finitelymanycomponents}, $X$ has finitely many components. Let $N$ be the number of components. Then, for each $n\in\omega$, $f^n(X)$ has no more than $N$ components since it is the continuous image of $X$. It follows then, that $\bigcap f^n(X)$ has at most $N$ many components.
Thus $\bigcap f^n(X)$ is totally disconnected with finitely many components, and is therefore finite, contradicting our assumption.
\end{proof}
Of course, these results are only useful if we can determine that a space is retract-rigid. Fortunately, the results concerning compactness of $\mathcal S(X)$ and rigidity of $X$ in Theorem \ref{rigidcompact} generalize in the obvious way.
\begin{theorem}
Let $X$ be a compact metric space. If $\mathcal C(X)$ is compact, then $X$ is retract-rigid.
\end{theorem}
\begin{proof}
Suppose that $X$ is a compact metric space with $\mathcal C(X)$ compact and fix $f\in\mathcal C(X)$.
The proof of this result proceeds exactly as in the proof of Theorem \ref{rigidcompact} until we reach that $h\circ g = g$. Since $g$ need not be a surjection, we do not know that $h=id_X$. However, since $h\circ g=g$, we have $h(X)=g(X)$ and $h|_{g(X)}=id_{g(X)}$, so that $h$ is a retraction. Since $h\in I(f)$, there is a subsequence of $\seq{f^{n}}$ which converges to a retraction, and thus $(X,f)$ is uniformly retract-rigid.
\end{proof}
We close with the following result, which characterizes shadowing in compact connected spaces $X$ with compact $\mathcal C(X)$ (of which the spaces $H_n$ for $n\in\mathbb{N}\cup\{\infty\}$ are trivial examples).
\begin{corollary} \label{compactC}
Let $X$ be a compact, connected metric space with $\mathcal C(X)$ compact. Then $(X,f)$ has shadowing if and only if $\seq{f^n}$ converges to a constant map.
\end{corollary}
\begin{proof}
Suppose $X$ is a compact, connected metric space with $\mathcal C(X)$ compact. By the previous result, $X$ is retract-rigid, and thus $(X,f)$ is uniformly retract-rigid.
Then $(X,f)$ has shadowing only if $\bigcap f^n(X)$ is finite, by Theorem \ref{infiniteequalsnoshadowing}. Since $X$ is connected, this is equivalent to $\bigcap f^n(X)$ being a singleton set $\{c\}$ for some $c\in X$. By compactness of $X$, it follows that this is the case if and only if $\seq{f^n}$ converges to the constant map $x\mapsto c$.
Conversely, if $\seq{f^n}$ converges to the constant map $c:X\to X$ given by $x\mapsto c$, we can observe that $(X,f)$ has shadowing as follows. Fix $\epsilon>0$ and choose $N$ such that for $n\geq N$, we have $\rho(f^n,c)<\epsilon/3$. Then, by uniform continuity of $f$, choose $\delta>0$ such that if $d(a,b)<\delta$, then $d(f^i(a),f^i(b))<\frac{\epsilon}{3N}$ for each $i\leq N$.
Now, let $\seq{x_i}$ be a $\delta$-pseudo-orbit. We claim that it is shadowed by the orbit of $x_0$. Indeed, by choice of $\delta$, we have that for all $i\geq0$ and $M\leq N$,
\begin{align*}d(f^M(x_i), x_{M+i})&\leq\sum_{j=0}^{M-1}d\left(f^{M-j}(x_{i+j}),f^{M-j-1}(x_{i+j+1})\right)\\
&=\sum_{j=0}^{M-1}d\left(f^{M-j-1}(f(x_{i+j})),f^{M-j-1}(x_{i+j+1})\right)\\
&< M\frac{\epsilon}{3N}\leq\epsilon/3.
\end{align*}
Thus, by taking $i=0$ and $M=k$, we see that $d(f^k(x_0),x_k)<\epsilon/3$ for $k\leq N$. For $k>N$, we let $i=k-N$ and observe
\begin{align*}d(f^k(x_0),x_k)&=d(f^{N+i}(x_0), x_{N+i})\\
&\leq d(x_{N+i},f^N(x_i))+d(f^N(x_i),c)+d(f^{N+i}(x_0), c)<\epsilon,\end{align*}
thus verifying that the orbit of $x_0$ indeed shadows $\seq{x_i}$.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Let $\mathcal{K}$ be a class of first order structures in the same
signature, and let $\mathbf{A},\mathbf{B}\in\mathcal{K}$. We say
that $\mathbf{A}$ is an \emph{epic substructure} of $\mathbf{B}$
in $\mathcal{K}$ provided that $\mathbf{A}$ is a substructure of
$\mathbf{B}$, and for every $\mathbf{C}\in\mathcal{K}$ and all homomorphisms
$g,g^{\prime}:\mathbf{B}\rightarrow\mathbf{C}$ such that $g|_{A}=g^{\prime}|_{A}$,
we have $g=g^{\prime}$. That is, if $g$ and $g'$ agree on $A$,
then they must agree on all of $B$. At first glance the definition
may suggest that $A$ generates $\mathbf{B}$, but on closer inspection
this does not make sense. As $\mathbf{A}$ is a substructure of $\mathbf{B}$,
generating with $A$ will yield exactly $\mathbf{A}$. However, as
the main result of this article shows, the intuition that $A$ acts
as a set of generators of $\mathbf{B}$ is not far off. In fact, if
$\mathcal{K}$ is closed under ultraproducts, we prove that $A$ actually
``generates'' $\mathbf{B}$, only that the generation is not through
the fundamental operations but rather through primitive positive definable
functions. Let's take a look at an example. Write $\mathcal{D}$ for
the class of bounded distributive lattices. There are several ways
to show that both of the three-element chains contained in the bounded
distributive lattice $\mathbf{B}:=\mathbf{2}\times\mathbf{2}$ are
epic substructures of $\mathbf{B}$ in $\mathcal{D}$. One way to
do this is via definable functions. Note that the formula
\[
\varphi(x,y):=x\wedge y=0\,\&\, x\vee y=1
\]
defines the complement (partial) operation in every member of $\mathcal{D}$.
Let $\mathbf{A}$ be the sublattice of $\mathbf{B}$ with universe
$\{\left\langle 0,0\right\rangle ,\left\langle 0,1\right\rangle ,\left\langle 1,1\right\rangle \}$,
and suppose there are $\mathbf{C}\in\mathcal{D}$ and $g,g^{\prime}:\mathbf{B}\rightarrow\mathbf{C}$
such that $g|_{A}=g^{\prime}|_{A}$. Clearly $\mathbf{B}\vDash\varphi(\left\langle 0,1\right\rangle ,\left\langle 1,0\right\rangle )$,
and since $\varphi$ is open and positive, it follows that $\mathbf{C}\vDash\varphi(g\langle0,1\rangle,g\langle1,0\rangle)$
and $\mathbf{C}\vDash\varphi(g'\langle0,1\rangle,g'\langle1,0\rangle)$.
Now $\varphi(x,y)$ defines a function in $\mathbf{C}$, and $g\langle0,1\rangle=g'\langle0,1\rangle$,
so $g\langle1,0\rangle=g'\langle1,0\rangle$. Theorem \ref{epic sii pp definable}
below says that every epic substructure in a class closed under ultraproducts
is of this nature (although the formulas defining the generating operations
may be primitive positive).
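The claim that $\varphi$ defines a function in $\mathcal{D}$ amounts to the uniqueness of complements in bounded distributive lattices, which follows from a standard computation: if $x\wedge y=0$, $x\vee y=1$, $x\wedge z=0$ and $x\vee z=1$, then

```latex
y = y\wedge 1 = y\wedge(x\vee z) = (y\wedge x)\vee(y\wedge z)
  = 0\vee(y\wedge z) = y\wedge z,
```

and symmetrically $z=z\wedge y=y\wedge z$, so $y=z$.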
The notion of epic substructure is closely connected to that of epimorphism.
Recall that a homomorphism $h:\mathbf{A}\rightarrow\mathbf{B}$ is
a $\mathcal{K}$\emph{-epimorphism} if for every $\mathbf{C}\in\mathcal{K}$
and homomorphisms $g,g^{\prime}:\mathbf{B}\rightarrow\mathbf{C}$,
if $gh=g^{\prime}h$ then $g=g^{\prime}$. That is, $h$ is right-cancellable
in compositions with $\mathcal{K}$-morphisms. Of course every surjective
homomorphism is an epimorphism, but the converse is not true. Revisiting
the example above, the inclusion of the three-element chain $\mathbf{A}$
into $\mathbf{2}\times\mathbf{2}$ is a $\mathcal{D}$-epimorphism.
This also illustrates the connection between epic substructures and
epimorphisms. It is easily checked that $\mathbf{A}$ is an epic substructure
of \textbf{$\mathbf{B}$} in $\mathcal{K}$ if and only if the inclusion
$\iota:A\rightarrow B$ is a $\mathcal{K}$-epimorphism. A class $\mathcal{K}$
is said to have \emph{surjective epimorphisms} if every $\mathcal{K}$-epimorphism
is surjective. Although this property is of an algebraic (or categorical)
nature it has an interesting connection with logic. When $\mathcal{K}$
is the algebraic counterpart of an algebraizable logic $\vdash$ then:
$\mathcal{K}$ has surjective epimorphisms if and only if $\vdash$
has the (infinite) Beth property (\cite[Thm. 3.17]{beth-block-hoogland}).
For a thorough account on the Beth property in algebraic logic see
\cite{beth-block-hoogland}. We don't go into further details on this
topic as the focus of the present article is on the algebraic and
model theoretic side.
The paper is organized as follows. In the next section we establish
our notation and the preliminary results used throughout. Section
\ref{sec:Main-Theorem} contains our characterization of epic substructures
(Theorem \ref{epic sii pp definable}), the main result of this article.
We also take a look here at the case where $\mathcal{K}$ is a finite
set of finite structures. In Section \ref{sec:Checking-for-epic}
we show that checking for the presence of proper epic subalgebras
(or, equivalently, surjective epimorphisms) in certain quasivarieties
can be reduced to checking in a subclass of the quasivariety. An interesting
application of these results is that if $\mathcal{F}$ is a finite
set of finite algebras with a common near-unanimity term, then it
is decidable whether the quasivariety generated by $\mathcal{F}$
has surjective epimorphisms (see Corollary \ref{cor:NU implica decidible}).
\section{Preliminaries and Notation}
Let $\mathcal{L}$ be a first order language and $\mathcal{K}$ a
class of $\mathcal{L}$-structures. We write $\mathbb{I},\mathbb{S},\mathbb{H},\mathbb{P}$
and $\mathbb{P}_{u}$ to denote the class operators for isomorphisms,
substructures, homomorphic images, products and ultraproducts, respectively.
We write $\mathbb{V}(\mathcal{K})$ for the variety generated by $\mathcal{K}$,
that is, $\mathbb{HSP}(\mathcal{K})$; and with $\mathbb{Q}(\mathcal{K})$
we denote the quasivariety generated by $\mathcal{K}$, i.e., $\mathbb{ISPP}_{u}(\mathcal{K})$.
\begin{defn}
Let $\mathbf{A},\mathbf{B}\in\mathcal{K}$.
\begin{itemize}
\item $\mathbf{A}$ is an \emph{epic substructure} \emph{of $\mathbf{B}$
in} $\mathcal{K}$ if $\mathbf{A}\leq\mathbf{B}$, and for every $\mathbf{C}\in\mathcal{K}$
and all homomorphisms $g,g^{\prime}:\mathbf{B}\rightarrow\mathbf{C}$
such that $g|_{A}=g^{\prime}|_{A}$, we have $g=g^{\prime}$. Notation:
$\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{K}$.
\item A homomorphism $h:\mathbf{A}\rightarrow\mathbf{B}$ is a $\mathcal{K}$\emph{-epimorphism}
if for every $\mathbf{C}\in\mathcal{K}$ and homomorphisms $g,g^{\prime}:\mathbf{B}\rightarrow\mathbf{C}$,
if $gh=g^{\prime}h$ then $g=g^{\prime}$.
\end{itemize}
\end{defn}
We say that $\mathbf{A}$ is a \emph{proper} epic substructure of
$\mathbf{B}$ in $\mathcal{K}$ (and write $\mathbf{A}<_{e}\mathbf{B}$
in $\mathcal{K}$), if $\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{K}$
and $\mathbf{A}\neq\mathbf{B}$.
The next lemma explains the connection between epic substructures
and epimorphisms.
\begin{lem}
If $h:\mathbf{A}\rightarrow\mathbf{B}$ with $\mathbf{A},\mathbf{B},h(\mathbf{A})\in\mathcal{K}$,
then t.f.a.e.:
\begin{enumerate}
\item $h$ is a $\mathcal{K}$-epimorphism.
\item The inclusion map $\iota:h(\mathbf{A})\rightarrow\mathbf{B}$ is a
$\mathcal{K}$-epimorphism.
\item $h(\mathbf{A})\leq_{e}\mathbf{B}$ in $\mathcal{K}$.
\end{enumerate}
\end{lem}
\begin{proof}
Immediate from the definitions.
\end{proof}
Here are some straightforward facts used in the sequel.
\begin{lem}
Let $\mathbf{A},\mathbf{B}\in\mathcal{K}$.
\begin{enumerate}
\item $\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{K}$ iff $\mathbf{A}\leq_{e}\mathbf{B}$
in $\mathbb{ISP}(\mathcal{K})$.
\item Let $\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{K}$ and suppose
$h:\mathbf{B}\rightarrow\mathbf{C}$ is such that $h(\mathbf{A}),h(\mathbf{B})\in\mathcal{K}$.
Then $h(\mathbf{A})\leq_{e}h(\mathbf{B})$ in $\mathcal{K}$.
\item Let $\mathcal{Q}$ be a quasivariety. T.f.a.e.:
\begin{enumerate}
\item $\mathcal{Q}$ has surjective epimorphisms.
\item For all $\mathbf{A},\mathbf{B}\in\mathcal{Q}$ we have that $\mathbf{A}\leq_{e}\mathbf{B}$
in $\mathcal{Q}$ implies $\mathbf{A}=\mathbf{B}$.
\end{enumerate}
\end{enumerate}
\end{lem}
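The proofs of these facts are routine; as an illustration, item (2) can be argued as follows (a sketch):

```latex
\begin{proof}[Sketch of item (2)]
Let $\mathbf{D}\in\mathcal{K}$ and let
$g,g':h(\mathbf{B})\rightarrow\mathbf{D}$ be homomorphisms agreeing on
$h(A)$. Then $g\circ h$ and $g'\circ h$ are homomorphisms from
$\mathbf{B}$ to $\mathbf{D}$ which agree on $A$, so
$g\circ h=g'\circ h$ because $\mathbf{A}\leq_{e}\mathbf{B}$ in
$\mathcal{K}$. Since $h$ maps $\mathbf{B}$ onto $h(\mathbf{B})$, it
follows that $g=g'$.
\end{proof}
```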
\section{Main Theorem\label{sec:Main-Theorem}}
Recall that a \emph{primitive positive }(p.p.\ for brevity) formula
is one of the form $\exists\bar{y}\,\alpha(\bar{x},\bar{y})$ with
$\alpha(\bar{x},\bar{y})$ a finite conjunction of atomic formulas.
We shall need the following fact.
\begin{lem}
\label{toda pp implica homo}(\cite[Thm. 6.5.7]{Hodges1993}) Let
$\mathbf{A},\mathbf{B}$ be $\mathcal{L}$-structures. T.f.a.e.:
\begin{enumerate}
\item Every primitive positive $\mathcal{L}$-sentence that holds in $\mathbf{A}$
holds in $\mathbf{B}$.
\item There is a homomorphism from $\mathbf{A}$ into an ultrapower of $\mathbf{B}$.
\end{enumerate}
\end{lem}
Let $\mathcal{K}$ be a class of $\mathcal{L}$-structures. We say
that the $\mathcal{L}$-formula $\varphi(x_{1},\dots,x_{n},y_{1},\dots,y_{m})$
\emph{defines a function} in $\mathcal{K}$ if
\[
\mathcal{K}\vDash\forall\bar{x},\bar{y},\bar{z}\ \varphi\left(\bar{x},\bar{y}\right)\wedge\varphi\left(\bar{x},\bar{z}\right)\rightarrow\bigwedge_{j=1}^{m}y_{j}=z_{j}.
\]
In that case, for each $\mathbf{A}\in\mathcal{K}$ we write $[\varphi]^{\mathbf{A}}$
to denote the $n$-ary partial function defined by $\varphi$ in $\mathbf{A}$.
If $X$ is a set disjoint with $\mathcal{L}$, we write $\mathcal{L}_{X}$
to denote the language obtained by adding the elements in $X$ as
new constant symbols to $\mathcal{L}$. If $\mathbf{B}$ is an $\mathcal{L}$-structure
and $A$ is a subset of $B$, let $\mathbf{B}_{A}$ be the expansion
of $\mathbf{B}$ to $\mathcal{L}_{A}$ where each new constant names
itself. If $\mathcal{L}\subseteq\mathcal{L}^{+}$ and $\mathbf{A}$
is an $\mathcal{L}^{+}$-model, let $\mathbf{A}|_{\mathcal{L}}$ denote
the reduct of $\mathbf{A}$ to $\mathcal{L}$.
Next we present the main result of this article.
\begin{thm}
\label{epic sii pp definable}Let $\mathcal{K}$ be a class closed
under ultraproducts and $\mathbf{A}\leq\mathbf{B}$ structures. T.f.a.e.:
\begin{enumerate}
\item $\mathbf{A}$ is an epic substructure of $\mathbf{B}$ in $\mathcal{K}$.
\item For every $b\in B$ there are a primitive positive formula $\varphi\left(\bar{x},y\right)$
and $\bar{a}$ from $A$ such that:
\begin{enumerate}
\item $\varphi\left(\bar{x},y\right)$ defines a function in $\mathcal{K}$
\item $[\varphi]^{\mathbf{B}}(\bar{a})=b$.
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{proof}
(1)$\Rightarrow$(2). We can assume that $\mathcal{K}$ is axiomatizable
(replacing $\mathcal{K}$ by $\mathbb{I}\mathbb{S}(\mathcal{K})$
if necessary). Suppose $\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{K}$
and let $b\in B$. Define
\[
\Sigma\left(x\right):=\{\varphi\left(x\right)\mid\varphi\left(x\right)\text{ is a p.p.\ formula of }\mathcal{L}_{A}\text{ and }\mathbf{B}_{A}\vDash\varphi\left(b\right)\}\text{.}
\]
Let $c,d$ be two new constant symbols and take
\[
\mathcal{K}^{*}:=\{\mathbf{M}\mid\mathbf{M}\text{ is an }\mathcal{L}_{A}\cup\{c,d\}\text{-model and }\mathbf{M}|_{\mathcal{L}}\in\mathcal{K}\}\text{.}
\]
Let $\mathbf{C}$ be a model of $\mathcal{K}^{*}$ such that $\mathbf{C}\vDash\Sigma(c)\cup\Sigma(d)$.
By Lemma \ref{toda pp implica homo}, there are elementary extensions
$\mathbf{E},\mathbf{E}^{\prime}$ of $\mathbf{C}$ and homomorphisms
\begin{align*}
h & :\mathbf{B}_{A}\rightarrow\mathbf{E}|_{\mathcal{L}_{A}}\\
h^{\prime} & :\mathbf{B}_{A}\rightarrow\mathbf{E}^{\prime}|_{\mathcal{L}_{A}}
\end{align*}
such that $h(b)=c^{\mathbf{C}}$ and $h^{\prime}(b)=d^{\mathbf{C}}$.
The elementary amalgamation theorem \cite[Thm. 6.4.1]{Hodges1993}
provides us with a structure $\mathbf{D}$ and elementary embeddings
$g:\mathbf{E}\rightarrow\mathbf{D}$, $g^{\prime}:\mathbf{E}^{\prime}\rightarrow\mathbf{D}$
such that $g$ and $g^{\prime}$ agree on $C$. Next, observe that
\begin{align*}
gh & :\mathbf{B}\rightarrow\mathbf{D}|_{\mathcal{L}}\\
g^{\prime}h^{\prime} & :\mathbf{B}\rightarrow\mathbf{D}|_{\mathcal{L}}
\end{align*}
are homomorphisms that agree on $A$, and since $\mathbf{D}|_{\mathcal{L}}\in\mathcal{K}$
we must have
\[
gh=g^{\prime}h^{\prime}\text{.}
\]
In particular $gh(b)=g^{\prime}h^{\prime}(b)$, which is $g(c^{\mathbf{C}})=g^{\prime}(d^{\mathbf{C}})$.
So, as $g$ is 1-1, and $g$ and $g^{\prime}$ agree on $C$,
we have $c^{\mathbf{C}}=d^{\mathbf{C}}$.
Thus we have shown
\[
\mathcal{K}^{*}\vDash{\textstyle \bigwedge}\left(\Sigma\left(c\right)\cup\Sigma\left(d\right)\right)\rightarrow c=d\text{.}
\]
By compactness (and using that the conjunction of p.p.\ formulas
is equivalent to a p.p.\ formula), there is a single p.p.\ $\mathcal{L}$-formula
$\varphi\left(\bar{x},y\right)$ such that
\[
\mathcal{K}^{*}\vDash\varphi(\bar{a},c)\wedge\varphi(\bar{a},d)\rightarrow c=d\text{,}
\]
and hence
\[
\mathcal{K}\vDash\forall\bar{x},y,z\ \varphi(\bar{x},y)\wedge\varphi(\bar{x},z)\rightarrow y=z\text{.}
\]
This completes the proof of (1)$\Rightarrow$(2).

(2)$\Rightarrow$(1). Suppose (2) holds for $\mathbf{A}$, $\mathbf{B}$
and $\mathcal{K}$. Let $\mathbf{C}\in\mathcal{K}$ and let $h,h^{\prime}:\mathbf{B}\rightarrow\mathbf{C}$
be homomorphisms agreeing on $A$. Fix $b\in B$. There are a p.p.\ formula
$\varphi\left(\bar{x},y\right)$ and a tuple $\bar{a}$ of elements from $A$
such that
\begin{align*}
\mathbf{B} & \vDash\varphi(\bar{a},b)\\
\mathcal{K} & \vDash\forall\bar{x},y,z\ \varphi(\bar{x},y)\wedge\varphi(\bar{x},z)\rightarrow y=z\text{.}
\end{align*}
Hence
\[
\mathbf{C}\vDash\varphi(h\bar{a},hb)\wedge\varphi(h^{\prime}\bar{a},h^{\prime}b)\text{,}
\]
and as $h\bar{a}=h^{\prime}\bar{a}$ we have $hb=h^{\prime}b$.
\end{proof}
It is worth noting that (2)$\Rightarrow$(1) in Theorem \ref{epic sii pp definable}
always holds, i.e., it does not require $\mathcal{K}$ to be closed
under ultraproducts. On the other hand, as the upcoming example shows,
the implication (1)$\Rightarrow$(2) may fail if $\mathcal{K}$ is
not closed under ultraproducts.
\begin{example}
Let $\mathcal{L}=\{s,0\}$ where $s$ is a binary function symbol
and $0$ a constant. Let $\mathbf{B}$ be the $\mathcal{L}$-structure
with universe $\omega\cup\{\omega\}$ such that $0^{\mathbf{B}}=0$
and
\[
s^{\mathbf{B}}(a,b)=\begin{cases}
0 & \mbox{if }b=a+1,\\
1 & \mbox{otherwise.}
\end{cases}
\]
Take $\mathbf{A}$ the subalgebra of $\mathbf{B}$ with universe $\omega$.
It is easy to see that the identity is the only endomorphism of $\mathbf{B}$.
Thus, in particular, we have that $\mathbf{A}\leq_{e}\mathbf{B}$
in $\{\mathbf{B}\}$. We prove next that there is no p.p.\ formula
with parameters from $A$ defining $\omega$ in $\mathbf{B}$. Take
$\mathcal{L}^{+}:=\mathcal{L}_{B}\cup\{\omega'\}$, where $\omega'$
is a new constant, and let $\Gamma$ be the $\mathcal{L}^{+}$-theory
obtained by adding to the elementary diagram of $\mathbf{B}$ the
following sentences:
\[
\{s(n,\omega')=1\mid n\in\omega\}\cup\{s(\omega',n)=1\mid n\in\omega\}\cup\{\omega\neq\omega'\}.
\]
It is a routine task to show that $\Gamma$ is consistent. Fix a model
$\mathbf{C}$ of $\Gamma$ and define $h,h':\mathbf{B}\rightarrow\mathbf{C}$
by $h(n)=h'(n)=n^{\mathbf{C}}$ for all $n\in\omega$, $h(\omega)=\omega^{\mathbf{C}}$
and $h'(\omega)=\omega'^{\mathbf{C}}$. Again, it is easy to see that
$h$ and $h'$ are homomorphisms from $\mathbf{B}$ to $\mathbf{C}|_{\mathcal{L}}$.
Since they agree on $A$ and $h(\omega)\neq h'(\omega)$, we conclude
that there is no p.p. formula with parameters from $A$ defining $\omega$
in $\mathbf{B}$.
\end{example}
\subsection{The finite case}
When $\mathcal{K}$ is (up to isomorphisms) a finite set of finite
structures, we can sharpen Theorem \ref{epic sii pp definable}. In
this case it is possible to avoid the existential quantifiers in the
definable functions at the cost of adding parameters from $\mathbf{B}$.
\begin{thm}
\label{thm:epicas en clases finitas}Let $\mathcal{K}$ be (up to
isomorphisms) a finite set of finite structures, and let $\mathbf{A}\leq\mathbf{B}$
be finite. T.f.a.e.:
\begin{enumerate}
\item $\mathbf{A}$ is an epic substructure of $\mathbf{B}$ in $\mathcal{K}$.
\item For every $b_{1}\in B$ there are a finite conjunction of atomic formulas
$\alpha(\bar{x},\bar{y})$, $a_{1},\dots,a_{n}\in A$ and $b_{2},\dots,b_{m}\in B$,
with $m\geq1$, such that
\begin{enumerate}
\item $\alpha(\bar{x},\bar{y})$ defines a function in $\mathcal{K}$
\item $[\alpha]^{\mathbf{B}}(\bar{a})=\bar{b}$.
\end{enumerate}
\item For every $b\in B$ there are a primitive positive formula $\varphi\left(\bar{x},y\right)$
and $\bar{a}$ from $A$ such that:
\begin{enumerate}
\item $\varphi\left(\bar{x},y\right)$ defines a function in $\mathcal{K}$
\item $[\varphi]^{\mathbf{B}}(\bar{a})=b$.
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{proof}
(1)$\Rightarrow$(2). If $b_{1}\in A$ the formula $x_{1}=y_{1}$
does the job. Suppose $b_{1}\notin A$, and let $a_{1},\dots,a_{n}$
and $b_{1},\dots,b_{m}$ be enumerations of $A$ and $B\setminus A$
respectively. Let
\[
\Delta(\bar{x},\bar{y}):=\{\delta(\bar{x},\bar{y})\mid\delta(\bar{x},\bar{y})\mbox{ is an atomic formula and }\mathbf{B}\vDash\delta(\bar{a},\bar{b})\}.
\]
Since $\mathcal{K}$ is a finite set of finite structures, there are
finitely many formulas in $\Delta(\bar{x},\bar{y})$ up to logical
equivalence in $\mathcal{K}$. Thus, there is a finite conjunction
of atomic formulas $\alpha(\bar{x},\bar{y})$ such that
\[
\mathcal{K}\vDash\alpha(\bar{x},\bar{y})\leftrightarrow\bigwedge\Delta(\bar{x},\bar{y}).
\]
Take $\mathbf{C}\in\mathcal{K}$ and suppose $\mathbf{C}\vDash\alpha(\bar{c},\bar{d})\wedge\alpha(\bar{c},\bar{e})$.
Then the maps $h,h':\mathbf{B}\rightarrow\mathbf{C}$, given by $h:\bar{a},\bar{b}\mapsto\bar{c},\bar{d}$
and $h':\bar{a},\bar{b}\mapsto\bar{c},\bar{e}$, are homomorphisms.
Since $h$ and $h'$ agree on $A$, it follows that $h=h'$. Hence
$\bar{d}=\bar{e}$, and we have shown that $\alpha(\bar{x},\bar{y})$
defines a function in $\mathcal{K}$.

(2)$\Rightarrow$(3). The p.p.\ formulas in (3) can be obtained by
adding existential quantifiers to the formulas given by (2).

(3)$\Rightarrow$(1). This is the same as (2)$\Rightarrow$(1) in
Theorem \ref{epic sii pp definable}.
\end{proof}
Again, it is worth noting that implications (2)$\Rightarrow$(3)$\Rightarrow$(1)
hold for any $\mathbf{A}$, $\mathbf{B}$ and $\mathcal{K}$.
The example below shows that, in the general case, the existential
quantifiers in (2) of Theorem \ref{epic sii pp definable} are necessary.
\begin{example}
Let $\mathbf{B}$ be the Brouwerian algebra whose lattice reduct is
depicted in Figure \ref{fig:alg tom}, and let $\mathbf{A}$ be the
subalgebra of $\mathbf{B}$ with universe $\{a_{0},a_{1},\dots\}\cup\{\top\}$.
It is proved in \cite[Thm. 6.1]{BezhanishviliMoraschiniRaftery} that
$\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathbb{V}(\mathbf{B})$. We show
that (2) in Theorem \ref{thm:epicas en clases finitas} does not hold
for $\mathbf{A}$, $\mathbf{B}$ and $\mathbb{V}(\mathbf{B})$. Towards
a contradiction fix $d_{1}\in B\setminus A$, and suppose there are
a conjunction of equations $\alpha(x_{1},\dots,x_{n},y_{1},\dots,y_{m})$,
$c_{1},\dots,c_{n}\in A$ and $d_{2},\dots,d_{m}\in B$ such that
\begin{itemize}
\item $\alpha(\bar{x},\bar{y})$ defines a function in $\mathbb{V}(\mathbf{B})$
\item $\mathbf{B}\vDash\alpha(\bar{c},\bar{d})$.
\end{itemize}
Let $\mathbf{C}$ and $\mathbf{D}$ be the subalgebras of $\mathbf{B}$
generated by $\bar{c}$ and $\bar{c},\bar{d}$ respectively. Note
that $\mathbf{D}$ is finite and $\mathbf{C}<\mathbf{D}$. Also note
that $\alpha(\bar{x},\bar{y})$ defines a function in $\mathbb{V}(\mathbf{D})$,
and $\mathbf{D}\vDash\alpha(\bar{c},\bar{d})$, because $\alpha$
is quantifier-free. So we have $\mathbf{C}<_{e}\mathbf{D}$ in $\mathbb{V}(\mathbf{D})$;
but this is not possible, as Corollary 5.5 in \cite{BezhanishviliMoraschiniRaftery}
implies that there are no proper epic subalgebras in finitely generated
varieties of Brouwerian algebras.
\begin{figure}
\includegraphics{alg_tom}
\protect\caption{}
\label{fig:alg tom}
\end{figure}
\end{example}
\section{Checking for epic subalgebras in a subclass\label{sec:Checking-for-epic}}
In the current section all languages considered are algebraic, i.e.,
without relation symbols. Given a quasivariety $\mathcal{Q}$ it can
be a daunting task to determine whether \emph{$\mathcal{Q}$} has
surjective epimorphisms, or equivalently, no proper epic subalgebras.
In this section we prove two results that, under certain assumptions
on $\mathcal{Q}$, provide a (hopefully) more manageable class $\mathcal{S}\subseteq\mathcal{Q}$
such that $\mathcal{Q}$ has no proper epic subalgebras iff $\mathcal{S}$
has no proper epic subalgebras.
Our first result provides such a class $\mathcal{S}$ for quasivarieties
with a near-unanimity term. The second one for arithmetical varieties
whose class of finitely subdirectly irreducible members is universal.
\subsection{Quasivarieties with a near-unanimity term}
An $n$-ary term $t(x_{1},\dots,x_{n})$ is a \emph{near-unanimity}
term for the class $\mathcal{K}$ if $n\geq3$ and $\mathcal{K}$
satisfies the identities
\[
t(x,\dots,x,y)=t(x,\dots,x,y,x)=\dots=t(y,x,\dots,x)=x.
\]
When $n=3$ the term $t$ is called a \emph{majority} term for $\mathcal{K}$.
In every structure with a lattice reduct the term $(x\vee y)\wedge(x\vee z)\wedge(y\vee z)$
is a majority term. This example is especially relevant since many
classes of structures arising from logic algebrizations have lattice
reducts.
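For instance, for the lattice term above the first majority identity is a one-line computation using the absorption law (the remaining identities follow by symmetry); this routine verification is spelled out here for the reader's convenience:
\[
t(x,x,y)=(x\vee x)\wedge(x\vee y)\wedge(x\vee y)=x\wedge(x\vee y)=x.
\]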
For the sake of the exposition the results are presented for quasivarieties
with a majority term. They are easily generalized to quasivarieties
with an arbitrary near-unanimity term.
For functions $f:A\rightarrow A'$ and $g:B\rightarrow B'$ let $f\times g:A\times B\rightarrow A'\times B'$
be defined by $(f\times g)(a,b):=(f(a),g(b))$.
\begin{thm}[\cite{BP_para_clases}]
\label{thm:BP}Let $\mathcal{K}$ be a class of structures with a
majority term and suppose $\varphi(\bar{x},y)$ defines a function
in $\mathcal{K}$. T.f.a.e.:
\begin{enumerate}
\item There is a term $t(\bar{x})$ such that $\mathcal{K}\vDash\forall\bar{x},y\ \varphi(\bar{x},y)\rightarrow y=t(\bar{x}).$
\item For all $\mathbf{A},\mathbf{B}\in\mathbb{P}_{u}(\mathcal{K})$, all
$\mathbf{S}\leq\mathbf{A}\times\mathbf{B}$ and all $s_{1},\dots,s_{n}\in S$
such that $[\varphi]^{\mathbf{A}}\times[\varphi]^{\mathbf{B}}(\bar{s})$
is defined, we have that $[\varphi]^{\mathbf{A}}\times[\varphi]^{\mathbf{B}}(\bar{s})\in S$.
\end{enumerate}
\end{thm}
An algebra $\mathbf{A}$ in the quasivariety $\mathcal{Q}$ is \emph{relatively
subdirectly irreducible} provided its diagonal congruence is completely
meet irreducible in the lattice of $\mathcal{Q}$-congruences of $\mathbf{A}$.
We write $\mathcal{Q}_{RSI}$ to denote the class of relatively subdirectly
irreducible members of $\mathcal{Q}$. For a class $\mathcal{K}$
let $\mathcal{K}\times\mathcal{K}:=\{\mathbf{A}\times\mathbf{B}\mid\mathbf{A},\mathbf{B}\in\mathcal{K}\}$.
\begin{thm}
\label{thm:Testigos para Q con M}Let $\mathcal{Q}$ be a quasivariety
with a majority term and let $\mathcal{S}=\mathbb{P}_{u}(\mathcal{Q}_{RSI})$.
T.f.a.e.:
\begin{enumerate}
\item $\mathcal{Q}$ has surjective epimorphisms.
\item For all $\mathbf{A},\mathbf{B}\in\mathcal{Q}$ we have that $\mathbf{A}\leq_{e}\mathbf{B}$
in $\mathcal{Q}$ implies $\mathbf{A}=\mathbf{B}$.
\item For all $\mathbf{A},\mathbf{B}\in\mathbb{S}(\mathcal{S}\times\mathcal{S})$
we have that $\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{S}\times\mathcal{S}$
implies $\mathbf{A}=\mathbf{B}$.
\item $\mathbb{S}(\mathcal{S}\times\mathcal{S})$ has surjective epimorphisms.
\end{enumerate}
\end{thm}
\begin{proof}
The equivalences (1)$\Leftrightarrow$(2) and (3)$\Leftrightarrow$(4)
are immediate, and (2) clearly implies (3). We prove (3)$\Rightarrow$(2).
Suppose $\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{Q}$ and let
$b\in B$. We shall see that $b\in A$. By Theorem \ref{epic sii pp definable}
there is a p.p.\ $\mathcal{L}$-formula $\varphi(\bar{x},y)$ defining
a function in $\mathcal{Q}$, and such that $[\varphi]^{\mathbf{B}}(\bar{a})=b$
for some $\bar{a}\in A^{n}$. Let
\[
\Sigma:=\{\varepsilon\mid\varepsilon\mbox{ is a p.p.\ formula of }\mathcal{L}_{A}\mbox{ and }\mathbf{B}_{A}\vDash\varepsilon\},
\]
and define
\[
\mathcal{K}:=\{\mathbf{C}\in Mod(\Sigma)\mid\mathbf{C}|_{\mathcal{L}}\in\mathcal{S}\}.
\]
Let $\psi(y):=\varphi(\bar{a},y)$, and note that $\psi(y)$ defines
a nullary function in $\mathcal{K}$. Note as well that $\exists y\,\psi(y)\in\Sigma$,
and hence $[\psi]^{\mathbf{M}}$ is defined for every $\mathbf{M}\in\mathcal{K}$.
We aim to apply Theorem \ref{thm:BP} to $\mathcal{K}$ and $\psi(y)$.
To this end fix $\mathbf{C},\mathbf{D}\in\mathbb{P}_{u}(\mathcal{K})=\mathcal{K}$
and let $\mathbf{S}\leq\mathbf{C}\times\mathbf{D}$. Note that as
$\Sigma$ is a set of p.p. formulas we have $\mathbf{C}\times\mathbf{D}\vDash\Sigma$,
and thus by Lemma \ref{toda pp implica homo} there is an ultrapower
$\mathbf{E}$ of $\mathbf{C}\times\mathbf{D}$ and a homomorphism
$h:\mathbf{B}_{A}\rightarrow\mathbf{E}$. We have that $\mathbf{E}\in\mathbb{P}_{u}(\mathcal{K}\times\mathcal{K})\subseteq\mathbb{P}_{u}(\mathcal{K})\times\mathbb{P}_{u}(\mathcal{K})=\mathcal{K}\times\mathcal{K}$,
and so
\[
\mathbf{E}|_{\mathcal{L}}\in\mathcal{K}|_{\mathcal{L}}\times\mathcal{K}|_{\mathcal{L}}\subseteq\mathcal{S}\times\mathcal{S}.
\]
Next observe that since $h(\mathbf{A})\leq_{e}h(\mathbf{B})$ in $\mathcal{Q}$,
and $h(\mathbf{A}),h(\mathbf{B})\leq\mathbf{E}|_{\mathcal{L}}$, by
(3) it follows that $h(A)=h(B)$. Also, as $\mathbf{S}$ is an $\mathcal{L}_{A}$-subalgebra
of $\mathbf{E}$, we have that
\[
h(\mathbf{B}_{A})=h(\mathbf{A}_{A})\leq\mathbf{S}.
\]
The fact that $\mathbf{B}\vDash\psi(b)$ implies $\mathbf{E}\vDash\psi(hb)$,
and so $[\psi]^{\mathbf{E}}=hb\in S$. We know that $\{\mathbf{C},\mathbf{D},\mathbf{C}\times\mathbf{D}\}\vDash\exists y\,\psi(y)$;
furthermore, since $\psi$ is p.p., we have $[\psi]^{\mathbf{C}}\times[\psi]^{\mathbf{D}}=[\psi]^{\mathbf{C}\times\mathbf{D}}$.
Putting all this together
\[
[\psi]^{\mathbf{C}}\times[\psi]^{\mathbf{D}}=[\psi]^{\mathbf{C}\times\mathbf{D}}=[\psi]^{\mathbf{E}}\in S.
\]
Thus, Theorem \ref{thm:BP} produces an $\mathcal{L}_{A}$-term $t$
such that
\begin{equation}
\mathcal{K}\vDash\forall y\ \psi(y)\rightarrow y=t.\label{eq:k sat quasi}
\end{equation}
In particular, for all $\mathbf{C}\in\mathcal{Q}_{RSI}$ and all $c_{1},\dots,c_{n}\in C$
such that $[\varphi]^{\mathbf{C}}(\bar{c})$ is defined, we have
\[
[\varphi]^{\mathbf{C}}(\bar{c})=t^{\mathbf{C}}(\bar{c}).
\]
Next let $\{\mathbf{B}_{i}\mid i\in I\}\subseteq\mathcal{Q}_{RSI}$
be such that $\mathbf{B}\leq\prod_{I}\mathbf{B}_{i}$ is a subdirect
product. For every $i\in I$ let $\mathbf{B}_{i}^{A}$ be the expansion
of $\mathbf{B}_{i}$ to $\mathcal{L}_{A}$ given by $a^{\mathbf{B}_{i}^{A}}=\pi_{i}(a)$,
where $\pi_{i}:\mathbf{B}\rightarrow\mathbf{B}_{i}$ is the projection
map. It is clear that
\begin{equation}
\mathbf{B}_{A}\leq\prod_{I}\mathbf{B}_{i}^{A}.\label{eq:B_A es subprod}
\end{equation}
Now, each $\mathbf{B}_{i}^{A}$ is a homomorphic image of $\mathbf{B}_{A}$,
so $\mathbf{B}_{i}^{A}\vDash\Sigma$ and thus $\mathbf{B}_{i}^{A}\in\mathcal{K}$
for all $i\in I$. Since $\forall y\ \psi(y)\rightarrow y=t$ is (equivalent
to) a quasi-identity, from (\ref{eq:k sat quasi}) and (\ref{eq:B_A es subprod})
we have
\[
\mathbf{B}_{A}\vDash\forall y\ \psi(y)\rightarrow y=t.
\]
Hence $b=t^{\mathbf{B}_{A}}\in A$, and the proof is finished.
\end{proof}
Observe that Theorem \ref{thm:Testigos para Q con M} holds for any
$\mathcal{S}\subseteq\mathcal{Q}$ closed under ultraproducts and
containing $\mathcal{Q}_{RSI}$.
\begin{cor}
\label{cor:M+FG implica S(Q_RSI x Q_RSI) alcanza.}Let $\mathcal{Q}$
be a finitely generated quasivariety with a majority term. T.f.a.e.:
\begin{enumerate}
\item $\mathcal{Q}$ has surjective epimorphisms.
\item $\mathbb{S}(\mathcal{Q}_{RSI}\times\mathcal{Q}_{RSI})$
has surjective epimorphisms.
\end{enumerate}
\end{cor}
\begin{proof}
For any class $\mathcal{K}$ we have $\mathbb{Q}(\mathcal{K})_{RSI}\subseteq\mathbb{ISP}_{u}(\mathcal{K})$.
Thus if $\mathcal{Q}$ is finitely generated, then $\mathcal{Q}_{RSI}$
is (up to isomorphic copies) a finite set of finite algebras, and
the corollary follows at once from Theorem \ref{thm:Testigos para Q con M}.
\end{proof}
Recall that an algebra $\mathbf{A}$ is \emph{finitely subdirectly
irreducible} if its diagonal congruence is meet irreducible in the
congruence lattice of $\mathbf{A}$. It is \emph{subdirectly irreducible
}if the diagonal is completely meet irreducible. For a variety $\mathcal{V}$
we write ($\mathcal{V}_{FSI}$) $\mathcal{V}_{SI}$ to denote its
class of (finitely) subdirectly irreducible members.
An interesting consequence of Corollary \ref{cor:M+FG implica S(Q_RSI x Q_RSI) alcanza.}
is the following.
\begin{cor}
\label{cor:NU implica decidible}Let $\mathcal{F}$ be a finite set
of finite algebras with a common majority term. It is decidable whether
the \textup{(}quasi\textup{)}variety generated by $\mathcal{F}$ has
surjective epimorphisms.\end{cor}
\begin{proof}
Let $\mathcal{V}$ be the variety generated by $\mathcal{F}$. By
J\'onsson's lemma \cite{jonssons_lemma} $\mathcal{V}_{SI}\subseteq\mathbb{HSP}_{u}(\mathcal{F})=\mathbb{HS}(\mathcal{F})$
is a finite set of finite structures, and by Corollary \ref{cor:M+FG implica S(Q_RSI x Q_RSI) alcanza.}
it suffices to decide whether $\mathbb{S}(\mathcal{V}_{SI}\times\mathcal{V}_{SI})$
has surjective epimorphisms, and this is clearly a decidable problem.
If $\mathcal{Q}$ is the quasivariety generated by $\mathcal{F}$,
then $\mathcal{Q}_{RSI}\subseteq\mathbb{ISP}_{u}(\mathcal{F})=\mathbb{IS}(\mathcal{F})$,
and the same reasoning applies.
\end{proof}
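The finite search behind this decidability claim can be made concrete. The following Python sketch is an illustration added here, not part of the paper: it assumes a toy signature with a single binary operation, represents each finite algebra as a pair (universe, operation table), and brute-forces the test for a proper epic subalgebra within a finite class by enumerating subuniverses and homomorphisms (the names `homs`, `subuniverses`, `meet2` and `B3` are ours).

```python
from itertools import product

def homs(B, C):
    """All homomorphisms from finite algebra B to finite algebra C
    (single binary operation, given as a dict mapping pairs to elements)."""
    Bu, Bop = B
    Cu, Cop = C
    for values in product(Cu, repeat=len(Bu)):
        h = dict(zip(Bu, values))
        if all(h[Bop[(x, y)]] == Cop[(h[x], h[y])] for x in Bu for y in Bu):
            yield h

def subuniverses(B):
    """All nonempty subsets of the universe closed under the operation."""
    Bu, Bop = B
    for bits in product([0, 1], repeat=len(Bu)):
        S = [b for b, keep in zip(Bu, bits) if keep]
        if S and all(Bop[(x, y)] in S for x in S for y in S):
            yield S

def has_proper_epic_subalgebra(K):
    """Does some B in the finite class K have a proper subalgebra A with
    A <=_e B in K, i.e., any two homomorphisms from B into a member of K
    that agree on A are equal?"""
    for B in K:
        Bu, _ = B
        for A in subuniverses(B):
            if len(A) == len(Bu):
                continue  # not a proper subalgebra
            epic = True
            for C in K:
                for h, h2 in product(list(homs(B, C)), repeat=2):
                    if h != h2 and all(h[a] == h2[a] for a in A):
                        epic = False
                        break
                if not epic:
                    break
            if epic:
                return True
    return False

# Two toy examples. The two-element meet-semilattice has no proper epic
# subalgebra in the class consisting of itself (constant maps to idempotent
# elements agree with the identity on each singleton subuniverse) ...
meet2 = ([0, 1], {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})

# ... while in B3 every endomorphism fixes the element 2 (it is the only
# solution of op(x, x) = x), so the subalgebra on {0, 1} is proper and epic.
tab = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 0,
       (0, 2): 2, (2, 0): 2, (1, 2): 2, (2, 1): 2, (2, 2): 2}
B3 = ([0, 1, 2], tab)
```

Since the class and every universe are finite, all loops terminate, which is the content of the decidability claim; a practical implementation would of course work with the (finitely many) subdirectly irreducible members, as in the corollary.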
\subsection{Arithmetical varieties whose FSI members form a universal class}
A variety $\mathcal{V}$ is \emph{arithmetical} if for every $\mathbf{A}\in\mathcal{V}$
the congruence lattice of $\mathbf{A}$ is distributive and the join
of any two congruences is their composition. For example, the variety
of Boolean algebras is arithmetical.
\begin{lem}
\label{lem:termino interpolante para pp en aritmeticas} Let $\mathcal{V}$
be an arithmetical variety such that $\mathcal{V}_{FSI}$ is a universal
class, and let $\varphi(\bar{x},y)$ be a p.p. formula defining a
function in $\mathcal{V}$. Suppose that for all $\mathbf{A}\in\mathcal{V}_{FSI}$,
all $\mathbf{S}\leq\mathbf{A}$ and all $s_{1},\dots,s_{n}\in S$
such that $\mathbf{A}\vDash\exists y\,\varphi(\bar{s},y)$, we have
$\mathbf{S}\vDash\exists y\,\varphi(\bar{s},y)$. Then there is a
term $t(\bar{x})$ such that $\mathcal{V}\vDash\forall\bar{x},y\ \varphi(\bar{x},y)\rightarrow y=t(\bar{x}).$\end{lem}
\begin{proof}
Add new constants $c_{1},\dots,c_{n}$ to the language of $\mathcal{V}$
and let $\mathcal{K}:=\{(\mathbf{A},\bar{a})\mid\mathbf{A}\vDash\exists y\,\varphi(\bar{c},y)\mbox{ and }\mathbf{A}\in\mathcal{V}_{FSI}\}$.
Note that $\psi(y):=\varphi(\bar{c},y)$ defines a nullary function
in $\mathcal{K}$, and this function is defined for every member of
$\mathcal{K}$. Also note that by our assumptions $\mathcal{K}$ is
a universal class. Using J\'onsson's lemma \cite{jonssons_lemma} it
is not hard to show that $\mathbb{V}(\mathcal{K})_{FSI}=\mathcal{K}$.
Since $\mathcal{K}|_{\mathcal{L}}$ is contained in an arithmetical
variety it has a Pixley term \cite[Thm. 12.5]{BurrisSankappanavar1981},
which also serves as a Pixley term for $\mathcal{K}$, and thus $\mathbb{V}(\mathcal{K})$
is arithmetical. Next we show that $\psi(y)$ is equivalent to a positive
open formula in $\mathcal{K}$. By \cite[Thm. 3.1]{lemas_semanticos}
it suffices to show that
\begin{itemize}
\item For all $\mathbf{A},\mathbf{B}\in\mathcal{K}$, all $\mathbf{S}\leq\mathbf{A}$,
all $h:\mathbf{S}\rightarrow\mathbf{B}$ and every $a\in A$ we have
that $\mathbf{A}\vDash\psi(a)$ implies $\mathbf{B}\vDash\psi(ha)$.
\end{itemize}
So suppose $\mathbf{A}\vDash\psi(a)$. From our hypothesis and the
fact that $\psi(y)$ defines a function we have $\mathbf{S}\vDash\psi(a)$,
and as $\psi(y)$ is p.p. we obtain $\mathbf{B}\vDash\psi(ha)$. Hence
there is a positive open formula $\beta(y)$ equivalent to $\psi(y)$
in $\mathcal{K}$. Now, \cite[Thm. 2.3]{CzelakowskiDziobiak1990}
implies that there is a conjunction of equations $\alpha(y)$ equivalent
to $\beta(y)$ (and thus to $\psi(y)$) in $\mathcal{K}$. We have
$\mathcal{K}\vDash\exists!y\,\alpha(y)$, and by \cite[Lemma 7.8]{lemas_semanticos}
there is an $\mathcal{L}\cup\{c_{1},\dots,c_{n}\}$-term $t'$ such
that $\mathbb{V}(\mathcal{K})\vDash\alpha(t')$. Let $t(x_{1},\dots,x_{n})$
be an $\mathcal{L}$-term such that $t'=t(\bar{c})$. So, if $\Gamma$
is a set of axioms for $\mathcal{V}_{FSI}$, we have
\[
\Gamma\cup\{\exists y\,\varphi(\bar{c},y)\}\vDash\varphi(\bar{c},t(\bar{c})),
\]
and this implies
\[
\Gamma\vDash\exists y\,\varphi(\bar{c},y)\rightarrow\varphi(\bar{c},t(\bar{c})),
\]
or equivalently
\[
\mathcal{V}_{FSI}\vDash\forall y(\varphi(\bar{c},y)\rightarrow\varphi(\bar{c},t(\bar{c}))).
\]
This and the fact that $\varphi(\bar{x},y)$ defines a function
in $\mathcal{V}$ yields
\[
\mathcal{V}_{FSI}\vDash\forall\bar{x},y\ \varphi(\bar{x},y)\rightarrow y=t(\bar{x}).
\]
To conclude, note that $\forall\bar{x},y\ \varphi(\bar{x},y)\rightarrow y=t(\bar{x})$
is logically equivalent to a quasi-identity, and since it holds in
$\mathcal{V}_{FSI}$ it must hold in $\mathcal{V}$.\end{proof}
\begin{thm}
\label{thm:Testigos para V aritmetica}Let $\mathcal{V}$ be an arithmetical
variety such that $\mathcal{V}_{FSI}$ is a universal class. T.f.a.e.:
\begin{enumerate}
\item $\mathcal{V}$ has surjective epimorphisms.
\item For all $\mathbf{A},\mathbf{B}\in\mathcal{V}$ we have that $\mathbf{A}\leq_{e}\mathbf{B}$
in $\mathcal{V}$ implies $\mathbf{A}=\mathbf{B}$.
\item For all $\mathbf{A},\mathbf{B}\in\mathcal{V}_{FSI}$ we have that
$\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{V}_{FSI}$ implies $\mathbf{A}=\mathbf{B}$.
\item $\mathcal{V}_{FSI}$ has surjective epimorphisms.
\end{enumerate}
\end{thm}
\begin{proof}
We prove (3)$\Rightarrow$(2) which is the only nontrivial implication.
Suppose $\mathbf{A}\leq_{e}\mathbf{B}$ in $\mathcal{V}$ and let
$b\in B$. We shall see that $b\in A$. By Theorem \ref{epic sii pp definable}
there is a p.p.\ $\mathcal{L}$-formula $\varphi\left(\bar{x},y\right)$
defining a function in $\mathcal{V}$, and such that $[\varphi]^{\mathbf{B}}(\bar{a})=b$
for some $\bar{a}\in A^{n}$. Let
\[
\Sigma:=\{\varepsilon\mid\varepsilon\mbox{ is a p.p.\ sentence of }\mathcal{L}_{A}\mbox{ and }\mathbf{B}_{A}\vDash\varepsilon\},
\]
and define
\[
\mathcal{K}:=\{\mathbf{C}\in Mod(\Sigma)\mid\mathbf{C}|_{\mathcal{L}}\in\mathcal{V}_{FSI}\}.
\]
\begin{claim*}
\label{cl: K universal}$\mathcal{K}$ is a universal class.
\end{claim*}
Since $\mathcal{K}$ is axiomatizable we only need to check that $\mathcal{K}$
is closed under substructures. Let $\mathbf{C}\leq\mathbf{D}\in\mathcal{K}$;
clearly $\mathbf{C}|_{\mathcal{L}}\in\mathcal{V}_{FSI}$, so it remains
to see that $\mathbf{C}\vDash\Sigma$. As $\mathbf{D}\vDash\Sigma$,
Lemma \ref{toda pp implica homo} yields a homomorphism $h:\mathbf{B}_{A}\rightarrow\mathbf{E}$
with $\mathbf{E}$ an ultrapower of $\mathbf{D}$. Note that $\mathbf{E}\in\mathcal{K}$.
Since $h(\mathbf{A})\leq_{e}h(\mathbf{B})$ in $\mathcal{V}$ and $h(\mathbf{A}),h(\mathbf{B})\in\mathcal{V}_{FSI}$,
it follows that $h(A)=h(B)$, because there are no proper epic subalgebras
in $\mathcal{V}_{FSI}$. Now $\mathbf{C}$ is an $\mathcal{L}_{A}$-subalgebra
of $\mathbf{D}$, so $h(B)=h(A)\subseteq C$. Finally, since $h(\mathbf{B})\vDash\Sigma$
and every sentence in $\Sigma$ is existential, we obtain $\mathbf{C}\vDash\Sigma$.
This finishes the proof of the claim.
\begin{claim*}
$\mathbb{V}(\mathcal{K})$ is arithmetical and $\mathbb{V}(\mathcal{K})_{FSI}=\mathcal{K}$.
\end{claim*}
To show that $\mathbb{V}(\mathcal{K})$ is arithmetical we can proceed
as in the proof of Lemma \ref{lem:termino interpolante para pp en aritmeticas}.
We prove $\mathbb{V}(\mathcal{K})_{FSI}=\mathcal{K}$. Note that for
$\mathbf{C}\in\mathcal{K}$ we have that $\mathbf{C}$ and $\mathbf{C}|_{\mathcal{L}}$
have the same congruences; hence every algebra in $\mathcal{K}$ is
FSI. For the other inclusion, J\'onsson's lemma \cite{jonssons_lemma}
produces $\mathbb{V}(\mathcal{K})_{FSI}\subseteq\mathbb{HSP}_{u}(\mathcal{K})$,
and by the first claim $\mathbb{HSP}_{u}(\mathcal{K})=\mathbb{H}(\mathcal{K})$.
So, as $\mathbb{H}(\mathcal{K})\vDash\Sigma$, we have that $\mathbb{V}(\mathcal{K})_{FSI}\vDash\Sigma$
and thus $\mathbb{V}(\mathcal{K})_{FSI}\subseteq\mathcal{K}$.
Next we want to apply Lemma \ref{lem:termino interpolante para pp en aritmeticas}
to $\mathbb{V}(\mathcal{K})$ and $\varphi(\bar{a},y)$, so we need
to check that the hypotheses hold. Take $\mathbf{C}\in\mathcal{K}$
and $\mathbf{S}\leq\mathbf{C}$. Since $\mathcal{K}$ is universal
we have $\mathbf{S}\in\mathcal{K}$, and thus $\mathbf{S}\vDash\exists y\,\varphi(\bar{a},y)$.
Let $t$ be a term such that $\mathbb{V}(\mathcal{K})\vDash\forall y\ \varphi(\bar{a},y)\rightarrow y=t.$
Then $b=t^{\mathbf{B}_{A}}\in A$, and we are done.
\end{proof}
Every discriminator variety (see \cite[Def. 9.3]{BurrisSankappanavar1981}
for the definition) satisfies the hypothesis in Theorem \ref{thm:Testigos para V aritmetica}.
Furthermore, in such a variety every FSI member is simple (i.e., has
exactly two congruences). Writing $\mathcal{V}_{S}$ for the class
of simple members in $\mathcal{V}$ we have the following immediate
consequence of Theorem \ref{thm:Testigos para V aritmetica}.
\begin{cor}
For a discriminator variety $\mathcal{V}$ the following are equivalent.
\begin{enumerate}
\item $\mathcal{V}$ has surjective epimorphisms.
\item For all $\mathbf{A},\mathbf{B}\in\mathcal{V}$ we have that $\mathbf{A}\leq_{e}\mathbf{B}$
in $\mathcal{V}$ implies $\mathbf{A}=\mathbf{B}$.
\item For all $\mathbf{A},\mathbf{B}\in\mathcal{V}_{S}$ we have that $\mathbf{A}\leq_{e}\mathbf{B}$
in $\mathcal{V}_{S}$ implies $\mathbf{A}=\mathbf{B}$.
\item $\mathcal{V}_{S}$ has surjective epimorphisms.
\end{enumerate}
\end{cor}
It is not uncommon for a variety arising as the algebrization of a
logic to be a discriminator variety; thus the above corollary could
prove helpful in establishing the Beth definability property for such
a logic.
Another special case relevant to algebraic logic to which Theorem
\ref{thm:Testigos para V aritmetica} applies is given by the class
of Heyting algebras and its subvarieties (none of these are discriminator
varieties with the exception of the class of Boolean algebras). Heyting
algebras constitute the algebraic counterpart to intuitionistic logic,
and have proven to be a fertile ground to investigate definability
and interpolation properties of intuitionistic logic and its axiomatic
extensions by algebraic means (see \cite{BezhanishviliMoraschiniRaftery}
and its references).
\thanks{I would like to thank Diego Casta\~no and Tommaso Moraschini for their
insightful discussions during the preparation of this paper.}
\bibliographystyle{plain}
\section{Introduction and Related Works}
The area of crowdsourcing for older adults is both underappreciated and underexplored, and developing sustainable solutions for this group is still challenging \cite{knowles2018wisdom,knowles2018older}.
This may be due to multiple barriers, both specific to the required ICT skills \cite{aula_learning_2004} and inherent in the nature of crowdsourcing microtasks. Older adults differ from the younger generation in their online behavior and decision-making \cite{von2018influence} and they seem more selective when choosing their engagements \cite{djoub_ict_2013}. This selectivity, alongside their generally lower ICT skills, may explain how little interest they expressed in the Mechanical Turk platform, which is populated by tedious and repetitive tasks \cite{brewer2016would} and lacks a suitable motivation to participate in crowdsourcing, as its tasks are not challenging, fun or easily relatable. This is in line with research placing the average age of crowd workers at around 20-30 years \cite{kobayashi2015motivating,inproceedingsCHIAge2010}. On the other hand, crowd-volunteering tasks, often called citizen science tasks, such as the ones found on the Zooniverse platform \cite{zoonigeneralintro}, can appeal to a more balanced representation of contributors, as about 15\% of the platform contributors self-report as retired.\footnote{Survey results were presented in a post: https://blog.zooniverse.org/2015/03/05/who-are-the-zooniverse-community-we-asked-them/} There are also some crowdsourcing systems designed specifically for older adults which mitigate technology barriers, as in Hettiachchi et al. \cite{crowdtaskerchi2020}, and tap into their knowledge and skills, such as tagging historical photos, as in Yu et al. \cite{yu2016productive}, proofreading, as in Itoko et al. \cite{itoko2014involving} and Kobayashi et al. \cite{kobayashi2013age}, or both, as in Skorupska et al. \cite{skorupska2019smartTV}. They often rely on motivations that are pro-social, as in Kobayashi et al. \cite{kobayashi2015motivating}, and also social, as in Seong et al. \cite{crowdolder2020chi}, which is a trademark of Zooniverse.
The Zooniverse platform (www.zooniverse.org) allows crowd workers to support science projects at a larger scale by solving difficult tasks, and the potential of such contributions is impressive \cite{zooniimpressivecrowdpotential}; its diverse crowdsourcing landscape is why we have chosen this platform to serve as the basis for this research. So, there is an opportunity to tap into the potential of older adults as crowd workers with a lot to offer and time on their hands, especially as their share in society is increasing: in 2019, ``more than one fifth of the EU-27 population was aged 65 and over'' \cite{population_structure2020}.
The question of whether crowdsourcing tasks are effective in keeping older adults cognitively engaged is relevant, as volunteering activities \cite{morrow2010volunteering} in general may increase older adults' well-being \cite{morrow2003effects}, improve their mental and physical health \cite{lum2005effects} and can be seen as a protective factor for their psychological well-being \cite{greenfield2004formal,hao2008productive}, potentially delaying the onset of age-related issues \cite{kotteritzsch2014adaptive}. Therefore, in this study we want to gain insights into older adults' motivation and engagement with online citizen science tasks and uncover some guidelines for designing and presenting crowdsourcing citizen science tasks to this group. In designing our research we took care to uniformly present a wide range of real crowd-volunteering tasks often appearing in citizen science projects. Only after older adults had completed each task did we ask them how to improve it, and finally what would motivate them to engage with such tasks in the future.
\section{Methods}
In this study 33 older adults were asked to complete and evaluate 8 diverse, but standardized citizen science tasks at home, in an unsupervised environment. The study consisted of a short socio-demographic survey including questions about the participants' age, sex, education, activity, ICT-use, and crowdsourcing preferences based on Seong et al. \cite{crowdolder2020chi}. These questions were followed by a set of 8 different tasks chosen based on expert knowledge of the research team, localized into Polish and presented in a uniform way, broken into 4 pages - each page for a different type of a task. There were two tasks (one easier, and one more difficult/abstract) in each category of \textbf{image recognition (PIC)} for tasks T1 and T2, \textbf{audio recognition (AUD)} for tasks T3 and T4, \textbf{document transcription (DOC)} for tasks T5 and T6 and \textbf{pattern recognition (PAT)} for tasks T7 and T8, visible in Fig. \ref{visualoverview} in order. The tasks were selected out of 40 community-chosen projects active on the Zooniverse platform in the 2019-20 academic year and spotlit in the publication ``Into the Zooniverse Vol. II'',\footnote{The book is available for download here: https://blog.zooniverse.org/2020/11/17/into-the-zooniverse-vol-ii-now-available/} published on the 17th of November 2020.
The final standardized tasks were as follows:
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{images/visualoverviewofthetasksSmallforpub.png}
\caption{Visual overview of the tasks; T1-T8 from the left to the right.}
\label{visualoverview}
\vspace{-6mm}
\end{figure}
\begin{itemize}
\item \textbf{T1} Recognizing animal silhouettes - multiple choice of animal silhouettes, including human visible, no animal and other (representing MichiganZoomIn)
\item \textbf{T2} Recognizing cat fur types on cat images - multiple choice with abstract images of cat pelts with fur patterns (similar to image recognition tasks)
\item \textbf{T3} Recognizing radio programs (97s-long recording) - checkboxes and a follow-up open answer about specifics (representing Vintage Cuban Radio)
\item \textbf{T4} Recognizing local urban sounds (10s-long recording) - checkboxes with pre-defined answers and an other option (representing Sounds of NYC)
\item \textbf{T5} Transcribing key information from a hand-written birth certificate of a person born in 1887 - 4 open short answer questions about the dates, name, and the location (representing tasks such as Every Name Counts)
\item \textbf{T6} Transcribing a longer (346 characters) typewritten text on a specific subject - 1 open long answer question (representing tasks relying on longer transcription of typewritten documents)
\item \textbf{T7} Recognizing Aurora Borealis patterns (6s-long recording) - multiple choice question with names of patterns and colours (representing Aurora Zoo)
\item \textbf{T8} Recognizing eye elements in eye pictures on a coordinate grid: two drop-down questions about coordinates and one multiple choice on the visibility of veins. (based on Eye for Diabetes; image by Mikael Häggström \cite{hmedical2014})
\end{itemize}
There was a short standardized introduction to each task explaining its importance and purpose, so as not to bias the participants with the quality of the project presentation, which can vary considerably between projects. Then, the participants performed each example microtask. After each task we asked the participants to judge, on a 3-point scale, its:
\textbf{attractiveness}, \textbf{importance}, \textbf{ease of performing it}, \textbf{engagement}, and \textbf{if they would like to perform similar tasks in the future} and to suggest ways in which each task could be improved. The study protocol was positively evaluated by our ethics committee. The study itself was built in Google Forms and it took between 20-35 minutes to complete, depending on the amount of feedback given after each task and the ICT proficiency of the participants. Finally, after completing all tasks the participants were asked what could encourage them to engage in such tasks in general and whether they had done similar tasks in the past. The suggestions of motivating factors were inspired by an article by Campo et al.~\cite{campo_community_2019} as well as a wide body of research on crowdsourcing and volunteering.
\section{Results and Discussion}
\subsection{Participants}
There were 33 participants who completed and evaluated the chosen crowdsourcing tasks between Dec. 2020 and Feb. 2021. They were recruited from among the participants of our Living Lab \cite{kopec2017living} via e-mail as unpaid volunteers, as we did not want to interfere with their motivation through a financial incentive. 22 participants were in the 60-69 age group, 10 in the 70-79 group, and 1 in the 80+ group.\footnote{We chose to use multiple choice for age groups so as not to bias the participants with an assumption that the research was targeted at older adults.} All of them were based in Poland and were Polish; all but 6 of them came from larger cities (over 200k) and 21 of the participants had higher education. In Table \ref{tab:motipre} we can see the results concerning volunteering motivation before performing tasks for our 33 participants contrasted with results by Seong et al.~\cite{crowdolder2020chi}.
\begin{table}[h]
\centering
\scriptsize
\begin{tabular}{p{5cm}|p{1.8cm}|p{1.8cm}|p{1.5cm}|p{1.5cm}|}
& \multicolumn{2}{p{3.6cm}|}{What would encourage you to engage with online or offline volunteer projects? n=33} & \multicolumn{2}{p{3cm}|}{Values older adults wanted from game experience \cite{crowdolder2020chi} n=12} \\ \hline
& No. of P. & \% of P. & No. of P. & \% of P. \\ \hline
Physical improvement & 10 & 30.3\% & 3 & 25.0\% \\ \hline
Cognitive improvement & 15 & 45.5\% & 4 & 33.3\% \\ \hline
Opportunity to learn something new & 26 & 78.8\% & 5 & 41.7\% \\ \hline
Opportunity to communicate and interact with people & 14 & 42.4\% & 8 & 66.7\% \\ \hline
Opportunity to participate and contribute to society & 11 & 33.3\% & 4 & 33.3\% \\ \hline
None of the above & 4 & 12.1\% & - & - \\ \hline
\end{tabular}
\caption{Motivations of volunteer participants before the volunteer experience.}
\label{tab:motipre}
\vspace{-6mm}
\end{table}
Our participants use the following devices: 28 use a smartphone, 25 a laptop, 18 a desktop PC, 13 a tablet, 8 a SmartTV, 4 a smartwatch or smartband, and 2 a VR headset. They are also avid Internet users, as 28 of them use the Internet either a few times a day or every day, and only 5 a few times a week or less often. As such, our participant group would be a good target for online volunteering and crowdsourcing tasks. Yet, after having completed the study, 28 participants reported that they had never done similar tasks before, 3 of them said they did similar tasks at work and 2 did such tasks while volunteering.
\subsection{Performance and Feedback}
\subsubsection{Image Recognition Tasks}
In \textbf{T1} 26 participants correctly identified the animal silhouette, 4 pointed to other silhouettes, 1 answered that there was no animal present, while 2 more chose the "other" option, giving in one case the name of the animal and in the other a more detailed description of it. After completion 2 participants suggested having a video instead, 2 others wished the task was more challenging, while 1 complained the question was imprecise. Additionally, 1 person wished there were more animals to spot and "better hidden". In \textbf{T2} there was less agreement, with 13 people choosing pattern 3, 7 each voting for patterns 1 and 2, 2 for pattern 4, 3 saying it is "hard to tell" and 1 deciding it was some "other" pattern. The suggestions were to have "a different view of the cat in the picture" (1) and a "couple of different pictures" (2), and comments appeared that "if someone does not like cats nothing can improve this task", but also "I liked it, even more so, because I like cats".
\subsubsection{Audio Recognition Tasks}
In both \textbf{T3} and \textbf{T4} our participants had no trouble listening to the recordings. In both tasks the majority of participants successfully identified the key audio elements (in T3: "many male voices" (31); in T4: "bells" (28) and "traffic" (25)). In T4 about half identified other elements ("birds singing", "people talking", "music"), while only one person noticed "barking", and one indicated "there was nothing specific" in the recording. Additionally, over half of the participants (18) chose to provide an additional comment about the exact content of the radio recording from \textbf{T3}. For two people the \textbf{T3} recording was too short, for another it was too long, and one wished it was accompanied by visuals. The feedback for \textbf{T4} was to have a longer recording (5); 1 person also wished for "more variety, to make it more difficult, but also more interesting" and to show a visual connected to the sound. One participant admitted that they "heard birds singing only upon the second hearing" but they were not sure.
\subsubsection{Document Transcription Tasks}
Both transcription tasks were done very well. In \textbf{T5} only one participant marked the text of the birth certificate as illegible, and only in 1 out of 4 fields, while two people provided only the first name of the person. Among the others there is almost perfect agreement about what the text says, with a varying level of detail for the name of the place and date notation. Additionally, one person provided a full transcription of the document, even though the task did not require it. Suggestions were to have more information about the person in the birth certificate (2) and to have more similar documents (1); the participant suggesting the latter said that they had "transcribed 247 pieces of disappearing poetry before", are experienced, and are now working on transcribing "very difficult historical letters". Another participant suggested having documents related to the participants' own personal history (1). In \textbf{T6} most people (22) provided a complete transcription of the 346-character long text; on top of the transcription one person commented "(placing this dot here is incorrect - transcriber's comment)", while 3 wrote that the text was illegible, and 6 provided an incomplete transcription, of whom 2 added that the text was illegible. Additionally, 2 wrote that it was unintelligible. Two people wished for a more challenging text with a harder to read font, and one said other types of content interest them. Another comment was "For those who have not been involved in reading old manuscripts and other documents, this is a remarkably interesting activity (...) engaging and motivating, others will put it off or give up. I like it, it draws you in", while another participant mentioned that "such tasks require patience, they are not for everybody".
\subsubsection{Pattern Recognition Tasks}
In \textbf{T7} the count of choices was 12, 7, 6, 3, 3 for the dominant aurora pattern, while one person said that it is "hard to tell" and one person saw no aurora in the video. When asked about the colour, 30 people agreed it was green, and over half added other colours (yellow, violet, blue and pink). Here four people suggested having a longer video, one noting that "one would like to look longer, as we don't have that here and it is very interesting". In \textbf{T8} all participants (33) correctly identified the section coordinates with the described features; 26 said that the veins are clearly visible, while 7 claimed the contrary. One participant suggested that a longer analysis would improve this task, one was not sure where the macula of the retina was, and another wished for an analysis of some other organ.
\subsubsection{Summary}
Overall, the older adults from our study in most cases provided high quality contributions with no training. Only T2 and T7 proved to be somewhat challenging, and for these the participants asked for more data. Many wished for other tasks to be more challenging (harder font (T6), more audio variety (T4), more and better hidden animals (T1), longer analysis (T8)), and the "easy" dimension had the weakest correlations with willingness to do similar tasks in the future. It seems that older adults would not mind, and even preferred it, if the tasks posed more of a challenge (e.g., T5 vs. T6), especially if it would allow them to learn something interesting. They also wished for the shorter tasks to be extended, either by additional data (T1, T2), additional steps (T8) or longer duration (T7, T3, T4), not only because they enjoyed them and wanted to learn more, but also to allow them to provide higher quality contributions by adding more data to verify their choices. It seems therefore, that microtasks, designed to be brief for efficiency, could be extended and elaborated upon to increase the contributors' satisfaction, especially if they rely on image, video or audio data.
\subsection{Evaluation of Tasks}
Participants rated T8 the highest, while T2 the lowest. They distributed most points in the category "I would do similar tasks" (380) followed by "easy" (370), "attractive" (363), "engaging" (328) and "important" (283). Our participants, once exposed to each task, reported a high willingness to engage with similar tasks (with an average of 1.44 out of 2), suggesting that older adults would engage with such tasks more if they were made more easily available to them.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{images/n33best.PNG}
\caption{Left: Total points awarded by our participants after the completion of each task. Right: Average scores for the same tasks.}
\label{totalpoints}
\end{figure}
As seen in Fig. \ref{matrix}, the correlations with the willingness to do similar tasks in the future are either positive or close to zero, while the strongest correlation is with the visual or thematic "attractiveness" of the task. It was also slightly important whether the task was "engaging" or "important", especially if it was not found to be "attractive", and to a lesser extent if it was "easy". This suggests that older adults' main motivation is rather intrinsic, connected to their own interest in the task, which of course is moderated by other variables.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{images/n33pearspear.PNG}
\caption{Correlation matrix for the dimension "I would do similar tasks" and other dimensions, from older adults' evaluation of the tasks right after performing them.}
\label{matrix}
\vspace{-8mm}
\end{figure}
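The kind of dimension-level correlation analysis reported above can be sketched in a few lines. The ratings below are hypothetical placeholders, not our study data; Pearson correlation is computed directly, and Spearman's rho is the same computation applied to the ranks of the ratings.

```python
import numpy as np

# Hypothetical 3-point-scale ratings (0-2) for one task from six annotators;
# the two dimensions mirror "I would do similar tasks" and "attractive".
would_do_similar = np.array([2, 1, 2, 0, 2, 1], dtype=float)
attractive = np.array([2, 1, 2, 1, 2, 0], dtype=float)

# Pearson correlation between the two rating dimensions.
r = float(np.corrcoef(would_do_similar, attractive)[0, 1])
# Spearman's rho would be obtained by first replacing each rating
# with its rank and then applying the same formula.
```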
\subsection{Motivation}
After having completed all of the tasks, the participants chose learning something new and information about the purpose of performing these tasks as the prevailing motivators. They would also like to receive feedback on their performance and to have detailed tutorials. Detailed results are reported in Table \ref{tab:motipost}. Moreover, the ability to perform these tasks using interfaces other than a computer screen (smartphone, Smart TV or audio) was judged as not particularly important. However, this may be due to lack of familiarity with audio and TV devices, as there are studies which successfully implement them for crowdsourcing \cite{crowdtaskerchi2020,skorupska2018smarttv,kowalski2019voice}, and the challenge of small-screen interaction for smartphones \cite{olderscreeninter2011} is still relevant.
\begin{table}[!htbp]
\scriptsize
\begin{tabular}{l|l|l}
& No. of P. & \% of P. \\ \hline
The opportunity to learn something interesting while performing these tasks & 24 & 72.7\% \\ \hline
More knowledge about the purpose of performing these tasks & 24 & 72.7\% \\ \hline
Receiving feedback on the use and usefulness of the tasks performed & 21 & 63.6\% \\ \hline
A short training to make sure I do the tasks well & 17 & 51.5\% \\ \hline
More interesting topic of tasks & 9 & 27.3\% \\ \hline
Online support and contact with other people performing these tasks & 9 & 27.3\% \\ \hline
Tasks suited to my skills & 9 & 27.3\% \\ \hline
Training and personal meetings for those performing the tasks & 7 & 21.2\% \\ \hline
Statistics showing the number of already completed tasks & 7 & 21.2\% \\ \hline
The ability to perform these tasks on the TV screen with the remote control & 6 & 18.2\% \\ \hline
Thanks from the researchers & 6 & 18.2\% \\ \hline
Ability to perform these tasks on a smartphone & 5 & 15.1\% \\ \hline
Ability to perform these tasks using the voice interface & 0 & 0.0\% \\ \hline
"None of the above" and "Other (own answer)" & 0 & 0.0\% \\ \hline
\end{tabular}
\caption{Answers to "Which of these elements would encourage you to perform tasks similar to the sample tasks in this survey?" asked after completing all of the tasks.}
\label{tab:motipost}
\vspace{-6mm}
\end{table}
\section{Conclusions}
In this exploratory research we have verified that crowdsourcing microtasks, especially those appearing in citizen science projects, can be well-suited for some groups of older adults - both in terms of the quality of older adults' contributions and their motivation. Yet, even among older adults with average and higher ICT skills - sufficient to contribute to such projects, such as the participants in our sample - the awareness of the existence of such crowdsourcing projects is quite low, as such citizen science tasks are not easily found and sampled. Older adults as a group are often overlooked as potential contributors to larger scale crowdsourcing projects due to their often lower willingness to engage online and the perception of their ICT skills. However, the older adults in our study, who received no compensation, provided high quality contributions with little training and were open to continue volunteering online.
To increase participation, and thus the representation of this age group's voice in citizen science, we suggest that crowdsourcing tasks ought to be advertised in line with older adults' preferences. These are related to the way in which completing these tasks may benefit, first, them individually, and then, the society as a whole. Based on our research, crowdsourcing microtasks' presentation should focus on the aspect of learning something interesting (which was confirmed by an arithmetic mean correlation of 0.47 for "I would do similar tasks" and "Attractive, thematically or visually"), rather than the aspect of being able to utilize one's existing skills and knowledge. The contributors should also be provided with a high awareness of the tasks' purpose and ought to be made aware of the usefulness of their individual contributions, to reassure the participants that it was time well spent. The tasks could also be more elaborate, to provide an appropriate challenge and increase immersion. Hence, in future research we would also like to examine a wider range of tasks of increasing complexity and duration, as well as the effects of engaging in crowdsourcing on participants' physical, mental or cognitive well-being in further comparative longitudinal research with larger groups of participants of all ages.
\bibliographystyle{splncs04}
\section{Introduction}
While Machine Learning technologies are increasingly used in a wide variety of domains ranging from critical systems to everyday consumer products, currently only a small group of people with formal training possess the skills to develop these technologies. Supervised ML, the most common type of ML technology, is typically trained with knowledge input in the form of labeled instances, often produced by subject matter experts (SMEs). The current ML development process presents at least two problems. First, the work to produce thousands of instance labels is tedious and time-consuming, and can impose high development costs. Second, the acquisition of human knowledge input is isolated from other parts of ML development, and often has to go through asynchronous iterations with data scientists as the mediator. For example, seeing suboptimal model performance, a data scientist has to spend extensive time obtaining additional labeled data from the SMEs, or gathering other feedback which helps in feature engineering or other steps in the ML development process~\cite{amershi2014power,brooks2015featureinsight}.
The research community and technology industry are working toward making ML more accessible through the recent movement of ``democratizing data science''~\cite{chou2014democratizing}. Among other efforts, interactive machine learning (iML) is a research field at the intersection of HCI and ML. iML work has produced a variety of tools and design guidelines~\cite{amershi2014power} that enable SMEs or end users to interactively drive the model towards desired behaviors so that the need for data scientists to mediate can be relieved. More recently, a new field of ``machine teaching" was called for to make the process of developing ML models as intuitive as teaching a student, with its emphasis on supporting ``the teacher and the teacher's interaction with
data''~\cite{simard2017machine}.
The technical ML community has worked on improving the efficiency of labeling work, for which Active Learning (AL) has become a vibrant research area. AL could reduce the labeling workload by having the model select instances to query a human annotator for labels. However, the interfaces to query human input are minimal in current AL settings, and there is surprisingly little work that has studied how people interact with AL algorithms. Algorithmic work of AL assumes the human annotator to be an oracle that provides error-free labels~\cite{settles2009active}, while in reality annotation errors are commonplace and can be systematically biased by a particular AL setting. Without understanding and accommodating these patterns, AL algorithms can break down in practice. Moreover, this algorithm-centric view gives little attention to the needs of the annotators, especially their needs for transparency~\cite{amershi2014power}. For example, the "stopping criterion", i.e., knowing when to complete the training with confidence, remains a challenge in AL, since the annotator is unable to monitor the model's learning progress. Even if performance metrics calculated on test data are available, it is difficult to judge whether the model will generalize in the real-world context or is bias-free.
Meanwhile, the notion of model transparency has moved beyond the scope of descriptive characteristics of the model studied in prior iML work (e.g., output, performance, features used~\cite{kulesza2015principles,rosenthal2010towards, fails2003interactive,fogarty2008cueflik}). Recent work in the field of explainable AI (XAI)~\cite{gunning2017explainable} focuses on making the \textit{reasoning} of model decisions understandable by people of different roles, including those without formal ML training. In particular, \textit{local explanations} (e.g.~\cite{lundberg2017unified,ribeiro2016should}) are a cluster of XAI techniques that explain how the model arrived at a particular decision. Although researchers have only begun to examine how people actually interact with AI explanations, we believe explanations should be a core component of the interfaces to teach learning models.
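To make the idea of a feature importance based local explanation concrete, consider a minimal sketch of our own (not a specific method such as LIME or SHAP): for a linear classifier, the per-feature contribution $w_i x_i$ to the decision score already constitutes an instance-specific explanation that could be shown to an annotator. The feature names and values below are hypothetical.

```python
import numpy as np

def local_explanation(weights, bias, x, feature_names):
    """Per-feature contributions to a linear classifier's score for one
    instance (contribution_i = w_i * x_i), ranked by absolute magnitude."""
    contributions = weights * x
    score = float(contributions.sum() + bias)
    ranked = sorted(zip(feature_names, contributions.tolist()),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return score, ranked

# Illustrative model and instance (all values are made up).
weights = np.array([2.0, -1.5, 0.2])
x = np.array([1.0, 2.0, 0.5])
score, ranked = local_explanation(weights, 0.1, x, ["age", "income", "tenure"])
# ranked[0] is ("income", -3.0): the feature pushing hardest against a
# positive prediction, i.e., what the annotator would be shown first.
```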
Explanations play a critical role in human teaching and learning~\cite{wellman2004theory,meyer1997consensually}. Prompting students to generate explanations for a given answer or phenomenon is a common teaching strategy to deepen students' understanding. The explanations also enable the teacher to gauge the students' grasp of new concepts, reinforce successful learning, correct misunderstanding, repair gaps, as well as adjust the teaching strategies~\cite{lombrozo2012explanation}. Intuitively, the same mechanism could enable machine teachers to assess the model logic, oversee the machine learner's progress, and establish trust and confidence in the final model. Well-designed explanations could also allow people without ML training to access the inner working of the model and identify its shortcomings, thus potentially reducing the barriers to provide knowledge input and enriching teaching strategies, for example by giving direct feedback for the model's explanations.
Toward this vision of ``machine teaching through model explanations'', we propose a novel paradigm of \textit{explainable active learning} (XAL), by providing local explanations of the model's predictions of selected instances as the interface to query an annotator's knowledge input. We conduct an empirical study to investigate how local explanations impact the annotation quality and annotator experience. It also serves as an elicitation study to explore how people naturally want to teach a learning model with its explanations. The contributions of this work are threefold:
\begin{itemize}
\item We provide insights into the opportunities for explainable AI (XAI) techniques as an interface for machine teaching, specifically feature importance based local explanation. We illustrate both the benefits of XAI for machine teaching, including supporting trust calibration and enabling rich teaching feedback, and challenges that future XAI work should tackle, such as anchoring judgment and cognitive workload. We also identify important individual factors mediating one's reception to model explanations in the machine teaching context, including task knowledge, AI experience and Need for Cognition.
\item We conduct an in-depth empirical study of interaction with an active learning algorithm. Our results highlight several problems faced by annotators in an AL setting, such as the increasing challenge of providing correct labels as the model matures and selects more uncertain instances, the difficulty of knowing when to stop with confidence, and the desire to provide knowledge input beyond labels. We claim that some of these problems can be mitigated by explanations.
\item We propose a new paradigm to teach ML models, \textit{explainable active learning (XAL)}, that has the model selectively query the machine teacher, and meanwhile allows the teacher to understand the model's reasoning and adjust their input. The user study provides a systematic understanding on the feasibility of this new model training paradigm. Based on our findings, we discuss future directions of technical advancement and design opportunities for XAL.
\end{itemize}{}
In the following, we first review related literature, then introduce the proposal for XAL, research questions and hypotheses for the experimental study. Then we discuss the XAL setup, methodology and results. Finally, we reflect on the results and discuss possible future directions.
\section{Related work}
\label{literature}
Our work is motivated by prior work on AL, interactive machine learning and explainable AI.
\subsection{Active learning}
The core idea of AL is that if a learning algorithm intelligently selects instances to be labeled, it could perform well with much less training data~\cite{settles2009active}. This idea resonates with the critical challenge in modern ML, that labeled data are time-consuming and expensive to obtain~\cite{zhu2005semi}. AL can be used in different scenarios, such as stream-based~\cite{cohn1994improving} (from a stream of incoming data), pool-based~\cite{lewis1994sequential} (from a large set of unlabeled instances), etc.~\cite{settles2009active}. To select the next instance for labeling, multiple query sampling strategies have been proposed in the literature \cite{qbc, qbc2, unc, dasgupta2008hierarchical, quire, entropy, confidence}. Most commonly used is \textit{uncertainty sampling} \cite{unc, entropy, confidence, margin}, which selects instances the model is most uncertain about. Different AL algorithms exploit different notions of uncertainty, e.g. entropy \cite{entropy}, confidence \cite{confidence}, margin \cite{margin}, etc.
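The three notions of uncertainty above can be sketched in a few lines. The following is an illustrative implementation for pool-based sampling, not code from any particular AL library; it assumes the model exposes class probabilities for each unlabeled instance.

```python
import numpy as np

def uncertainty_scores(probs: np.ndarray) -> dict:
    """Three common uncertainty measures for pool-based active learning.

    probs: (n_instances, n_classes) predicted class probabilities.
    Higher score = more uncertain = better query candidate.
    """
    # Least confidence: 1 minus the probability of the most likely class.
    least_confidence = 1.0 - probs.max(axis=1)
    # Margin: a small gap between the top two classes means high uncertainty,
    # so the gap is negated to keep "higher = more uncertain".
    sorted_probs = np.sort(probs, axis=1)
    margin = -(sorted_probs[:, -1] - sorted_probs[:, -2])
    # Entropy of the full predictive distribution.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return {"confidence": least_confidence, "margin": margin, "entropy": entropy}

# Pick the next instance to query under entropy sampling.
pool_probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
scores = uncertainty_scores(pool_probs)
query_idx = int(np.argmax(scores["entropy"]))  # instance 1 is most uncertain
```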
While the original definition of AL is concerned with instance labels, it has been broadened to query other types of knowledge input. Several works explored querying feedback for features, such as asking whether the presence of a feature is an indicator for the target concept~\cite{raghavan2006active,druck2009active,settles2011closing}. For example, DUALIST~\cite{settles2011closing} is an active learning tool that queries annotators for labels of both instances (e.g., whether a text document is about ``baseball'' or ``hockey'') and features (which keywords, if appeared in a document, are likely indicators that the document is about ``baseball''). Other AL paradigms include \textit{active class selection}~\cite{lomasky2007active} and \textit{active feature acquisition}~\cite{zheng2002active}, which query the annotator for additional training examples and missing features, respectively.
Although AL by definition is an interactive annotation paradigm, the technical ML community tends to simply assume the human annotators to be mechanically queried oracles. The above-mentioned AL algorithms were mostly experimented with simulated human input providing error-free labels. But labeling errors are inevitable, even for simple perceptual judgment tasks~\cite{cheng2015measuring}. Moreover, in reality, the targeted use cases for AL are often ones where high-quality labels are costly to obtain either because of knowledge barriers or effort to label. For example, AL can be used to solicit users' labels for their own records to train an email spam classifier or context-aware sensors ~\cite{kapoor2010interactive,rosenthal2010towards}, but a regular user may lack the knowledge or contextual information to make all judgments correctly. Many have criticized the unrealistic assumptions that AL algorithms make. For example, by solving a multi-instance, multi-oracle optimization problem, \textit{proactive learning}~\cite{donmez2008proactive} relaxes the assumptions that the annotator is infallible, indefatigable (always answers with the same level of quality), individual (only one oracle), and insensitive to costs.
Despite the criticism, we have a very limited understanding on how people actually interact with AL algorithms, hindering our ability to develop AL systems that perform in practice and provide a good annotator experience. Little attention has been given to the annotation interfaces, which in current AL works are undesirably minimal and opaque. To our knowledge, there has been little HCI work on this topic. One exception is in the field of human-robot interaction (HRI), where AL algorithms were used to develop robots that continuously learn by asking humans questions~\cite{cakmak2010designing,cakmak2012designing,chao2010transparent,gonzalez2014asking,saponaro2011generation}. In this context, the robot and its natural-language queries \textit{is} the interface for AL. For example, Cakmak et al. explored robots that ask three types of AL queries~\cite{cakmak2010designing,cakmak2012designing}: instance queries, feature queries and demonstration queries. The studies found that people were more receptive of feature queries and perceived robots asking about features to be more intelligent. The study also pointed out that a constant stream of queries led to a decline in annotators' situational awareness~\cite{cakmak2010designing}. Such empirical results challenged the assumptions made by AL algorithms, and inspired follow-up work proposing mixed-initiative AL: the robot only queries when certain conditions are met, e.g., following an uninformative label. Another relevant study by Rosenthal and Dey~\cite{rosenthal2010towards} looked at information design for an intelligent agent that queries labels to improve its classification. They found that contextual information, such as keywords in a text document or key features in sensor input, and providing the system's prediction (so people only need to confirm or reject labels) improved labeling accuracy.
Although this work cited the motivation for AL, the study was conducted with an offline questionnaire without interacting with an actual AL algorithm.
We argue that it is necessary to study annotation interactions with a real-time AL algorithm
because temporal changes are key characteristics of AL settings. With an interactive learning algorithm, every annotation impacts the subsequent model behaviors, and the model should become better aligned with the annotator's knowledge over time. Moreover, systematic changes could happen in the process in both the type of queried instances, depending on the sampling strategy, and the annotator behaviors, for example fatigue~\cite{settles2011closing}. These complex patterns could only be understood by holistically studying the annotation and the evolving model in real time.
Lastly, it is a nontrivial issue to understand how annotator characteristics impact their reception to AL system features. For example, it would be instrumental to understand what system features could narrow the performance gaps of people with different levels of domain expertise or AI experience, thus reducing the knowledge barriers to teach ML models.
\subsection{Interactive machine learning}
Active learning is sometimes considered a technique for iML. iML work is primarily motivated by enabling non-ML-experts to train an ML model
through ``rapid, focused, and incremental model updates''~\cite{amershi2014power}. However, conventional AL systems, with a minimal interface asking for labels, lack the fundamental element in iML--a tight interaction loop that transparently presents how every human input impacts the model, so that the non-ML-experts could adapt their input to drive the model into desired directions~\cite{amershi2014power,fails2003interactive}. Our work aims to move AL in that direction.
Broadly, iML encompasses all kinds of ML tasks including supervised ML, unsupervised ML (e.g., clustering~\cite{choo2013utopian,smith2018closing}) and reinforcement learning~\cite{cakmak2010designing}. To enable interactivity, iML work has to consider two coupled aspects: \textit{what information} the model presents to people, and \textit{what input} people give to the model. Most iML systems present users with \textit{performance} information as impacted by their input, either performance metrics~\cite{kapoor2010interactive,amershi2015modeltracker} or model output, for example by visualizing the output for a batch of instances~\cite{fogarty2008cueflik} or allowing users to select instances to inspect. An important lesson from the bulk of iML work is that users value \textit{transparency} beyond performance~\cite{rosenthal2010towards,kulesza2013too}, such as descriptive information about how the algorithm works or what features are used~\cite{kulesza2015principles,rosenthal2010towards}. Transparency is found to not only improve users' mental model of the learning model, and hence the effectiveness of their input, but also their satisfaction with the interaction outcomes~\cite{kulesza2013too}.
iML research has studied a variety of user inputs into the model, such as providing labels and training examples~\cite{fails2003interactive}, as well as specifying model and algorithm choice~\cite{talbot2009ensemblematrix}, parameters, error preferences~\cite{kapoor2010interactive}, etc. A promising direction for iML to outperform traditional approaches to training ML models is to enable feature-level human input. Intuitively, direct manipulation of model features represents a much more efficient way to inject domain knowledge into a model~\cite{simard2017machine} than providing labeled instances. For example, FeatureInsight~\cite{brooks2015featureinsight} supports ``feature ideation'' for users to create dictionary features (semantically related groups of words) for text classification. EluciDebug~\cite{kulesza2015principles} allows users to add, remove and adjust the learned weights of keywords for text classifiers. Several interactive topic modeling systems allow users to select keywords or adjust keyword weights for a topic~\cite{choo2013utopian,smith2018closing}. Although the empirical results on whether feature-level input from end users improves performance per se have been mixed~\cite{kulesza2015principles,ahn2007open,wu2019local,stumpf2009interacting}, the consensus is that it is more efficient (i.e., requires fewer user actions) to achieve results comparable to instance labeling, and that it can produce models better aligned with an individual's needs or knowledge about a domain.
It is worth pointing out that all of the above-mentioned iML and AL systems supporting feature-level input are for text-based models~\cite{settles2011closing,raghavan2006active,stumpf2007toward,smithno,kulesza2015principles}. We suspect that, besides algorithmic interest, the reason is that it is much easier for lay people to consider keywords as top features for text classifiers than for other types of data. For example, one may come up with keywords that are likely indicators of the topic ``baseball'', but it is challenging to rank the importance of attributes in a tabular database of job candidates. One possible solution is to allow people to access the model's own reasoning with features and then make incremental adjustments. This idea underlies recent research into visual analytics tools that support debugging or feature engineering work~\cite{krause2016interacting,hohman2019gamut,wexler2019if}. However, their targeted users are data scientists, who would then go back to the model development mode. Non-ML-experts would need more accessible information to understand the inner workings of the model and to provide direct input that does not require heavy programming or modeling work. Therefore, we propose to leverage recent developments in the field of explainable AI as interfaces for non-ML-experts to understand and teach learning models.
\subsection{Explainable AI}\label{literature}
The field of explainable AI (XAI)~\cite{gunning2017explainable,guidotti2019survey}, often referred to interchangeably as interpretable machine learning~\cite{carvalho2019machine,doshi2017towards}, started as a sub-field of AI that aims to produce methods and techniques that make AI's decisions understandable to people. The field has surged in recent years as complex and opaque AI technologies such as deep neural networks are now widely used. Explanations of AI are sought for various reasons, such as by regulators to assess model compliance, or by end users to support their decision-making~\cite{zhang2020effect,liao2020questioning,tomsett2018interpretable}. Most relevant to our work, explanations allow model developers to detect a model's faulty behaviors and evaluate its capability, fairness, and safety~\cite{doshi2017towards,dodge2019explaining}. Explanations are therefore increasingly incorporated in ML development tools supporting debugging tasks such as performance analysis~\cite{ren2016squares}, interactive debugging~\cite{kulesza2015principles}, feature engineering~\cite{krause2014infuse}, instance inspection and model comparison~\cite{hohman2019gamut,zhang2018manifold}.
There have been many recent efforts to categorize the ever-growing collection of explanation techniques~\cite{guidotti2019survey,mohseni2018multidisciplinary,anisi03,lim2019these,wang2019designing,lipton2018mythos,arya2019one}. We focus on those explaining ML classifiers (as opposed to other types of AI systems such as planning~\cite{chakraborti2020emerging} or multi-agent systems~\cite{rosenfeld2019explainability}). Guidotti et al. summarized the many forms of explanations as solving three categories of problems: \textit{model explanation} (on the whole logic of the classifier), \textit{outcome explanation} (on the reasons for a decision on a given instance) and \textit{model inspection} (on how the model behaves when changing the input). The first two categories, model and outcome explanations, are also referred to as \textit{global} and \textit{local} explanations~\cite{lipton2018mythos,mohseni2018multidisciplinary,arya2019one}. The HCI community has defined explanation taxonomies based on different types of user needs, often referred to as intelligibility types~\cite{lim2009and,lim2019these,liao2020questioning}. Based on Lim and Dey's foundational work~\cite{lim2009and,lim2010toolkit}, intelligibility types can be represented by prototypical user questions to understand the AI, including inputs, outputs, certainty, why, why not, how to, what if and when. A recent work by Liao et al.~\cite{liao2020questioning} attempted to bridge the two streams of work by mapping the user-centered intelligibility types to existing XAI techniques. For example, global explanations answer the question ``\textit{how} does the system make predictions'', local explanations respond to ``\textit{why} is this instance given this prediction'', and model inspection techniques typically address \textit{why not}, \textit{what if} and \textit{how to}.
Our work leverages local explanations to accompany AL algorithms' instance queries. Compared to other approaches, including example-based and rule-based explanations~\cite{guidotti2019survey}, \textit{feature importance}~\cite{ribeiro2016should,guidotti2019survey} is the most popular form of local explanation. It justifies the model's decision for an instance by the instance's important features indicative of the decision (e.g., ``because the patient shows symptoms of sneezing, the model diagnosed him as having a cold''). Local feature importance can be generated by different XAI algorithms depending on the underlying model and data. Some algorithms are model-agnostic~\cite{ribeiro2016should,lundberg2017unified}, making them highly desirable and popular techniques. Local importance can be presented to users in different formats~\cite{lipton2018mythos}, such as described in texts~\cite{dodge2019explaining}, or by visualizing the importance values~\cite{poursabzi2018manipulating,cheng2019explaining}.
While recent studies of XAI often found explanations to improve users' understanding of AI systems~\cite{cheng2019explaining,kocielnik2019will,buccinca2020proxy}, empirical results regarding their impact on users' subjective experience, such as trust~\cite{cheng2019explaining,poursabzi2018manipulating,zhang2020effect} and acceptance~\cite{kocielnik2019will}, have been mixed. One issue, as some argued~\cite{zhang2020effect}, is that explanation is not meant to enhance trust or satisfaction, but rather to appropriately \textit{calibrate} users' perceptions to the model quality. If the model is under-performing, explanations should work towards exposing the algorithmic limitations; if a model is on par with the expected capability, explanations should help foster confidence and trust. Calibrating trust is especially important for AL settings: if explanations help the annotator appropriately increase their trust and confidence as the model learns, they could improve satisfaction with the teaching outcome and support confidently applying stopping criteria (knowing when to stop). Meanwhile, how people react to flawed explanations generated by early-stage, naive models, and to changing explanations as the model learns, remain open questions~\cite{smithno}. We will empirically answer these questions by comparing annotation experiences in two snapshots of an AL process: an \textit{early stage} annotation task with the initial model, and a \textit{late stage} when the model is close to the stopping criteria.
On the flip side, explanations present additional information and risk overloading users~\cite{narayanan2018humans}, although some showed that their benefit justifies the additional effort~\cite{kulesza2015principles}. Explanations were also found to incur over-reliance~\cite{stumpf2016explanations,poursabzi2018manipulating}, which makes people less inclined or able to scrutinize an AI system's errors. It is possible that explanations could bias, or \textit{anchor}, annotators' judgment to the model's. While anchoring judgment is not necessarily counter-productive if the model predictions are competent, we recognize that the most popular sampling strategy of AL--uncertainty sampling--focuses on instances the model is most uncertain of. To test this, it is necessary to decouple the potential anchoring effects of the model's predictions~\cite{rosenthal2010towards} and of the model's explanations, as an XAL setting entails both. Therefore, we compare the model training results with XAL to two baseline conditions: traditional AL and \textit{coactive learning} (CL)~\cite{shivaswamy2015coactive}. CL is a sub-paradigm of AL in which the model presents its predictions and the annotator is only required to make corrections if necessary. CL is favored for reducing annotator workload, especially when annotator availability is limited.
Last but not least, recent XAI work emphasizes that there is no ``one-fits-all'' solution and different user groups may react to AI explanations differently~\cite{arya2019one,liao2020questioning,dodge2019explaining}. Identifying individual factors that mediate the effect of AI explanation could help develop more robust insights to guide the design of explanations.
Our study provides an opportunity to identify key individual factors that mediate the preferences for model explanations in the machine teaching context. Specifically, we study the effect of \textit{Task (domain) Knowledge} and \textit{AI Experience} to test the possibilities of XAL for reducing knowledge barriers to train ML models. We also explore the effect of \textit{Need for cognition}~\cite{cacioppo1982need}, defined as an individual's tendency to engage in thinking or complex cognitive activities. Need for cognition has been extensively researched in social and cognitive psychology as a mediating factor for how one responds to cognitively demanding tasks (e.g.~\cite{cacioppo1983effects,haugtvedt1992personality}). Given that explanations present additional information, we hypothesize that individuals with different levels of Need for Cognition could have different responses.
\section{Explainable Active Learning and Research Questions}
We propose \textit{explainable active learning (XAL)} by combining active learning and \textit{local explanations}, which fits naturally into the AL workflow without requiring additional user input: instead of opaquely requesting instance labels, the model presents its own decision accompanied by its explanation for the decision, answering the question ``\textit{why} am I giving this instance this prediction''. It then requests the annotator to confirm or reject. For the user study, we make the design choice of explaining AL with \textit{local feature importance} instead of other forms of local explanations (e.g., example- or rule-based explanations~\cite{guidotti2019survey}), given the former approach's popularity and intuitiveness--it reflects how the model weighs different features and gives people direct access to the inner workings of the model. We also make the design choice of presenting local feature importance with a visualization (Figure~\ref{fig:interface_2}) instead of in text, in the hope of improving reading efficiency.
Our idea differs from prior work on feature-querying AL and iML in two aspects. First, we present the model's own reasoning for a particular instance to solicit user feedback, instead of requesting global feature weights from people~\cite{settles2011closing,raghavan2006active,kulesza2015principles,brooks2015featureinsight}. Recent work demonstrated that, while ML experts may be able to reason with model features globally, lay people prefer local explanations grounded in specific cases~\cite{arya2019one,kulesza2013too,hohman2019gamut,kulesza2011oriented}. Second, we look beyond the text-based models of existing work, as discussed above, and consider a generalizable form of explanation--visualizing local feature importance. While we study XAL in a setting of tabular data, this explanation format can be applied to any type of data with model-agnostic explanation techniques (e.g.~\cite{ribeiro2016should}).
At a high level, we posit that this paradigm of presenting explanations and requesting feedback better mimics how humans teach and learn, bringing transparency to the annotation experience. Explanations can also potentially improve the teaching quality in two ways. First, it is possible that explanations make it easier for one to reject a faulty model decision and thus provide better labels, especially in challenging situations where the annotator lacks contextual information or complete domain knowledge~\cite{rosenthal2010towards}. Second, explanations could enable new forms of teaching feedback based on the explanation. These benefits were discussed in a very recent paper by Teso and Kersting~\cite{teso2018should}, which explored soliciting corrections for the model's explanation, specifically feedback that a mentioned feature should be considered irrelevant instead. This correction feedback is then used to generate counter-examples as additional training data, which are identical to the instance except for the mentioned feature. While this work is closest to our idea, it lacked empirical studies of how adding explanations impacts AL interactions.
We believe a user study is necessary for two reasons. First, accumulating evidence, as reviewed in the previous section, suggests that explanations have both benefits and drawbacks relevant to an AL setting, which merit a user study to test the feasibility of the approach. Second, a design principle of iML recommends that algorithmic advancement should be driven by people's natural tendency to interact with models~\cite{amershi2014power,cakmak2012designing,stumpf2009interacting}. Instead of fixing on a type of input as in Teso and Kersting~\cite{teso2018should}, an \textit{interaction elicitation study} can map out desired interactions for people to teach models based on their explanations, and then inform algorithms that are able to take advantage of these interactions. A notable work by Stumpf et al.~\cite{stumpf2009interacting} conducted an elicitation study for interactively improving text-based models, and developed new training algorithms for Naïve Bayes models. Our study explores how people naturally want to teach a model with a local-feature-importance visualization, a popular and generalizable form of explanation. Based on the above discussions, this paper sets out to answer the following research questions and test the following hypotheses:
\begin{itemize}
\item \textbf{RQ1}: How do local explanations impact the annotation and training outcomes of AL?
\item \textbf{RQ2}: How do local explanations impact annotator experiences?
\begin{itemize}
\item \textbf{H1}: Explanations support \textit{trust calibration}, i.e., there is an interactive effect between the presence of explanations and the model learning stage (early vs. late-stage model) on annotators' trust in deploying the model.
\item \textbf{H2}: Explanations improve \textit{annotator satisfaction}.
\item \textbf{H3}: Explanations increase perceived \textit{cognitive workload}.
\end{itemize}
\item \textbf{RQ3}: How do individual factors, specifically \textit{task knowledge}, \textit{AI experience}, and \textit{Need for Cognition}, impact annotation and annotator experiences with XAL?
\begin{itemize}
\item \textbf{H4}: Annotators with lower task knowledge benefit more from XAL, i.e., there is an interactive effect between the presence of explanations and annotators' task knowledge on some of the annotation outcome and experience measures (trust, satisfaction or cognitive workload).
\item \textbf{H5}: Annotators inexperienced with AI benefit more from XAL, i.e., there is an interactive effect between the presence of explanations and annotators' experience with AI on some of the annotation outcome and experience measures (trust, satisfaction or cognitive workload).
\item \textbf{H6}: Annotators with lower Need for Cognition have a less positive experience with XAL, i.e., there is an interactive effect between the presence of explanations and annotators' Need for Cognition on some of the annotation outcome and experience measures (trust, satisfaction or cognitive workload).
\end{itemize}
\item \textbf{RQ4}: What kind of feedback do annotators naturally want to provide upon seeing local explanations?
\end{itemize}
\section{XAL Setup}
\subsection{Prediction task}
We aimed to design a prediction task that would not require deep domain expertise, where common-sense knowledge could be effective for teaching the model. The task should also involve decisions made by weighing different features, so that explanations could potentially make a difference (i.e., not a simple perception-based judgment). Lastly, the instances should be easy to comprehend, with a reasonable number of features. With these criteria, we chose the Adult Income dataset~\cite{adultIncome} for a task of predicting whether the annual income of an individual is more or less than \$80,000\footnote{After adjusting for inflation (1994-2019)~\cite{inflation}; the original dataset reported on the income level of \$50,000.}. The dataset is based on a Census survey database. Each row in the dataset characterizes a person with a mix of numerical and categorical variables such as age, gender, education, and occupation, and a binary annual income variable, which was used as our ground truth.
In the experiment, we presented participants with a scenario of building an ML classification system for a customer database. Based on a customer's background information, the system predicts the customer's income level for a targeted service. The task for the participants was to judge the income level of instances that the system selected to learn from, as presented in Figure~\ref{fig:interface_1}. This is a realistic AL task where annotators might not provide error-free labels, and explanations could potentially help reveal faulty model beliefs. To improve participants' knowledge about the domain, we provided a practice task before the trials, which will be discussed in Section~\ref{domain}.
\begin{figure*}[ht]
\centering
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1.1\linewidth]{graphs/interface_1.png}
\caption{Customer profile presented in all conditions for annotation}
\label{fig:interface_1}
\label{fig:sub1}
\end{subfigure} \hspace{5mm}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=.8\linewidth]{graphs/interface_2.png}
\caption{Explanation and questions presented in the XAL condition}
\label{fig:interface_2}
\label{fig:sub2}
\end{subfigure}
\caption{Experiment interface }
\label{fig:interface}
\end{figure*}
\subsection{Active learning setup}
AL requires the model to be retrained after new labels are fetched, so the model and explanations used in the experiment should be computationally inexpensive to avoid latency. Therefore, we chose logistic regression (with L2 regularization), which has been used extensively in the AL literature~\cite{settles2009active, yang2018benchmark}. Logistic regression is considered directly interpretable, i.e., its local feature importance can be directly generated, as described in Section~\ref{explanation}. We note that this form of explanation can also be generated by post-hoc techniques for any kind of ML model~\cite{ribeiro2016should}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{graphs/accuracy2.pdf}
\caption{Accuracy as a function of number of queries in the simulation experiment}
\label{fig:accuracy}
\end{figure}
Building an AL pipeline involves design choices of sampling strategy, batch size, the number of initial labeled instances, and test data. For this study, we used entropy-based uncertainty sampling to select the next instance to query, as it is the most commonly used sampling strategy~\cite{yang2018benchmark} and computationally inexpensive. We used a batch size of 1~\cite{batchSize}, meaning the model was retrained after each new queried label. We initialized the AL pipeline with two labeled instances. To avoid tying the experiment results to a particular sequence of data, we allocated different sets of initial instances to different participants by randomly drawing from a pool of more than 100 pairs of labeled instances. The pool was created by repeatedly picking two instances with ground-truth labels at random; a pair was kept only if it produced a model with initial accuracy between 50\% and 55\%. This ensured that the initial model would perform worse than humans and did not vary significantly across participants. 25\% of all data were reserved as test data for evaluating the model learning outcomes.
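As an illustration, the pipeline described above can be sketched as follows. This is a minimal sketch with synthetic stand-in data, not the study's code; the function and variable names are ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predictive_entropy(probs):
    # Shannon entropy of each row of predicted class probabilities;
    # higher entropy means the model is more uncertain about the instance
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

# Toy data standing in for the income dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Two initial labeled instances, one per class, as in the study setup
labeled = {int(np.argmax(y == 0)), int(np.argmax(y == 1))}

model = LogisticRegression(penalty="l2")
for _ in range(20):  # 20 queries per task, batch size of 1
    model.fit(X[list(labeled)], y[list(labeled)])
    unlabeled = [i for i in range(len(X)) if i not in labeled]
    # Entropy-based uncertainty sampling: query the most uncertain instance
    scores = predictive_entropy(model.predict_proba(X[unlabeled]))
    query = unlabeled[int(np.argmax(scores))]
    labeled.add(query)  # in the study, the label comes from the annotator
model.fit(X[list(labeled)], y[list(labeled)])  # retrain on all labels
```

In the actual study the queried label comes from the annotator's judgment rather than from `y`, and accuracy is tracked on the 25\% held-out test split after each retraining.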
As discussed, we are interested in the effect of explanations at different stages of AL. We took two snapshots of an AL process--an early-stage model that had just started with the initial labeled instances, and a late-stage model close to the stopping criteria. We define the stopping criteria as a plateau of accuracy improvement on the test data with more labeled data. To determine where to take the late-stage snapshot, we ran a simulation in which AL-queried instances were given the labels in the ground truth. The simulation was run with 10 sets of initial labels, and the mean accuracy is shown in Figure~\ref{fig:accuracy}. Based on this pattern, we chose the late-stage model to be where 200 queries had been executed. To create the late-stage experience without having participants answer 200 queries, we took a participant's allocated initial labeled instances and simulated an AL process with 200 queries answered by the ground-truth labels. The resulting model was then used in the late-stage task for the same participant. This also ensured that the two tasks a participant experienced were independent of each other, i.e., a participant's performance in the early-stage task did not influence the late-stage task. In each task, participants were queried for 20 instances. Based on the simulation result in Figure~\ref{fig:accuracy}, we expected an improvement of 10\%-20\% accuracy with 20 queries in the early stage, and a much smaller increase in the late stage.
\subsubsection{Explanation method}\label{explanation}
Figure~\ref{fig:interface_2} shows a screenshot of the local explanation presented in the XAL condition, for the instance shown in Figure~\ref{fig:sub1}. The explanation was generated based on the coefficients of the logistic regression, which determine the impact of each feature on the model's prediction. To obtain the \textit{feature importance} for a given instance, we computed the product of each of the instance's feature values with the corresponding coefficients in the model. The higher the magnitude of a feature's importance, the more impact it had on the model's prediction for this instance. A negative value implied that the feature value was tilting the model's prediction towards less than \$80,000 and vice versa. We sorted all features by their absolute importance and picked the top 5 features responsible for the model's prediction.
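This computation can be sketched in a few lines (an illustrative sketch; the function and names are ours, not the study's implementation):

```python
import numpy as np

def local_feature_importance(coef, intercept, x, feature_names, k=5):
    # Per-feature contribution for one instance: coefficient * feature value.
    # Positive values push the prediction toward the high-income class,
    # negative values toward the low-income class.
    contrib = np.asarray(coef) * np.asarray(x)
    order = np.argsort(-np.abs(contrib))[:k]  # top-k by absolute impact
    top = [(feature_names[i], float(contrib[i])) for i in order]
    top.append(("base chance", float(intercept)))  # intercept shown last
    return top

# Example with made-up coefficients and feature values
names = ["age", "education", "occupation", "hours"]
print(local_feature_importance([0.25, 0.5, -0.75, 0.125], -1.25,
                               [2, 2, 2, 2], names, k=3))
# [('occupation', -1.5), ('education', 1.0), ('age', 0.5), ('base chance', -1.25)]
```

For a fitted scikit-learn logistic regression, `coef` and `intercept` would correspond to `model.coef_[0]` and `model.intercept_[0]`.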
The selected features were shown to the participants in the form of a horizontal bar chart, as in Figure~\ref{fig:interface_2}. The importance of a feature was encoded by the length of the bar: a longer bar meant greater impact and vice versa. The sign of the feature importance was encoded with color (green-positive, red-negative), and the bars were sorted to place the positive features at the top of the chart. Apart from the top contributing features, we also displayed the intercept of the logistic regression model as an orange bar at the bottom. Because this was a relatively skewed classification task (the majority of the population has an annual income of less than \$80,000), the negative base chance (intercept) needed to be understood as part of the model's decision logic. For example, in Figure~\ref{fig:interface}, Occupation is the most important feature. Marital status and base chance are pointing towards less than \$80,000. While most features are tilting positively, the model prediction for this instance is still less than \$80,000 because of the large negative value of the base chance.
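A chart with this encoding could be rendered, for instance, with matplotlib (a hypothetical sketch of the described layout, not the study's interface code; names are ours):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

def plot_explanation(items, path="explanation.png"):
    # items: (feature, importance) pairs, e.g. the top-5 contributions
    # plus the intercept labeled "base chance".
    # Sort positive features to the top; keep "base chance" at the bottom.
    items = sorted(items, key=lambda t: (t[0] == "base chance", -t[1]))
    names = [n for n, _ in items]
    values = [v for _, v in items]
    # Color encoding: green = positive, red = negative, orange = intercept
    colors = ["orange" if n == "base chance"
              else ("green" if v >= 0 else "red") for n, v in items]
    fig, ax = plt.subplots()
    y_pos = list(range(len(items)))[::-1]  # first item drawn at the top
    ax.barh(y_pos, values, color=colors)
    ax.set_yticks(y_pos)
    ax.set_yticklabels(names)
    ax.axvline(0, color="black", linewidth=0.8)
    fig.savefig(path)
    plt.close(fig)
    return colors  # returned for inspection
```

Bar length then encodes the magnitude of a feature's contribution, matching the description above.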
\section{Experimental design}
We adopted a 3 $\times$ 2 experimental design, with the learning condition (AL, CL, XAL) as a between-subject treatment, and the learning stage (early vs. late) as a within-subject treatment. That is, participants were randomly assigned to one of the conditions to complete two tasks, with queries from an early-stage and a late-stage AL model, respectively. The order of the early- and late-stage tasks was randomized and balanced across participants to avoid order effects and biases from knowing which was the ``improved'' model.
We posted the experiment as a human intelligence task (HIT) on Amazon Mechanical Turk. We set the requirement to have at least 98\% prior approval rate and each worker could participate
only once. Upon accepting the HIT, a participant was assigned to one of the three conditions. The annotation task was given with a scenario of building a classification system for a customer database to provide targeted service for high- versus low-income customers, with an ML model that queries and learns in real time. Given that the order of the learning stages was randomized, we instructed the participants that they would be teaching two configurations of the system with different initial performance and learning capabilities.
With each configuration, a participant was queried for 20 instances, in the format shown in Figure~\ref{fig:interface_1}. A minimum of 10 seconds was enforced before they could proceed to the next query. In the AL condition, participants were presented with a customer's profile and asked to judge whether his or her annual income was above 80K. In the CL condition, participants were presented with the profile and the model's prediction. In the XAL condition, the model's prediction was accompanied by an explanation revealing the model's ``rationale for making the prediction'' (the top part of Figure~\ref{fig:interface_2}). In both the CL and XAL conditions, participants were asked to judge whether the model prediction was correct and could optionally answer an open-form question to explain that judgement (the middle part of Figure~\ref{fig:interface_2}). In the XAL condition, participants were further asked to rate the model explanation and could optionally explain their ratings in an open-form question (the bottom part of Figure~\ref{fig:interface_2}). After a participant submitted a query, the model was retrained, and the performance metrics of accuracy and F1 score (on the 25\% reserved test data) were calculated and recorded, together with the participant's input and the timestamp.
After every 10 trials, the participants were told the percentage of their answers matching similar cases in the Census survey data, as a measure to help engage the participants. An attention-check question was prompted in each learning stage task, showing the customer's profile from the prior query along with two other randomly selected profiles as distractors. The participants were asked to select the one they had just seen. Only one participant failed both attention-check questions and was excluded from the analysis.
After completing 20 queries for each learning stage task, the participants were asked to fill out a survey regarding their subjective perception of the ML model they just finished teaching and the annotation task. The details of the survey will be discussed in Section~\ref{survey}. At the end of the HIT, we also collected participants' demographic information and factors of individual differences, to be discussed in Section~\ref{individual}.
\subsubsection{Domain knowledge training} \label{domain}
We acknowledge that MTurk workers may not be experts in an income prediction task, even though it is a common topic. Our study is close to the \textit{human-grounded evaluation} proposed in~\cite{doshi2017towards} as an evaluation approach for explainability, in which lay people are used as a proxy to test general notions or patterns of the target application (i.e., by comparing outcomes between the baseline and the target treatment).
To improve the external validity, we took two measures to help participants gain domain knowledge. First, throughout the study, we provided a link to a supporting document with statistics of personal income based on the Census survey. Specifically, chance numbers--the chance that people with a given feature value have income above 80K--were given for all feature values the model used (by quantile for numerical features). Second, participants were given 20 practice trials of income prediction tasks and encouraged to utilize the supporting material. The ground truth--the income level reported in the Census survey--was revealed after they completed each practice trial. Participants were told that the model would be evaluated based on data in the Census survey, so they should strive to bring the knowledge from the supporting material and the practice trials into the annotation task. They were also incentivized with a \$2 bonus if the consistency between their predictions and similar cases reported in the Census survey was among the top 10\% of all participants.
After the practice trials, the agreement of the participants' predictions with the ground truth in the Census survey for the early-stage trials reached a mean of 0.65 (SE=0.08). We note that instances queried by uncertainty-based sampling are by nature challenging. The agreement with ground truth by one of the authors, who is highly familiar with the data and the task, was 0.75.
\subsubsection{Survey measuring subjective experience}\label{survey}
To understand how explanations impact annotators' subjective experiences (\textbf{RQ2}), we designed a survey for the participants to fill out after completing each learning stage task. We asked the participants to self-report the following (all based on a 5-point Likert scale):
\textit{Trust} in deploying the model: We asked participants to assess how much they could trust the model they had just finished teaching to be deployed for the target task (customer classification). Trust in technologies is frequently measured based on McKnight's framework on trust~\cite{mcknight1998initial,mcknight2002developing}, which considers the dimensions of \textit{capability}, \textit{benevolence}, and \textit{integrity} for trust belief, and multiple action-based items (e.g., ``I will be able to rely on the system for the target task'') for trust intention. We also consulted a recent paper on a trust scale for automation~\cite{korber2018theoretical} and added the dimension of \textit{predictability} for trust belief. We picked and adapted one item for each of the four trust belief dimensions (e.g., for benevolence, ``Using predictions made by the system will harm customers' interest''), and four items for trust intention, arriving at an 8-item scale to measure trust (3 were reverse scale). The Cronbach's alpha is 0.89.
\textit{Satisfaction} with the annotation experience, by five items adapted from the After-Scenario Questionnaire~\cite{lewis1995computer} and the User Engagement Scale~\cite{o2018practical} (e.g., ``I am satisfied with the ease of completing the task'', ``It was an engaging experience working on the task''). The Cronbach's alpha is 0.91.
\textit{Cognitive workload} of the annotation experience, by selecting two applicable items from the NASA-TLX task load index (e.g., ``How mentally demanding was the task: 1=very low; 5=very high''). The Cronbach's alpha is 0.86.
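The internal-consistency statistics reported above follow the standard Cronbach's alpha formula, which can be computed directly from an item-rating matrix. The sketch below is a generic illustration of that formula, not our analysis script.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of scale items.
    Reverse-scaled items are assumed to be recoded already.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    sum_item_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)
```

Perfectly consistent items yield an alpha of 1, while unrelated items yield an alpha near 0.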
\subsubsection{Individual differences}\label{individual}
\textbf{RQ3} asks about the mediating effect of individual differences, specifically the following:
\textit{Task knowledge} to perform the income prediction judgment correctly. We used one's performance in the practice trials as a proxy, calculated as the percentage of trials judged correctly based on the ground-truth income level in the Census database.
\textit{AI experience}, for which we asked participants to self-report ``How much do you know about artificial intelligence or machine learning algorithms?'' The original question had four levels of experience. Since few participants reported the higher levels of experience, we combined the answers into a binary variable--without AI experience vs. with AI experience.
\textit{Need for Cognition} measures individual differences in the tendency to engage in thinking and cognitively complex activities. To keep the survey short, we selected two items from the classic Need for Cognition scale developed by Cacioppo and Petty~\cite{cacioppo1982need}. The Cronbach's alpha is 0.88.
\subsubsection{Participants}
37 participants completed the study. One participant failed both attention-check tests and was excluded. The analysis was conducted with 12 participants in each condition. Among them, 27.8\% were female; 19.4\% were under the age of 30, and 13.9\% above the age of 50; 30.6\% reported having no knowledge of AI, 52.8\% little knowledge (``know basic concepts in AI''), and the rest some knowledge (``know or used AI algorithms''). In total, participants spent about 20--40 min on the study and were compensated \$4, with a 10\% chance of an additional \$2 bonus, as discussed in Section~\ref{domain}.
\section{Results}
For all analyses, we ran mixed-effects regression models to test the hypotheses and answer the research questions, with participants as random effects, and learning \textit{Stage}, \textit{Condition}, and individual factors (\textit{Task Knowledge}, \textit{AI Experience}, and \textit{Need for Cognition}) as fixed effects. RQ2 and RQ3 are concerned with interactive effects of Stage or individual factors with learning Conditions. Therefore, for every dependent variable of interest, we started by including all two-way interactions with Condition in the model, then removed non-significant interactive terms step by step. A VIF test was run to confirm there was no multicollinearity issue with any of the variables (all lower than 2). In each sub-section, we report statistics based on the final model and summarize the findings at the end.
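The VIF check amounts to one auxiliary regression per predictor: $\mathrm{VIF}_j = 1/(1-R^2_j)$, where $R^2_j$ comes from regressing predictor $j$ on the remaining predictors. A self-contained numpy sketch of that computation (our actual analysis used a statistics package):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of a design matrix:
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on all remaining columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(y)), Z])  # add intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        factors.append(1.0 / (1.0 - r2))
    return factors
```

A common rule of thumb flags VIF values above 5 or 10; all of our predictors stayed below 2.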
\subsection{Annotation and learning outcomes (\textbf{RQ1}, \textbf{RQ3})}
First, we examined the model learning outcomes in different conditions. In Table~\ref{tab:performance} (the third to sixth columns), we report the statistics of the performance metrics--\textit{Accuracy} and \textit{F1} scores--after the 20 queries in each condition and learning stage. We also report the performance improvement compared to the initial model performance before the 20 queries.
For each of the performance and improvement metrics, we ran a mixed-effects regression model as described earlier. In all models, we found only a significant main effect of Stage ($p<0.001$). The results indicate that participants were able to improve the early-stage model significantly more than the later-stage model, but the improvement did not differ across learning conditions.
\begin{table}
\caption{Results of model performance and labels }\label{tab:performance}
\begin{tabular}{p{1cm}p{1.2cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}}
\toprule
Stage&Condition&Acc.&Acc. improve&F1&F1 improve&\%Agree&Human Acc.\\
\midrule
&AL & 67.0\% & 13.7\% & 0.490 & 0.104 & 55.0\% & 66.7\%\\
Early &CL & 64.2\% & 11.7\% & 0.484 & 0.105 & 58.3\% & 62.1\%\\
&XAL & 64.0\% & 11.8\% & 0.475 & 0.093 & 62.9\% & 63.3\%\\
\midrule
&AL & 80.4\% & 0.1\% & 0.589 & 0.005 & 47.9\% & 54.2\%\\
Late &CL & 80.8\% & 0.2\% & 0.587 & 0.007 & 55.8\% & 58.8\%\\
&XAL & 80.3\% & -0.2\% & 0.585 & -0.001 & 60.0\% & 55.0\%\\
\bottomrule
\end{tabular}
\end{table}
In addition to the performance metrics, we looked at \textit{Human accuracy}, defined as the percentage of labels given by a participant that were consistent with the ground truth. Interestingly, we found a significant interactive effect between Condition and participants' Task Knowledge (calculated as one's accuracy score in the practice trials): taking the CL condition as the reference level, XAL had a positive interactive effect with Task Knowledge ($\beta=0.67, SE=0.29, p=0.03$). In Figure~\ref{fig:human_acc}, we plot the pattern of the interactive effect by first performing a median split on Task Knowledge scores to categorize participants into \textit{high performers} and \textit{low performers}. The figure shows that, compared to the CL condition, adding explanations had opposite effects for those with high versus low task knowledge. While explanations helped those with high task knowledge to provide better labels, they impaired the judgment of those with low task knowledge. There was also a negative main effect of late Stage ($SE=0.21, t=3.87, p<0.001$), confirming that queried instances in the later stage were more challenging for participants to judge correctly.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/accuracy.png}
\caption{Human accuracy across conditions and task knowledge levels. All error bars represent +/- one standard error.}
\label{fig:human_acc}
\end{figure}
We conducted the same analysis on the \textit{Agreement} between each participant's labels and the model predictions and found a similar trend: using the CL condition as the reference level, there was a marginally significant interactive effect between XAL and Task Knowledge ($\beta=-0.75, SE=0.45, p=0.10$)\footnote{We consider $p<0.05$ as significant, and $0.05 \leq p<0.10$ as marginally significant, following statistical convention~\cite{cramer2004sage}}. The result suggests that explanations might have an ``anchoring effect'' on those with low task knowledge, making them more inclined to accept the model's predictions. Indeed, we zoomed in on trials where participants agreed with the model predictions, and looked at the percentage of \textit{wrong agreement}, where the shared judgment was inconsistent with the ground truth. We found a significant interaction between XAL and Task Knowledge, using CL as the reference level ($\beta=-0.89, SE=0.45, p=0.05$). We plot this interactive effect in Figure~\ref{fig:wrong_agree}: adding explanations had opposite effects for those with high versus low task knowledge, making the latter more inclined to mistakenly agree with the model's predictions. We did not find such an effect for \textit{wrong disagreement} when looking at trials where participants disagreed with the model's predictions.
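The wrong-agreement measure can be computed per participant with a trivial helper (variable names hypothetical): among the trials where the annotator's label matched the model's prediction, the fraction whose shared judgment contradicts the ground truth.

```python
import numpy as np

def wrong_agreement_rate(labels, preds, truth):
    """Among trials where the annotator agreed with the model prediction,
    the fraction whose shared judgment contradicts the ground truth."""
    labels, preds, truth = map(np.asarray, (labels, preds, truth))
    agree = labels == preds
    if agree.sum() == 0:
        return float("nan")  # no agreeing trials to score
    return float((labels[agree] != truth[agree]).mean())
```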
Taken together, to our surprise, we found results opposite to \textbf{H4}: local explanations further polarized the annotation outcomes of those with high or low task knowledge, compared to only showing model predictions without explanations. While explanations may help those with high task knowledge to make better judgments, they have a \textbf{negative anchoring effect on those with low task knowledge, making them more inclined to agree with the model even when it is erroneous}. This could be a potential problem for XAL, even though we did not find this anchoring effect to have a statistically significant negative impact on the model's learning outcome. We also showed that with uncertainty sampling of AL, \textbf{as the model matured, it became more challenging for annotators to make correct judgments and improve the model performance}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/wrong_agree.png}
\caption{Percentage of wrong agreement among all agreeing judgments with model predictions across Conditions and Task Knowledge levels. All error bars represent +/- one standard error.}
\label{fig:wrong_agree}
\end{figure}
\subsection{Annotator experience (\textbf{RQ2}, \textbf{RQ3})}
We then investigated how participants' self-reported experience differed across conditions by analyzing the following survey scales (measurements discussed in Section~\ref{survey}): trust in deploying the model, interaction satisfaction, and perceived cognitive workload. Table~\ref{tab:survey} reports the mean ratings in different conditions and learning stage tasks. For each self-reported scale, we ran a mixed-effects regression model as discussed in the beginning of this section.
\begin{table}
\caption{Survey results }\label{tab:survey}
\begin{tabular}{p{0.8cm}p{1.2cm}p{1.5cm}p{1.5cm}p{1.5cm}}
\toprule
Stage&Condition&Trust&Satisfaction&Workload\\
\midrule
&AL &3.14 &4.23 &2.08\\
Early&CL &3.83 &3.69 &2.71 \\
&XAL &2.42 &3.31 &3.00 \\
\midrule
&AL &3.00 &4.18 &2.25\\
Late&CL &2.71 &3.63 &2.67\\
&XAL &2.99 &3.35 &3.14 \\
\bottomrule
\end{tabular}
\end{table}
First, for trust in deploying the model, using AL as the reference level, we found a significant positive interaction between XAL Condition and Stage ($\beta=0.70, SE=0.31, p=0.03$). As shown in Table~\ref{tab:survey} and Figure~\ref{fig:trust_stage}, compared to the other two conditions, participants in the XAL Condition had significantly lower trust in deploying the early-stage model, but enhanced trust in the later-stage model. The results confirmed \textbf{H1} that \textbf{explanations help calibrate annotators' trust} in the model at different stages of the training process, while showing model predictions alone (CL) did not have that effect.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/trust_stage.png}
\caption{Trust in deploying the model across Conditions and Stages. All error bars represent +/- one standard error.}
\label{fig:trust_stage}
\end{figure}
We also found a two-way interaction between XAL Condition and participants' AI Experience (with/without experience) on trust in deploying the model ($\beta=1.43, SE=0.72, p=0.05$) (AL as the reference level). Figure~\ref{fig:trust_AI} plots the effect: people without AI experience had exceptionally high ``blind'' trust, with high variance (error bar), in deploying the model in the AL condition. With XAL they were able to reach a more appropriate level of trust. The results highlight the \textbf{challenge for annotators to assess the trustworthiness of the model to be deployed, especially for those inexperienced with AI. Providing explanations could effectively calibrate their trust}, supporting \textbf{H5}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/trust_AI.png}
\caption{Trust in deploying the model across conditions and experience with AI. All error bars represent +/- one standard error.}
\label{fig:trust_AI}
\end{figure}
For interaction satisfaction, the descriptive results in Table~\ref{tab:survey} suggest a decreasing trend of satisfaction in the XAL condition compared to baseline AL. By running the regression model, we found a significant two-way interaction between XAL Condition and Need for Cognition ($\beta=0.54, SE=0.26, p=0.05$) (AL as the reference level). Figure~\ref{fig:satisfaction_nc} plots the interactive effect, with a median split on Need for Cognition scores. It demonstrates that \textbf{explanations negatively impacted satisfaction, but only for those with low Need for Cognition}, supporting \textbf{H6} and rejecting \textbf{H2}. We also found a positive main effect of Task Knowledge ($SE=1.31, t=2.76, p=0.01$), indicating that people who were good at the annotation task reported higher satisfaction.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/satisfaction_nc.png}
\caption{Satisfaction across conditions and Need for Cognition levels. All error bars represent +/- one standard error.}
\label{fig:satisfaction_nc}
\end{figure}
For self-reported cognitive workload, the descriptive results in Table~\ref{tab:survey} suggest an increasing trend in the XAL condition compared to baseline AL. The regression model found an interactive effect between the XAL condition and AI Experience ($\beta=1.30, SE=0.59, p=0.04$). As plotted in Figure~\ref{fig:workload_AI}, the \textbf{XAL condition presented higher cognitive workload compared to baseline AL, but only for those with AI experience}. This partially supports \textbf{H3}, and potentially suggests that those with AI experience examined the explanations more carefully.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/workload_AI.png}
\caption{Cognitive workload across conditions and experience with AI. All error bars represent +/- one standard error.}
\label{fig:workload_AI}
\end{figure}
We also found an interactive effect between the CL condition and Need for Cognition on cognitive workload ($\beta=0.53, SE=0.19, p=0.01$), and a negative main effect of Need for Cognition ($\beta=-0.41, SE=0.14, p=0.01$). Pair-wise comparison suggests that participants with low Need for Cognition reported higher cognitive workload than those with high Need for Cognition, except in the CL condition, where they only had to accept or reject the model's predictions. Together with the results on satisfaction, \textbf{CL may be a preferred choice for those with low Need for Cognition}.
In summary, to answer \textbf{RQ2}, participants' self-reported experience confirmed the benefit of explanations for calibrating trust and judging the maturity of the model. Hence XAL could potentially help annotators form stopping criteria with more confidence. Evidence was found that explanations increased cognitive workload, but only for those experienced with AI. We also identified an unexpected effect of explanations in reducing annotator satisfaction, but only for those self-identified as having low Need for Cognition, suggesting that the additional information and workload of explanations may deter annotators who have little interest or capacity to deliberate on them.
The quantitative results with regard to \textbf{RQ3} confirmed the mediating effect of individual differences in Task Knowledge, AI Experience, and Need for Cognition on one's receptiveness to explanations in an AL setting. Specifically, people with better task knowledge and thus more capable of detecting the AI's faulty reasoning, people inexperienced with AI who might otherwise be clueless about the model training task, and people with high Need for Cognition may benefit more from XAL compared to traditional AL.
\subsection{Feedback for explanation (\textbf{RQ4})}
In the XAL condition, participants were asked to rate the system's rationale based on the explanations and respond to an optional question to explain their ratings. Analyzing answers to these questions allowed us to understand what kind of feedback participants naturally wanted to give the explanations (\textbf{RQ4}).
First, we inspected whether participants' explanation ratings could provide useful information for the model to learn from. Specifically, if the ratings could distinguish between correct and incorrect model predictions, they could provide additional training signals. Focusing on the XAL condition, we calculated, for each participant in each learning stage task, the \textit{average explanation ratings} given to instances where the model made correct and incorrect predictions (compared to the ground truth). The results are shown in Figure~\ref{fig:model_pred}. By running an ANOVA on the \textit{average explanation ratings}, with \textit{Stage} and \textit{Model Correctness} as within-subject variables, we found the main effect of \textit{Model Correctness} to be significant, $F(1, 11)=14.38$, $p<0.01$. This result indicates that participants were able to distinguish the rationales of correct and incorrect model predictions, in both the early and late stages, confirming the utility of annotators' ratings of the explanations.
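With only two levels of \textit{Model Correctness}, the within-subject F statistic for that factor is equivalent to the square of a paired t statistic on the per-participant mean ratings. The sketch below illustrates that equivalence on made-up ratings, not our study data.

```python
import numpy as np

def paired_f(cond_a, cond_b):
    """Two-level repeated-measures comparison: F equals the squared paired-t
    statistic of per-participant mean ratings in the two conditions."""
    d = np.asarray(cond_a, float) - np.asarray(cond_b, float)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return t * t

# Hypothetical per-participant mean explanation ratings.
ratings_correct = [4.0, 4.2, 3.8, 4.1, 3.9]    # model prediction was correct
ratings_incorrect = [3.0, 3.1, 2.9, 3.2, 3.3]  # model prediction was wrong
F = paired_f(ratings_correct, ratings_incorrect)
```

The statistic is symmetric in the two conditions, as the sign of the paired differences cancels when squared.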
\begin{figure}
\centering
\includegraphics[scale=0.9]{graphs/model_pred.pdf}
\caption{Explanation ratings for correct and incorrect model predictions}
\label{fig:model_pred}
\end{figure}
One may further ask whether explanation ratings provide additional information beyond the judgment expressed in the labels. For example, among cases where the participants disagreed (agreed) with the model predictions, some of them could be correct (incorrect) predictions, as compared to the ground truth. If explanation ratings could distinguish right and wrong disagreement (agreement), they could serve as additional signals that supplement instance labels. Indeed, as shown in Figure~\ref{fig:disagree}, we found that among the \textit{disagreeing instances}, participants' average explanation rating given to \textit{wrong disagreement} (the model was making the correct prediction and should not have been rejected) was higher than that given to \textit{right disagreement} ($F(1, 11)=3.12$, $p=0.10$), especially in the late stage (interactive effect between \textit{Stage} and \textit{Disagreement Correctness}: $F(1, 11)=4.04$, $p=0.07$). We did not find this differentiating effect of explanation ratings for agreeing instances.
The above results are interesting, as Teso and Kersting proposed to leverage feedback of ``weak acceptance'' to train AL (``right decision for the wrong reason''~\cite{teso2018should}), in which people agree with the system's prediction but find the explanation to be problematic. Empirically, we found that the tendency for people to give weak acceptance may be weaker than that to give weak rejection. Future work could explore utilizing weak rejection to improve model learning, for example, with AL algorithms that can consider probabilistic annotations~\cite{song2018active}.
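As a thought experiment on such probabilistic annotations, an explanation rating could shrink a label's confidence toward chance, so that a ``right decision, doubtful reason'' annotation carries less weight during retraining. The mapping below is purely hypothetical; the confidence bounds are arbitrary choices for illustration.

```python
def soft_label(label, explanation_rating, min_conf=0.55, max_conf=0.95):
    """Map a binary label plus a 1-5 explanation rating to a soft P(y=1).
    A low rating ("right decision, questionable reason") shrinks confidence
    toward 0.5 so the learner can treat the annotation probabilistically."""
    conf = min_conf + (explanation_rating - 1) / 4 * (max_conf - min_conf)
    return conf if label == 1 else 1.0 - conf
```

A positive label with the top rating maps to 0.95, while the same label with the lowest rating maps to 0.55, barely above chance.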
\begin{figure}
\centering
\includegraphics[scale=0.9]{graphs/disagreement.pdf}
\caption{Explanation ratings for disagreeing instances}
\label{fig:disagree}
\end{figure}
\subsubsection{Open form feedback}
\label{feedback}
We conducted content analysis on participants' open-form answers, especially by comparing the ones in the CL and XAL conditions. In the XAL condition, participants had two fields, as shown in Figure~\ref{fig:interface_2}, to provide their feedback on the model decision and the explanation. We combined them for the content analysis, as some participants filled everything in one text field. In the CL condition, only the first field on the model decision was shown. Two authors performed iterative coding on the types of feedback until a set of codes emerged and was agreed upon. In total, we gathered 258 entries of feedback on explanations in the XAL condition (out of 480 trials). 44.96\% of them did not provide enough information to be considered valid feedback (e.g., simply expressing agreement or disagreement with the model).
The most evident pattern contrasting the CL and XAL conditions is a shift from commenting on the top features that determine an income prediction to more diverse types of comments based on the explanation. For example, in the CL condition, the majority of comments were concerned with the job category as determining one's income level, such as ``\textit{Craft repair likely doesn't pay more than 80000}.'' However, for the model, job category is not necessarily the most important feature for individual decisions, suggesting that people's direct feature-level input may not be ideal for the learning model to consume. In contrast, feedback based on model explanations is not only more diverse in type, but also covers a wider range of features. Below we discuss the types of feedback, ranked by frequency of occurrence.
\begin{itemize}
\item \textit{Tuning weights} ($N=81$): The majority of feedback focused on the weight bars in the explanation visualization, expressing disagreement and the adjustment one wanted to make, e.g., ``\textit{marital status should be weighted somewhat less}''. It is noteworthy that while participants commented on between one and four features, the median number of features was only one. Unlike in the CL condition, where participants overly focused on the feature of job category, participants in the XAL condition often caught features that did not align with their expectations, e.g., ``\textit{Too much weight put into being married}'', or ``\textit{Age should be more negatively ranked}''. Some participants kept commenting on a feature in consecutive queries to keep tuning its weight, showing that they had a desired range in mind.
\item \textit{Removing, changing the direction of, or adding features} ($N=28$): Some comments suggested, qualitatively, removing, changing the impact direction of, or adding certain features. This kind of feedback often expressed surprise, especially for sensitive features such as race and gender, e.g., ``\textit{not sure why females would be rated negatively}'', or ``\textit{how is divorce a positive thing}''. Only one participant mentioned \textit{adding} a feature not shown, e.g., ``\textit{should take age into account}''. These patterns echoed observations from prior work that local explanation heightens people's attention towards unexpected, especially sensitive, features~\cite{dodge2019explaining}. We note that removing a feature as irrelevant is the feedback Teso and Kersting's AL algorithm incorporates~\cite{teso2018should}.
\item \textit{Ranking or comparing multiple feature weights} ($N=12$): A small number of comments explicitly addressed the ranking or comparison of multiple features, such as ``\textit{occupation should be ranked more positively than marital status}''.
\item \textit{Reasoning about combinations and relations of features} ($N=10$): Consistent with observations in Stumpf et al.'s study~\cite{stumpf2007toward}, some comments suggested the model consider the combined or relational effects of features--e.g., ``\textit{years of education over a certain age is negligible}'', or ``\textit{hours per week not so important in farming, fishing}''. This kind of feedback is rarely considered by current AL or iML systems.
\item \textit{Logic to make decisions based on feature importance} ($N=6$): The feature importance based explanation associates the model's prediction with the combined weights of all features. Some comments expressed confusion, e.g., ``\textit{literally all of the information points to earning more than 80,000}'' (while the base chance was negative). Such comments highlight the need for a more comprehensible design of explanations, and also indicate people's natural tendency to provide feedback on the model's overall logic.
\item \textit{Changes of explanation} ($N=5$): Interacting with an online AL algorithm, some participants paid attention to the changes of explanations. For example, one participant in the condition seeing the late-stage model first noticed the declining quality of the system's rationale. Another participant commented that the weights in the model explanation ``\textit{jump back and forth, for the same job}''. Change of explanation is a unique property of the AL setting. Future work could explore interfaces that explicitly present changes or progress in the explanation and utilize the corresponding feedback.
\end{itemize}
To summarize, we identified opportunities to use local explanations to elicit knowledge input beyond instance labels. By simply soliciting a rating for the explanation, additional signals for the instance could be obtained for the learning model. Through qualitative analysis of the open-form feedback, we identified several categories of input that people naturally wanted to give by reacting to the local explanations. Future work could explore algorithms and systems that utilize annotators' input based on local explanations for the model's features, weights, feature ranks, relations, and changes during the learning process.
\section{Discussions and Future Directions}
Our work is motivated by the vision of creating natural experiences to teach learning models by seeing and providing feedback on the model's explanations of selected instances. While the results show promise and illuminate key considerations of user preferences, they are only a starting point. Realizing the vision--supporting the needs of machine teachers and fully harnessing their feedback on model explanations--requires both algorithmic advancement and refinement of the ways to explain and interact. Below we provide recommendations for future work on XAL as informed by the study.
\subsection{Explanations for machine teaching}
Common goals of AI explanations, as reflected in much of the XAI literature, are to support a complete and sound understanding of the model~\cite{kulesza2015principles,carvalho2019machine}, and to foster trust in the AI~\cite{poursabzi2018manipulating,cheng2019explaining}. These goals may have to be revised in the context of machine teaching. First, explanations should aim to \textit{calibrate} trust, and in general the perception of model capability, by accurately and efficiently communicating the model's current limitations.
Second, while prior work often expects explanations to enhance adherence or persuasiveness~\cite{poursabzi2018manipulating}, we highlight the opposite problem in machine teaching, as an ``anchoring'' effect to a naive model's judgment could be counter-productive and impair the quality of human feedback. Future work should seek alternative designs to mitigate the anchoring effect. For example, it would be interesting to use a partial explanation that does not reveal the model's judgment (e.g., only a subset of top features~\cite{lai2019human}), or have people first make their own judgment before seeing the explanation.
Third, the premise of XAL is to make the teaching task accessible by focusing on individual instances and eliciting incremental feedback. It may be unnecessary to target a complete understanding of the model, especially as the model is constantly being updated. Since people have to review and judge many instances in a row, \textit{low cognitive workload} without sacrificing the quality of feedback should be a primary design goal of explanations for XAL. One potential solution is \textit{progressive disclosure}: starting from simplified explanations and progressively providing more details~\cite{springer2019progressive}. Since the early-stage model is likely to have obvious flaws, using simpler explanations could suffice and demand fewer cognitive resources. Another approach is to design explanations that are sensitive to the targeted feedback, for example by only presenting features that the model is uncertain about or people are likely to critique, assuming some notion of uncertainty or likelihood information could be inferred.
While we used a local feature importance visualization to explain the model, we can speculate on the effect of alternative designs based on the results. We chose a visualization design to show the importance values of multiple features at a glance. While it is possible to describe the feature importance with texts as in~\cite{dodge2019explaining}, it is likely to be even more cognitively demanding to read and comprehend. We do not recommend further limiting the number of features presented, since people are more inclined to critique features they see than to recall ones not presented. Other design choices for local explanations include presenting similar examples with the same known outcome~\cite{bien2011prototype,gurumoorthy2017protodash}, and rules that the model believes to guarantee the prediction~\cite{ribeiro2018anchors} (e.g., ``someone with an executive job above the age of 40 is highly likely to earn more than 80K''). We suspect that the example-based explanation might not present much new information for feedback. The rule-based explanation, on the other hand, could be an interesting design for future work to explore, as annotators may be able to approve or disapprove the rules, or judge between multiple candidate rules~\cite{hanafi2017seer}. This kind of feedback could be leveraged by the learning model. Lastly, we fixed on local explanations for the model to self-address the \textit{why} question (intelligibility type). We believe it fits naturally with the workflow of AL querying selected instances. A potential drawback is that it requires annotators to carefully reason with the explanation for every new queried instance. It would be interesting to explore using a global explanation so that annotators would only need to attend to changes of overall logic as the model learns, though it is unknown whether a global explanation is as easy for non-AI-experts to make sense of and provide feedback on.
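For concreteness, a local feature-importance explanation of the kind we used can be produced model-agnostically by fitting a proximity-weighted linear surrogate around the queried instance, in the spirit of LIME~\cite{ribeiro2016should}. The following is a bare-bones numpy sketch of that idea, not the exact method used in our system; all names and parameters are illustrative.

```python
import numpy as np

def local_feature_importance(predict_proba, x, n_samples=2000, scale=0.5, seed=0):
    """LIME-style sketch: perturb around instance x, query the black box,
    fit a proximity-weighted linear surrogate, and return its coefficients
    as local per-feature importances."""
    rng = np.random.default_rng(seed)
    Z = x + scale * rng.normal(size=(n_samples, len(x)))  # local perturbations
    p = predict_proba(Z)                                  # black-box P(y=1|z)
    w = np.exp(-((Z - x) ** 2).sum(axis=1) / (2 * scale ** 2))  # proximity kernel
    A = np.column_stack([np.ones(n_samples), Z])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], p * sw, rcond=None)
    return beta[1:]  # per-feature local weights (intercept dropped)
```

On a black box of the form $\sigma(2x_0 - x_1)$, the surrogate recovers a positive weight for the first feature and a smaller negative weight for the second, matching the local gradient direction.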
There are also opportunities to develop new explanation techniques by leveraging the temporal nature of AL. One is to \textit{explain model progress}, for example by explicitly showing changes in the model logic compared to prior versions. This could potentially help annotators better assess the model's progress and identify remaining flaws. Another is to utilize the \textit{explanation and feedback history} to both improve explanation presentation (e.g., avoiding repetitive explanations) and infer user preferences (e.g., how many features are ideal to present).
Lastly, our study highlights the need to tailor explanations to the characteristics of the teacher. People from whom the model seeks feedback may not be experienced with ML algorithms, and do not necessarily possess complete domain knowledge or contextual information. Depending on their cognitive style or the teaching context, they may have limited cognitive resources to deliberate on the explanations. These individual characteristics may impact their preferences for the level of detail, the visual presentation, and whether explanations should be presented at all.
\subsection{Learning from explanation based feedback}
Our experiment is intended as an elicitation study to gather the types of feedback people naturally want to provide for model explanations. An immediate next step for future work is to develop new AL algorithms that can incorporate the types of feedback presented in Section~\ref{feedback}. Prior work, as reviewed in Section~\ref{literature}, proposed methods to incorporate feedback on top features or to boost the importance of features~\cite{raghavan2006active,druck2009active,settles2011closing,stumpf2007toward}, and to remove features~\cite{teso2018should,kulesza2015principles}. However, most of them target text classifiers. Since feature-based feedback for text data is usually binary (a keyword should be considered a feature or not), prior work often did not consider the more quantitative feedback shown in our study, such as tuning the weights of features, comparatively ranking features, or reasoning about the logic or relations of multiple features. While much technical work is to be done, it is beyond the scope of this paper. Here we highlight a few key observations from people's natural tendency to provide feedback for explanations, which should be reflected in the assumptions that future algorithmic work makes.
First, people's natural feedback for explanations is \textit{incremental} and \textit{incomplete}. It tends to focus on a small number of features that are most evidently unaligned with one's expectation, instead of the full set of features. Second, people's natural feedback is \textit{imprecise}. For example, feature weights were suggested to be qualitatively increased, decreased, added, removed, or to change direction. It may be challenging for a lay person to accurately specify a quantitative correction for a model explanation, but a tight feedback loop should allow one to quickly view how an imprecise correction impacts the model and make follow-up adjustments. Lastly, people's feedback is \textit{heterogeneous}. Across individuals there are vast differences in the types of feedback, the number of features critiqued, and the tendency to focus on specific features, such as whether a demographic feature is considered fair to use~\cite{dodge2019explaining}.
Taken together, compared to providing instance labels, feedback for model explanations can be noisy and fragile. Incorporating the feedback ``as it is'' to update the learned features may not be desirable. For example, some have warned against ``local decision pitfalls''~\cite{wu2019local} of human feedback in iML that overly focuses on modifying a subset of model features, commonly resulting in an overfitted model that fails to generalize. Moreover, not all ML models allow their learned features to be updated directly. While prior iML work often builds on directly modifiable models such as regression or naïve Bayes classifiers, our approach is motivated by the possibility to utilize popular \textit{post-hoc} techniques to generate local explanations~\cite{ribeiro2016should,lundberg2017unified} for any kind of ML model, even those not directly interpretable such as neural networks. This means that an explanation can give information about how the model weighs different features, but it is not directly connected to the model's inner workings. How to incorporate human feedback for post-hoc explanations to update the original model remains an open challenge. It may be interesting to explore approaches that take human feedback as weighted signals, constraints, or a part of a co-training model or ensemble~\cite{stumpf2009interacting}, or that use it to impact the data~\cite{teso2018should} or the sampling strategy.
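To make the discussion concrete, below is a minimal sketch of one way such imprecise, qualitative feedback could be applied to a directly modifiable model (here a logistic regression). The feedback vocabulary (\texttt{increase}/\texttt{decrease}/\texttt{remove}) and the scaling factor are our own illustrative assumptions, not a method proposed in prior work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 4 features; the label depends mainly on features 0 and 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Qualitative, imprecise feedback on feature weights. The vocabulary and
# the scaling factor below are assumptions made for illustration.
feedback = {0: "increase", 2: "remove"}
SCALE = 1.5

coef = model.coef_[0].copy()
for idx, action in feedback.items():
    if action == "increase":
        coef[idx] *= SCALE
    elif action == "decrease":
        coef[idx] /= SCALE
    elif action == "remove":
        coef[idx] = 0.0
model.coef_[0] = coef  # the model now predicts with the adjusted weights
```

For post-hoc explanations of non-interpretable models, such adjusted weights could instead serve as soft constraints or teaching signals rather than direct overwrites.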
A coupled aspect of making human feedback more robust and consumable for a learning algorithm is to design interfaces that scaffold the elicitation of high-quality, targeted types of feedback. This is indeed the focus of the bulk of iML literature. For example, allowing people to drag-and-drop to change the ranks of features, or providing sliders to change the feature weights, may encourage people to provide more precise and complete feedback. It would also be interesting to leverage the explanation and feedback history to extract more reliable signals from multiple entries of feedback, or to purposely prompt people for confirmation of prior feedback. Given the heterogeneous nature of people's feedback, future work could also explore methods to elicit and cross-check input from multiple people to obtain more robust teaching signals.
\subsection{Explanation- and explainee-aware sampling}
Sampling strategy is the most important component of an AL algorithm in determining its learning efficiency. But existing AL work often ignores the impact of sampling strategy on annotators' experience. For example, our study showed that uncertainty sampling (querying the instances the model is most uncertain about) led to an increasing challenge for annotators to provide correct labels as the model matures.
For XAL algorithms to efficiently gather feedback and support a good teaching experience, sampling strategy should move beyond the current focus on decision uncertainty to considering the explanation for the next instance and what feedback to gain from that explanation. For the machine teacher, desired properties of explanations may include ease of judgment, non-repetitiveness, being tailored to their preferences and tendency to provide feedback, etc.~\cite{sokol2020explainability}. For the learning model, it may gain value from explaining and soliciting feedback for features that it is uncertain about, that have not been examined by people, or that have high impact on the model performance. Future work should explore sampling strategies that optimize for these criteria of explanations and explainees.
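As a rough illustration of one such strategy, the sketch below scores pool instances by combining predictive uncertainty (entropy) with a hypothetical ``explanation novelty'' bonus for instances whose explanations would surface features not yet discussed with the teacher. The importance proxy, the record of seen features, and the trade-off weight are all illustrative assumptions rather than a proposed algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(300, 5))
y_pool = (X_pool[:, 0] - X_pool[:, 3] > 0).astype(int)

# A young model trained on the first few labeled instances.
model = LogisticRegression().fit(X_pool[:50], y_pool[:50])

def entropy(probs):
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

# Per-instance feature importance proxy for a linear model: |weight * value|.
importance = np.abs(model.coef_[0] * X_pool)

# Hypothetical record of features already covered by earlier explanations.
seen = {0}
unseen = [f for f in range(X_pool.shape[1]) if f not in seen]
novelty = importance[:, unseen].max(axis=1)

LAMBDA = 0.5  # assumed trade-off between uncertainty and explanation novelty
score = entropy(model.predict_proba(X_pool)) + LAMBDA * novelty
query_idx = int(np.argmax(score))  # next instance to show and explain
```

A practical system would also need to account for teacher preferences and fatigue, which this scoring function does not model.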
\section{Limitations}
We acknowledge several limitations of the study. First, the participants were recruited on Mechanical Turk and were not held accountable for consequences of the model, so their behaviors may not generalize to all SMEs. However, we attempted to improve the ecological validity by carefully designing the domain knowledge training task and the reward mechanism (participants received a bonus if among the top 10\% of performers). Second, this is a relatively small-scale lab study. While the quantitative results showed significance with a small sample size, results from the qualitative data, specifically the types of feedback, should not be considered an exhaustive list. Third, the dataset has a small number of features and the model is relatively simple. For more complex models, the current design of explanation with feature importance visualization could be more challenging to judge and provide meaningful feedback for.
\section{Conclusions}
While active learning has gained popularity for its learning efficiency, it has not been widely considered as an HCI problem despite its interactive nature. We propose explainable active learning (XAL), by utilizing a popular local explanation method as the interface for an AL algorithm. Instead of opaquely requesting labels for selected instances, the model presents its own prediction and explanation for that prediction, then requests feedback from the human. We posit that this new paradigm not only addresses annotators' needs for model transparency, but also opens up opportunities for new learning algorithms that learn from human feedback for the model explanations. Broadly, XAL allows training ML models to more closely resemble a ``teaching'' experience, and places explanations as a central element of machine teaching. We conducted an experiment to both test the feasibility of XAL and serve as an elicitation study to identify the types of feedback people naturally want to provide. The experiment demonstrated that explanations could help people monitor the model's learning progress and calibrate their trust in the teaching outcome. But our results cautioned against the adverse effect of explanations in anchoring people's judgment to the naive model's, if the annotator lacks adequate knowledge to detect the model's faulty reasoning, and against the additional workload that could deter people with low Need for Cognition. Besides providing a systematic understanding of user interaction with AL algorithms, our results have three broad implications for using model explanations as the interface for machine teaching. First, we highlight the design goals of explanations applied in the context of teaching a learning model, as distinct from common goals in the XAI literature, including calibrating trust, mitigating the anchoring effect and minimizing cognitive workload.
Second, we identify important individual factors that mediate people's preferences for and reception to model explanations, including task knowledge, AI experience and Need for Cognition. Lastly, we enumerate the types of feedback people naturally want to provide for model explanations, to inspire future algorithmic work to incorporate such feedback.
\section*{Acknowledgments}
We wish to thank all participants and reviewers for their helpful feedback. This work was done as an internship project at IBM Research AI, and partially supported by NSF grants IIS 1527200 and IIS 1941613.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
While Machine Learning technologies are increasingly used in a wide variety of domains, ranging from critical systems to everyday consumer products, currently only a small group of people with formal training possess the skills to develop these technologies. Supervised ML, the most common type of ML technology, is typically trained with knowledge input in the form of labeled instances, often produced by subject matter experts (SMEs). The current ML development process presents at least two problems. First, the work to produce thousands of instance labels is tedious and time-consuming, and can impose high development costs. Second, the acquisition of human knowledge input is isolated from other parts of ML development, and often has to go through asynchronous iterations with data scientists as the mediator. For example, seeing suboptimal model performance, a data scientist has to spend extensive time obtaining additional labeled data from the SMEs, or gathering other feedback that helps in feature engineering or other steps in the ML development process~\cite{amershi2014power,brooks2015featureinsight}.
The research community and technology industry are working toward making ML more accessible through the recent movement of ``democratizing data science''~\cite{chou2014democratizing}. Among other efforts, interactive machine learning (iML) is a research field at the intersection of HCI and ML. iML work has produced a variety of tools and design guidelines~\cite{amershi2014power} that enable SMEs or end users to interactively drive the model towards desired behaviors, so that the need for data scientists to mediate can be relieved. More recently, a new field of ``machine teaching'' was called for, to make the process of developing ML models as intuitive as teaching a student, with its emphasis on supporting ``the teacher and the teacher's interaction with data''~\cite{simard2017machine}.
The technical ML community has worked on improving the efficiency of labeling work, for which Active Learning (AL) has become an active research area. AL can reduce the labeling workload by having the model select instances to query a human annotator for labels. However, the interfaces to query human input are minimal in current AL settings, and there is surprisingly little work that has studied how people interact with AL algorithms. Algorithmic work on AL assumes the human annotator to be an oracle that provides error-free labels~\cite{settles2009active}, while in reality annotation errors are commonplace and can be systematically biased by a particular AL setting. Without understanding and accommodating these patterns, AL algorithms can break down in practice. Moreover, this algorithm-centric view gives little attention to the needs of the annotators, especially their needs for transparency~\cite{amershi2014power}. For example, applying ``stopping criteria'', i.e., knowing when to complete the training with confidence, remains a challenge in AL, since the annotator is unable to monitor the model's learning progress. Even if performance metrics calculated on test data are available, it is difficult to judge whether the model will generalize in the real-world context or is bias-free.
Meanwhile, the notion of model transparency has moved beyond the scope of descriptive characteristics of the model studied in prior iML work (e.g., output, performance, features used~\cite{kulesza2015principles,rosenthal2010towards, fails2003interactive,fogarty2008cueflik}). Recent work in the field of explainable AI (XAI)~\cite{gunning2017explainable} focuses on making the \textit{reasoning} of model decisions understandable by people of different roles, including those without formal ML training. In particular, \textit{local explanations} (e.g.,~\cite{lundberg2017unified,ribeiro2016should}) are a cluster of XAI techniques that explain how the model arrived at a particular decision. Although researchers have only begun to examine how people actually interact with AI explanations, we believe explanations should be a core component of the interfaces to teach learning models.
Explanations play a critical role in human teaching and learning~\cite{wellman2004theory,meyer1997consensually}. Prompting students to generate explanations for a given answer or phenomenon is a common teaching strategy to deepen students' understanding. The explanations also enable the teacher to gauge the students' grasp of new concepts, reinforce successful learning, correct misunderstanding, repair gaps, as well as adjust the teaching strategies~\cite{lombrozo2012explanation}. Intuitively, the same mechanism could enable machine teachers to assess the model logic, oversee the machine learner's progress, and establish trust and confidence in the final model. Well-designed explanations could also allow people without ML training to access the inner working of the model and identify its shortcomings, thus potentially reducing the barriers to provide knowledge input and enriching teaching strategies, for example by giving direct feedback for the model's explanations.
Toward this vision of ``machine teaching through model explanations'', we propose a novel paradigm of \textit{explainable active learning} (XAL), by providing local explanations of the model's predictions of selected instances as the interface to query an annotator's knowledge input. We conduct an empirical study to investigate how local explanations impact the annotation quality and annotator experience. It also serves as an elicitation study to explore how people naturally want to teach a learning model with its explanations. The contributions of this work are threefold:
\begin{itemize}
\item We provide insights into the opportunities for explainable AI (XAI) techniques as an interface for machine teaching, specifically feature importance based local explanation. We illustrate both the benefits of XAI for machine teaching, including supporting trust calibration and enabling rich teaching feedback, and challenges that future XAI work should tackle, such as anchoring judgment and cognitive workload. We also identify important individual factors mediating one's reception to model explanations in the machine teaching context, including task knowledge, AI experience and Need for Cognition.
\item We conduct an in-depth empirical study of interaction with an active learning algorithm. Our results highlight several problems faced by annotators in an AL setting, such as increasing challenge to provide correct labels as the model matures and selects more uncertain instances, difficulty to know when to stop with confidence, and desire to provide knowledge input beyond labels. We claim that some of these problems can be mitigated by explanations.
\item We propose a new paradigm to teach ML models, \textit{explainable active learning (XAL)}, that has the model selectively query the machine teacher, and meanwhile allows the teacher to understand the model's reasoning and adjust their input. The user study provides a systematic understanding on the feasibility of this new model training paradigm. Based on our findings, we discuss future directions of technical advancement and design opportunities for XAL.
\end{itemize}
In the following, we first review related literature, then introduce the proposal for XAL, research questions and hypotheses for the experimental study. Then we discuss the XAL setup, methodology and results. Finally, we reflect on the results and discuss possible future directions.
\section{Related work}
\label{literature}
Our work is motivated by prior work on AL, interactive machine learning and explainable AI.
\subsection{Active learning}
The core idea of AL is that if a learning algorithm intelligently selects instances to be labeled, it could perform well with much less training data~\cite{settles2009active}. This idea resonates with the critical challenge in modern ML, that labeled data are time-consuming and expensive to obtain~\cite{zhu2005semi}. AL can be used in different scenarios like stream based~\cite{cohn1994improving} (from a stream of incoming data), pool based~\cite{lewis1994sequential} (from a large set of unlabeled instances), etc.~\cite{settles2009active}. To select the next instance for labeling, multiple query sampling strategies have been proposed in the literature \cite{qbc, qbc2, unc, dasgupta2008hierarchical, quire, entropy, confidence}. Most commonly used is \textit{Uncertainty sampling} \cite{unc, entropy, confidence, margin}, which selects instances the model is most uncertain about. Different AL algorithms exploit different notions of uncertainty, e.g. entropy \cite{entropy}, confidence \cite{confidence}, margin \cite{margin}, etc.
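The three uncertainty notions mentioned above can be sketched in a few lines; the toy class probabilities below are only for illustration.

```python
import numpy as np

def least_confidence(probs):
    # 1 - P(top class); higher means more uncertain.
    return 1.0 - probs.max(axis=1)

def margin_uncertainty(probs):
    # Gap between the two most likely classes, negated so that
    # higher always means "query first".
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])

def entropy(probs):
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

probs = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.6, 0.4]])
# All three measures rank the 50/50 instance as the most uncertain.
```

The measures agree on this toy example but can rank instances differently in multi-class settings, which is one reason different AL algorithms favor different notions.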
While the original definition of AL is concerned with instance labels, it has been broadened to query other types of knowledge input. Several works explored querying feedback for features, such as asking whether the presence of a feature is an indicator for the target concept~\cite{raghavan2006active,druck2009active,settles2011closing}. For example, DUALIST~\cite{settles2011closing} is an active learning tool that queries annotators for labels of both instances (e.g., whether a text document is about ``baseball'' or ``hockey'') and features (which keywords, if appeared in a document, are likely indicators that the document is about ``baseball''). Other AL paradigms include \textit{active class selection}~\cite{lomasky2007active} and \textit{active feature acquisition}~\cite{zheng2002active}, which query the annotator for additional training examples and missing features, respectively.
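In the spirit of DUALIST's feature queries, the sketch below incorporates a feature label (``the keyword \texttt{puck} indicates hockey'') into a tiny multinomial naive Bayes classifier. Encoding the feedback as an extra pseudo-count, and the boost value used, are our own simplifying assumptions, not DUALIST's exact mechanism.

```python
import numpy as np

# Toy bag-of-words counts (columns = vocabulary terms).
vocab = ["pitcher", "puck", "goal", "bat"]
X = np.array([[3, 4, 0, 2],   # a noisy "baseball" doc that mentions "puck"
              [0, 1, 1, 0]])  # a "hockey" doc
y = np.array([0, 1])          # 0 = baseball, 1 = hockey

# Annotator feature feedback: "puck" indicates hockey. Encoded as extra
# pseudo-counts; the boost value of 5 is an assumption for illustration.
feature_labels = {("puck", 1): 5}

alpha = 1.0  # Laplace smoothing
counts = np.ones((2, len(vocab))) * alpha
for c in (0, 1):
    counts[c] += X[y == c].sum(axis=0)
for (term, c), boost in feature_labels.items():
    counts[c, vocab.index(term)] += boost

log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))

def classify(doc):
    # Uniform class priors for simplicity.
    return int(np.argmax(log_lik @ doc))

# A document containing only "puck" is now classified as hockey,
# despite the noisy occurrences of "puck" in the baseball doc.
```

Without the pseudo-count boost, the noisy training counts here would pull a ``puck''-only document toward the baseball class, which is what the feature-level feedback corrects.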
Although AL by definition is an interactive annotation paradigm, the technical ML community tends to simply assume the human annotators to be mechanically queried oracles. The above-mentioned AL algorithms were mostly experimented with simulated human input providing error-free labels. But labeling errors are inevitable, even for simple perceptual judgment tasks~\cite{cheng2015measuring}. Moreover, in reality, the targeted use cases for AL are often ones where high-quality labels are costly to obtain either because of knowledge barriers or effort to label. For example, AL can be used to solicit users' labels for their own records to train an email spam classifier or context-aware sensors ~\cite{kapoor2010interactive,rosenthal2010towards}, but a regular user may lack the knowledge or contextual information to make all judgments correctly. Many have criticized the unrealistic assumptions that AL algorithms make. For example, by solving a multi-instance, multi-oracle optimization problem, \textit{proactive learning}~\cite{donmez2008proactive} relaxes the assumptions that the annotator is infallible, indefatigable (always answers with the same level of quality), individual (only one oracle), and insensitive to costs.
Despite the criticism, we have a very limited understanding of how people actually interact with AL algorithms, hindering our ability to develop AL systems that perform in practice and provide a good annotator experience. Little attention has been given to the annotation interfaces, which in current AL work are undesirably minimal and opaque. To our knowledge, there has been little HCI work on this topic. One exception is in the field of human-robot interaction (HRI), where AL algorithms were used to develop robots that continuously learn by asking humans questions~\cite{cakmak2010designing,cakmak2012designing,chao2010transparent,gonzalez2014asking,saponaro2011generation}. In this context, the robot and its natural-language queries \textit{are} the interface for AL. For example, Cakmak et al. explored robots that ask three types of AL queries~\cite{cakmak2010designing,cakmak2012designing}: instance queries, feature queries and demonstration queries. The studies found that people were more receptive of feature queries and perceived robots asking about features to be more intelligent. The studies also pointed out that a constant stream of queries led to a decline in annotators' situational awareness~\cite{cakmak2010designing}. This kind of empirical result challenged the assumptions made by AL algorithms, and inspired follow-up work proposing mixed-initiative AL: the robot only queries when certain conditions are met, e.g., following an uninformative label. Another relevant study, by Rosenthal and Dey~\cite{rosenthal2010towards}, looked at information design for an intelligent agent that queries labels to improve its classification. They found that contextual information, such as keywords in a text document or key features in sensor input, and providing the system's prediction (so people only need to confirm or reject labels), improved labeling accuracy.
Although this work cited the motivation for AL, the study was conducted with an offline questionnaire without interacting with an actual AL algorithm.
We argue that it is necessary to study annotation interactions with a real-time AL algorithm
because temporal changes are key characteristics of AL settings. With an interactive learning algorithm, every annotation impacts the subsequent model behaviors, and the model should become better aligned with the annotator's knowledge over time. Moreover, systematic changes could happen in the process in both the type of queried instances, depending on the sampling strategy, and the annotator behaviors, for example fatigue~\cite{settles2011closing}. These complex patterns can only be understood by holistically studying the annotation and the evolving model in real time.
Lastly, it is a nontrivial issue to understand how annotator characteristics impact their reception to AL system features. For example, it would be instrumental to understand what system features could narrow the performance gaps of people with different levels of domain expertise or AI experience, thus reducing the knowledge barriers to teach ML models.
\subsection{Interactive machine learning}
Active learning is sometimes considered a technique for iML. iML work is primarily motivated by enabling non-ML-experts to train an ML model
through ``rapid, focused, and incremental model updates''~\cite{amershi2014power}. However, conventional AL systems, with a minimum interface asking for labels, lack the fundamental element in iML--a tight interaction loop that transparently presents how every human input impacts the model, so that the non-ML-experts could adapt their input to drive the model into desired directions~\cite{amershi2014power,fails2003interactive}. Our work aims to move AL in that direction.
Broadly, iML encompasses all kinds of ML tasks including supervised ML, unsupervised ML (e.g., clustering ~\cite{choo2013utopian,smith2018closing}) and reinforcement learning~\cite{cakmak2010designing}. To enable interactivity, iML work has to consider two coupled aspects: \textit{what information} the model presents to people, and \textit{what input} people give to the model. Most iML systems present users with \textit{performance} information as impacted by their input, either performance metrics~\cite{kapoor2010interactive,amershi2015modeltracker}, or model output, for example by visualizing the output for a batch of instances~\cite{fogarty2008cueflik} or allowing users to select instances to inspect. An important lesson from the bulk of iML work is that users value \textit{transparency} beyond performance~\cite{rosenthal2010towards,kulesza2013too}, such as descriptive information about how the algorithm works or what features are used~\cite{kulesza2015principles,rosenthal2010towards}. Transparency is found to not only help improve users' mental model of the learning model and hence provide more effective input, but also satisfaction in their interaction outcomes~\cite{kulesza2013too}.
iML research has studied a variety of user input into the model, such as providing labels, training examples~\cite{fails2003interactive}, as well as specifying model and algorithm choice~\cite{talbot2009ensemblematrix}, parameters, error preferences~\cite{kapoor2010interactive}, etc. A promising direction for iML to out-perform traditional approaches to training ML models is to enable feature-level human input. Intuitively, direct manipulation of model features represents a much more efficient way to inject domain knowledge into a model~\cite{simard2017machine} than providing labeled instances. For example, FeatureInsight~\cite{brooks2015featureinsight} supports ``feature ideation'' for users to create dictionary features (semantically related groups of words) for text classification. EluciDebug~\cite{kulesza2015principles} allows users to add, remove and adjust the learned weights of keywords for text classifiers. Several interactive topic modeling systems allow users to select keywords or adjust keyword weights for a topic~\cite{choo2013utopian,smith2018closing}. Although the empirical results on whether feature-level input from end users improves performance per se have been mixed~\cite{kulesza2015principles,ahn2007open,wu2019local,stumpf2009interacting}, the consensus is that it is more efficient (i.e., fewer user actions) to achieve comparable results to instance labeling, and that it could produce models better aligned with an individual's needs or knowledge about a domain.
It is worth pointing out that all of the above-mentioned iML and AL systems supporting feature-level input are for text-based models~\cite{settles2011closing,raghavan2006active,stumpf2007toward,smithno,kulesza2015principles}. We suspect that, besides algorithmic interest, the reason is that it is much easier for lay people to consider keywords as top features for text classifiers compared to other types of data. For example, one may come up with keywords that are likely indicators for the topic of ``baseball'', but it is challenging to rank the importance of attributes in a tabular database of job candidates. One possible solution is to allow people to access the model's own reasoning with features and then make incremental adjustments. This idea underlies recent research into visual analytical tools that support debugging or feature engineering work~\cite{krause2016interacting,hohman2019gamut,wexler2019if}. However, their targeted users are data scientists who would then go back to the model development mode. For non-ML-experts, they would need more accessible information to understand the inner working of the model and provide direct input that does not require heavy work of programming or modeling. Therefore, we propose to leverage recent development in the field of explainable AI as interfaces for non-ML experts to understand and teach learning models.
\subsection{Explainable AI}
The field of explainable AI (XAI)~\cite{gunning2017explainable,guidotti2019survey}, often referred to interchangeably as interpretable machine learning~\cite{carvalho2019machine,doshi2017towards}, started as a sub-field of AI that aims to produce methods and techniques that make AI's decisions understandable by people. The field has surged in recent years as complex and opaque AI technologies such as deep neural networks are now widely used. Explanations of AI are sought for various reasons, such as by regulators to assess model compliance, or by end users to support their decision-making~\cite{zhang2020effect,liao2020questioning,tomsett2018interpretable}. Most relevant to our work, explanations allow model developers to detect a model's faulty behaviors and evaluate its capability, fairness, and safety~\cite{doshi2017towards,dodge2019explaining}. Explanations are therefore increasingly incorporated in ML development tools supporting debugging tasks such as performance analysis~\cite{ren2016squares}, interactive debugging~\cite{kulesza2015principles}, feature engineering~\cite{krause2014infuse}, instance inspection and model comparison~\cite{hohman2019gamut,zhang2018manifold}.
There have been many recent efforts to categorize the ever-growing collection of explanation techniques~\cite{guidotti2019survey,mohseni2018multidisciplinary,anisi03,lim2019these,wang2019designing,lipton2018mythos,arya2019one}. We focus on those explaining ML classifiers (as opposed to other types of AI systems such as planning~\cite{chakraborti2020emerging} or multi-agent systems~\cite{rosenfeld2019explainability}). Guidotti et al. summarized the many forms of explanations as solving three categories of problems: \textit{model explanation} (on the whole logic of the classifier), \textit{outcome explanation} (on the reasons of a decision on a given instance) and \textit{model inspection} (on how the model behaves if changing the input). The first two categories, model and outcome explanations, are also referred to as \textit{global} and \textit{local} explanations~\cite{lipton2018mythos,mohseni2018multidisciplinary,arya2019one}. The HCI community has defined explanation taxonomies based on different types of user needs, often referred to as intelligibility types~\cite{lim2009and,lim2019these,liao2020questioning}. Based on Lim and Dey's foundational work~\cite{lim2009and,lim2010toolkit}, intelligibility types can be represented by prototypical user questions to understand the AI, including inputs, outputs, certainty, why, why not, how to, what if and when. A recent work by Liao et al.~\cite{liao2020questioning} attempted to bridge the two streams of work by mapping the user-centered intelligibility types to existing XAI techniques. For example, global explanations answer the question ``\textit{how} does the system make predictions'', local explanations respond to ``\textit{why} is this instance given this prediction'', and model inspection techniques typically address \textit{why not}, \textit{what if} and \textit{how to}.
Our work leverages local explanations to accompany AL algorithms' instance queries. Compared to other approaches, including example-based and rule-based explanations~\cite{guidotti2019survey}, \textit{feature importance}~\cite{ribeiro2016should,guidotti2019survey} is the most popular form of local explanation. It justifies the model's decision for an instance by the instance's important features indicative of the decision (e.g., ``because the patient shows symptoms of sneezing, the model diagnosed him as having a cold''). Local feature importance can be generated by different XAI algorithms depending on the underlying model and data. Some algorithms are model-agnostic~\cite{ribeiro2016should,lundberg2017unified}, making them highly desirable and popular techniques. Local importance can be presented to users in different formats~\cite{lipton2018mythos}, such as described in text~\cite{dodge2019explaining}, or by visualizing the importance values~\cite{poursabzi2018manipulating,cheng2019explaining}.
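As a minimal, self-contained stand-in for model-agnostic methods such as LIME or SHAP, the sketch below estimates local feature importance by occlusion: it measures how much the predicted probability of the predicted class drops when each feature of the instance is replaced by its training mean. This is an illustrative simplification, not the algorithm those methods actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (2 * X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def local_importance(model, X_train, x):
    """Occlusion-style local importance: the drop in the predicted
    probability of the predicted class when each feature is replaced
    by its training mean."""
    cls = int(model.predict([x])[0])
    base = model.predict_proba([x])[0, cls]
    scores = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = X_train[:, i].mean()
        scores.append(base - model.predict_proba([x_pert])[0, cls])
    return np.array(scores)

x = np.array([1.5, -1.0, 0.1])
scores = local_importance(model, X, x)
# Feature 0, the strongest true signal, should dominate for this instance.
```

For an XAL interface, such per-feature scores would be visualized alongside the model's prediction for the queried instance.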
While recent studies of XAI often found explanations to improve users' understanding of AI systems~\cite{cheng2019explaining,kocielnik2019will,buccinca2020proxy}, empirical results regarding its impact on users' subjective experience such as trust~\cite{cheng2019explaining,poursabzi2018manipulating,zhang2020effect} and acceptance~\cite{kocielnik2019will} have been mixed. One issue, as some argued~\cite{zhang2020effect}, is that explanation is not meant to enhance trust or satisfaction, but rather to appropriately \textit{calibrate} users' perceptions to the model quality. If the model is under-performing, explanations should work towards exposing the algorithmic limitations; if a model is on par with the expected capability, explanation should help foster confidence and trust. Calibrating trust is especially important for AL settings: if explanations could help the annotator appropriately increase their trust and confidence as the model learns, it could help improve their satisfaction with the teaching outcome and confidently apply stopping criteria (knowing when to stop). Meanwhile, how people react to flawed explanations generated by early-stage, naive models, and changing explanations as the model learns, remain open questions~\cite{smithno}. We will empirically answer these questions by comparing annotation experiences in two snapshots of an AL process: an \textit{early stage} annotation task with the initial model, and a \textit{late stage} when the model is close to the stopping criteria.
On the flip side, explanations present additional information and the risk of overloading users~\cite{narayanan2018humans}, although some showed that their benefit justifies the additional effort~\cite{kulesza2015principles}. Explanations were also found to incur over-reliance~\cite{stumpf2016explanations,poursabzi2018manipulating} which makes people less inclined or able to scrutinize AI system's errors. It is possible that explanations could bias, or \textit{anchor} annotators' judgment to the model's. While anchoring judgment is not necessarily counter-productive if the model predictions are competent, we recognize that the most popular sampling strategy of AL--uncertainty sampling--focuses on instances the model is most uncertain of. To test this, it is necessary to decouple the potential anchoring effect of the model's predictions~\cite{rosenthal2010towards}, and the model's explanations, as an XAL setting entails both. Therefore, we compare the model training results with XAL to two baseline conditions: traditional AL and \textit{coactive learning} (CL)~\cite{shivaswamy2015coactive}. CL is a sub-paradigm of AL, in which the model presents its predictions and the annotator is only required to make corrections if necessary. CL is favored for reducing annotator workload, especially when their availability is limited.
Last but not least, recent XAI work emphasizes that there is no ``one-fits-all'' solution and different user groups may react to AI explanations differently~\cite{arya2019one,liao2020questioning,dodge2019explaining}. Identifying individual factors that mediate the effect of AI explanation could help develop more robust insights to guide the design of explanations.
Our study provides an opportunity to identify key individual factors that mediate the preferences for model explanations in the machine teaching context. Specifically, we study the effect of \textit{Task (domain) Knowledge} and \textit{AI Experience} to test the possibilities of XAL for reducing knowledge barriers to train ML models. We also explore the effect of \textit{Need for cognition}~\cite{cacioppo1982need}, defined as an individual's tendency to engage in thinking or complex cognitive activities. Need for cognition has been extensively researched in social and cognitive psychology as a mediating factor for how one responds to cognitively demanding tasks (e.g.~\cite{cacioppo1983effects,haugtvedt1992personality}). Given that explanations present additional information, we hypothesize that individuals with different levels of Need for Cognition could have different responses.
\section{Explainable Active Learning and Research Questions}
We propose \textit{explainable active learning (XAL)} by combining active learning and \textit{local explanations}, which fits naturally with the AL workflow without requiring additional user input: instead of opaquely requesting instance labels, the model presents its own decision accompanied by its explanation for the decision, answering the question ``\textit{why} am I giving this instance this prediction''. It then requests the annotator to confirm or reject. For the user study, we make the design choice of explaining AL with \textit{local feature importance} instead of other forms of local explanations (e.g., example or rule based explanations~\cite{guidotti2019survey}), given the former approach's popularity and intuitiveness--it reflects how the model weighs different features and gives people direct access to the inner working of the model. We also make the design choice of presenting local feature importance with a visualization (Figure~\ref{fig:interface_2}) instead of in texts, in the hope of improving reading efficiency.
Our idea differentiates from prior work on feature-querying AL and iML in two aspects. First, we present the model's own reasoning for a particular instance to query user feedback instead of requesting global feature weights from people~\cite{settles2011closing,raghavan2006active,kulesza2015principles,brooks2015featureinsight}. Recent work demonstrated that, while ML experts may be able to reason with model features globally, lay people prefer local explanations grounded in specific cases~\cite{arya2019one,kulesza2013too,hohman2019gamut,kulesza2011oriented}. Second, we look beyond text-based models as in existing work as discussed above, and consider a generalizable form of explanation--visualizing local feature importance. While we study XAL in a setting of tabular data, this explanation format can be applied to any type of data with model-agnostic explanation techniques (e.g.~\cite{ribeiro2016should}).
At a high level, we posit that this paradigm of presenting explanations and requesting feedback better mimics how humans teach and learn, allowing transparency for the annotation experience. Explanations can also potentially improve the teaching quality in two ways. First, it is possible that explanations make it easier for one to reject a faulty model decision and thus provide better labels, especially for challenging situations where the annotator lacks contextual information or complete domain knowledge~\cite{rosenthal2010towards}. Second, explanations could enable new forms of teaching feedback based on the explanation. These benefits were discussed in a very recent paper by Teso and Kersting~\cite{teso2018should}, which explored soliciting corrections for the model's explanation, specifically feedback that a mentioned feature should be considered irrelevant instead. This correction feedback is then used to generate counter examples as additional training data, which are identical to the instance except for the mentioned feature. While this work is closest to our idea, empirical studies were absent to understand how adding explanations impacts AL interactions.
We believe a user study is necessary for two reasons. First, accumulating evidence, as reviewed in the previous section, suggests that explanations have both benefits and drawbacks relevant to an AL setting. They merit a user study to test its feasibility. Second, a design principle of iML recommends that algorithmic advancement should be driven by people's natural tendency to interact with models~\cite{amershi2014power,cakmak2012designing,stumpf2009interacting}. Instead of fixing on a type of input as in Teso and Kersting~\cite{teso2018should}, an \textit{interaction elicitation study} could map out desired interactions for people to teach models based on its explanations and then inform algorithms that are able to take advantage of these interactions. A notable work by Stumpf et al.~\cite{stumpf2009interacting} conducted an elicitation study for interactively improving text-based models, and developed new training algorithms for Naïve Bayes models. Our study explores how people naturally want to teach a model with a local-feature-importance visualization, a popular and generalizable form of explanation. Based on the above discussions, this paper sets out to answer the following research questions and test the following hypotheses:
\begin{itemize}
\item \textbf{RQ1}: How do local explanations impact the annotation and training outcomes of AL?
\item \textbf{RQ2}: How do local explanations impact annotator experiences?
\begin{itemize}
\item \textbf{H1}: Explanations support \textit{trust calibration}, i.e., there is an interactive effect between the presence of explanations and the model learning stage (early vs. late stage model) on annotator's trust in deploying the model.
\item \textbf{H2}: Explanations improve \textit{annotator satisfaction}.
\item \textbf{H3}: Explanations increase perceived \textit{cognitive workload}.
\end{itemize}
\item \textbf{RQ3}: How do individual factors, specifically \textit{task knowledge}, \textit{AI experience}, and \textit{Need for Cognition}, impact annotation and annotator experiences with XAL?
\begin{itemize}
\item \textbf{H4}: Annotators with lower task knowledge benefit more from XAL, i.e., there is an interactive effect between the presence of explanations and annotators' task knowledge on some of the annotation outcome and experience measures (trust, satisfaction or cognitive workload).
\item \textbf{H5}: Annotators inexperienced with AI benefit more from XAL, i.e., there is an interactive effect between the presence of explanations and annotators' experience with AI on some of the annotation outcome and experience measures (trust, satisfaction or cognitive workload).
\item \textbf{H6}: Annotators with lower Need for Cognition have a less positive experience with XAL, i.e., there is an interactive effect between the presence of explanations and annotators' Need for Cognition on some of the annotation outcome and experience measures (trust, satisfaction or cognitive workload).
\end{itemize}
\item \textbf{RQ4}: What kind of feedback do annotators naturally want to provide upon seeing local explanations?
\end{itemize}
\section{XAL Setup}
\subsection{Prediction task}
We aimed to design a prediction task that would not require deep domain expertise, where common-sense knowledge could be effective for teaching the model. The task should also involve decisions made by weighing different features, so explanations could potentially make a difference (i.e., not simple perception-based judgment). Lastly, the instances should be easy to comprehend, with a reasonable number of features. With these criteria, we chose the Adult Income dataset~\cite{adultIncome} for a task of predicting whether the annual income of an individual is more or less than \$80,000\footnote{The original dataset reported on the income level of \$50,000; we adjusted for inflation (1994--2019)~\cite{inflation}.}. The dataset is based on a Census survey database. Each row in the dataset characterizes a person with a mix of numerical and categorical variables such as age, gender, education, and occupation, and a binary annual income variable, which was used as our ground truth.
In the experiment, we presented participants with a scenario of building an ML classification system for a customer database. Based on a customer's background information, the system predicts the customer's income level for a targeted service. The task for the participants was to judge the income level of instances that the system selected to learn from, as presented in Figure~\ref{fig:interface_1}. This is a realistic AL task where annotators might not provide error-free labels, and explanations could potentially help reveal faulty model beliefs. To improve participants' knowledge about the domain, we provided a practice task before the trials, which will be discussed in Section~\ref{domain}.
\begin{figure*}[ht]
\centering
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1.1\linewidth]{graphs/interface_1.png}
\caption{Customer profile presented in all conditions for annotation}
\label{fig:interface_1}
\label{fig:sub1}
\end{subfigure} \hspace{5mm}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=.8\linewidth]{graphs/interface_2.png}
\caption{Explanation and questions presented in the XAL condition}
\label{fig:interface_2}
\label{fig:sub2}
\end{subfigure}
\caption{Experiment interface }
\label{fig:interface}
\end{figure*}
\subsection{Active learning setup}
AL requires the model to be retrained after new labels are fetched, so the model and explanations used for the experiment should be computationally inexpensive to avoid latency. We therefore chose logistic regression (with L2 regularization), which has been used extensively in the AL literature~\cite{settles2009active, yang2018benchmark}. Logistic regression is considered directly interpretable, i.e., its local feature importance can be directly generated, as described in Section~\ref{explanation}. We note that this form of explanation could also be generated by post-hoc techniques for any kind of ML model~\cite{ribeiro2016should}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{graphs/accuracy2.pdf}
\caption{Accuracy as a function of number of queries in the simulation experiment}
\label{fig:accuracy}
\end{figure}
Building an AL pipeline involves the design choices of sampling strategy, batch size, the number of initial labeled instances, and test data. For this study, we used entropy-based uncertainty sampling to select the next instance to query, as it is the most commonly used sampling strategy~\cite{yang2018benchmark} and also computationally inexpensive. We used a batch size of 1~\cite{batchSize}, meaning the model was retrained after each new queried label. We initialized the AL pipeline with two labeled instances. To avoid tying the experiment results to a particular sequence of data, we allocated different sets of initial instances to different participants, by randomly drawing from a pool of more than 100 pairs of labeled instances. The pool was created by randomly picking pairs of instances with ground-truth labels, and keeping a pair in the pool only if it produced a model with initial accuracy between 50\% and 55\%. This was to ensure that the initial model would perform worse than humans and did not vary significantly across participants. 25\% of all data were reserved as test data for evaluating the model learning outcomes.
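For concreteness, entropy-based uncertainty sampling can be sketched in a few lines of pure Python (an illustrative sketch, not our pipeline code; in practice the class probabilities come from the retrained logistic regression):

```python
import math

def entropy_query(probas):
    """Return the index of the pool instance with the highest predictive
    entropy, i.e., the instance the current model is most uncertain about.
    `probas` holds one list of class probabilities per pool instance."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    return max(range(len(probas)), key=lambda i: entropy(probas[i]))

# A 50/50 prediction is maximally uncertain, so instance 1 is queried:
print(entropy_query([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]]))  # prints 1
```

With a batch size of 1, the selected instance is labeled, the model is retrained, and the probabilities are recomputed before the next query.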
As discussed, we are interested in the effect of explanations at different stages of AL. We took two snapshots of an AL process--an early-stage model that had just started with the initial labeled instances, and a late-stage model close to the stopping criteria. We define the stopping criteria as a plateau of accuracy improvement on the test data as more labeled data arrive. To determine where to take the late-stage snapshot, we ran a simulation in which AL-queried instances were given the labels in the ground truth. The simulation was run with 10 sets of initial labels, and the mean accuracy is shown in Figure~\ref{fig:accuracy}. Based on the pattern, we chose the late-stage model to be where 200 queries had been executed. To create the late-stage experience without having participants answer 200 queries, we took a participant's allocated initial labeled instances and simulated an AL process with 200 queries answered by the ground-truth labels. The resulting model was then used in the late-stage task for the same participant. This also ensured that the two tasks a participant experienced were independent of each other, i.e., a participant's performance in the early-stage task did not influence the late-stage task. In each task, participants were queried for 20 instances. Based on the simulation result in Figure~\ref{fig:accuracy}, we expected an improvement of 10\%-20\% accuracy with 20 queries in the early stage, and a much smaller increase in the late stage.
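The plateau-based stopping criterion can be operationalized as, for example, checking whether test accuracy has stopped improving over a sliding window (a sketch; the window size and tolerance here are illustrative, not the values used in our simulation):

```python
def reached_plateau(accuracies, window=5, tol=0.005):
    """Return True once test accuracy has improved by less than `tol`
    over the last `window` queries (one accuracy entry per query)."""
    if len(accuracies) < window + 1:
        return False
    return accuracies[-1] - accuracies[-1 - window] < tol

# Accuracy still climbing -> keep querying; flat tail -> stop:
print(reached_plateau([0.50, 0.60, 0.68, 0.73, 0.76, 0.78]))          # prints False
print(reached_plateau([0.800, 0.801, 0.801, 0.802, 0.802, 0.802]))    # prints True
```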
\subsubsection{Explanation method}\label{explanation}
Figure~\ref{fig:interface_2} shows a screenshot of the local explanation presented in the XAL condition, for the instance shown in Figure~\ref{fig:sub1}. The explanation was generated based on the coefficients of the logistic regression, which determine the impact of each feature on the model's prediction. To obtain the \textit{feature importance} for a given instance, we computed the product of each of the instance's feature values with the corresponding coefficients in the model. The higher the magnitude of a feature's importance, the more impact it had on the model's prediction for this instance. A negative value implied that the feature value was tilting the model's prediction towards less than \$80,000 and vice versa. We sorted all features by their absolute importance and picked the top 5 features responsible for the model's prediction.
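A minimal sketch of this computation (the feature names and values below are hypothetical, for illustration only; in practice categorical variables are encoded into numeric features first):

```python
def local_importance(coefs, intercept, x, names, top_k=5):
    """Per-instance importance for logistic regression: the contribution of
    feature j is coef_j * x_j. Positive values push the prediction toward
    the positive class (>$80K), negative values toward the negative class."""
    contribs = [(n, c * v) for n, c, v in zip(names, coefs, x)]
    contribs.sort(key=lambda t: abs(t[1]), reverse=True)
    # The intercept ("base chance") plus all contributions gives the logit.
    logit = intercept + sum(c * v for c, v in zip(coefs, x))
    return contribs[:top_k], (1 if logit > 0 else 0)

top, pred = local_importance(
    coefs=[2.0, -1.0, 0.1], intercept=-4.0,
    x=[1.0, 3.0, 10.0], names=["education", "age", "hours"], top_k=2)
print(top)   # prints [('age', -3.0), ('education', 2.0)]
print(pred)  # prints 0 (logit = -4 + 2 - 3 + 1 = -4)
```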
The selected features were shown to the participants in the form of a horizontal bar chart, as in Figure~\ref{fig:interface_2}. The importance of a feature was encoded by the length of the bar, where a longer bar meant greater impact. The sign of the feature importance was encoded with color (green-positive, red-negative), and the bars were sorted to have the positive features at the top of the chart. Apart from the top contributing features, we also displayed the intercept of the logistic regression model as an orange bar at the bottom. Because it was a relatively skewed classification task (the majority of the population has an annual income of less than \$80,000), the negative base chance (intercept) needed to be understood for the model's decision logic. For example, in Figure~\ref{fig:interface}, Occupation is the most important feature. Marital status and base chance are pointing towards less than \$80,000. While most features are tilting positively, the model prediction for this instance is still less than \$80,000 because of the large negative value of base chance.
\section{Experimental design}
We adopted a 3 $\times$ 2 experimental design, with the learning condition (AL, CL, XAL) as a between-subject treatment, and the learning stage (early vs. late) as a within-subject treatment. That is, participants were randomly assigned to one of the conditions to complete two tasks, with queries from an early and a late stage AL model, respectively. The order of the early and late stage tasks was randomized and counterbalanced across participants to avoid order effects and biases from knowing which was the ``improved'' model.
We posted the experiment as a human intelligence task (HIT) on Amazon Mechanical Turk. We set the requirement to have at least 98\% prior approval rate and each worker could participate
only once. Upon accepting the HIT, a participant was assigned to one of the three conditions. The annotation task was given with a scenario of building a classification system for a customer database to provide targeted service for high- versus low-income customers, with an ML model that queries and learns in real time. Given that the order of the learning stage was randomized, we instructed the participants that they would be teaching two configurations of the system with different initial performance and learning capabilities.
With each configuration, a participant was queried for 20 instances, in the format shown in Figure~\ref{fig:interface_1}. A minimum of 10 seconds was enforced before they could proceed to the next query. In the AL condition, participants were presented with a customer's profile and asked to judge whether his or her annual income was above 80K. In the CL condition, participants were presented with the profile and the model's prediction. In the XAL condition, the model's prediction was accompanied by an explanation revealing the model's ``rationale for making the prediction'' (the top part of Figure~\ref{fig:interface_2}). In both the CL and XAL conditions, participants were asked to judge whether the model prediction was correct, and optionally answer an open-form question to explain that judgment (the middle part of Figure~\ref{fig:interface_2}). In the XAL condition, participants were further asked to rate the model explanation and optionally explain their ratings with an open-form question (the bottom part of Figure~\ref{fig:interface_2}). After a participant submitted a query, the model was retrained, and performance metrics of accuracy and F1 score (on the 25\% reserved test data) were calculated and recorded, together with the participant's input and the time stamp.
After every 10 trials, the participants were told the percentage of their answers matching similar cases in the Census survey data, as a measure to help keep the participants engaged. An attention-check question was prompted in each learning stage task, showing the customer's profile in the prior query together with two other randomly selected profiles as distractors. The participants were asked to select the one they had just seen. Only one participant failed both attention-check questions and was excluded from the analysis.
After completing 20 queries for each learning stage task, the participants were asked to fill out a survey regarding their subjective perception of the ML model they just finished teaching and the annotation task. The details of the survey will be discussed in Section~\ref{survey}. At the end of the HIT we also collected participants' demographic information and factors of individual differences, to be discussed in Section~\ref{individual}.
\subsubsection{Domain knowledge training} \label{domain}
We acknowledge that MTurk workers may not be experts of an income prediction task, even though it is a common topic. Our study is close to the \textit{human-grounded evaluation} proposed in~\cite{doshi2017towards} as an evaluation approach for explainability, in which lay people are used as proxies to test general notions or patterns of the target application (i.e., by comparing outcomes between the baseline and the target treatment).
To improve the external validity, we took two measures to help participants gain domain knowledge. First, throughout the study, we provided a link to a supporting document with statistics of personal income based on the Census survey. Specifically, chance numbers--the chance that people with a given feature value have income above 80K--were given for all feature values the model used (by quantile for numerical features). Second, participants were given 20 practice trials of the income prediction task and encouraged to utilize the supporting material. The ground truth--the income level reported in the Census survey--was revealed after they completed each practice trial. Participants were told that the model would be evaluated based on data in the Census survey, so they should strive to bring the knowledge from the supporting material and the practice trials into the annotation task. They were also incentivized with a \$2 bonus if the consistency between their predictions and similar cases reported in the Census survey was among the top 10\% of all participants.
After the practice trials, the agreement of the participants' predictions with the ground truth in the Census survey for the early-stage trials reached a mean of 0.65 (SE=0.08). We note that the instances queried by uncertainty-based sampling in AL are challenging by nature. The agreement with the ground truth by one of the authors, who is highly familiar with the data and the task, was 0.75.
\subsubsection{Survey measuring subjective experience}\label{survey}
To understand how explanation impacts annotators' subjective experiences (\textbf{RQ2}), we designed a survey for the participants to fill after completing each learning stage task. We asked the participants to self report the following (all based on a 5-point Likert Scale):
\textit{Trust} in deploying the model: We asked participants to assess how much they could trust the model they had just finished teaching to be deployed for the target task (customer classification). Trust in technologies is frequently measured based on McKnight's framework on trust~\cite{mcknight1998initial,mcknight2002developing}, which considers the dimensions of \textit{capability}, \textit{benevolence}, and \textit{integrity} for trust belief, and multiple action-based items (e.g., ``I will be able to rely on the system for the target task'') for trust intention. We also consulted a recent paper on trust scales for automation~\cite{korber2018theoretical} and added the dimension of \textit{predictability} for trust belief. We picked and adapted one item for each of the four trust belief dimensions (e.g., for benevolence, ``Using predictions made by the system will harm customers' interest''), and four items for trust intention, arriving at an 8-item scale to measure trust (3 items were reverse-coded). The Cronbach's alpha is 0.89.
\textit{Satisfaction} with the annotation experience, by five items adapted from the After-Scenario Questionnaire~\cite{lewis1995computer} and the User Engagement Scale~\cite{o2018practical} (e.g., ``I am satisfied with the ease of completing the task'', ``It was an engaging experience working on the task''). The Cronbach's alpha is 0.91.
\textit{Cognitive workload} of the annotation experience, by selecting two applicable items from the NASA-TLX task load index (e.g., ``How mentally demanding was the task: 1=very low; 5=very high''). The Cronbach's alpha is 0.86.
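For reference, Cronbach's alpha for such a multi-item scale follows the standard formula $\alpha = \frac{k}{k-1}\big(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_{total}}\big)$; a sketch using population variances and toy data (not our survey responses):

```python
def cronbach_alpha(items):
    """`items` is a list of scale items, each a list of scores aligned
    across respondents. Returns k/(k-1) * (1 - sum(item vars)/var(totals))."""
    k, n = len(items), len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Each respondent's total score across all items:
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly correlated items give maximal internal consistency:
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # prints 1.0
```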
\subsubsection{Individual differences}\label{individual}
\textbf{RQ3} asks about the mediating effect of individual differences, specifically the following:
\textit{Task knowledge} to perform the income prediction judgment correctly. We used one's performance in the practice trials as a proxy, calculated by the percentage of trials judged correctly based on the ground truth of income level in the Census database.
\textit{AI experience}, for which we asked participants to self-report ``How much do you know about artificial intelligence or machine learning algorithms?'' The original question had four levels of experience. Since few participants reported the higher levels of experience, we combined the answers into a binary variable--without AI experience vs. with AI experience.
\textit{Need for Cognition} measures individual differences in the tendency to engage in thinking and cognitively complex activities. To keep the survey short, we selected two items from the classic Need for Cognition scale developed by Cacioppo and Petty~\cite{cacioppo1982need}. The Cronbach's alpha is 0.88.
\subsubsection{Participants}
37 participants completed the study. One participant failed both attention-check tests and was excluded. The analysis was conducted with 12 participants in each condition. Among them, 27.8\% were female; 19.4\% were under the age of 30, and 13.9\% above the age of 50; 30.6\% reported having no knowledge of AI, 52.8\% little knowledge (``know basic concepts in AI''), and the rest some knowledge (``know or used AI algorithms''). In total, participants spent about 20--40 minutes on the study and were compensated \$4, with a 10\% chance of an additional \$2 bonus, as discussed in Section~\ref{domain}.
\section{Results}
For all analyses, we ran mixed-effects regression models to test the hypotheses and answer the research questions, with participants as random effects,
learning \textit{Stage}, \textit{Condition}, and individual factors (\textit{Task Knowledge}, \textit{AI Experience}, and \textit{Need for Cognition}) as fixed effects. RQ2 and RQ3 are concerned with interactive effects of Stage or individual factors with learning Condition. Therefore, for every dependent variable of interest, we started by including all two-way interactions with Condition in the model, then removed insignificant interaction terms stepwise. A VIF test was run to confirm there was no multicollinearity issue with any of the variables (all lower than 2). In each sub-section, we report statistics based on the final model and summarize the findings at the end.
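In notation, the full starting model for a dependent variable $y_{ij}$ (participant $i$, learning stage $j$) takes roughly the following form (a sketch of the model before stepwise reduction; Condition and the individual factors enter as contrast-coded terms):
\begin{equation*}
y_{ij} = \beta_0 + \beta_1\,\mathit{Stage}_{ij} + \beta_2\,\mathit{Cond}_i + \beta_3\,(\mathit{Cond}_i \times \mathit{Stage}_{ij}) + \boldsymbol{\beta}_4^{\top}\mathbf{z}_i + \boldsymbol{\beta}_5^{\top}(\mathit{Cond}_i \times \mathbf{z}_i) + u_i + \epsilon_{ij},
\end{equation*}
where $\mathbf{z}_i$ collects Task Knowledge, AI Experience, and Need for Cognition, and $u_i$ is the random intercept for participant $i$.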
\subsection{Annotation and learning outcomes (\textbf{RQ1}, \textbf{RQ3})}
First, we examined the model learning outcomes in different conditions. In Table~\ref{tab:performance} (the third to sixth columns), we report the statistics of performance metrics--\textit{Accuracy} and \textit{F1} scores-- after the 20 queries in each condition and learning stage. We also report the performance improvement, as compared to the initial model performance before the 20 queries.
For each of the performance and improvement metrics, we ran a mixed-effects regression model as described earlier. In all the models, we found only a significant main effect of Stage for all performance and improvement metrics ($p<0.001$). The results indicate that participants were able to improve the early-stage model significantly more than the late-stage model, but the improvement did not differ across learning conditions.
\begin{table}
\caption{Results of model performance and labels }\label{tab:performance}
\begin{tabular}{p{1cm}p{1.2cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}}
\toprule
Stage&Condition&Acc.&Acc. improve&F1&F1 improve&\%Agree&Human Acc.\\
\midrule
&AL & 67.0\% & 13.7\% & 0.490 & 0.104 & 55.0\% & 66.7\%\\
Early &CL & 64.2\% & 11.7\% & 0.484 & 0.105 & 58.3\% & 62.1\%\\
&XAL & 64.0\% & 11.8\% & 0.475 & 0.093 & 62.9\% & 63.3\%\\
\midrule
&AL & 80.4\% & 0.1\% & 0.589 & 0.005 & 47.9\% & 54.2\%\\
Late &CL & 80.8\% & 0.2\% & 0.587 & 0.007 & 55.8\% & 58.8\%\\
&XAL & 80.3\% & -0.2\% & 0.585 & -0.001 & 60.0\% & 55.0\%\\
\bottomrule
\end{tabular}
\end{table}
In addition to the performance metrics, we looked at \textit{Human accuracy}, defined as the percentage of labels given by a participant that were consistent with the ground truth. Interestingly, we found a significant interactive effect between Condition and participants' Task Knowledge (calculated as one's accuracy score in the training trials): taking the CL condition as the reference level, XAL had a positive interactive effect with Task Knowledge ($\beta=0.67, SE=0.29, p=0.03$). In Figure~\ref{fig:human_acc}, we plot the pattern of the interactive effect by first performing a median split on Task Knowledge scores to categorize participants into \textit{high performers} and \textit{low performers}. The figure shows that, compared to the CL condition, adding explanations had opposite effects for those with high versus low task knowledge. While explanations helped those with high task knowledge to provide better labels, they impaired the judgment of those with low task knowledge. There was also a main negative effect of the late Stage ($SE=0.21, t=3.87, p<0.001$), confirming that queried instances in the later stage were more challenging for participants to judge correctly.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/accuracy.png}
\caption{Human accuracy across conditions and task knowledge levels. All error bars represent +/- one standard error.}
\label{fig:human_acc}
\end{figure}
We conducted the same analysis on the \textit{Agreement} between each participant's labels and the model predictions and found a similar trend: using the CL condition as the reference level, there was a marginally significant interactive effect between XAL and Task Knowledge ($\beta=-0.75, SE=0.45, p=0.10$)\footnote{We consider $p<0.05$ as significant, and $0.05 \leq p<0.10$ as marginally significant, following statistical convention~\cite{cramer2004sage}}. The result suggests that explanations might have an ``anchoring effect'' on those with low task knowledge, making them more inclined to accept the model's predictions. Indeed, we zoomed in on trials where participants agreed with the model predictions, and looked at the percentage of \textit{wrong agreement}, where the judgment was inconsistent with the ground truth. We found a significant interaction between XAL and Task Knowledge, using CL as the reference level ($\beta=-0.89, SE=0.45, p=0.05$). We plot this interactive effect in Figure~\ref{fig:wrong_agree}: adding explanations had opposite effects for those with high versus low task knowledge, making the latter more inclined to mistakenly agree with the model's predictions. We did not find such an effect for \textit{incorrect disagreement} when looking at trials where participants disagreed with the model's predictions.
Taken together, to our surprise, we found results opposite to \textbf{H4}: local explanations further polarized the annotation outcomes of those with high versus low task knowledge, compared to only showing model predictions without explanations. While explanations may help those with high task knowledge to make better judgments, they have a \textbf{negative anchoring effect on those with low task knowledge by making them more inclined to agree with the model even when it is erroneous}. This could be a potential problem for XAL, even though we did not find this anchoring effect to have a statistically significant negative impact on the model's learning outcome. We also showed that with uncertainty sampling in AL, \textbf{as the model matured, it became more challenging for annotators to make correct judgments and improve the model performance}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/wrong_agree.png}
\caption{Percentage of wrong agreement among all agreeing judgments with model predictions across Conditions and Task Knowledge levels. All error bars represent +/- one standard error.}
\label{fig:wrong_agree}
\end{figure}
\subsection{Annotator experience (\textbf{RQ2}, \textbf{RQ3})}
We then investigated how participants' self-reported experience differed across conditions by analyzing the following survey scales (measurements discussed in Section~\ref{survey}): trust in deploying the model, interaction satisfaction, and perceived cognitive workload. Table~\ref{tab:survey} reports the mean ratings in different conditions and learning stage tasks. For each self-reported scale, we ran a mixed-effects regression model as discussed in the beginning of this section.
\begin{table}
\caption{Survey results }\label{tab:survey}
\begin{tabular}{p{0.8cm}p{1.2cm}p{1.5cm}p{1.5cm}p{1.5cm}}
\toprule
Stage&Condition&Trust&Satisfaction&Workload\\
\midrule
&AL &3.14 &4.23 &2.08\\
Early&CL &3.83 &3.69 &2.71 \\
&XAL &2.42 &3.31 &3.00 \\
\midrule
&AL &3.00 &4.18 &2.25\\
Late&CL &2.71 &3.63 &2.67\\
&XAL &2.99 &3.35&3.14 \\
\bottomrule
\end{tabular}
\end{table}
First, for trust in deploying the model, using AL as the reference level, we found a significant positive interaction between XAL Condition and Stage ($\beta=0.70, SE=0.31,p=0.03$). As shown in Table~\ref{tab:survey} and Figure~\ref{fig:trust_stage}, compared to the other two conditions, participants in the XAL Condition had significantly lower trust in deploying the early stage model, but enhanced their trust in the later stage model. The results confirmed \textbf{H1} that \textbf{explanations help calibrate annotators' trust} in the model at different stages of the training process, while showing model predictions alone (CL) was not able to have that effect.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/trust_stage.png}
\caption{Trust in deploying the model across Conditions and Stages. All error bars represent +/- one standard error.}
\label{fig:trust_stage}
\end{figure}
We also found a two-way interaction between XAL Condition and participants' AI Experience (with/without experience) on trust in deploying the model ($\beta=1.43, SE=0.72,p=0.05$) (AL as the reference level). Figure~\ref{fig:trust_AI} plots the effect: people without AI experience had exceptionally high ``blind'' trust, with high variance (error bar), in deploying the model in the AL condition. With XAL they were able to calibrate their trust to an appropriate level. The results highlight the \textbf{challenge for annotators to assess the trustworthiness of the model to be deployed, especially for those inexperienced with AI. Providing explanations could effectively calibrate their trust}, supporting \textbf{H5}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/trust_AI.png}
\caption{Trust in deploying the model across conditions and experience with AI. All error bars represent +/- one standard error.}
\label{fig:trust_AI}
\end{figure}
For interaction satisfaction, the descriptive results in Table~\ref{tab:survey} suggest a decreasing trend of satisfaction in the XAL condition compared to baseline AL. By running the regression model we found a significant two-way interaction between XAL Condition and Need for Cognition ($\beta=0.54, SE=0.26, p=0.05$) (AL as reference level). Figure~\ref{fig:satisfaction_nc} plots the interactive effect, with a median split on Need for Cognition scores. It demonstrates that \textbf{explanations negatively impacted satisfaction, but only for those with low Need for Cognition}, supporting \textbf{H6} and rejecting \textbf{H2}. We also found a positive main effect of Task Knowledge ($SE=1.31,t=2.76,p=0.01$), indicating that people who were good at the annotation task reported higher satisfaction.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/satisfaction_nc.png}
\caption{Satisfaction across conditions and Need for Cognition. All error bars represent +/- one standard error.}
\label{fig:satisfaction_nc}
\end{figure}
For self-reported cognitive workload, the descriptive results in Table~\ref{tab:survey} suggest an increasing trend in the XAL condition compared to baseline AL. The regression model revealed an interactive effect between the XAL condition and AI Experience ($\beta=1.30, SE=0.59,p=0.04$). As plotted in Figure~\ref{fig:workload_AI}, the \textbf{XAL condition presented higher cognitive workload compared to baseline AL, but only for those with AI experience}. This partially supports \textbf{H3}, and potentially suggests that those with AI experience were able to more carefully examine the explanations.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/workload_AI.png}
\caption{Cognitive workload across conditions and experience with AI. All error bars represent +/- one standard error.}
\label{fig:workload_AI}
\end{figure}
We also found an interactive effect between the CL condition and Need for Cognition on cognitive workload ($\beta=0.53, SE=0.19,p=0.01$), as well as a negative main effect of Need for Cognition ($\beta=-0.41, SE=0.14,p=0.01$). Pair-wise comparison suggests that participants with low Need for Cognition reported higher cognitive workload than those with high Need for Cognition, except in the CL condition, where they only had to accept or reject the model's predictions. Together with the results on satisfaction, \textbf{CL may be a preferred choice for those with low Need for Cognition}.
In summary, to answer \textbf{RQ2}, participants' self-reported experience confirmed the benefit of explanations for calibrating trust and judging the maturity of the model. Hence XAL could potentially help annotators form stopping criteria with more confidence. Evidence was found that explanations increased cognitive workload, but only for those experienced with AI. We also identified an unexpected effect of explanations in reducing annotator satisfaction, but only for those self-identified to have low Need for Cognition, suggesting that the additional information and workload of explanation may avert annotators who have little interest or capacity to deliberate on the explanations.
The quantitative results with regard to \textbf{RQ3} confirmed the mediating effect of individual differences in Task Knowledge, AI Experience and Need for Cognition on one's reception to explanations in an AL setting. Specifically, people with better Task Knowledge and thus more capable of detecting AI's faulty reasoning, people inexperienced with AI who might be otherwise clueless about the model training task, and people with high Need for Cognition, may benefit more from XAL compared to traditional AL.
\subsection{Feedback for explanation (\textbf{RQ4})}
In the XAL condition, participants were asked to rate the system's rationale based on the explanations and respond to an optional question to explain their ratings. Analyzing answers to these questions allowed us to understand what kind of feedback participants naturally wanted to give the explanations (\textbf{RQ4}).
First, we inspected whether participants' explanation ratings could provide useful information for the model to learn from. Specifically, if the ratings could distinguish between correct and incorrect model predictions, then they could provide additional signals. Focusing on the XAL condition, we calculated, for each participant, in each learning stage task, the \textit{average explanation ratings} given to instances where the model made correct and incorrect predictions (compared to ground truth). The results are shown in Figure~\ref{fig:model_pred}. By running an ANOVA on the \textit{average explanation ratings}, with \textit{Stage} and \textit{Model Correctness} as within-subject variables, we found the main effect of \textit{Model Correctness} to be significant, $F(1, 11)=14.38$, $p<0.01$. This result indicates that participants were able to distinguish the rationales of correct and incorrect model predictions, in both the early and late stages, confirming the utility of annotators' ratings of the explanations.
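The grouping of \textit{average explanation ratings} by model correctness can be sketched as follows; the data and field names are toy values, not the study's.

```python
# Sketch: mean explanation rating grouped by whether the model's prediction
# matched the ground truth (hypothetical field names, toy data).

def mean_rating_by_correctness(trials):
    buckets = {True: [], False: []}
    for t in trials:
        buckets[t["model_pred"] == t["ground_truth"]].append(t["rating"])
    return {correct: sum(r) / len(r) for correct, r in buckets.items() if r}

trials = [
    {"model_pred": 1, "ground_truth": 1, "rating": 4},
    {"model_pred": 0, "ground_truth": 1, "rating": 2},
    {"model_pred": 1, "ground_truth": 1, "rating": 5},
    {"model_pred": 1, "ground_truth": 0, "rating": 1},
]
print(mean_rating_by_correctness(trials))  # {True: 4.5, False: 1.5}
```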
\begin{figure}
\centering
\includegraphics[scale=0.9]{graphs/model_pred.pdf}
\caption{Explanation ratings for correct and incorrect model predictions}
\label{fig:model_pred}
\end{figure}
One may further ask whether explanation ratings provided additional information beyond the judgement expressed in the labels. For example, among cases where the participants disagreed (agreed) with the model predictions, some of them could be correct (incorrect) predictions, as compared to the ground truth. If explanation ratings could distinguish right and wrong disagreement (agreement), they could serve as additional signals that supplement instance labels. Indeed, as shown in Figure~\ref{fig:disagree}, we found that among the \textit{disagreeing instances}, participants' average explanation rating given to \textit{wrong disagreement} (the model was making the correct prediction and should not have been rejected) was higher than those to the \textit{right disagreement} ($F(1, 11)=3.12$, $p=0.10$), especially in the late stage (interactive effect between \textit{Stage} and \textit{Disagreement Correctness} $F(1, 11)=4.04$, $p=0.07$). We did not find this differentiating effect of explanation for agreeing instances.
The above results are interesting as Teso and Kersting proposed to leverage feedback of ``weak acceptance'' to train AL (``right decision for the wrong reason''~\cite{teso2018should}), in which people agree with the system's prediction but find the explanation to be problematic. Empirically, we found that the tendency for people to give weak acceptance may be less than weak rejection. Future work could explore utilizing weak rejection to improve model learning, for example, with AL algorithms that can consider probabilistic annotations~\cite{song2018active}.
\begin{figure}
\centering
\includegraphics[scale=0.9]{graphs/disagreement.pdf}
\caption{Explanation ratings for disagreeing instances}
\label{fig:disagree}
\end{figure}
\subsubsection{Open form feedback}
\label{feedback}
We conducted a content analysis of participants' open-form feedback, comparing answers in the CL and XAL conditions. In the XAL condition, participants had two fields as shown in Figure~\ref{fig:interface_2} to provide their feedback for the model decision and explanation. We combined them for the content analysis as some participants filled everything in one text field. In the CL condition, only the first field on the model decision was shown. Two authors performed iterative coding on the types of feedback until a set of codes emerged and were agreed upon. In total, we gathered 258 entries of feedback on explanations in the XAL condition (out of 480 trials). 44.96\% of them did not provide enough information to be considered valid feedback (e.g. simply expressing agreement or disagreement with the model).
The most evident pattern contrasting the CL and XAL conditions is a shift from commenting on the top features to determine an income prediction to more diverse types of comments based on the explanation. For example, in the CL condition, the majority of comments were concerned with the job category to determine one's income level, such as ``\textit{Craft repair likely doesn't pay more than 80000}.'' However, for the model, job category is not necessarily the most important feature for individual decisions, suggesting that people's direct feature-level input may not be ideal for the learning model to consume. In contrast, feedback based on model explanations is not only more diverse in type, but also covers a wider range of features. Below we discuss the types of feedback, ranked by frequency of occurrence.
\begin{itemize}
\item \textit{Tuning weights} ($N=81$): The majority of feedback focused on the weight bars in the explanation visualization, expressing disagreement and the adjustments one wanted to make, e.g., ``\textit{marital status should be weighted somewhat less}''. It is noteworthy that while participants commented on between one and four features, the median number of features was only one. Unlike in the CL condition where participants overly focused on the feature of job category, participants in the XAL condition often caught features that did not align with their expectation, e.g. ``\textit{Too much weight put into being married}'', or ``\textit{Age should be more negatively ranked}''. Some participants kept commenting on a feature in consecutive queries to keep tuning its weights, showing that they had a desired range in mind.
\item \textit{Removing, changing direction of, or adding features} ($N=28$): Some comments suggested, qualitatively, to remove, change the impact direction of, or add certain features. This kind of feedback often expressed surprise, especially on sensitive features such as race and gender, e.g. ``\textit{not sure why females would be rated negatively}'', or ``\textit{how is divorce a positive thing}''. Only one participant mentioned \textit{adding} a feature not shown, e.g., ``\textit{should take age into account}''. These patterns echoed observations from prior work that local explanation heightens people's attention towards unexpected, especially sensitive features~\cite{dodge2019explaining}. We note that marking a feature as irrelevant is the feedback that Teso and Kersting's AL algorithm incorporates~\cite{teso2018should}.
\item \textit{Ranking or comparing multiple feature weights} ($N=12$): A small number of comments explicitly addressed the ranking or comparison of multiple features, such as ``\textit{occupation should be ranked more positively than marital status}''.
\item \textit{Reasoning about combination and relations of features} ($N=10$): Consistent with observations in Stumpf et al.'s study~\cite{stumpf2007toward}, some comments suggested that the model consider combined or relational effects of features, e.g., ``\textit{years of education over a certain age is negligible}'', or ``\textit{hours per week not so important in farming, fishing}''. This kind of feedback is rarely considered by current AL or iML systems.
\item \textit{Logic to make decisions based on feature importance} ($N=6$): The feature importance based explanation associates the model's prediction with the combined weights of all features. Some comments expressed confusion about this logic, e.g. ``\textit{literally all of the information points to earning more than 80,000}'' (while the base chance was negative). Such comments highlight the need for a more comprehensible design of explanations, and also indicate people's natural tendency to provide feedback on the model's overall logic.
\item \textit{Changes of explanation} ($N=5$): Interacting with an online AL algorithm, some participants paid attention to the changes of explanations. For example, one participant in the condition seeing the late-stage model first noticed the declining quality of the system's rationale. Another participant commented that the weights in the model explanation ``\textit{jumps back and fourth, for the same job}''. Change of explanation is a unique property of the AL setting. Future work could explore interfaces that explicitly present changes or progress in the explanation and utilize the feedback.
\end{itemize}
To summarize, we identified opportunities to use local explanations to elicit knowledge input beyond instance labels. By simply soliciting a rating for the explanation, additional signals for the instance could be obtained for the learning model. Through qualitative analysis of the open-form feedback, we identified several categories of input that people naturally wanted to give by reacting to the local explanations. Future work could explore algorithms and systems that utilize annotators' input based on local explanations for the model's features, weights, feature ranks, relations, and changes during the learning process.
\section{Discussions and Future Directions}
Our work is motivated by the vision of creating natural experiences to teach learning models by seeing and providing feedback for the model's explanations of selected instances. While the results show promise and illuminate key considerations of user preferences, they are only a starting point. Realizing this vision, by supporting the needs of machine teachers and fully harnessing their feedback on model explanations, requires both algorithmic advancement and refinement of the ways we explain and interact. Below we provide recommendations for future work on XAL as informed by the study.
\subsection{Explanations for machine teaching}
Common goals of AI explanations, as reflected in much of the XAI literature, are to support a complete and sound understanding of the model~\cite{kulesza2015principles,carvalho2019machine}, and to foster trust in the AI~\cite{poursabzi2018manipulating,cheng2019explaining}. These goals may have to be revised in the context of machine teaching. First, explanations should aim to \textit{calibrate} trust, and in general the perception of model capability, by accurately and efficiently communicating the model's current limitations.
Second, while prior work often expects explanations to enhance adherence or persuasiveness~\cite{poursabzi2018manipulating}, we highlight the opposite problem in machine teaching, as an ``anchoring'' effect to a naive model's judgment could be counter-productive and impair the quality of human feedback. Future work should seek alternative designs to mitigate the anchoring effect. For example, it would be interesting to use a partial explanation that does not reveal the model's judgment (e.g., only a subset of top features~\cite{lai2019human}), or have people first make their own judgment before seeing the explanation.
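One way to implement such a partial explanation is to surface only the $k$ features with the largest absolute contributions while withholding the predicted label; the sketch below is a hypothetical illustration, not the interface used in the study.

```python
# Sketch: a "partial explanation" that reveals only the k most influential
# features (by absolute contribution), without the model's predicted label.
# Feature names and contribution values are made up for illustration.

def top_k_features(contributions, k=2):
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

contributions = {"occupation": 1.8, "age": -0.4, "marital status": 2.3, "hours": 0.1}
print(top_k_features(contributions))  # ['marital status', 'occupation']
```

Because the sign and magnitude of the hidden contributions are not shown, the annotator must still form their own judgment before comparing it with the model's.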
Third, the premise of XAL is to make the teaching task accessible by focusing on individual instances and eliciting incremental feedback. It may be unnecessary to target a complete understanding of the model, especially as the model is constantly being updated. Since people have to review and judge many instances in a row, \textit{low cognitive workload} without sacrificing the quality of feedback should be a primary design goal of explanations for XAL. One potential solution is \textit{progressive disclosure} by starting from simplified explanations and progressively provide more details~\cite{springer2019progressive}. Since the early-stage model is likely to have obvious flaws, using simpler explanations could suffice and demand less cognitive resource. Another approach is to design explanations that are sensitive to the targeted feedback, for example by only presenting features that the model is uncertain about or people are likely to critique, assuming some notion of uncertainty or likelihood information could be inferred.
While we used a local feature importance visualization to explain the model, we could speculate on the effect of alternative designs based on the results. We chose a visualization design to show the importance values of multiple features at a glance. While it is possible to describe the feature importance with texts as in~\cite{dodge2019explaining}, it is likely to be even more cognitively demanding to read and comprehend. We do not recommend further limiting the number of features presented, since people are more inclined to critique features they see rather than recalling ones not presented. Other design choices for local explanations include presenting similar examples with the same known outcome~\cite{bien2011prototype,gurumoorthy2017protodash}, and rules that the model believes to guarantee the prediction~\cite{ribeiro2018anchors} (e.g., ``someone with an executive job above the age of 40 is highly likely to earn more than 80K''). We suspect that the example based explanation might not present much new information for feedback. The rule-based explanation, on the other hand, could be an interesting design for future work to explore, as annotators may be able to approve or disapprove the rules, or judge between multiple candidate rules~\cite{hanafi2017seer}. This kind of feedback could be leveraged by the learning model. Lastly, we fixed on local explanations for the model to self-address the \textit{why} question (intelligibility type). We believe it fits naturally with the workflow of AL querying selected instances. A potential drawback is that it requires annotators to carefully reason with the explanation for every new queried instance. It would be interesting to explore using a global explanation so that annotators would only need to attend to changes in the overall logic as the model learns. But it is unknown whether a global explanation is as easy for non-AI-experts to make sense of and provide feedback on.
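For a linear classifier, the per-instance importance values behind such a visualization can be obtained directly as weight times feature value; the sketch below uses toy weights and a toy instance, and is only meant to illustrate the idea, not the study's model.

```python
# Sketch: local feature importance for a linear (logistic) model, where each
# feature's contribution to the decision is weight * feature value.
# Weights, bias, and the instance are toy values.
import math

weights = {"education": 0.8, "age": 0.03, "hours": 0.05}
bias = -3.0

def local_importance(x):
    # Per-feature contribution for this particular instance.
    return {f: weights[f] * x[f] for f in weights}

def predict_proba(x):
    # The prediction is the sigmoid of bias plus the summed contributions.
    z = bias + sum(local_importance(x).values())
    return 1 / (1 + math.exp(-z))

x = {"education": 4, "age": 40, "hours": 50}
print(local_importance(x))  # {'education': 3.2, 'age': 1.2, 'hours': 2.5}
print(round(predict_proba(x), 2))
```

The bar lengths in a visualization like ours correspond to these contribution values; for post-hoc explainers the contributions are estimated rather than read off the model.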
There are also opportunities to develop new explanation techniques by leveraging the temporal nature of AL. One is to \textit{explain model progress}, for example by explicitly showing changes in the model logic compared to prior versions. This could potentially help the annotators better assess the model progress and identify remaining flaws. Second is to utilize \textit{explanation and feedback history} to both improve explanation presentation (e.g., avoiding repetitive explanations) and infer user preferences (e.g., how many features is ideal to present).
Lastly, our study highlights the need to tailor explanations based on the characteristics of the teacher. People from whom the model seeks feedback may not be experienced with ML algorithms, and may not necessarily possess complete domain knowledge or contextual information. Depending on their cognitive style or the teaching context, they may have limited cognitive resources to deliberate on the explanations. These individual characteristics may impact their preferences for the level of detail, visual presentation, and whether explanation should be presented at all.
\subsection{Learning from explanation based feedback}
Our experiment intends to be an elicitation study to gather the types of feedback people naturally want to provide for model explanations. An immediate next step for future work is to develop new AL algorithms that could incorporate the types of feedback presented in Section~\ref{feedback}. Prior work, as reviewed in Section~\ref{literature}, proposed methods to incorporate feedback on top features or boosting the importance of features~\cite{raghavan2006active,druck2009active,settles2011closing,stumpf2007toward}, and removing features~\cite{teso2018should,kulesza2015principles}. However, most of them are designed for text classifiers. Since feature-based feedback for text data is usually binary (a keyword should be considered a feature or not), prior work often did not consider the more quantitative feedback shown in our study, such as tuning the weights of features, comparatively ranking features, or reasoning about the logic or relations of multiple features. While much technical work is to be done, it is beyond the scope of this paper. Here we highlight a few key observations from people's natural tendency to provide feedback for explanations, which should be reflected in the assumptions that future algorithmic work makes.
First, people's natural feedback for explanations is \textit{incremental} and \textit{incomplete}. It tends to focus on a small number of features that are most evidently unaligned with one's expectation, instead of the full set of features. Second, people's natural feedback is \textit{imprecise}. For example, feature weights were suggested to be qualitatively increased, decreased, added, removed, or changing direction. It may be challenging for a lay person to accurately specify a quantitative correction for a model explanation, but a tight feedback loop should allow one to quickly view how an imprecise correction impacts the model and make follow-up adjustment. Lastly, people's feedback is \textit{heterogeneous}. Across individuals there are vast differences on the types of feedback, the number of features to critique, and the tendency to focus on specific features, such as whether a demographic feature should be considered fair to use~\cite{dodge2019explaining}.
Taken together, compared to providing instance labels, feedback for model explanations can be noisy and frail. Incorporating the feedback ``as it is'' to update the learned features may not be desirable. For example, some have warned against ``local decision pitfalls''~\cite{wu2019local} of human feedback in iML that overly focuses on modifying a subset of model features, commonly resulting in an overfitted model that fails to generalize. Moreover, not all ML models allow updating the learned features directly. While prior iML work often builds on directly modifiable models such as regression or naïve Bayes classifiers, our approach is motivated by the possibility to utilize popular \textit{post-hoc} techniques to generate local explanations~\cite{ribeiro2016should,lundberg2017unified} for any kind of ML model, even those not directly interpretable such as neural networks. It means that an explanation could give information about how the model weighs different features but it is not directly connected to its inner working. How to incorporate human feedback for post-hoc explanations to update the original model remains an open challenge. It may be interesting to explore approaches that take human feedback as weighted signals, constraints, or a part of a co-training model or ensemble~\cite{stumpf2009interacting}, or as impacting the data~\cite{teso2018should} or the sampling strategy.
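As one illustration of treating feedback as a soft, weighted signal rather than a hard constraint, qualitative directives could be mapped to multiplicative weight adjustments; this is a hypothetical sketch, not an algorithm from the literature.

```python
# Sketch: turning imprecise, qualitative feedback ("increase"/"decrease"/
# "remove" a feature) into soft multiplicative weight adjustments.
# The step size and the mapping itself are illustrative choices.

def apply_feedback(weights, feedback, step=0.5):
    adjust = {"increase": 1 + step, "decrease": 1 - step, "remove": 0.0}
    return {f: w * adjust.get(feedback.get(f, ""), 1.0) for f, w in weights.items()}

weights = {"marital status": 2.0, "age": -0.4, "occupation": 1.0}
feedback = {"marital status": "decrease"}
print(apply_feedback(weights, feedback))
# {'marital status': 1.0, 'age': -0.4, 'occupation': 1.0}
```

A tight feedback loop would let the annotator see the effect of such an imprecise adjustment and refine it in follow-up rounds.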
A coupled aspect to make human feedback more robust and consumable for a learning algorithm is to design interfaces that scaffold the elicitation of high-quality, targeted types of feedback. This is indeed the focus of the bulk of the iML literature. For example, allowing people to drag-and-drop to change the ranks of features, or providing sliders to change the feature weights, may encourage people to provide more precise and complete feedback. It would also be interesting to leverage the explanation and feedback history to extract more reliable signals from multiple entries of feedback, or purposely prompt people for confirmation of prior feedback. Given the heterogeneous nature of people's feedback, future work could also explore methods to elicit and cross-check input from multiple people to obtain more robust teaching signals.
\subsection{Explanation- and explainee-aware sampling}
Sampling strategy is the most important component of an AL algorithm in determining its learning efficiency. But existing AL work often ignores the impact of the sampling strategy on annotators' experience. For example, our study showed that uncertainty sampling (selecting the instance the model is most uncertain about to query) led to an increasing challenge for annotators to provide correct labels as the model matures.
For XAL algorithms to efficiently gather feedback and support a good teaching experience, the sampling strategy should move beyond the current focus on decision uncertainty to considering the explanation for the next instance and what feedback to gain from that explanation. For the machine teacher, desired properties of explanations may include ease of judging, non-repetitiveness, being tailored to their preferences and tendency to provide feedback, etc.~\cite{sokol2020explainability}. For the learning model, it may gain value from explaining and soliciting feedback for features that it is uncertain about, that have not been examined by people, or that have high impact on the model performance. Future work should explore sampling strategies that optimize for these criteria of explanations and explainees.
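For reference, the uncertainty sampling baseline discussed here reduces, for a binary classifier, to querying the unlabeled instance whose predicted probability is closest to 0.5; the scores below are toy values.

```python
# Sketch: least-confidence uncertainty sampling for a binary classifier.
# Query the unlabeled instance whose predicted probability is nearest 0.5,
# i.e. where the model is least certain about the label.

def uncertainty_sample(probs):
    return min(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))

probs = [0.95, 0.52, 0.10, 0.40]
print(uncertainty_sample(probs))  # 1  (p = 0.52 is most uncertain)
```

An explanation-aware variant would score candidates not only by this uncertainty but also by properties of the explanation that would be shown.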
\section{Limitations}
We acknowledge several limitations of the study. First, the participants were recruited on Mechanical Turk and not held accountable for consequences of the model, so their behaviors may not generalize to all SMEs. However, we attempted to improve the ecological validity by carefully designing the domain knowledge training task and reward mechanism (participants received a bonus if they were among the top 10\% of performers). Second, this is a relatively small-scale lab study. While the quantitative results showed significance with a small sample size, results from the qualitative data, specifically the types of feedback, may not be considered an exhaustive list. Third, the dataset has a small number of features and the model is relatively simple. For more complex models, the current design of explanation with feature importance visualization could be more challenging to judge and provide meaningful feedback for.
\section{Conclusions}
While active learning has gained popularity for its learning efficiency, it has not been widely considered as an HCI problem despite its interactive nature. We propose explainable active learning (XAL), by utilizing a popular local explanation method as the interface for an AL algorithm. Instead of opaquely requesting labels for selected instances, the model presents its own prediction and explanation for its prediction, then requests feedback from the human. We posit that this new paradigm not only addresses annotators' needs for model transparency, but also opens up opportunities for new learning algorithms that learn from human feedback for the model explanations. Broadly, XAL allows training ML models to more closely resemble a ``teaching'' experience, and places explanations as a central element of machine teaching. We conducted an experiment to both test the feasibility of XAL and serve as an elicitation study to identify the types of feedback people naturally want to provide. The experiment demonstrated that explanations could help people monitor the model learning progress and calibrate their trust in the teaching outcome. But our results cautioned against the adverse effect of explanations in anchoring people's judgment to the naive model's, if the annotator lacks adequate knowledge to detect the model's faulty reasoning, and the additional workload that could avert people with low Need for Cognition. Besides providing a systematic understanding of user interaction with AL algorithms, our results have three broad implications for using model explanations as the interface for machine teaching. First, we highlight the design goals of explanations applied to the context of teaching a learning model, as distinct from common goals in XAI literature, including calibrating trust, mitigating anchoring effect and minimizing cognitive workload. 
Second, we identify important individual factors that mediate people's preferences and reception to model explanations, including task knowledge, AI experience and Need for Cognition. Lastly, we enumerate the types of feedback people naturally want to provide for model explanations, to inspire future algorithmic work to incorporate such feedback.
\section*{Acknowledgments}
We wish to thank all participants and reviewers for their helpful feedback. This work was done as an internship project at IBM Research AI, and partially supported by NSF grants IIS 1527200 and IIS 1941613.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
There are numerous reasons to study geometry and physics of noncommutative spaces. Besides the purely mathematical interest in geometries that may be associated to general, or to $C^\ast$-algebras,
spaces with noncommuting coordinates appear as effective descriptions in string theory and loop quantum gravity. Extending the common quantum-mechanical and general-relativistic intuition, one expects that noncommutative geometry is a general feature of models of spacetime that emerge from quantum gravity, and that it can give valuable insight into the physics at the Planck scale.
The adjective ``fuzzy'' (as in fuzzy space) is often associated with geometries of finite-dimensional matrix algebras. The term was originally introduced by John Madore in relation to his noncommutative frame formalism \cite{Madore:2000aq}, which also describes geometries of algebras with infinite-dimensional irreducible representations: we use it in this broader sense.
One of the basic ingredients of the noncommutative frame formalism is a version of the correspondence principle, a rule that to a classical geometry, described by the moving frame $\{\tilde e_\alpha\}$, associates a noncommutative frame $\{e_\alpha\}$,
\begin{equation}\label{correspondence}
\tilde e^\mu_\alpha = \tilde e_\alpha \tilde x^\mu \quad \longrightarrow \quad e^\mu_\alpha =[p_\alpha,x^\mu]\ .
\end{equation}
The elements $p_\alpha$ and $x^\mu$ on the right, referred to as momenta and coordinates, belong to a noncommutative algebra. As functions of coordinates, the noncommutative frame coefficients $\, e^\mu_\alpha(x)\,$ in the commutative limit are equal or reduce to $\, \tilde e^\mu_\alpha(\tilde x)\,$.
The best known example of a fuzzy space is the fuzzy sphere \cite{Madore:1991bw}, for which both coordinates and momenta are taken as generators of $SO(3)$ in the $N$-dimensional unitary irreducible representation. The equation of the sphere
\begin{equation}\label{embedding}
g_{\mu\nu} x^\mu x^\nu = \text{const}
\end{equation}
is then satisfied due to the constancy of the quadratic Casimir in the representation. In general, starting from coordinates and momenta, the frame formalism systematically develops notions of ordinary differential geometry, such as a metric, connection and curvature, in the noncommutative setup. This leads to differential and tensor calculi closely analogous to commutative ones and allows for the construction of field theories over the noncommutative space. However, consistency between the various geometric structures implies that not every set of operators $(p_\alpha,x^\mu)$ leads to a geometry. In fact, the consistency conditions are quite constraining and require, among other things, the momenta $p_\alpha$ to satisfy a quadratic algebra.
Most fuzzy geometries that have been constructed are spaces with a high degree of symmetry and include quantum Euclidean spaces \cite{Fiore:1999sd,Cerchiai:2000qu}, the $h$-deformed hyperbolic plane \cite{cho,Madore:1999fi}, complex projective spaces \cite{Balachandran:2001dd,Grosse:2004wm,Dolan:2006tx} etc. Among the examples are also maximally symmetric spaces in $2n$ dimensions, \cite{Jurman:2013ota,Buric:2017yes}. Classically, these may be defined by the embedding \eqref{embedding} inside the flat $(2n+1)$-dimensional space of appropriate signature. The noncommutative analogue of the embedding relation is achieved using the Pauli-Lubanski vector $W^\mu$ of the isometry algebra $\,\mathfrak{so}(p,q) = \text{span}\{M_{\mu\nu}\}$,
\begin{equation} \label{still}
W^\mu = \epsilon^{\mu \mu_1\mu_2\dots\mu_{2n}}\, M_{\mu_1\mu_2}\cdots M_{\mu_{2n-1}\mu_{2n}} \, .
\end{equation}
The element $\,W_\mu W^\mu$ is the highest order Casimir of $\mathfrak{so}(p,q)$ and thus a constant operator in any irreducible representation. This motivates the identification of fuzzy coordinates as operators $\, x^\mu \sim W^\mu\,$ in a UIR of $SO(p,q)$. However, according to \eqref{correspondence}, the noncommutative geometry is defined not only by the choice of coordinates but also of momenta; different choices of momenta leading to geometries with interesting physical properties were considered in \cite{Buric:2015wta}. Yet, a pattern similar to \eqref{still} cannot be used to construct maximally symmetric fuzzy spaces in an odd number of dimensions, as the corresponding isometry groups do not admit a Pauli-Lubanski vector.
It is perhaps fair to say that non-local properties of fuzzy spaces have not yet been discussed conclusively. To address this question would, however, be very desirable in the context of gravity where spacetimes of interesting global causal structure, such as black holes, are of particular importance. In the present work, we will make a step in this direction by constructing a model of the fuzzy BTZ black hole equipped with a local differential calculus as well as a suitably defined global structure. A particular special case of the model may be regarded as the fuzzy AdS$_3$ space. While we cannot follow the ideas of \cite{Madore:1991bw,Buric:2017yes} directly, the fuzzy space will still be defined in terms of the algebra of operators in an irreducible representation of the classical isometry group, $SO(2,2)$. This ensures that commutative and noncommutative geometries have the same symmetries.
Before describing our construction in more detail, let us review some of the models of AdS$_3$ and BTZ spaces within other approaches to noncommutative geometry. A widespread idea for the construction of noncommutative geometries is to express the gravitational field in terms of a gauge theory. The appeal of this approach is that gauge symmetries, in the framework of noncommutative field theory, are very well understood. Moreover, there is a mapping (Seiberg-Witten map) between noncommutative and commutative gauge fields given in the form of a series expansion in the noncommutativity parameter (here denoted by ${\mathchar'26\mkern-9muk}$). The Seiberg-Witten map can be interpreted as a perturbative expansion in ${\mathchar'26\mkern-9muk}$, thus giving the commutative limit of the theory, as well as the leading order noncommutative corrections. Specifically, in three dimensions the gravitational action can be written as a difference of two Chern-Simons actions, \cite{Achucarro:1986uwr,Witten:1988hc}. A noncommutative generalisation of this description was given in \cite{Banados:2001xw} for the Euclidean signature, and in \cite{Cacciatori:2002gq} for the Lorentzian signature (see also e.g. \cite{Chamseddine:2000si,Chamseddine:2002fd,Dimitrijevic:2014iwa} for related statements in various spacetime dimensions and signatures). Several subsequent papers considered the linear noncommutative corrections to the BTZ geometry \cite{Kim:2007nx,Chang-Young:2008zbi}, and thermodynamics \cite{Anacleto:2015kca}.
In these works, the noncommutative coordinate algebra carries the Moyal-Weyl product, with one commuting (central) coordinate and two that satisfy the Heisenberg commutation relations (the same approach was followed in \cite{Pinzul:2005ta} except that the harmonic-oscillator basis for the Heisenberg algebra was used, with matrices eventually being truncated to a finite size). A systematic investigation of commutation relations between coordinates was performed in \cite{Bieliavsky:2002ki,Bieliavsky:2004yp,Dolan:2006hv}, where the authors studied families of regular Poisson structures on the classical BTZ black hole background and subsequently quantised them. Our approach differs from the above ones and the algebra underlying the noncommutative space, which contains both coordinates and momenta, can be thought of as having four generators.
{\bf Summary of results}
The model proposed here may be regarded as a quantisation of the BTZ black hole for several reasons. Much as for the fuzzy sphere, the isometry group of the classical space underlies its quantisation. Our starting point is the operator algebra $\mathcal{A}$ of an irreducible representation of $SO(2,2)$, the isometry group of AdS$_3$. This group is locally isomorphic to $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ and the representation taken is the tensor product of discrete series representations of the two factor groups. Inside this algebra, coordinates $x^\mu$ and momenta $p_\alpha$ for the fuzzy AdS$_3$ space are defined, with momenta being three particular elements of $\mathfrak{so}(2,2)$. When $SO(2,2)$ is viewed as the two-dimensional conformal group, these momenta are generators of translations and dilations. The coordinate operators quantise classical Poincar\'e coordinates. The conformal boundary of the quantised AdS$_3$ is defined: it turns out to be flat and commutative.
Differential and tensor calculi over $\mathcal{A}$ are constructed starting from $(p_\alpha,x^\mu)$ in the framework of the frame formalism. They share many properties of their commutative counterparts. For instance, we show that $\mathcal{A}$ satisfies vacuum Einstein's equations with a negative cosmological constant and that it has a constant negative curvature. Further, we construct the Laplace-Beltrami operator and compute its action on arbitrary differential forms.
In addition to the local calculus, which has similarities to other quantum maximally symmetric spaces such as the $h$-deformed hyperbolic plane \cite{Madore:1999fi} or the fuzzy de Sitter space \cite{Buric:2017yes}, we will take into account the global properties of the BTZ black hole. After all, it is only in its global structure that the black hole differs from AdS$_3$. The fuzzy BTZ operator algebra will be obtained from ${\cal A}$ by making discrete identifications of the form $\,x\sim Ux\,U^{-1}$, with $\,x\in{\cal A}$. Here, $U\in{\cal A}$ is a unitary operator chosen in such a way as to reproduce the classical action of BTZ identifications on Poincar\'e coordinates, \cite{Banados:1992gq}.
Having defined the model, we will begin the exploration of its properties by studying the radial coordinate operator. The eigenvalue problem for this operator turns out to be equivalent to the Schr\"odinger equation for a particle in the inverse square potential. Bound and scattering states correspond to the geometric regions inside and outside the outer black hole horizon, respectively. After regularisation, needed to make sense of the inverse square Schr\"odinger problem, the spectrum of bound states is shown to be infinite and discrete.
The paper is organised as follows. In Section 2 we give a brief review of the AdS$_3$ and BTZ geometry, focusing on those points that will play a role in the subsequent quantisation. After recalling basic elements of the frame formalism, Section 3 defines the fuzzy AdS$_3$ operator algebra and develops in detail differential geometry over it. In Section 4, we perform discrete identifications and solve the eigenvalue problem for the radial coordinate. Section 5 contains a summary and discussion of the points to be addressed in the future. Three appendices give more details on some of the calculations, and backgrounds on the BTZ geometry and representation theory of $SL(2,\mathbb{R})$.
\section{Anti-de Sitter space and BTZ black hole}
In this introductory section we review local and global aspects of the three-dimensional anti-de Sitter space and the Ba\~nados-Teitelboim-Zanelli (BTZ) black hole. Our purpose is two-fold: to establish notation and to point to those properties of AdS$_3$ and BTZ spacetimes that will play a prominent role in the quantisation of later sections. The first of these properties is the fact that the BTZ black hole can be covered by an infinite number of Poincar\'e coordinate patches. While the classical geometry does not depend on the choice of coordinates, the quantisation procedure does, and it is the Poincar\'e coordinates that will be used for quantisation. Relations between the Poincar\'e and other types of coordinates will be spelled out in the first subsection. In the second, we will recall the identification of AdS$_3$ with the Lie group $SL(2,\mathbb{R})$. Representation theory of this group will play an important role in subsequent constructions. For the most part, our conventions follow \cite{Banados:1992gq}.
\subsection{Coordinate systems}
The AdS$_3$ space can be defined as the hyperboloid
\begin{equation}
-v^2-u^2+x^2+y^2= -\ell^2\,,
\end{equation}
$v$, $u$, $x$, $y\in(-\infty,\infty)$, inside the four-dimensional flat space of signature $(--++)$,
\begin{equation}
ds^2=-dv^2-du^2+dx^2+dy^2\ .
\end{equation}
It is a solution to three-dimensional vacuum Einstein's equations with a negative cosmological constant, $\Lambda=-1/\ell^2$. The AdS$_3$ space is often represented in the global coordinate system $(\tau,\rho,\theta)$, in which the line element reads
\begin{equation}\label{AdS}
ds^2 = \ell^2(-\cosh^2\rho\, d\tau^2 + d\rho^2 + \sinh^2\rho\, d\theta^2)\,,
\end{equation}
or in the polar coordinates $(t,r,\theta)$, with the line element
\begin{equation}
ds^2 =- \left(\frac{r^2}{\ell^2}+1\right) dt^2 +\frac{1}{\ \dfrac{r^2}{\ell^2}+1\ }\, dr^2 +r^2 d\theta^2\ .
\end{equation}
Both $\tau$ and $\theta$ are periodic coordinates and thus AdS$_3$ admits closed timelike curves. For this reason, more usually considered is the universal covering space, obtained by ``unwrapping'' the $\tau$-direction. For this space, denoted $\widetilde{\text{AdS}_3}$, $(\tau,\rho,\theta)$ with $\,\tau\in(-\infty,\infty)$ is a global system of coordinates.
In considerations here, we will mostly use the Poincar\'e coordinates $\, (\gamma,\beta,z)$, introduced as
\begin{equation} \label{Poincare}
z=\frac{\ell}{u+x}\, ,\quad \beta = \frac{y}{u+x}\, ,\quad \gamma = -\frac{v}{u+x} \ .
\end{equation}
They cover one half of the hyperboloid. The line element in Poincar\'e coordinates is conformally flat,
\begin{equation}\label{Poincare-line-element}
ds^2 = \frac{\ell^2}{z^2}\, ( -d\gamma^2 + d\beta^2 + dz^2)\ .
\end{equation}
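The conformal flatness of \eqref{Poincare-line-element} can be checked by pulling the ambient flat metric back through \eqref{Poincare}. Below is a minimal symbolic sketch in SymPy (not part of the original derivation), setting $\ell=1$; the variable names are ours.

```python
import sympy as sp

# Poincare patch of AdS_3 with unit radius: invert z = 1/(u+x),
# beta = y/(u+x), gamma = -v/(u+x), and use the hyperboloid
# -v^2 - u^2 + x^2 + y^2 = -1 to recover u - x.
z, b, g = sp.symbols('z beta gamma', positive=True)

s = 1 / z                       # s = u + x
y = b / z
v = -g / z
d = (1 + y**2 - v**2) / s       # d = u - x, from the hyperboloid constraint
u, x = (s + d) / 2, (d - s) / 2

# Pull back ds^2 = -dv^2 - du^2 + dx^2 + dy^2 to (gamma, beta, z).
coords = (g, b, z)
G = sp.zeros(3, 3)
for w, sign in zip((v, u, x, y), (-1, -1, 1, 1)):
    grad = sp.Matrix([sp.diff(w, c) for c in coords])
    G += sign * grad * grad.T

# Expect the conformally flat metric diag(-1, 1, 1)/z^2.
assert sp.simplify(G - sp.diag(-1, 1, 1) / z**2) == sp.zeros(3, 3)
print("pulled-back metric is diag(-1, 1, 1)/z^2")
```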
Properties of various coordinate systems and coordinate changes with more details are discussed in Appendix A.
Like the anti-de~Sitter space, the BTZ black hole is a solution to vacuum Einstein's equations. Its metric has the general form
\begin{equation}\label{BTZ-metric}
ds^2 = -N^2 dt^2 +\frac{1}{N^2} \, dr^2 + r^2(N^\phi dt + d\phi)^2\ ,\quad \ N^2 = \frac{r^2}{\ell^2} - M +\frac{J^2}{4r^2}\ , \quad N^\phi = -\frac{J}{2r^2}\ ,
\end{equation}
with $\,t\in (-\infty, \infty)$, $\,r\in (0,\infty)\, $ and periodic $\phi$, $\,\phi\sim\phi+2\pi n \,$. The BTZ space \eqref{BTZ-metric} describes a rotating black hole of mass $M$ and angular momentum $J$: the singularity at $r=0\,$ is one in the causal structure. It is usually assumed that the parameters satisfy $\,|J|\leq M \ell\,$; in the remainder we also take $J\geq0$. The BTZ black hole has two horizons, outer at $\,r = r_+$ and inner at $\,r=r_-\,$, with
\begin{equation}
r_+\pm r_- =\sqrt{M\ell^2\pm J\ell\, }\ . \label{horizons}
\end{equation}
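Equivalently, the lapse factorises through the horizon radii, $N^2 = (r^2-r_+^2)(r^2-r_-^2)/(\ell^2 r^2)$. This, and the relation \eqref{horizons}, can be confirmed with a short SymPy sketch (the code and variable names are ours):

```python
import sympy as sp

r, M, J, l = sp.symbols('r M J ell', positive=True)

# Horizon radii: r_+ +/- r_- = sqrt(M l^2 +/- J l)
rp = (sp.sqrt(M * l**2 + J * l) + sp.sqrt(M * l**2 - J * l)) / 2
rm = (sp.sqrt(M * l**2 + J * l) - sp.sqrt(M * l**2 - J * l)) / 2

N2 = r**2 / l**2 - M + J**2 / (4 * r**2)

# The lapse factorises through the horizon radii ...
factorised = (r**2 - rp**2) * (r**2 - rm**2) / (l**2 * r**2)
assert sp.simplify(N2 - factorised) == 0

# ... and therefore vanishes at r = r_+ (numerical spot check, M=3, J=2, l=1)
vals = {M: 3, J: 2, l: 1}
assert abs(float(N2.subs(vals).subs(r, rp.subs(vals)))) < 1e-12
print("N^2 = (r^2 - r_+^2)(r^2 - r_-^2) / (l^2 r^2)")
```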
As a solution to vacuum Einstein's equations in three dimensions, the BTZ black hole is locally isometric to AdS$_3$ and can be obtained as a discrete quotient of a subset $\widetilde{\text{AdS}'_3}$ of its universal cover. The black hole is constructed from the latter by identifying points $X$ under the action of the discrete subgroup of isometries $\,\Gamma\cong\mathbb{Z}$, generated by a certain Killing vector field $\xi'$, \cite{Banados:1992gq}
\begin{equation}
X\to e^{2\pi n\,\xi'}\,X \,, \quad n\in \mathbb{Z} \ . \label{discrete}
\end{equation}
The resulting spacetime consists of an infinite number of regions of types I, II and III, separated by inner and outer horizons. The regions admit local Poincar\'e coordinates in which the metric reads \eqref{Poincare-line-element}, and local sets of coordinates $(t,r,\phi)$ with the metric \eqref{BTZ-metric}. Changes between these coordinates in different regions are collected in Appendix A; they provide an infinite set of Poincar\'e patches that cover the BTZ space.
An important property for our purposes is that in each of the regions, the radial coordinate $r\,$ is related to $z$, $\gamma$ and $\beta$ in the same way,
\begin{equation} \label{radius-BTZ}
r^2 - r_+^2 = (r_+^2-r_-^2)\, \frac{B(r)}{\ell^2} = (r_+^2-r_-^2)\,\frac{y^2 - v^2}{\ell^2} = (r_+^2-r_-^2)\, \frac{\beta^2 - \gamma^2}{z^2}\ .
\end{equation}
In the following, we shall often use $B(r)$ instead of $\,r$ in calculations,
\begin{equation}
B(r)=\ell^2\,\frac {r^2-r_+^2}{r_+^2-r_-^2} = \ell^2\, \frac{\beta^2 -\gamma^2}{z^2}\ . \label{B(r)}
\end{equation}
In any coordinate patch, the discrete transformation \eqref{discrete} acts as $(t,r,\phi)\mapsto(t,r,\phi+2\pi n)$, and on Poincar\'e coordinates by
\begin{equation}\label{identifications-Poincare}
z \mapsto z \, e^{-\frac{2\pi r_+ n}{\ell}},\ \quad (\beta - \gamma) \mapsto (\beta - \gamma) \,e^{-\frac{2\pi(r_+ + r_-) n}{\ell}},\ \quad (\beta + \gamma) \mapsto (\beta + \gamma)\, e^{-\frac{2\pi(r_+ - r_-)n}{\ell}}\ .
\end{equation}
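One can check directly that the combination $B=\ell^2(\beta^2-\gamma^2)/z^2$ of \eqref{B(r)}, and hence the radius $r$, is invariant under \eqref{identifications-Poincare}. A quick SymPy sketch of this check (the code is ours):

```python
import sympy as sp

z, b, g, rp, rm, l, n = sp.symbols('z beta gamma r_p r_m ell n', positive=True)

# BTZ identifications acting on the Poincare coordinates
z_new = z * sp.exp(-2 * sp.pi * rp * n / l)
bmg_new = (b - g) * sp.exp(-2 * sp.pi * (rp + rm) * n / l)   # beta - gamma
bpg_new = (b + g) * sp.exp(-2 * sp.pi * (rp - rm) * n / l)   # beta + gamma

# B = l^2 (beta - gamma)(beta + gamma) / z^2
B = l**2 * (b - g) * (b + g) / z**2
B_new = l**2 * bmg_new * bpg_new / z_new**2

assert sp.simplify(B_new - B) == 0
print("B, and hence the radius r, is invariant under the identifications")
```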
This concludes our review of coordinate systems used to describe AdS$_3$ and the BTZ black hole.
\subsection{The identification of AdS$_3$ and $SL(2,\mathbb{R})$}
In the remainder of the text, we will often use the fact that AdS$_3$ is isometric to the Lie group $SL(2,\mathbb{R})$; the notation regarding this group and its Lie algebra is given in Appendix B. Given a point $X$ in $\mathbb{R}^{2,2}$, one can construct the matrix
\begin{equation}
g(X) = \frac{1}{\ell}\begin{pmatrix}
u+x & y+v\\[2pt]
y-v & u-x\end{pmatrix}\,,
\end{equation}
and the equation of the hyperboloid is equivalent to $\det(g(X))=1$. Moreover, the mapping $X\mapsto g(X)$ carries the AdS$_3$ metric to the bi-invariant metric on $SL(2,\mathbb{R})$. The identification of the two spaces allows one to also identify their isometry groups\footnote{For any Lie group $G$, we write $G_e$ to denote its identity connected component. However, to simplify notation, we will usually just write $SO(2,2)$ instead of $SO(2,2)_e$.}
\begin{equation} \label{SO22}
SO(2,2)_e = \frac{SL(2,\mathbb{R})\times SL(2,\mathbb{R})}{Z(SL(2,\mathbb{R}))} = \frac{SL(2,\mathbb{R})\times SL(2,\mathbb{R})}{\mathbb{Z}_2}\ .
\end{equation}
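For completeness, here is a one-line symbolic check (ours, in SymPy) that $\det(g(X))=1$ is indeed the hyperboloid equation:

```python
import sympy as sp

v, u, x, y = sp.symbols('v u x y', real=True)
l = sp.symbols('ell', positive=True)

gX = sp.Matrix([[u + x, y + v], [y - v, u - x]]) / l

# det g(X) = 1  <=>  -v^2 - u^2 + x^2 + y^2 = -l^2
assert sp.expand(gX.det() * l**2 + (-v**2 - u**2 + x**2 + y**2)) == 0
print("det g(X) = 1 reproduces the AdS_3 hyperboloid")
```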
An element $(g_1,g_2)\in SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ acts on $g(X)$ according to
\begin{equation}\label{action-on-the-group}
(g_1,g_2)\cdot g(X) = w g_1 w^{-1}\ g(X)\ g_2^{-1}\ .
\end{equation}
The presence in the action of the Weyl inversion $w$, a particular element of $SL(2,\mathbb{R})$ defined in Appendix B, is a matter of convention. The last equation leads to the following expressions for the action of elements of the Lie algebra \eqref{sl2R},
\begin{equation} \label{classical-generators-zbg}
H + \bar H = - z\partial_z - \beta\partial_\beta - \gamma\partial_\gamma, \quad E_+ + \bar E_+ = \partial_\beta, \quad E_+ - \bar E_+ = \partial_\gamma\ .
\end{equation}
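The vector fields in \eqref{classical-generators-zbg} close into the brackets inherited from \eqref{sl2R}: e.g. $[H+\bar H,\,E_++\bar E_+]=E_++\bar E_+$. A small SymPy sketch of this check, acting on a test function (the code and names are ours):

```python
import sympy as sp

z, b, g = sp.symbols('z beta gamma')
f = sp.Function('f')(z, b, g)

# The geometric generators, acting on a test function f(z, beta, gamma)
HpH = lambda F: -z * sp.diff(F, z) - b * sp.diff(F, b) - g * sp.diff(F, g)  # H + Hbar
EpE = lambda F: sp.diff(F, b)                                               # E+ + Ebar+
EmE = lambda F: sp.diff(F, g)                                               # E+ - Ebar+

comm = lambda A, B: sp.expand(A(B(f)) - B(A(f)))

assert comm(HpH, EpE) == EpE(f)   # [H + Hbar, E+ + Ebar+] = E+ + Ebar+
assert comm(HpH, EmE) == EmE(f)   # [H + Hbar, E+ - Ebar+] = E+ - Ebar+
assert comm(EpE, EmE) == 0        # the two translations commute
print("brackets of the geometric generators verified")
```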
Since spaces AdS$_3$ and $SL(2,\mathbb{R})$ are isometric, so are their universal covers. Identifications that lead to the BTZ black hole take the form $g\sim \rho_L \,g\, \rho_R\,$ from the $\widetilde{SL(2,\mathbb{R})}$ point of view. Here $\rho_{L,R}\,$ are particular elements of the universal covering group whose projections to $SL(2,\mathbb{R})$ are the $2\times2$ matrices, \cite{Carlip:1995qv}
\begin{equation}
\rho_L = \begin{pmatrix}
e^{\pi(r_+-r_-)/\ell} & 0 \\ 0 & e^{-\pi (r_+ - r_-) /\ell}
\end{pmatrix},\qquad \rho_R =\begin{pmatrix}
e^{\pi(r_++r_-)/\ell} & 0 \\ 0 & e^{-\pi (r_++r_-)/\ell}
\end{pmatrix}\ .
\end{equation}
As AdS$_3$ is acted on by $SO(2,2)$, the space of functions $L^1(\text{AdS}_3)$ carries the corresponding geometric representation of this group\footnote{We will not be precise about the classes of functions on which the groups act in this work. The interested reader is referred to \cite{Kirillov}.}. The identification $\text{AdS}_3\cong SL(2,\mathbb{R})$ means that $L^1(\text{AdS}_3)$ is the regular representation of $SL(2,\mathbb{R})$ and its decomposition into irreducibles of $SO(2,2)$ (i.e. $SL(2,\mathbb{R})$-bimodules) follows from the Peter-Weyl theorem. Notice that BTZ identifications break the symmetry and the black hole spacetime has only two independent global Killing vectors.
\section{Differential geometry in Poincar\'e coordinates}
In this section, we will introduce a model of the fuzzy AdS$_3$ space. After reviewing elements of the noncommutative frame formalism in the first subsection, we will define the algebra $\mathcal{A}$ of noncommutative coordinates and momenta in the second. The remainder of the section develops differential geometry on $\mathcal{A}$. We will first construct an algebra of differential forms over $\mathcal{A}$ with a differential that obeys all the usual properties. This underlying structure will be further refined by introducing a metric and a compatible, torsionless connection. The resulting noncommutative geometry satisfies Einstein's equations with a negative cosmological constant and has a constant negative scalar curvature. In the last part of the section, we construct the Laplace-Beltrami operator and its action on arbitrary differential forms. Global properties that distinguish between AdS$_3$ space and the BTZ black hole are considered in the next section.
\subsection{Noncommutative frame formalism}
Noncommutative space is an associative $\ast$-algebra ${\cal A}\,$ generated by hermitian elements, noncommutative coordinates $x^\mu$, which obey the commutation relations
\begin{equation} \label{J}
[x^\mu,x^\nu]=i{\mathchar'26\mkern-9muk} J^{\mu\nu}(x^\rho)\ .
\end{equation}
The commutator $J^{\mu\nu}(x^\sigma)$ can, in principle, be an arbitrary function of coordinates. The constant of noncommutativity ${\mathchar'26\mkern-9muk}$ is assumed to be larger than or of the order of the Planck length squared, ${\mathchar'26\mkern-9muk}\geq\ell_{Pl}^2$; the very form of \eqref{J} presumes the existence of a commutative or classical limit. We will in fact allow for a slightly more general definition of a noncommutative space, where $\mathcal{A}$ is any $\ast$-algebra and $x^\mu$ some hermitian elements of it.
In classical general relativity, the gravitational field can be described by a moving frame with local components $\,\tilde e^\mu_\alpha$, $\mu$ being a coordinate index and $\alpha$ a frame index. Generalising this formulation, the noncommutative moving frame is defined as a set of inner derivations $e_\alpha$ of ${\cal A}$, specified by momenta $p_\alpha\in{\cal A}$,
\begin{equation}\label{*}
e_\alpha f = [p_\alpha,f]\, , \quad f\in{\cal A}\ .
\end{equation}
Notice that by contrast, in commutative geometry all inner derivations vanish and elements of a frame are linear combinations of partial derivatives. A priori, no assumption on the number of frame derivations is made and coordinate and frame indices might run over sets of different cardinalities. We do require, however, that commutators of momenta and coordinates can be expressed solely in terms of coordinates,
\begin{equation}\label{frame}
e^\mu_\alpha \equiv e_\alpha x^\mu =[p_\alpha,x^\mu] \in \langle x^\mu\rangle\ .
\end{equation}
For spaces satisfying \eqref{J}, we obtain a consistency condition
\begin{equation}
i{\mathchar'26\mkern-9muk} \, [p_\alpha,J^{\mu\nu}] = [e^\mu_\alpha,x^\nu] + [x^\mu,e^\nu_\alpha]\,,
\end{equation}
which can be seen as a differential equation for $J^{\mu\nu}$ in terms of $e^\mu_\alpha$. It relates the noncommutativity, i.e.\ the algebraic structure of ${\cal A}$, to its geometry.
Dual to $\{e_\alpha\}$ is the co-frame of 1-forms $\{\theta^\alpha\}$, $\ \theta^\alpha(e_\beta) = \delta^\alpha_\beta$. Co-frame forms are required to commute with functions
\begin{equation}\label{co-frame-forms}
[\theta^\alpha, f]=0\,, \quad f\in{\cal A}\,,
\end{equation}
and the space of 1-forms $\Omega^1({\cal A})$ is freely generated over ${\cal A}\,$ by $\,\{\theta^\alpha\}\,$. Notice that, due to noncommutativity of $\mathcal{A}$, general 1-forms do not commute with functions. Tensors of arbitrary rank are defined in terms of $\Omega^1({\cal A})$ via tensor products and duals. The condition \eqref{co-frame-forms} can be understood as coming from the requirement that local components of the metric tensor are constant, $\ g(\theta^\alpha\otimes\theta^\beta)=\eta^{\alpha\beta}$.
The differential $d$ on functions is defined by the usual expression
\begin{equation}\label{d}
df=(e_\alpha f)\, \theta^\alpha\equiv -[\theta,f] \,, \quad f\in{\cal A}\ .
\end{equation}
The 1-form $\theta=-p_\alpha\theta^\alpha$ is sometimes referred to as the Maurer-Cartan form. One requires that $\mathcal{A}$ and $\Omega^1(\mathcal{A})$ embed into a larger differential graded algebra of all forms, $\Omega^\ast(\mathcal{A})$, to which $d$ can be extended. Linearity and other relations such as $\,d^2=0\,$ turn out to be quite restrictive. One of the most important consequences of the consistency constraints is that momenta must satisfy a quadratic relation,
\begin{equation}\label{quadratic}
2P^{\alpha\beta}{}_{\gamma\delta}\, p_\alpha p_\beta -F^\alpha{}_{\gamma\delta}\, p_\alpha-K_{\gamma\delta}=0\,,
\end{equation}
where $P^{\alpha\beta}{}_{\gamma\delta}$, $F^\alpha{}_{\gamma\delta}$ and $K_{\gamma\delta}$ are constants, that is, elements of the centre $Z({\cal A})$. These structure constants have a natural interpretation in the algebra of forms $\Omega^\ast(\mathcal{A})$. The $\,P^{\alpha\beta}{}_{\gamma\delta}\,$ are the coefficients used to define the exterior product of 1-forms
\begin{equation}\label{coefficients-P}
\theta^\alpha\wedge \theta^\beta \equiv \theta^\alpha\theta^\beta = P^{\alpha\beta}{}_{\gamma\delta}\, \theta^\gamma\theta^\delta\ .
\end{equation}
On the other hand, coefficients $\,F^\alpha{}_{\beta\gamma}\,$ define the action of the differential on 1-forms
\begin{equation}
d\theta^\alpha = -\{\theta,\theta^\alpha\} - \frac12 F^\alpha{}_{\beta\gamma}\theta^\beta \theta^\gamma\ .
\end{equation}
Finally, the $\,K_{\alpha\beta}\,$ measure the failure of the Maurer-Cartan equation, $\, d\theta+\theta^2 = -\frac12 K_{\alpha\beta}\theta^\alpha\theta^\beta\, $.
In cases when momenta form a Lie algebra, relations \eqref{quadratic} simplify. Coefficients $F^\gamma{}_{\alpha\beta}$ coincide with the Lie algebra structure constants,
\begin{equation}
[p_\alpha,p_\beta] = F^\gamma{}_{\alpha\beta} \,p_\gamma\,,
\end{equation}
the central charges vanish, $\, K_{\alpha\beta} =0\,$, and $\, P^{\alpha\beta}{}_{\gamma\delta}\,$ is the usual antisymmetrisation, $\, 2P^{\alpha\beta}{}_{\gamma\delta}= \delta^\alpha_\gamma \delta^\beta_\delta - \delta^\alpha_\delta \delta^\beta_\gamma\, $. In particular, frame 1-forms $\theta^\alpha$ anticommute. This implies that the structure of the algebra of differential forms $\Omega^*({\cal A})$, up to noncommutativity of functions, is the same as in commutative differential geometry.
The differential structure that we discussed may be refined by the addition of a metric and a connection\footnote{In the terminology of \cite{Madore:2000aq}, what we refer to as connection is called a {\it linear connection}, and is to be distinguished from the weaker notion of a {\it Yang-Mills connection}.} in analogy with the commutative case, although structural statements such as the existence and uniqueness of the Levi-Civita connection generally do not hold. While the connection and curvature can be defined abstractly, the existence of a frame allows one to describe them efficiently using connection 1-forms $\omega^\alpha_{\ \beta}$ and curvature 2-forms $\Omega^\alpha_{\ \beta}$. Starting with the set $\{\omega^\alpha_{\ \beta}\}$, the covariant derivative and the curvature forms are given by
\begin{equation}
D\theta^\alpha =- \omega^\alpha{}_\beta\otimes
\theta^\beta = -\omega^\alpha{}_{\gamma\beta}\,\theta^\gamma
\otimes
\theta^\beta, \qquad \Omega^\alpha_{\ \beta} = d\omega^\alpha_{\ \beta} + \omega^\alpha_{\ \gamma}\omega^\gamma_{\ \beta} = \frac12 R^\alpha_{\ \beta\gamma\delta}\theta^\gamma\theta^\delta\ .
\end{equation}
These are the same as classical expressions, except that obviously $\omega^\alpha_{\ \gamma\beta}$ and $R^\alpha_{\ \beta\gamma\delta}$ are elements of the algebra $\mathcal{A}$. For many more details, we refer the reader to \cite{Madore:2000aq}.
\subsection{Coordinates and frame relations}
To quantise a classical spacetime, one replaces its algebra of functions $\tilde{\cal A}\,$ by a noncommutative algebra ${\cal A}\,$ in such a way that the latter resembles the former in some suitable sense. In the noncommutative frame formalism, the relations \eqref{frame} assume the role of the correspondence principle. A set of coordinate functions $\tilde x^\mu$ and vector fields $\tilde e_\alpha$ that obey $\tilde e_\alpha\tilde x^\mu = \tilde e^\mu_\alpha(\tilde x)\,$ is to be replaced by operators $x^\mu$ and $p_\alpha$ with $\, e_\alpha x^\mu=[p_\alpha,x^\mu] = e^\mu_\alpha(x)$, at least in the leading order. With such a starting point, the differential geometry described in the previous subsection typically shares many properties of the commutative one. When the classical metric depends on just one of the coordinates, it is often possible to obtain a noncommutative frame identical to the commutative one: then ${\cal A}\,$ acquires ``the same metric'' as $ \tilde {\cal A}\,$.\footnote{By a slight abuse of notation, all quantities from ordinary geometry will carry a tilde from now on. E.g. a local Poincar\'e coordinate that was denoted by $z$ in Section 2 will now be written as $\tilde z$.}
Another aspect of the relation between $\mathcal{\tilde A}\,$ and $\mathcal{A}\,$ is the notion of a commutative limit. For example, in the strict deformation quantisation, $\mathcal{\tilde A}\,$ and $\mathcal{A}\,$ are isomorphic as vector spaces and only differ in their algebra structures. Similarly, in the case of the fuzzy sphere, the algebra of functions ${\cal A}_N$ defined in the $N$-dimensional UIR of $\mathfrak{so}(3)$ ``tends to'' the algebra of functions on the commutative two-sphere as $\,N\to \infty\,$. For us, the commutative limit will be realised as the set of relations \eqref{J} between appropriately chosen coordinate functions.
Following the idea that spacetime symmetries provide a natural framework for quantisation, in the case of AdS$_3$ and BTZ spaces we will construct $\mathcal{A}$ using the Lie algebra $\,\mathfrak{so}(2,2)$. As the group $SO(2,2)$ is locally isomorphic to $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$, $\,\mathfrak{so}(2,2)$ is a direct sum of two $\,\mathfrak{sl}(2,\mathbb{R})$ subalgebras. Their ``antihermitian'' generators are denoted by $H$, $E_+$, $E_-$ and $\bar H$, $\bar E_+$, $\bar E_-$, and have the non-zero brackets
\begin{eqnarray}
& [H, E_+] = E_+, \qquad [H, E_-] = - E_-, \qquad [E_+,E_-] = 2H\,, \label{sl2R}
\\[4pt]
& [\bar H, \bar E_+] = \bar E_+, \qquad [\bar H, \bar E_-] = - \bar E_-, \qquad [\bar E_+,\bar E_-] = 2\bar H\ .
\end{eqnarray}
In the noncommutative frame formalism, since the frame components $e^\mu_\alpha$ are given by commutators \eqref{*}, we need to identify the spacetime coordinates and momenta simultaneously. It is beneficial to use coordinates that give the simplest form of the metric: we therefore quantise the AdS$_3$ and BTZ spaces in Poincar\'e coordinates. The corresponding classical orthonormal frame is
\begin{equation}
\tilde e^\mu_\alpha = \frac{\tilde z}{\ell}\, \delta^\mu_\alpha\ .
\end{equation}
This frame is quantised by a set of operators $\,\{z,\beta,\gamma,p_z,p_\beta,p_\gamma\}\,$ that obey relations
\begin{equation}\label{momenta-coords}
[p_\gamma, \gamma] = \frac z \ell \, ,\qquad [p_\beta, \beta] = \frac z \ell\, , \qquad [p_z,z] =\frac z \ell \,,
\end{equation}
while all other momentum-coordinate commutators vanish. It can be readily verified that these frame relations are satisfied by operators
\begin{align}
& p_z = \frac{1}{ \ell} \, (H + \bar H) , && z = 2 i\, \frac{\ell^2}{{\mathchar'26\mkern-9muk}}\, \, E_+^a \, \bar E_+^{1-a}\,, \label{Z1} \\[6pt]
& p_\beta = \frac{\ell}{{\mathchar'26\mkern-9muk}}\, \,(E_+ + \bar E_+), &&\beta = -i E_+^{a-1}\bar E_+^{1-a}\left(H+\frac{a-1}{2}\right) - i E_+^a \, \bar E_+^{-a}\left(\bar H - \frac{a}{2}\right)\,, \label{B1} \\[4pt]
& p_\gamma = \frac{\ell}{{\mathchar'26\mkern-9muk}}\, \,(E_+ - \bar E_+), &&\gamma = -i E_+^{a-1}\bar E_+^{1-a}\left(H+\frac{a-1}{2}\right) + i E_+^a \, \bar E_+^{-a}\left(\bar H - \frac{a}{2}\right)\ . \label{G1}
\end{align}
For the moment $\, a$ is any positive real number, and elements of $\mathfrak{so}(2,2)$ are regarded as operators acting in the tensor product of two discrete series representations $\mathcal{H}=T_l^-$ and $\mathcal{\bar H} = T_{\bar l}^-$ of $\mathfrak{sl}(2,\mathbb{R})$. Therefore, the momentum-coordinate operator algebra is $\mathcal{A} = \text{End}(\mathcal{H}\otimes\mathcal{\bar H})$. Operators $\,iE_+$ and $\,i\bar E_+$ in discrete series representations are positive, so their powers appearing above are well-defined. Equations (\ref{Z1}-\ref{G1}) define our noncommutative AdS$_3$ space. The remainder of this section and the next one are devoted to the study of the associated geometry.
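As an illustration of how the frame relations \eqref{momenta-coords} follow from (\ref{Z1}-\ref{G1}), consider $[p_z,z]$. Since $[H,E_+]=E_+$ extends to (positive) powers as $[H,E_+^a]=a\,E_+^a$, and likewise $[\bar H,\bar E_+^{1-a}]=(1-a)\,\bar E_+^{1-a}$, one finds
\begin{align*}
[p_z, z] &= \frac{2i\ell}{{\mathchar'26\mkern-9muk}}\,\big[H+\bar H,\, E_+^a\,\bar E_+^{1-a}\big]
= \frac{2i\ell}{{\mathchar'26\mkern-9muk}}\Big([H,E_+^a]\,\bar E_+^{1-a} + E_+^a\,\big[\bar H,\bar E_+^{1-a}\big]\Big) \\[4pt]
&= \frac{2i\ell}{{\mathchar'26\mkern-9muk}}\,\big(a+(1-a)\big)\,E_+^a\,\bar E_+^{1-a} = \frac{z}{\ell}\ ,
\end{align*}
in agreement with \eqref{momenta-coords}; the relations for $[p_\beta,\beta]$ and $[p_\gamma,\gamma]$ can be verified along the same lines.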
To begin, observe that momentum operators form a Lie algebra
\begin{equation}\label{momenta-momenta}
[p_z,p_\gamma] = \frac 1 \ell \,p_\gamma, \qquad [p_z,p_\beta]=\frac 1 \ell \, p_\beta, \qquad [p_\beta,p_\gamma]=0\ .
\end{equation}
Comparing with \eqref{classical-generators-zbg}, we see that $p_\beta$ and $p_\gamma$ canonically quantise momenta associated with coordinates $\beta$ and $\gamma$. Upon identifying $\mathfrak{so}(2,2)$ with the $(1+1)$-dimensional conformal algebra, $p_\beta$ and $p_\gamma$ are seen to become translation operators, while $p_z$ becomes the generator of dilations. Therefore, our momenta are analogous to ones introduced in \cite{Buric:2015wta} for the four-dimensional de Sitter space.
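The momentum algebra \eqref{momenta-momenta} follows purely from the structure constants $[H,E_+]=E_+$ and $[\bar H,\bar E_+]=\bar E_+$ of the two $\mathfrak{sl}(2,\mathbb{R})$ factors. This can be confirmed with a few lines of code; the sketch below (our own illustration, not part of the construction, with $\ell={\mathchar'26\mkern-9muk}=1$) implements the commutator on the basis $\{H,\bar H,E_+,\bar E_+\}$:

```python
# Minimal sketch (our own illustration): verify [p_z,p_gamma]=p_gamma,
# [p_z,p_beta]=p_beta, [p_beta,p_gamma]=0 (with l = hbar = 1) from the
# structure constants [H,E+] = E+ and [Hbar,Ebar+] = Ebar+.

# nonzero brackets among basis generators, stored in one orientation
f = {("H", "E+"): {"E+": 1}, ("Hbar", "Ebar+"): {"Ebar+": 1}}

def elem_bracket(a, b):
    """[a, b] for basis generators, using antisymmetry for the reversed order."""
    if (a, b) in f:
        return f[(a, b)]
    if (b, a) in f:
        return {g: -c for g, c in f[(b, a)].items()}
    return {}

def bracket(x, y):
    """Commutator of elements given as {generator: coefficient} dictionaries."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            for g, c in elem_bracket(a, b).items():
                out[g] = out.get(g, 0) + ca * cb * c
    return {g: c for g, c in out.items() if c != 0}

p_z = {"H": 1, "Hbar": 1}         # p_z = H + Hbar
p_beta = {"E+": 1, "Ebar+": 1}    # p_beta = E+ + Ebar+
p_gamma = {"E+": 1, "Ebar+": -1}  # p_gamma = E+ - Ebar+

assert bracket(p_z, p_gamma) == p_gamma
assert bracket(p_z, p_beta) == p_beta
assert bracket(p_beta, p_gamma) == {}
print("momentum algebra verified")
```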
The classical solution to frame relations \eqref{momenta-coords} consists of commutative coordinates $\,\tilde z,\ \tilde\beta,\ \tilde\gamma$, regarded as multiplication operators on $C^\infty(\text{AdS}_3)$ and frame vector fields $\,\tilde p_z = \tilde e_z,\ \tilde p_\beta = \tilde e_\beta,\ \tilde p_\gamma = \tilde e_\gamma$. Since the momenta defined above are elements of $\mathfrak{so}(2,2)$, they can also be naturally represented as vector fields on AdS$_3$. One thus may wonder whether these vector fields coincide with the classical frame, i.e. whether $\,p_\alpha = \tilde p_\alpha$. One readily sees that they do not: indeed $\tilde p_\alpha$ are not Killing vectors, while $p_\alpha$ by definition are. The relation between $\tilde p_\alpha$ and $p_\alpha$ becomes clearer by considering the (local) foliation of the space into surfaces of constant $\tilde z$. From the form of the metric in Poincar\'e coordinates, it is clear that these surfaces are flat. The foliation is preserved by the map
\begin{equation} \label{beta'}
\Phi : \text{AdS}_3 \to \text{AdS}_3, \qquad (\tilde z,\tilde\beta,\tilde\gamma) \mapsto (\tilde z',\tilde\beta',\tilde\gamma') = \left(\frac{1}{\tilde z},\frac{\tilde\beta}{\tilde z},\frac{\tilde\gamma}{\tilde z}\right)\,,
\end{equation}
which reduces to the identity on the plane $\tilde z=1$. It is the map $\Phi\,$ that relates $p_\alpha$ and $\tilde p_\alpha$ through the push-forward\footnote{The proof is exhibited by the relation
\begin{equation*}
- \tilde z\partial_{\tilde z} - \tilde\beta\partial_{\tilde\beta} - \tilde\gamma\partial_{\tilde\gamma} = \tilde z'\partial_{\tilde z'},\quad \partial_{\tilde\beta} = \tilde z' \partial_{\tilde\beta'}, \quad \partial_{\tilde\gamma} = \tilde z' \partial_{\tilde\gamma'}\ .
\end{equation*}
Vector fields on the left are $\,p_z$, $p_\beta$, $p_\gamma$ and on the right are $\,\tilde p_{\tilde z'}$, $\tilde p_{\tilde\beta'}$, $\tilde p_{\tilde\gamma'}$.} (here $\ell={\mathchar'26\mkern-9muk}=1$)
\begin{equation}\label{palpha-ealpha}
\Phi^\ast(p_\alpha) = \tilde p_\alpha\ .
\end{equation}
It may be observed that coordinate operators $z$, $\beta$ and $\gamma$ do not generate the full algebra $\mathcal{A}$. As a complete set of generators, one may take coordinates $\beta'$, $\gamma'$ and momenta $p_\beta$, $p_\gamma$, together with their inverses. They form a pair of Heisenberg algebras,
\begin{equation}\label{boundary}
\left[\frac \ell 2\, (p_\beta+p_\gamma), \,\frac{\beta+\gamma}{z}\right]=1\, , \qquad \left[\frac \ell 2\, (p_\beta-p_\gamma),\, \frac{\beta-\gamma}{z}\right]=1\,,
\end{equation}
with all other commutators vanishing (elements $\, (\beta\pm\gamma)/z\,$ are defined as symmetrised products, \eqref{symmetrised}). These relations can be given an interesting interpretation in terms of the conformal boundary, $\{\tilde z = 0\}$. We have already mentioned that $p_\beta$ and $p_\gamma$ are translation generators of the conformal group. As for the coordinates, the metric on any surface $\tilde z =$const is given by
\begin{equation}
ds^2 = \ell^2\left(d\tilde\beta'^2 - d\tilde\gamma'^2\right)\ .
\end{equation}
Therefore, we may view $\, (\beta\pm\gamma)/z\,$ as lightcone coordinates on the quantum boundary, $\{ z = 0\}$: relations \eqref{boundary} state that the quantum boundary is a commutative flat plane.
Finally, let us explain the appearances of ${\mathchar'26\mkern-9muk}$ in (\ref{Z1}-\ref{G1}). Poincar\'e coordinates are dimensionless and thus contain no classical length scale that can be compared to the quantum one (in measurements, uncertainty relations, etc.). Therefore, we expect that commutation relations \eqref{J}, with an appropriate commutative limit, hold not for $\,\gamma,\beta,z$, but for the embedding coordinates $\,v,u,x,y$. Following this logic one can track the length dimensions in the BTZ coordinate algebra. The embedding coordinates satisfy an algebra of the Snyder type \cite{Snyder:1946qz}: we have, for example
\begin{equation}
[y-v, u+x] =\frac{ a {\mathchar'26\mkern-9muk}}{\ell}\, (u+x) \,E_+^{-1} \ . \label{***}
\end{equation}
Similar relations were obtained for the fuzzy dS$_4$ space in \cite{Buric:2017yes}. The commutative limit ${\mathchar'26\mkern-9muk}\to 0$ in \eqref{***} is explicit, though in some sense formal: it assumes that group generators are of order one.
To simplify the formulas we shall in the following put ${\mathchar'26\mkern-9muk} =1$ and $\,\ell=1$. In some particular expressions dimensional constants will be reinstated, for physical clarity.
\subsection{Differential forms, metric, connection and curvature}
The algebra of differential forms $\Omega^\ast(\mathcal{A})$ that we will use is based on a frame with three elements $\{\theta^z,\theta^\beta,\theta^\gamma\}$. Elements of the frame commute with $\mathcal{A}$ and anticommute with one another. Therefore, we have
\begin{equation}
\Omega^\ast(\mathcal{A}) = \mathcal{A} \otimes \Lambda_3\,,
\end{equation}
where $\Lambda_3$ denotes the Grassmann algebra on three generators. In accord with the general theory, the differential is defined on functions by
\begin{equation}
d = \theta^z \text{ad}_{p_z} + \theta^\beta \text{ad}_{p_\beta} + \theta^\gamma \text{ad}_{p_\gamma}\,,
\end{equation}
and on elements of the frame by
\begin{equation}
d\theta^z=0, \quad d\theta^\beta=-\theta^z\theta^\beta, \quad d\theta^\gamma=-\theta^z\theta^\gamma\ .
\end{equation}
These requirements together with the Leibniz rule define $d$ uniquely and one can explicitly check that $d^2=0$. Thus, we get a consistent differential graded algebra $(\Omega^\ast(\mathcal{A}),d)$. For future convenience, let us spell out the action of $d$ on coordinates, momenta and 2-forms:
\begin{align}
& dz = z\theta^z, \quad d\beta = z\theta^\beta, \quad d\gamma = z\theta^\gamma\,,\\[4pt]
& dp_z = -p_\beta \theta^\beta - p_\gamma \theta^\gamma, \quad dp_\beta = p_\beta \theta^z, \quad dp_\gamma = p_\gamma \theta^z\,,\\[4pt]
& d(\theta^z\theta^\beta)=0, \quad d(\theta^z\theta^\gamma)=0, \quad d(\theta^\beta\theta^\gamma) = -2\theta^z\theta^\beta\theta^\gamma\ .
\end{align}
The metric is given by the same expression as in the commutative case
\begin{equation}\label{metr}
g^{zz}=1, \quad g^{\beta\beta}=1, \quad g^{\gamma\gamma} = -1\,,
\end{equation}
so the commutative limit of the metric is correct, in fact exact. Notice that the indices in \eqref{metr} refer to elements of the frame, e.g. $g^{zz}$ stands for $g(\theta^z \otimes \theta^z)$ rather than $g(dz,dz)$. While in general the construction of a Levi-Civita connection $D:\Omega^\ast({\cal A})\to\Omega^\ast({\cal A})\otimes_{{\cal A}}\Omega^1({\cal A})$ requires one to introduce a generalised flip which controls the right Leibniz rule, in the case at hand no such flip is needed. We will set
\begin{equation}\label{connection}
D\theta^z = -\theta^\beta\otimes\theta^\beta + \theta^\gamma\otimes\theta^\gamma, \quad D\theta^\beta = \theta^\beta\otimes\theta^z, \quad D\theta^\gamma = \theta^\gamma\otimes\theta^z\,,
\end{equation}
and require $D$ to satisfy the ordinary Leibniz rule both from the left and the right. From the action of $D$ one reads off the non-vanishing connection 1-forms
\begin{equation}
\omega^z_{\ \beta}=\theta^\beta, \quad \omega^z_{\ \gamma} = -\theta^\gamma, \quad \omega^\beta_{\ z} = -\theta^\beta, \quad \omega^\gamma_{\ z} = -\theta^\gamma\ .
\end{equation}
To show that $D$ is torsionless, it is enough to verify that the torsion $\Theta$ vanishes on elements of the frame. Recall that $\Theta$ is the map from 1-forms to 2-forms given by $\,\Theta = d - \pi\circ D\,$, where $\,\pi(\theta^\alpha\otimes\theta^\beta)=\theta^\alpha\theta^\beta$. We have
\begin{equation}
\Theta(\theta^z) = \theta^\beta \theta^\beta - \theta^\gamma\theta^\gamma =0, \quad \Theta(\theta^\beta) = -\theta^z\theta^\beta-\theta^\beta\theta^z=0, \quad \Theta(\theta^\gamma) = -\theta^z\theta^\gamma-\theta^\gamma\theta^z=0\ .
\end{equation}
The compatibility of $D$ with the metric is expressed by the equation
\begin{equation}
\omega^\alpha{}_{\eta\zeta}\, g^{\delta\zeta} + \omega^\delta{}_{\eta\zeta}\, g^{\alpha\zeta} = 0\ .
\end{equation}
One can verify the last equation by substituting the connection 1-forms. From the connection one also constructs the curvature tensor. Curvature 2-forms are found to be
\begin{equation}
\Omega^z_{\ \gamma} = \theta^z\theta^\gamma,\quad \Omega^z_{\ \beta} = -\theta^z\theta^\beta, \quad \Omega^\beta_{\ z} = \theta^z\theta^\beta, \quad \Omega^\beta_{\ \gamma} = \theta^\beta\theta^\gamma, \quad \Omega^\gamma_{\ z} = \theta^z\theta^\gamma, \quad \Omega^\gamma_{\ \beta} = \theta^\beta\theta^\gamma\,,
\end{equation}
and they lead to the following set of non-zero components of the Riemann tensor
\begin{align}
& R^z_{\ \beta z\beta} = - R^z_{\ \beta\beta z} = -1, \quad R^z_{\ \gamma z \gamma} = -R^z_{\ \gamma\gamma z} = 1, \quad R^\beta_{\ z\beta z} = -R^\beta_{\ zz\beta} = -1\,,\\
& R^\gamma_{\ z\gamma z} = -R^\gamma_{\ zz\gamma} = -1, \quad R^\beta_{\ \gamma\beta\gamma}=-R^\beta_{\ \gamma\gamma\beta}= 1, \quad R^\gamma_{\ \beta\gamma\beta} = -R^\gamma_{\ \beta\beta\gamma} = -1\ .
\end{align}
These values coincide with classical expressions. Therefore, the components of the Ricci tensor do so as well,
\begin{equation}
R_{zz} = -2, \quad R_{\beta\beta} = -2, \quad R_{\gamma\gamma} = 2\ .
\end{equation}
In particular, Einstein's equations $\, R_{ab}=-2g_{ab}\, $ are satisfied.
\subsection{Laplace-Beltrami operator}
As in commutative geometry, the Riemannian Laplace-Beltrami operator may be constructed from the differential and the Hodge star operation. The metric components of the geometry discussed above are the same as those of the commutative AdS$_3$, and hence so is the Hodge operator
\begin{align}
& \ast1 = \theta^z \theta^\beta \theta^\gamma, \quad \ast \theta^z = \theta^\beta \theta^\gamma, \quad \ast \theta^\beta = \theta^\gamma \theta^z, \quad \ast\theta^\gamma = \theta^\beta \theta^z\,,\\[4pt]
& \ast(\theta^z\theta^\beta) = \theta^\gamma, \quad \ast(\theta^z \theta^\gamma) = \theta^\beta, \quad \ast(\theta^\beta\theta^\gamma) = -\theta^z, \quad \ast(\theta^z\theta^\beta\theta^\gamma)=-1\ .
\end{align}
The $\ast$ is an $\mathcal{A}$-left-right linear map $\Omega^\ast(\mathcal{A})\to\Omega^\ast(\mathcal{A})$ and satisfies $\ast^2=-1$. The simplest way to define the Laplacian from $d$ and $\ast$ passes through the co-differential $\delta$. On $p$-forms the co-differential is defined by $\delta = (-1)^{p-1}\ast d \ast$. The Laplacian then reads
\begin{equation}\label{Laplacian}
\Delta = d\delta + \delta d\ .
\end{equation}
Results of the previous subsection allow us to find the action of $\Delta$ on arbitrary forms. For functions, a computation gives
\begin{equation}\label{laplacian-functions}
\Delta f = -[p_z,[p_z,f]] - [p_\beta,[p_\beta,f]] + [p_\gamma,[p_\gamma,f]] + 2 [p_z,f]\ .
\end{equation}
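As a quick consistency check, one may evaluate this formula on the coordinate $z$ itself: since $[p_z,z]=z$ while $z$ commutes with $p_\beta$ and $p_\gamma$,
\begin{equation*}
\Delta z = -[p_z,[p_z,z]] + 2[p_z,z] = -z + 2z = z\ .
\end{equation*}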
Similarly for 1-forms, we get
\begin{align}\label{lap-1forms}
&\Delta(f_z\theta^z + f_\beta\theta^\beta + f_\gamma\theta^\gamma) = (\Delta f_z - [p_\beta,f_\beta] + [p_\gamma,f_\gamma])\,\theta^z\\[2pt]
& + (\Delta f_\beta - 2[p_z,f_\beta] + 3[p_\beta,f_z])\,\theta^\beta + (\Delta f_\gamma - 2[p_z,f_\gamma] + 3[p_\gamma,f_z])\,\theta^\gamma\ .\nonumber
\end{align}
For 2- and 3-forms, the simplest way to find $\Delta$ is by using the property $\ast\Delta = \Delta\ast$. Thus for 2-forms
\begin{align}\label{lap-2forms}
&\Delta(f_{z\beta}\theta^z \theta^\beta + f_{z\gamma}\theta^z \theta^\gamma + f_{\beta\gamma}\theta^\beta\theta^\gamma) = (\Delta f_{\beta\gamma}+[p_\beta,f_{z\gamma}]-[p_\gamma,f_{z\beta}])\,\theta^\beta\theta^\gamma\\[2pt]
&+ (\Delta f_{z\beta}-2[p_z,f_{z\beta}]-3[p_\gamma,f_{\beta\gamma}])\,\theta^z \theta^\beta + (\Delta f_{z\gamma}-2[p_z,f_{z\gamma}]-3[p_\beta,f_{\beta\gamma}])\,\theta^z\theta^\gamma\ .\nonumber
\end{align}
Similarly, the action on 3-forms is written directly from \eqref{laplacian-functions}
\begin{equation}\label{lap-3forms}
\Delta(f_{z\beta\gamma}\theta^z\theta^\beta\theta^\gamma) = (\Delta f_{z\beta\gamma})\theta^z\theta^\beta\theta^\gamma\ .
\end{equation}
With these expressions, we conclude the discussion of the local differential geometry over the noncommutative AdS$_3$ space $\mathcal{A}$.
\section{Discrete quotient and spectra}
The differential geometry developed in the previous section is very general, in the sense that it only depends on the commutation relations between momenta, and those between momenta and coordinates. Therefore, the geometry can be constructed over any algebra $\mathcal{A}$ which contains elements obeying these relations. Moreover, since the relations in a frame are local, one cannot use them to distinguish between locally isometric spaces such as AdS$_3$ and the BTZ black hole.
In this section, we turn to global properties and find the following. It is possible, in the proposed noncommutative model, to implement the action of discrete identifications on Poincar\'e coordinates via conjugation by a unitary operator $U$ and thereby obtain the fuzzy BTZ black hole. For this to work, the parameter $a$ in (\ref{Z1}-\ref{G1}) must be fixed to a specific function of the BTZ horizon radii $r_\pm\,$.
We will find the operator $U$ in the first subsection. In the second, the algebra before and after identifications is realised in terms of differential operators acting on a particular function space. The last subsection discusses the operator of the radius $r$, or more precisely $B(r)$, \eqref{B(r)}. This operator turns out to be equivalent to a one-particle Schr\"odinger operator with an inverse-square potential. Scattering states correspond to the black hole exterior $r>r_+$, and bound states to the interior. The quantum-mechanical Hamiltonian requires regularisation, as it is self-adjoint only formally; the need for regularisation also arises naturally from the physical requirement that $r^2\geq 0$. The regularised Hamiltonian has a continuum of scattering states together with an infinite discrete set of bound states.
\subsection{Discrete identifications}
In \eqref{identifications-Poincare} we have written the effect of discrete identifications that characterise a BTZ black hole on the Poincar\'e coordinates. In the quantum theory, we have two distinct natural possibilities for implementing this action: either to consider the action of the identification group element $(\rho_L,\rho_R)$ in the algebra $\text{End}(\mathcal{H}\otimes\mathcal{\bar H})$ and impose appropriate invariance under it, or to impose invariance under some transformation that reproduces \eqref{identifications-Poincare} on the quantum level. We will follow the second strategy.
Let $z$, $\beta$ and $\gamma$ be the operators (\ref{Z1}-\ref{G1}). One readily verifies the commutation relations
\begin{equation}
[H - \bar H,z] = (2a-1)z,\quad [H -\bar H,\beta + \gamma] = 2(a-1)(\beta+\gamma), \quad [H-\bar H,\beta-\gamma] = 2a(\beta-\gamma)\ .
\end{equation}
Therefore, these combinations of coordinates are rescaled by finite transformations as
\begin{equation} \label{finite-transformations}
U z U^{-1} = e^{\alpha(2a-1)}z, \quad U (\beta+\gamma) U^{-1} = e^{2\alpha(a-1)}(\beta+\gamma), \quad U (\beta-\gamma) U^{-1} = e^{2\alpha a}(\beta-\gamma)\ ,
\end{equation}
where we have introduced the unitary operator
\begin{equation}\label{U}
U = e^{\alpha(H - \bar H)}\ .
\end{equation}
Transformations \eqref{finite-transformations} assume the same form as classical BTZ identifications \eqref{identifications-Poincare}. By demanding that the two coincide, we get an overdetermined set of equations for $\alpha$ and $a$, which however has the unique solution
\begin{equation}\label{aalpha}
\alpha =- \frac{2\pi r_-}{\ell}, \quad a = \frac{r_+ + r_-}{2r_-}\ .
\end{equation}
In particular, the identification condition fixes the choice of $\,a$ in definition of coordinates (\ref{Z1}-\ref{G1}), which was up to this point arbitrary. In the remainder of the text, $a$ and $\alpha$ will always be assumed to take the values \eqref{aalpha}.
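For concreteness, with the values \eqref{aalpha} the three scale factors in \eqref{finite-transformations} evaluate to
\begin{equation*}
e^{\alpha(2a-1)} = e^{-\frac{2\pi r_+}{\ell}}\,, \qquad e^{2\alpha(a-1)} = e^{-\frac{2\pi (r_+ - r_-)}{\ell}}\,, \qquad e^{2\alpha a} = e^{-\frac{2\pi (r_+ + r_-)}{\ell}}\,,
\end{equation*}
so that $z$ and the lightcone combinations $\beta\pm\gamma$ are rescaled by the factors of the classical identifications \eqref{identifications-Poincare}.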
\subsection{Realisation on a function space}
We turn to a concrete realisation of operators (\ref{Z1}-\ref{G1}) on a function space. This will allow us to determine various properties of physically relevant coordinates, such as their spectra or eigenfunctions.
As mentioned above, unitary irreducible representations of the AdS$_3$ isometry group are of the form $\mathcal{H} \otimes \mathcal{\bar H}$, where $\mathcal{H}$ and $\mathcal{\bar H}$ are unitary irreducibles of $SL(2,\mathbb{R})$. We will take these two representations in the ``negative'' discrete series
\begin{equation}
\mathcal{H}\cong T^-_l\, , \quad \mathcal{\bar H}\cong T^-_{\bar l}\ ,
\end{equation}
with $\,2l$, $\,2\bar l$ negative integers. Our conventions about these representations are collected in Appendix B. The coordinate-momentum algebra is that of operators on $\,\mathcal{H}\otimes\mathcal{\bar H}$,
\begin{equation} \label{coord-momentum-algebra}
\mathcal{A} = \text{End}(\mathcal{H}\otimes\mathcal{\bar H})\ .
\end{equation}
Discrete series representations of $SL(2,\mathbb{R})$ are most commonly realised on the space of holomorphic functions on the upper half-plane (or the Poincar\'e disc). For our purposes, it turns out to be more convenient to work with the Fourier-space realisation in which the generators take the form \eqref{generators-Fourier}, \cite{Vilenkin}. This realisation is based on the fact that holomorphic functions in the upper half-plane are Fourier transforms of functions defined on the positive real line, and it makes it easy to take the non-integer powers that appear in equations (\ref{Z1}-\ref{G1}). We find
\begin{align} \label{Fourier}
& p_z = x\partial_x + \bar x \partial_{\bar x} + l + \bar l + 2, && z =2 x^a \, \bar x^{1-a},\\[2pt]
& p_\beta =- i(x+\bar x), && \beta+\gamma = -2i\left(\frac{x}{\bar x}\right)^{a-1}\left(x\partial_x + l + \frac{a+1}{2}\right),\label{Fourier2}\\
& p_\gamma = -i(x-\bar x), && \beta-\gamma = -2i\left(\frac{x}{\bar x}\right)^a\left(\bar x \partial_{\bar x} + \bar l + 1 - \frac{a}{2}\right)\ . \label{Fourier3}
\end{align}
These operators act on functions of two real variables $\, x,\bar x>0$. The inner product for discrete series representations is written in \eqref{inner-produt-Fourier}: coordinates are hermitian with respect to the inner product on $\mathcal{H}\otimes\mathcal{\bar H}$, while momenta are anti-hermitian.
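As a cross-check of the frame relations \eqref{momenta-coords} in this realisation, the following computer-algebra sketch (our own illustration, with ${\mathchar'26\mkern-9muk}=\ell=1$) verifies $[p_z,z]=z$ and $[p_\beta+p_\gamma,\beta+\gamma]=2z$ acting on a test function:

```python
# Sketch (our own illustration): check two frame relations in the Fourier
# realisation, [p_z, z] = z and [p_beta + p_gamma, beta + gamma] = 2z,
# acting on a test function f(x, xbar); a, l, lbar kept symbolic.
import sympy as sp

x, xb = sp.symbols("x xbar", positive=True)
a, l, lb = sp.symbols("a l lbar")
fn = sp.Function("f")(x, xb)

z = 2*x**a*xb**(1 - a)

def p_z(u):
    return x*sp.diff(u, x) + xb*sp.diff(u, xb) + (l + lb + 2)*u

def beta_plus_gamma(u):   # beta + gamma, with (x/xbar)^(a-1) written out
    return -2*sp.I*x**(a - 1)*xb**(1 - a)*(x*sp.diff(u, x) + (l + (a + 1)/2)*u)

def p_beta_plus_p_gamma(u):   # p_beta + p_gamma = -2 i x
    return -2*sp.I*x*u

comm1 = sp.expand(p_z(z*fn) - z*p_z(fn) - z*fn)
comm2 = sp.expand(p_beta_plus_p_gamma(beta_plus_gamma(fn))
                  - beta_plus_gamma(p_beta_plus_p_gamma(fn)) - 2*z*fn)
assert sp.simplify(comm1) == 0 and sp.simplify(comm2) == 0
print("frame relations hold in the Fourier realisation")
```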
In the given representation, the discrete element $U$ acts on functions of coordinates $f(x,\bar x)$ as
\begin{equation}
(Uf)(x,\bar x) = e^{\alpha(l - \bar l)} f\left(e^\alpha x,e^{-\alpha}\bar x\right)\ .
\end{equation}
The form of this action suggests a change of variables. We introduce $\,\chi\in(0,\infty)$ and $\,\eta\in(-\infty,\infty)$ by
\begin{equation}
x = \chi e^\eta, \qquad \bar x = \chi e^{-\eta}\ . \label{change}
\end{equation}
For $\, l=\bar l$, we find that $U$ acts only on $\eta\,$, as a finite translation
\begin{equation}
H - \bar H = x\partial_x - \bar x \partial_{\bar x} = \partial_\eta\, ,\qquad (Uf)(\chi,\eta) = f(\chi,\eta+\alpha)\ .
\end{equation}
We will require invariance of the BTZ wave functions under the discrete subgroup generated by $U$ by assuming that $\eta$ is a periodic coordinate,
\begin{equation} \label{eta}
\eta\sim\eta+ \alpha n, \quad n\in\mathbb{Z}\ .
\end{equation}
This restriction is quite similar to the restriction \eqref{phi} imposed on the classical BTZ space, that the coordinate $\phi$ be periodic. In the following we study the representation with $\, l=\bar l\,$; the case $\,l\neq \bar l\, $ may be considered as well, but we shall not do so.
\subsection{Spectrum of the radial coordinate}
We now turn to the BTZ radial coordinate \eqref{radius-BTZ}: for the remainder of this section, we will consider the closely related function $B(r)$. The expression for $\,B$ in Poincar\'e coordinates holds in each of the regions I, II, III, \cite{Banados:1992gq}: in region I, outside the outer horizon ($r> r_+$), $\,B$ is positive, while in the black hole interior ($r< r_+$) $B$ is negative. In addition, the relation $r^2\geq 0\,$ in terms of $\,B$ gives the condition
\begin{equation}\label{minimum-B}
B\geq -\frac{r_+^2}{r_+^2-r_-^2}\, \ .
\end{equation}
To define $B$ as an operator, we need to choose an operator ordering, since $\beta$ and $\gamma$ do not commute with $z$. We will use the symmetrised products:
\begin{equation}\label{symmetrised}
\frac{\beta+\gamma}{z} := \frac12 \left\{\beta+\gamma,z^{-1}\right\} = - E_+^{-1}\Big(H-\frac12\Big) , \qquad \frac{\beta-\gamma}{z} := \frac12 \left\{\beta-\gamma,z^{-1}\right\} = - \bar E_+^{-1}\Big(\bar H - \frac12\Big)\ .
\end{equation}
This gives
\begin{equation}
B = E_+^{-1} \Big(H-\frac12\Big)\,\bar E_+^{-1} \Big(\bar H - \frac12\Big)\ .
\end{equation}
In particular, $B$ factorises into chiral and anti-chiral pieces. As differential operators, these are
\begin{equation}
\frac{\beta+\gamma}{z} = - i\left(\partial_x + \frac{l+\frac12}{x}\right), \qquad \frac{\beta-\gamma}{z} =- i\left(\partial_{\bar x} + \frac{\bar l+\frac12}{\bar x}\right)\ .
\end{equation}
We are however more interested in the expression for $B$ in $(\chi,\eta)$ variables. In these coordinates the operator $B$ becomes
\begin{equation}
B = \frac14\left( -\partial_\chi^2 - \frac{4l+3}{\chi}\,\partial_\chi - \frac{(2l+1)^2}{\chi^2} + \frac{1}{\chi^2}\,\partial_\eta^2\right)\ .
\end{equation}
By construction, $B$ commutes with $\, H-\bar H = \partial_\eta\,$. Therefore we can restrict ourselves to solving for eigenvectors of $B$ among functions with a fixed $\eta$-Fourier mode,
\begin{equation}
B f_{n,\lambda}(\chi,\eta) = \lambda^2 f_{n,\lambda}(\chi, \eta)\, , \qquad f_{n,\lambda}(\chi, \eta) = e^{\frac{2\pi i n }{\alpha}\, \eta}\, f_\lambda(\chi), \quad n\in \mathbb{Z}\ .
\end{equation}
It is useful to write the resulting eigenvalue equation for $f_\lambda(\chi)$ in the Schr\"odinger form. Introducing
\begin{equation}\label{Sch1}
f_\lambda(\chi) = \chi^{-2l-3/2}\, h_\lambda(\chi)\,,
\end{equation}
we obtain the equation
\begin{equation}\label{eigenh}
-\frac{d^2h_\lambda\,}{d\chi^2} - \left(c^2+\frac 14\right)\, \frac{1}{\chi^2}\, h_\lambda =4\lambda^2\, h_\lambda\,,
\end{equation}
with
\begin{equation}\label{c}
c=c(n)=- \frac{2\pi n}{\alpha} = \frac{n \ell\,}{r_-}\ .
\end{equation}
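The substitution \eqref{Sch1} can be verified mechanically; the following SymPy sketch (our own illustration) reduces the eigenvalue problem for $B$ to the Schr\"odinger form, with $\partial_\eta^2$ replaced by $-c^2$ on a fixed Fourier mode:

```python
# Sketch (our own illustration): substituting f = chi**(-2l - 3/2) h(chi) and
# d_eta^2 -> -c^2 turns B f = lambda^2 f into
#   -h'' - (c^2 + 1/4) h / chi^2 = 4 lambda^2 h.
import sympy as sp

chi = sp.symbols("chi", positive=True)
l, c = sp.symbols("l c")
h = sp.Function("h")(chi)

pref = chi**(-2*l - sp.Rational(3, 2))
f = pref*h
Bf = sp.Rational(1, 4)*(-sp.diff(f, chi, 2) - (4*l + 3)/chi*sp.diff(f, chi)
                        - (2*l + 1)**2/chi**2*f - c**2/chi**2*f)

# divide out the chi prefactor and compare with the Schroedinger form
reduced = sp.simplify(sp.expand(4*Bf/pref))
target = -sp.diff(h, chi, 2) - (c**2 + sp.Rational(1, 4))/chi**2*h
assert sp.simplify(reduced - target) == 0
print("reduction to the inverse-square Schroedinger problem verified")
```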
Equation \eqref{eigenh} is the eigenvalue equation for a particle moving on a line in the attractive $\chi^{-2}$ potential,
\begin{equation}\label{V}
V(\chi) =- \left(c^2+\frac 14\right)\,\frac{1}{\chi^2}\ .
\end{equation}
Solutions to \eqref{eigenh} for positive eigenvalues $\lambda^2$ and fixed $ c$, the scattering states of the potential \eqref{V}, are the Bessel functions of imaginary order which we will write in terms of the Hankel functions $H^{(1,2)}_{ic} $,
\begin{equation}
f_{\lambda}(\chi) = \sqrt{\chi} \left(C_1 \, H^{(1)}_{ic}(2\lambda\chi) + C_2\, H^{(2)}_{ic}(2\lambda\chi)\right)\ .
\end{equation}
They behave asymptotically as plane waves,
\begin{equation}
\sqrt{\chi}\, H_{ic}^{(1,2)}(2\lambda\chi)\sim \frac{1\pm i}{\sqrt{2\pi\lambda}}\ e^{\pm\left(2i\lambda\chi+\frac{c\pi}{2}\right)}\,, \quad \chi\to\infty\,,
\end{equation}
and vanish at $\chi= 0$. There are continuously many scattering states, that is, continuously many eigenstates of $r^2$ outside the outer horizon, with eigenvalues $\ r_+^2+\lambda^2(r_+^2 - r_-^2)\,$.
States of the BTZ black hole inside the horizon $ r_+$ are the eigenstates of $B$ with $\lambda^2<0$, i.e. the bound states of the potential \eqref{V}. With regard to the bound states, the attractive $\chi^{-2}$ potential is usually considered unphysical. The reason can be seen already in equation \eqref{eigenh}: the equal scaling of the kinetic and the potential terms implies that, if $\, h_\lambda(\chi)$ is a solution, then so is the whole family $\, h_{\lambda/\mu} (\mu\chi)$, $\mu\in\mathbb{R}_+$. Applied to bound states, this means that if there is one bound state, there is a continuum of them. Given that the function $H^{(1)}_{ic}$ decreases exponentially at infinity, i.e. that it is normalisable,
\begin{equation}
\sqrt{\chi} \, H^{(1)}_{ic}(2i\lambda\chi) = \sqrt{\chi}\, K_{ic}(2\kappa\chi)\sim \frac12\sqrt{\frac{\pi}{\kappa}}\ e^{-2\kappa\chi} \,, \quad \chi\to\infty\,,
\end{equation}
($ \kappa^2=-\lambda^2$, $\,\kappa>0$)\,, the bound states exist. Mathematically, the problem arises because $B$ is not a self-adjoint operator, but only formally self-adjoint, \cite{Hutson}.\footnote{The chiral and anti-chiral parts of $B$ are also formally self-adjoint, but in intervals $\,x\in [0,\infty)$, $\,\bar x\in [0,\infty)\,$ they do not have self-adjoint extensions. This fact shows up in non-orthogonality of their eigenfunctions: we have, for example
\begin{equation*}
(x^{-l-1/2} e^{i\kappa_1 x}, x^{-l-1/2} \, e^{i\kappa_2 x}) = 2^{2l+1}\pi \int\limits_0^\infty dx\ e^{i(\kappa_2-\kappa_1) x} = \frac{2^{2l+1}\pi i}{\kappa_2 - \kappa_1}\,,
\quad \text{for}\quad \kappa_1\neq\kappa_2\ .
\end{equation*}}
Self-adjoint extensions of $B$ can be obtained by imposing appropriately chosen boundary conditions, \cite{x^-2}. This, however, does not resolve the problem, as each choice of a self-adjoint extension gives a different spectrum of $B$. In \cite{Landau}, this property of the bound state spectrum is interpreted as a feature of the attractive inverse-square potential: all negative-energy particles fall to the centre ($\chi =0$, $\, B=-\infty\,$).
In our analysis, $B$ is related to the radius $r$ via \eqref{B(r)}. But $r$ is the radial (real) coordinate, and we should ensure that $r^2\geq 0\,$ holds at the quantum level. In consequence, the eigenvalues of $B$ are to be constrained as
\begin{equation}
-\kappa^2= \lambda^2\geq -\, \frac{r_+^2}{r_+^2-r_-^2}\ ,
\end{equation}
\begin{figure}[ht]
\centering
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{regularisation1.pdf}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[scale=0.5]{regularisation2.pdf}
\end{minipage}
\caption{Inverse square potential and its modifications}
\end{figure}
that is, they have a (negative) lower bound. A natural way to enforce this condition is to modify the potential $\,V(\chi)$ by giving it a minimal value, $\, V_{min} = -4\kappa_0^2\,$. The simplest modifications considered in the literature \cite{Landau,x^-2}, shown in Figure~1, are
\begin{equation}\label{Vmod}
V^\star(\chi) =\left\{ \begin{array}{ll}
-\, \dfrac{4 r_+^2}{r_+^2-r_-^2} \, , \quad & \chi\in (0, \chi_0) \\[12pt]
- \left(c^2+\dfrac 14\right)\,\dfrac{1 }{\chi^2} \,, \ \ & \chi\geq\chi_0
\end{array} \right.
\ \ \text{and} \quad
V^\ast(\chi) =\left\{ \begin{array}{ll}
\infty \, , \quad & \chi\in (0, \chi_0) \\[12pt]
- \left(c^2+\dfrac 14\right)\,\dfrac{1 }{\chi^2} \, , \ \ & \chi\geq\chi_0
\end{array} \right.
\end{equation}
where in both cases
\begin{equation}
4\kappa_0^2= \frac{4 r_+^2}{r_+^2-r_-^2} = \left(c^2+\frac 14\right)\, \frac{1}{\chi_0^2}\ .
\end{equation}
We discuss the eigenstates corresponding to these potentials in Appendix C. Both modifications have the same qualitative properties: for a fixed $c$ there is an infinite discrete set of bound states whose eigenvalues accumulate exponentially at $\lambda^2\to 0^-$. This means that the fuzzy BTZ black hole has an infinite discrete set of states inside the outer horizon $r_+\,$.
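The exponential accumulation can be illustrated numerically. For the hard-wall modification $V^\ast$, the bound-state condition amounts (in this sketch, our own illustration) to $K_{ic}(2\kappa\chi_0)=0$, and the zeros of $K_{ic}$ accumulate geometrically, successive zeros differing by the factor $e^{-\pi/c}$:

```python
# Numerical sketch (our own illustration): zeros of K_{ic}(x) accumulate
# geometrically as x -> 0, i.e. are equally spaced in ln x with gap pi/c;
# hence kappa_{n+1}/kappa_n -> exp(-pi/c) and bound states pile up at
# lambda^2 -> 0^-.  Here c = 1.
import mpmath as mp

mp.mp.dps = 40
c = mp.mpf(1)

def K(t):                       # K_{ic}(e^t) is real for real argument
    return mp.besselk(1j*c, mp.exp(t)).real

# locate sign changes of K_{ic}(x) on a logarithmic grid in x
ts = mp.linspace(-14, -2, 1201)
zeros = [t for t0, t in zip(ts, ts[1:]) if K(t0)*K(t) < 0]
gaps = [float(b - a) for a, b in zip(zeros, zeros[1:])]
print(gaps)   # each gap in ln(x) should approach pi/c ~ 3.1416
```

The printed gaps in $\ln x$ approach $\pi/c$, so successive $\kappa_n$ differ by the factor $e^{-\pi/c}$, reproducing the exponential accumulation at $\lambda^2\to 0^-$.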
\section{Summary and outlook}
In this work, we have constructed noncommutative models of the three-dimensional anti-de~Sitter space and the BTZ black hole. We shall briefly summarise the results obtained and discuss some future research directions.
Our starting point was the set of expressions (\ref{Z1}-\ref{G1}) that define noncommutative analogues of the Poincar\'e coordinates, together with a moving frame, as operators on a Hilbert space. While such a definition is in part a matter of choice, we showed that it allows one to develop a differential geometry closely analogous to the commutative one. The noncommutative metric \eqref{metr}, essentially fixed by the frame, admits a compatible torsion-free connection \eqref{connection}, and the associated curvature satisfies the vacuum Einstein equations with a negative cosmological constant. We derived the Laplace-Beltrami operator, including its action on arbitrary differential forms, (\ref{laplacian-functions}-\ref{lap-3forms}). The boundary of the fuzzy AdS$_3$ is found to be commutative and flat, \eqref{boundary}.
Noncommutative coordinates (\ref{Z1}-\ref{G1}) allow for a free parameter $a$, which does not enter into structure functions of the differential calculus. We showed that $a$ may be related to the global spacetime structure: requiring that the effect of BTZ identifications on Poincar\'e coordinates is implemented on the quantum level by a unitary operator $\,U=e^{\alpha(H-\bar H)}\,$ fixes $a$ and $\alpha$ uniquely, \eqref{aalpha}. Relations \eqref{aalpha} show how the model depends on the parameters that characterise the classical BTZ black hole, the radii of its inner and outer horizons. Notice however that $a$ and $\alpha$ are not defined for $\, r_-=0$, that is, the given construction does not describe the non-rotating fuzzy black hole. This is, in a way, in accord with the fact that non-rotating and rotating BTZ black holes have different classical geometries, \cite{Banados:1992gq}. On a related note, extending $r_\pm$ beyond physical values and putting $r_+=0$, $\, r_-=i\ell$, we formally obtain
\begin{equation}
M=-1\, , \quad a=\frac 12\, ,\quad \alpha =-2\pi i\ ,
\end{equation}
which makes the BTZ identifications \eqref{U} trivial. In this sense, the value $a=1/2\,$ describes the fuzzy AdS$_3$ space, with $\, B(r) =r^2/\ell^2$.
Among the most important properties of a noncommutative space are the spectra of physically relevant coordinates. We analysed here the radial coordinate $r$. In computations, we used the Fourier space realisation of discrete series representations, (\ref{Fourier}-\ref{Fourier3}): in this realisation, identifications by $U$ are implemented by making one coordinate periodic, \eqref{eta}.
The eigenvalue equation for $B(r)$, \eqref{B(r)}, can be written in the Schr\"odinger form \eqref{eigenh}, whereupon eigenstates outside and inside the horizon at $r_+$ are identified with the scattering and bound states, respectively. The spectrum of scattering states is continuous, while that of bound states is infinite and discrete. It might be worth noting that the fact that the outer horizon $r=r_+$ separates the continuous and discrete parts of the spectrum comes out of a computation, rather than being satisfied by construction. Bound states accumulate exponentially at $r_+$ from below, \eqref{asymptotic-energies}. The treatment of bound states required a regularisation, which can be seen physically as excluding the values $\,r^2<0\,$ by a modification of the potential, \eqref{Vmod}: we may regard the regularisation procedure as a part of the quantisation prescription. Luckily, by all accounts different procedures, of which we studied two, lead to the same qualitative results.
One feature of the above model that should be mentioned is that there is no isomorphism between the commutative algebra of functions $\mathcal{\tilde{A}}$ and its noncommutative replacement $\mathcal{A}$. This property is for example satisfied by the fuzzy sphere, in a certain limiting sense. Let us discuss this question for the AdS$_3$, which has the unbroken $SO(2,2)$ symmetry. Due to the isometry AdS$_3\cong SL(2,\mathbb{R})$, the space of functions on AdS$_3$ decomposes as the regular representation\footnote{By $\pi^\ast$ we denote the dual, i.e. the contragredient representation of $\pi$.}
\begin{equation}\label{Peter-Weyl}
L^1(SL(2,\mathbb{R})) = \sum_{\pi\in PD} \text{End} (\pi) = \sum_{\pi\in PD} \pi\otimes\pi^\ast\,,
\end{equation}
where $PD$ is the set of principal and discrete series unitary irreducible representations of $SL(2,\mathbb{R})$, described in Appendix B. More precisely, matrix elements $\pi_{ij}\,$ span a dense subspace of $L^1(SL(2,\mathbb{R}))$. Notice that we have written the decomposition into left-right bimodules of $SL(2,\mathbb{R})$, or equivalently, representations of $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$. On the other hand, the algebra $\mathcal{A}$ can be decomposed into irreducibles of $SL(2,\mathbb{R})$ using results of \cite{Repka} that we collect in Appendix B. The decomposition reads
\begin{equation}\label{quantum-space-deocomposition}
\mathcal{A} = \text{End}(\mathcal{H} \otimes \mathcal{\bar H}) = (\mathcal{H}\otimes\mathcal{H}^\ast)\otimes(\mathcal{\bar H}\otimes\mathcal{\bar H}^\ast) = \int_{\mathbb{R}_+^2} d\rho d\bar\rho\ T_{i\rho-1/2,0}\otimes T_{i\bar\rho-1/2,0} = L^2(\mathbb{H}^2 \times \mathbb{H}^2)\ .
\end{equation}
In the last equality, it is understood that the hyperbolic spaces $\,\mathbb{H}^2$ carry the standard measure $y^{-2}dx\, dy$. Comparing \eqref{Peter-Weyl} and \eqref{quantum-space-deocomposition}, the quantum space appears to be of one dimension higher than the classical one. While the presence of additional Kaluza-Klein modes here follows from simple facts of representation theory, such modes are a common feature of various gravitational theories, and their meaning deserves further study. It was argued in \cite{Sperling:2019xar,Steinacker:2019fcb} that Kaluza-Klein modes are a generic feature of quantum spaces in more than two dimensions, where they were interpreted in terms of higher spin fields. We will investigate in the future whether the same interpretation is appropriate here.
The last question will also inevitably be addressed when constructing quantum field theories over $\mathcal{A}$. A necessary prerequisite for investigations of field theories is the knowledge of the Laplacian and its eigenfunctions. We have defined this operator in Section 3. The Laplacian acts on the algebra of functions $\mathcal{A}$ and more generally that of differential forms $\Omega^\ast(\mathcal{A})$, but it also acts on the Hilbert space of the theory itself. In the coordinates $(\chi,\eta)$, acting on functions \eqref{Sch1}, the operator is independent of $\eta$ and assumes the very simple form
\begin{equation}
\Delta = -\chi^2 \partial_\chi^2 + \left(4\chi^2 + \frac34\right) \sim -\partial_X^2 + (4e^{2X} + 1)\ .
\end{equation}
By the last expression, we mean that one may bring $\Delta$ to the rightmost form by changing the variable as $\chi = e^{X}$ and performing a similarity transformation. The eigenfunctions of $\Delta$ are readily constructed in terms of Bessel functions. We will solve the more difficult eigenvalue problem for $\Delta$ on $\mathcal{A}$ in future work. The resulting fuzzy harmonics will serve as the starting point for developing quantum field theory on $\mathcal{A}$. Besides quantum field theory on the fixed fuzzy background $\mathcal{A}$, an obvious challenge is to embed $\mathcal{A}$ as a ground state of a dynamical theory of gravity. The right framework to attempt this is probably that of matrix models, similarly to what was done in \cite{Jurman:2013ota,Sperling:2019xar,Steinacker:2016vgf}.
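As a simple consistency check of this form of the operator (our addition, not part of the analysis above), one can verify numerically that $f(X)=K_\nu(2e^X)$ satisfies $-f''+(4e^{2X}+1)f=(1-\nu^2)f$, where $K_\nu$ is the modified Bessel function of the second kind; a real order $\nu$ is used purely for numerical convenience, while the continuum scattering states correspond to imaginary order.

```python
# Numerical check (our addition): eigenfunctions of -d^2/dX^2 + 4 e^{2X} + 1
# can be written as modified Bessel functions f(X) = K_nu(2 e^X), with
# eigenvalue 1 - nu^2.
import numpy as np
from scipy.special import kv

nu = 1.5                       # real Bessel order, for convenience
X = np.linspace(-1.0, 1.0, 20001)
h = X[1] - X[0]
f = kv(nu, 2.0 * np.exp(X))

# second derivative by central finite differences (interior points only)
fpp = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2
lhs = -fpp + (4.0 * np.exp(2.0 * X[1:-1]) + 1.0) * f[1:-1]
rhs = (1.0 - nu**2) * f[1:-1]

resid = np.max(np.abs(lhs - rhs)) / np.max(np.abs(rhs))
print(resid)   # small, limited by the finite-difference step
```

The check follows from the modified Bessel equation after the substitution $u=2e^X$, for which $d/dX = u\,d/du$.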
Several other problems can and should be addressed in the given framework. One is to understand properties of the inner horizon $r_-\,$: in our model, the point $r=r_-$ is in no way special and likely to be absent from the spectrum of the radial coordinate. A related issue is to establish a model of the fuzzy non-rotating BTZ black hole, $r_-=0$, either by using another set of classical coordinates, or by a different quantisation of $\,(z,\gamma,\beta)$.
A further important question is, which irreducible representations of $SO(2,2)$ define the fuzzy BTZ black holes? Our choice of discrete series representations $\,T_l^-\otimes T_{\bar l}^-\,$ was dictated by consistency of the definition of Poincar\'e coordinates $(z,\gamma, \beta)$, as well as by simplicity ($\,\bar l =l$), and not, for example, by a physical requirement or constraint on the Casimir values $\,l(l+1)$, $\, \bar l(\bar l+1)$.
Finally, a problem that deserves a separate study is the entropy of the fuzzy BTZ black hole. In an exact quantum gravity solution we would expect the number of black hole states below the horizon to be finite, reproducing the known value of the black hole entropy. In our model all below-horizon states are discrete, but their number is infinite. Refining or modifying the present model, by a further regularisation or by restricting the (still arbitrary) parameters $l$, $\bar l\,$, $r_\pm$, $\ell\,$ in relation to the black hole entropy, is an important task for the future.
\vskip10pt
{\bf Acknowledgements:} This work is funded by a research grant under the project H2020 ERC STG 2017 G.A. 758903 ``CFT-MAP'' and by the 451-03-9/2021-14/200162 Development Grant of MPNTR, Serbia.
\section{Introduction}
Elucidating the nature of the electroweak symmetry
breaking sector of the Standard Model (SM) is the main goal of the Large Hadron Collider currently running at CERN. It
is widely believed that the simplest scenario involving a single scalar Higgs field is untenable due to the fine tuning and triviality problems which arise
in scalar field theories. One natural solution to these problems can be found by assuming that the Higgs sector
in the Standard Model arises as an effective field theory describing the dynamics of a composite field arising
from strongly bound fermion-antifermion pairs. These models are generically called
technicolor theories.
However, to obtain fermion masses in these scenarios requires additional model building, as in extended technicolor models~\cite{ETC-1,ETC-2,schrock2,schrock3} and models of top-condensation~\cite{Miransky:1988xi,Miransky:1989ds,Bardeen:1989ds,Marciano:1989mj}. In the latter
models four-fermion interactions drive the formation and condensation of a scalar top--anti-top bound state which plays
the role of the Higgs at low energies.
Our motivation in this paper is to study how
the inclusion of such four fermion interactions may influence the
phase structure and low energy behavior of non-abelian gauge theories in general.
Specifically we have examined a model with both gauge interactions and a chirally invariant
four fermi interaction - a model known in the literature as the gauged NJL model \cite{DSB-yamawaki}.
The focus of the current work is to explore the phase diagram when fermions are charged under a non-abelian gauge group. Indeed, arguments have been given in the continuum that
the gauged NJL model may exhibit different critical behavior at the boundary
between the symmetric and broken phases\footnote{Notice that the appearance of a true phase transition in the gauged NJL model depends on the approximation in which the running of the gauge coupling is neglected.} corresponding to the appearance of a line
of new fixed points associated with a mass anomalous dimension varying in the
range
$1 < \gamma_{\mu} < 2$ \cite{DSB-yamawaki,walking-francesco2}. The evidence for this behavior
derives from calculations utilizing the ladder approximation in Landau gauge to the Schwinger-Dyson equations. A primary goal of the current study was to use lattice simulation to check the validity of
these conclusions and specifically to search for qualitatively new critical behavior in the
gauged model
as compared to the pure NJL theory. While we will present results that indicate that
the phase structure of the gauged NJL model is indeed different from pure NJL, we shall
argue that our results are \emph{not} consistent
with the presence of any new fixed points in the theory.
In the work reported here and described in detail in \cite{us} we have concentrated on the four flavor theory
corresponding to
two copies of the basic Dirac doublet used in the lattice construction.
The four flavor theory is expected to be chirally
broken and confining at zero four fermi coupling and is free from
sign problems for gauge group $SU(2)$. Understanding the effects of the four fermion term in this
theory can then serve as a benchmark for future studies of theories which, for zero
four fermi coupling, lie near or inside the conformal window. In the latter
case the addition of
a four fermion term will break conformal invariance but in principle that breaking may be made arbitrarily small
by tuning the four fermi coupling. It is entirely possible that the phase diagrams of such conformal or walking
theories in the presence of four fermi terms may exhibit very different features than those seen for
a confining gauge theory.
\section{Details of the model}
We will consider a model which consists of $N_f/2$ doublets of gauged massless Dirac fermions in the
fundamental representation of an $SU(2)$ gauge group and
incorporating an $SU(2)_L\times SU(2)_R$ chirally invariant four fermi interaction.
The action for a single doublet takes the form
\begin{eqnarray}
S &=& \int d^4x\; \overline{\psi} ( i \slashed{\partial} - \slashed{A}) \psi - \frac{G^2}{2N_f} [ (\bar{\psi} \psi)^{2} + (\bar{\psi} i \gamma_{5} \tau^{a} \psi )^{2} ] \nonumber \\
&-& \frac{1}{2g^2} Tr [F_{\mu \nu} F^{\mu \nu}] ,
\label{eq:etcnjlaction}
\end{eqnarray}
where G is the four-fermi coupling, $g$ the usual gauge coupling
and $\tau^{a},a=1\ldots 3$ are the generators of the $SU(2)$ flavour group.
This action may be discretized using the (reduced) staggered fermion formalism with the result
\begin{equation}
S= \sum_{x,\mu} \ \chi^{T}(x) \ \mathcal{U}_{\mu}(x) \ \chi(x + a_{\mu}) \ [\eta_{\mu}(x) +G\; {\overline{\phi}}_\mu(x) \,\epsilon(x) \, \xi_\mu(x)] .
\label{finalS-latt} \end{equation} where $\eta_\mu(x)$, $\xi_\mu(x)$ and $\epsilon(x)$ are the usual
staggered fermion phases, ${\overline{\phi}}_\mu(x)=\frac{1}{16}\sum_h \phi_\mu(x-h)$ is the average of the scalar field over the hypercube \cite{redstag-Smit-1, redstag-Smit-2} and the gauge field acting on the reduced staggered fermions takes the form:
\begin{equation} \mathcal{U}_{\mu} (x) = \frac{1}{2} [1+ \epsilon(x)] \; U_{\mu}(x) + \frac{1}{2} [1- \epsilon(x)] \; U_{\mu}^{*}(x) \label{mathcalU}. \end{equation}
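As a small illustration (ours, not taken from the simulation code), the following sketch checks that $\mathcal{U}_{\mu}(x)$ in \eqref{mathcalU} equals $U_{\mu}(x)$ on even sites and $U_{\mu}^{*}(x)$ on odd sites, and is unitary in both cases; the $SU(2)$ parametrisation below is a standard one and is an assumption of the sketch.

```python
# Sketch (ours): the reduced-staggered link variable is U on even sites and
# its complex conjugate U* on odd sites; both are unitary for SU(2).
import numpy as np

def eps(x):                      # site parity epsilon(x) = (-1)^(x0+x1+x2+x3)
    return (-1) ** (sum(x) % 2)

# a random SU(2) link: U = a0*1 + i a.sigma with a0^2 + |a|^2 = 1
rng = np.random.default_rng(0)
a = rng.normal(size=4); a /= np.linalg.norm(a)
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
U = a[0] * np.eye(2) + 1j * sum(ai * s for ai, s in zip(a[1:], sig))

def U_cal(x):                    # U on even sites, U* on odd sites
    e = eps(x)
    return 0.5 * (1 + e) * U + 0.5 * (1 - e) * np.conj(U)

print(np.allclose(U_cal((0, 0, 0, 0)), U),
      np.allclose(U_cal((1, 0, 0, 0)), np.conj(U)),
      np.allclose(U_cal((1, 0, 0, 0)) @ U_cal((1, 0, 0, 0)).conj().T, np.eye(2)))
# True True True
```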
Clearly the theory is invariant under the $U(1)$
symmetry $\chi(x)\to e^{i\alpha\epsilon(x)}\chi(x)$ which is to be interpreted as the $U(1)$ symmetry corresponding to
fermion number.
More interestingly it is also invariant under
certain shift symmetries given by
\begin{eqnarray}
\chi(x)&\to&\xi_\rho(x)\chi(x+\rho) , \\
U_\mu(x)&\to&U_\mu^{*}(x+\rho) , \\
\phi_\mu(x)&\to&(-1)^{\delta_{\mu\rho}}\phi_\mu(x+\rho) .
\end{eqnarray}
These shift symmetries
correspond to a {\it discrete} subgroup of
the continuum axial flavor transformations which act on the matrix field $\Psi$ according to
\begin{equation} \Psi\to \gamma_5\Psi\gamma_\rho\end{equation}
Notice that no single site mass term is allowed in this model.
\section{Numerical results}
We have used the RHMC algorithm to simulate the lattice theory with
a standard Wilson gauge action being employed for the gauge fields. Upon integration over
the basic fermion doublet we obtain a Pfaffian ${\rm Pf(M(U))}$ depending on the gauge field \footnote{Note that the fermion operator appearing in eqn.~\ref{finalS-latt} is antisymmetric}.
The required pseudofermion weight for $N_f$ flavors is then
${\rm Pf}(M)^{N_f/2}$. The pseudoreal character of
$SU(2)$ allows us to show that the
Pfaffian is purely real and so we are guaranteed to have no sign problem if
we use multiples of four flavors corresponding to a
pseudofermion operator of the form $(M^\dagger M)^{-{\frac{N_f}{8}}}$. The results in this
paper are devoted to the case $N_f=4$.
We have utilized
a variety of lattice sizes: $4^4$, $6^4$, $8^4$ and
$8^3\times 16$ and a range of gauge couplings $1.8< \beta \equiv 4/g^2< 10.0$.
\begin{figure}[htb]
\begin{center}
\includegraphics[height=70mm]{poly-L468-N4.eps}
\caption{Polyakov loop vs $\beta$ at $G=0.1$ for four flavours}
\label{poly-L468}
\end{center}
\end{figure}
To determine where the pure gauge theory is strongly coupled and confining we
have examined the average Polyakov line
as $\beta$ varies holding
the four fermi coupling fixed at $G=0.1$.
This is shown in figure ~\ref{poly-L468}.
We see a strong crossover between a confining regime for small $\beta$
to a deconfined regime at large $\beta$. The crossover coupling is volume dependent and takes
the value of $\beta_c\sim 2.4$ for lattices of size $L=8$.
For $\beta<1.8$ the plaquette
drops below 0.5 which we take as indicative of the presence of strong lattice spacing artifacts and so
we have confined our simulations to larger values of $\beta$.
We have set the fermion mass to zero in all of our work so that our lattice
action possesses the series of exact chiral symmetries discussed earlier.
One of the primary observables used in this
study is the chiral condensate which
is computed from the gauge invariant one link mass operator
\begin{equation}
\chi (x)\left({\mathcal U}_\mu(x)\chi (x+e_\mu)+{\mathcal U}^\dagger_\mu(x-e_\mu)\chi(x-e_\mu)\right) \epsilon (x)\xi_\mu(x)\end{equation}
\begin{figure}
\begin{center}
\includegraphics[height=70mm]{oct12-psibpsi-L8-N4-all-beta-v3.eps}
\caption{$\langle{\overline{\chi}}\chi\rangle$ vs $G$ for varying $\beta$ for the $8^4$ lattice with $N_f = 4$.}
\label{oct12-psibpsi-L8-N4-all-beta}
\end{center}
\end{figure}
In Figure \ref{oct12-psibpsi-L8-N4-all-beta} we
show a plot of the absolute value of the condensate at a variety of gauge couplings
$\beta$ on $8^4$ lattices. Notice the rather smooth transition between symmetric and broken phases around
$G\sim 0.9$ for $\beta = 10$. This is consistent with earlier work using sixteen flavors of naive fermion reported
in \cite{annakuti} which identified a line of second order phase transitions in this region of
parameter space. It also agrees with the behavior seen in previous simulations using conventional staggered quarks \cite{Hands:1997uf}.
This behavior should be contrasted with the behavior of the condensate for strong gauge coupling $\beta\le 2.4$. Here a very
sharp transition can be seen reminiscent of a first order phase transition. In Figure~\ref{oct12-psibpsi-all-L-N4} we highlight this
by showing a plot of the condensate versus four fermi coupling at the single gauge coupling $\beta=2.0$ for a range of
different lattice sizes.
\begin{figure}
\begin{center}
\includegraphics[height=70mm]{oct12-psibpsi-all-L-N4.eps}
\caption{$\langle{\overline{\chi}}\chi\rangle$ vs $G$ at $\beta=2.0$ for lattices $4^4$, $6^4$ and $8^4$ with $N_f = 4$.}
\label{oct12-psibpsi-all-L-N4}
\end{center}
\end{figure}
The chiral condensate is now non-zero even for small four fermi coupling and shows no
strong dependence on the volume consistent with spontaneous chiral symmetry breaking in the pure gauge
theory. However, it jumps abruptly to much
larger values when the four fermi coupling exceeds some critical value.
This crossover or transition
is markedly discontinuous in character - reminiscent of a first order phase transition. Indeed,
while the position of the phase transition is only weakly volume dependent it appears
to get sharper with increasing volume.
What seems clear is that the second order transition seen in the
pure NJL model is no longer present when the gauge coupling is strong.
In the next section we will argue that this is to be expected -- in the gauged
model one can no longer send the fermion mass to zero by adjusting the four fermi coupling since it receives
a contribution from gauge mediated chiral symmetry breaking. Indeed the measured one link chiral condensate operator is not an order parameter
for such a transition since we observe it to be non-zero for all $G$. Notice however that we see no sign
that this condensate depends on the gauge coupling $\beta$ in the confining regime at small $G$. This is qualitatively different from the behavior of regular staggered
quarks and we attribute it to the fact that the reduced formalism does not allow for a single site mass term or an
exact {\it continuous} chiral symmetry. Thus the spontaneous breaking of the
residual discrete lattice chiral symmetry by gauge interactions will
not be signaled by a light Goldstone pion and the measured condensate will receive contributions only from massive states.
The transition we observe is probably best thought
of as a crossover phenomenon corresponding to the sudden onset of a new mechanism for dynamical mass generation due to the strong four fermi
interactions.
\section{Summary}
In this paper we have conducted numerical simulations
of the gauged NJL model for four flavors of Dirac fermion in the
fundamental representation of the $SU(2)$ gauge group.
We have employed a reduced staggered fermion discretization scheme which allows us
to maintain an exact subgroup of the continuum
chiral symmetries.
We have examined the model for a variety of lattice sizes, gauge couplings, and four fermi interaction strengths. In the NJL limit $\beta\to\infty$
we find evidence for a continuous phase transition for $G\sim 1$ corresponding to
the expected spontaneous breaking of chiral symmetry. However, for gauge couplings that
generate a non-zero chiral condensate even for $G=0$ this transition or crossover appears
much sharper and there is no evidence of critical fluctuations in the
chiral condensate.
Thus our results are consistent with the idea that the second order phase transition which exists in the pure
NJL theory ($\beta=\infty$) survives at weak gauge coupling. However our results indicate that
any continuous transition ends if the gauge coupling becomes strong enough to cause confinement.
In this case we do however see evidence of additional dynamical mass generation
for sufficiently large four fermi coupling associated with
an observed rapid crossover in the chiral condensate and a possible first
order phase transition.
The fact that we find the condensate non-zero and constant for strong
gauge coupling and $G< G_{c}$ shows that the chiral symmetry of the theory is already broken as expected for $SU(2)$ with $N_f=4$ flavors. This
breaking of chiral symmetry due to the gauge interactions is accompanied by the
generation of a non-zero fermion mass even for small four fermi coupling. Notice that
this type of scenario is actually true of top quark condensate models in which the
strong QCD interactions are already expected to break chiral symmetry
independent of a four fermion top quark operator.
The magnitude of
this residual fermion mass is {\it not} controlled by the four fermi coupling and cannot
be sent to zero by tuning the four fermi coupling - there can be no continuous phase
transition in the system as we increase the four fermi coupling - rather the condensate
becomes strongly enhanced for large $G$.
\begin{acknowledgments}
The simulations were carried out using USQCD
resources at Fermilab and Jlab.
\end{acknowledgments}
\section{Introduction}
Measurements are usually made to identify one out of a number of possibilities. In state discrimination measurements, for example, the task is to design a measurement that tells you exactly which quantum state of the system you have been given. Another possibility, however, is to design a measurement that tells you which state you do not have, i.e.\ one that eliminates a possibility. A simple example of this is
a measurement that eliminates one of the trine
states of a qubit. The trine states are $|0\rangle$, $(1/2) (-|0\rangle + \sqrt{3}|1\rangle )$, and $(-1/2) (|0\rangle + \sqrt{3}|1\rangle )$.
The ``anti-trine states'', which are orthogonal to the trine states, are $|1\rangle$, $(-1/2) (\sqrt{3}|0\rangle + |1\rangle )$, and $(1/2)(\sqrt{3} |0\rangle - |1\rangle)$. Suppose you are given a qubit, which is guaranteed to be in one of the trine states. It is not possible to find a measurement that will definitely tell you which state you have; some probability of error or failure is necessary. But by making use of a POVM whose elements are proportional to projections onto the anti-trine states, you can always identify a state that you do not have. For example, if you obtain the result corresponding to the state $|1\rangle$, you have not been given the state $|0\rangle$.
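As a quick numerical illustration (our addition, a direct check rather than anything needed for the argument), one can verify that the operators $(2/3)|a_{j}\rangle\langle a_{j}|$ built from the anti-trine states $|a_{j}\rangle$ sum to the identity and that each one annihilates the corresponding trine state:

```python
# Check (ours) of the trine example: the POVM with elements (2/3)|a_j><a_j|,
# where |a_j> are the anti-trine states, is complete, and each outcome
# annihilates the corresponding trine state.
import numpy as np

s3 = np.sqrt(3)
trine = [np.array([1, 0]),
         np.array([-1, s3]) / 2,
         np.array([-1, -s3]) / 2]
anti  = [np.array([0, 1]),
         np.array([-s3, -1]) / 2,
         np.array([s3, -1]) / 2]

povm = [(2 / 3) * np.outer(a, a.conj()) for a in anti]
print(np.allclose(sum(povm), np.eye(2)))                    # True: completeness
print([abs(a @ t) < 1e-12 for a, t in zip(anti, trine)])    # all True
```

The factor $2/3$ is fixed by completeness; the group-theoretic construction below makes this systematic.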
There has not been a great deal of work on state elimination measurements. Nonetheless, they have found application in studies of the hidden subgroup problem \cite{hoyer}, quantum foundations \cite{PBR,Caves2002}, quantum communication \cite{Perry}, and quantum cryptography \cite{Collins,RyanOT}. Perhaps the most extensive study so far is by Bandyopadhyay \emph{et al.}, who applied semi-definite programming to an examination of single-state elimination measurements \cite{Bando2014}. Single-state elimination also goes under the name ``anti-distinguishability'', and the focus of the work in that area has been on finding conditions for determining when a set of states is anti-distinguishable, i.e.\ when there exists a POVM each of whose outcomes corresponds to eliminating one of the states \cite{Caves,Heinosaari1,Havlicek}. Recently, a connection between anti-distinguishability and non-contextuality inequalities was found \cite{Leifer}. Measurements for eliminating pairs of two-qubit states, which generalize some of the results in \cite{PBR}, were presented in \cite{Crickmore}.
Here we are interested in measurements that eliminate sets of states, not just single states as in anti-distinguishability. We will focus on states that are qubit sequences, where each qubit can be in one of several different states. Previous studies of elimination measurements for states of this type considered only two states per qubit and the elimination of a single state of the sequence. We will construct measurements that remove both of these restrictions. The tool we will use is group theory, which we employ to construct covariant elimination measurements \cite{Dariano}. Group theory was used to generate sets of anti-distinguishable states in the case that the representation of the group is irreducible in \cite{Heinosaari1}. We will go beyond this and consider the case in which the representation is reducible, with the restriction that each irreducible representation appears at most once. This turns out to be quite adequate for producing elimination measurements of qubit sequences. We will start with measurements that eliminate one state, but then go on to find measurements that eliminate two or four states. Finally, we find a condition that places a restriction on what types of elimination measurements are possible.
\section{Group theory preliminaries}
Suppose one has a collection of states, $\{ |\psi_{g}\rangle = \Gamma (g) |\psi_{e}\rangle\, | \, g\in G \}$. Here, $G$ is a group, $\Gamma (g)$ for $g\in G$ is a unitary representation of the group in which each irreducible representation appears at most once, and $e\in G$ is the identity element of the group. We want to find a POVM, $\{ \Pi_{g}\, | \, g\in G\}$, for which each element corresponds to eliminating one of the states in the set. That means that, if $\Pi_{g}$ is the element corresponding to $g$, then $\Pi_{g}|\psi_{g}\rangle = 0$. Let us assume for now that the POVM elements are rank one, and can be expressed in the form
\begin{equation}
\label{povm}
\Pi_{g}= \Gamma (g) |X\rangle\langle X| \Gamma (g)^{-1} ,
\end{equation}
where $|X\rangle$ is a vector yet to be determined. Note that if $\Pi_{e} |\psi_{e}\rangle =0$, which is equivalent to the condition $\langle X|\psi_{e}\rangle = 0$, then we will have $\Pi_{g}|\psi_{g}\rangle = 0$.
The representation $\Gamma$ can be decomposed into irreducible representations, $\Gamma_{p}$,
\begin{equation}
\Gamma = \bigoplus_{p} \Gamma_{p} ,
\end{equation}
where each $\Gamma_{p}$ acts on an invariant subspace, that is, each $\Gamma_{p}(g)$, for $g\in G$, maps the subspace into itself. Let $P_{p}$ be the projection onto the subspace corresponding to $\Gamma_{p}$. Then a theorem from group representation theory implies that
\begin{equation}
\label{decomp}
\frac{1}{|G|} \sum_{g\in G} \Gamma (g) |X\rangle\langle X| \Gamma (g)^{-1} = \sum_{p} \frac{1}{d_{p}} \| X_{p}\|^{2} P_{p} ,
\end{equation}
where $d_{p}$ is the dimension of the representation $\Gamma_{p}$, $|G|$ is the number of elements in $G$, and $|X_{p}\rangle = P_{p}|X\rangle$. Now suppose we find a vector $|X\rangle$ in Eq.\ (\ref{povm}) that satisfies $\| X_{p}\|^{2} = d_{p}/|G|$. We will then have that $\sum_{g} \Pi_{g} = I$, as is required for the elements of a POVM.
As a simple example, let us apply this to a single qubit with the group $\mathbb{Z}_{3} =\{ e,g,g^{2}\}$, where $e$ is the identity element and $g^{3}=e$. We shall choose a two-dimensional representation of $\mathbb{Z}_{3}$, and identify $g$ with the matrix (in the computational basis)
\begin{equation}
V=\left( \begin{array}{cc} -1/2 & \sqrt{3}/2 \\ -\sqrt{3}/2 & -1/2 \end{array} \right) ,
\end{equation}
that is, $\Gamma (g) = V$, which is a rotation by $2\pi /3$ in the $x-y$ plane. This matrix has eigenstates
\begin{equation}
|u_{\pm}\rangle = \frac{1}{\sqrt{2}}\left( \begin{array}{c} 1 \\ \pm i \end{array} \right) ,
\end{equation}
corresponding to eigenvalues $e^{\pm 2\pi i/3}$, respectively. We will be finding an elimination measurement for the vectors $V^{j}|0\rangle$, where $j \in \{ 0,1,2 \}$. Note that the vectors $|0\rangle$, $V|0\rangle$, and $V^{2}|0\rangle$ are just the trine states.
The vectors corresponding to the invariant subspaces of the irreducible representations are $|u_{j}\rangle$, where $j = \pm$. The vector $|X\rangle$ is now
\begin{equation}
|X\rangle = \frac{1}{\sqrt{3}} \sum_{j=\pm} e^{i\phi_{j}}|u_{j}\rangle ,
\end{equation}
which follows from the fact that $\| X_{p}\|^{2}=d_{p}/|G|$, $d_{p}=1$, and $|G|=3$.
The condition that $\langle 0|X\rangle = 0$ is
\begin{equation}
\sum_{j=\pm} e^{i\phi_{j}} = 0.
\end{equation}
This equation is easy to satisfy with the obvious choice of $\phi_{+} = 0$ and $\phi_{-} = \pi$. Making this choice we find that $|X\rangle =i\sqrt{2/3} |1\rangle$. The POVM elements are found by applying $V^{j}$, where $j \in \{ 0,1,2 \}$ to $|X\rangle$, and they are proportional to projections onto the anti-trine states.
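The construction can also be verified directly; the following sketch (ours) builds $\Pi_{j}=V^{j}|X\rangle\langle X|V^{-j}$ and checks both completeness and the elimination property:

```python
# Check (ours) of the Z_3 construction: with V the rotation by 2*pi/3 and
# |X> = i*sqrt(2/3)|1>, the elements Pi_j = V^j |X><X| V^{-j} form a POVM,
# and Pi_j annihilates the trine state V^j |0>.
import numpy as np

c, s = -0.5, np.sqrt(3) / 2
V = np.array([[c, s], [-s, c]])          # rotation by 2*pi/3, as in the text
X = 1j * np.sqrt(2 / 3) * np.array([0, 1])

povm, states = [], []
for j in range(3):
    Vj = np.linalg.matrix_power(V, j)
    v = Vj @ X
    povm.append(np.outer(v, v.conj()))
    states.append(Vj @ np.array([1, 0]))  # the trine state V^j |0>

print(np.allclose(sum(povm), np.eye(2)))  # True: sum_g Pi_g = I
print([abs(states[j].conj() @ (povm[j] @ states[j])) < 1e-12 for j in range(3)])
```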
\section{Single-state elimination for two qubits}
Let's now apply the group theory perspective to the measurement of four two-qubit states, which will be a generalisation of the measurement considered in \cite{PBR}. We will make use of the group $\mathbb{Z}_{2}$. This group has two elements, $\mathbb{Z}_{2}=\{ e,g\}$, where $e$ is the identity element and $g^{2}=e$. We will choose a two-dimensional representation of this group acting in the qubit space spanned by the computational basis vectors $|0\rangle$ and $|1\rangle$. The representation is specified by $\Gamma (e)=I$, the identity operator, and $\Gamma (g) = R$, the reflection through the $x$ axis, $R|0\rangle = |0\rangle$ and $R|1\rangle = - |1\rangle$. The two irreducible representations of $\mathbb{Z}_{2}$ are $\Gamma_{1}(e) =1$, $\Gamma_{1}(g)=1$, and $\Gamma_{2}(e) =1$, $\Gamma_{2}(g)=-1$. The invariant subspace corresponding to $\Gamma_{1}$ is spanned by $|0\rangle$, and the invariant subspace corresponding to $\Gamma_{2}$ is spanned by $|1\rangle$. For the state $|\psi_{e}\rangle$, we shall choose
\begin{equation}
|+ \theta\rangle = \cos\theta |0\rangle + \sin\theta |1\rangle ,
\end{equation}
where $0 \leq \theta \leq \pi /4$. This state is mapped into the state
\begin{equation}
|- \theta\rangle = \cos\theta |0\rangle - \sin\theta |1\rangle
\end{equation}
by $R$.
Going now to two qubits, our task is to find a measurement to eliminate one of the four states $|\pm \theta\rangle \otimes|\pm \theta\rangle$, where the pluses and minuses for each qubit are independent. Pusey et al. gave such a measurement for $\theta=\pi/8$. Our group is now $\mathbb{Z}_{2}\times \mathbb{Z}_{2}$ and the representation, $\Gamma (g)$, is now $\{ I\otimes I, I\otimes R, R\otimes I, R\otimes R \}$, which is a four-dimensional representation. The irreducible representations are just the products of the irreducible representations for $\mathbb{Z}_{2}$, and there are four of them. The invariant subspaces corresponding to the irreducible representations of $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$ are just $|j\rangle \otimes|k\rangle$, where $j,k \in \{ 0,1 \}$. The vector $|\psi_{e}\rangle$ is now $|+\theta\rangle \otimes |+\theta\rangle$. The vector $|X\rangle$ can be chosen to be
\begin{equation}
\label{XPBR}
|X\rangle = \frac{1}{2} \sum_{j,k=0,1} e^{i\phi_{jk}}|j\rangle \otimes|k\rangle .
\end{equation}
We can set $\phi_{00} = 0$ without loss of generality. The condition that $\langle +\theta, +\theta |X\rangle = 0$ is then
\begin{equation}
\cos^{2}\theta + e^{i\phi_{11}} \sin^{2}\theta + (e^{i\phi_{01}} + e^{i\phi_{10}}) \cos\theta \sin\theta =0 .
\end{equation}
Dividing through by $\cos^{2}\theta$ we get
\begin{equation}
1 + e^{i\phi_{11}} \tan^{2}\theta + (e^{i\phi_{01}} + e^{i\phi_{10}}) \tan\theta = 0 .
\end{equation}
Let us make the Ansatz $\phi_{11}=\pi$, $\phi_{01}=\pi + \phi$, and $\phi_{10}=\pi - \phi$, so that $e^{i\phi_{01}}+e^{i\phi_{10}} = -2\cos\phi$. This gives us
\begin{equation}
1- \tan^{2}\theta -2\tan\theta \cos\phi = 0 .
\end{equation}
For this to have a solution, it must be the case that
\begin{equation}
1- \tan^{2}\theta -2\tan\theta \leq 0 ,
\end{equation}
and this will be true if $\tan\theta \geq \sqrt{2}-1$, i.e.\ $\theta \geq \pi /8$. For $\theta$ satisfying this condition, we have the POVM whose elements are $\Gamma(g)|X\rangle\langle X|\Gamma^{\dagger}(g)$ for $g \in \mathbb{Z}_{2} \times \mathbb{Z}_{2}$. Each element corresponds to eliminating one of the four states. Note that while the states we are considering are separable, the POVM elements are projections onto entangled states. All of this is consistent with the results in \cite{PBR}, where the elimination measurement was given for $\theta = \pi/8$. Since we started with the assumption that the POVM elements are rank 1, we have, strictly speaking, not proven that a measurement that always eliminates one two-qubit state is impossible for $\theta<\pi/8$, but it turns out that this is the case \cite{Crickmore}. As we will see in Appendix B, for $\theta <\pi/8$ it is possible to sometimes eliminate one two-qubit state; that is, the measurement may sometimes fail, but when it succeeds, it will conclusively eliminate one state.
Crickmore \emph{et al.}\ \cite{Crickmore} give an alternative construction for the measurement in the whole range $0 < \theta \le \pi/4$, and also prove optimality.
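As a numerical check of this construction (our sketch, using the phase Ansatz $\phi_{01,10}=\pi\pm\phi$), one can verify for any admissible $\theta$ that the four reflected copies of $|X\rangle\langle X|$ form a POVM in which outcome $g$ eliminates the state $\Gamma(g)|+\theta\rangle\otimes|+\theta\rangle$:

```python
# Check (ours) of the two-qubit elimination measurement for theta >= pi/8.
import numpy as np

theta = np.pi / 6                       # any theta with pi/8 <= theta <= pi/4
t = np.tan(theta)
phi = np.arccos((1 - t**2) / (2 * t))   # real solution iff tan(theta) >= sqrt(2)-1

# |X> = (1/2) sum_{jk} exp(i phi_jk)|jk>, phi_00 = 0, phi_11 = pi, phi_01,10 = pi +/- phi
phases = {(0, 0): 0.0, (0, 1): np.pi + phi, (1, 0): np.pi - phi, (1, 1): np.pi}
e = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
X = 0.5 * sum(np.exp(1j * p) * np.kron(e[j], e[k]) for (j, k), p in phases.items())

plus = np.array([np.cos(theta), np.sin(theta)])   # |+theta>
R = np.diag([1.0, -1.0])                          # reflection through the x axis

povm, worst = [], 0.0
for a in (0, 1):
    for b in (0, 1):
        G = np.kron(np.linalg.matrix_power(R, a), np.linalg.matrix_power(R, b))
        v = G @ X
        povm.append(np.outer(v, v.conj()))
        psi = G @ np.kron(plus, plus)             # the state this outcome eliminates
        worst = max(worst, abs(psi @ povm[-1] @ psi))

print(np.allclose(sum(povm), np.eye(4)), worst < 1e-12)   # True True
```

Completeness follows from each coefficient of $|X\rangle$ having modulus $1/2$, and elimination from $\langle X|+\theta,+\theta\rangle = 0$.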
Now suppose we want to consider more states. In particular, we would like to consider $N$ states for each qubit. First we need the group $\mathbb{Z}_{N} = \{ g^{j}\, | \, j=0,1,2,\ldots N-1 \}$, where $g^{0}=g^{N}=e$, and its irreducible representations, which are given by $\Gamma_{k}(e) = 1$ and $\Gamma_{k}(g) = \exp (2k\pi i/N)$ for $k=0,1,2,\ldots N-1$. We will choose the representation $\Gamma (e)=I$ and $\Gamma (g)= S_{N}$, where $S_{N}|0\rangle = |0\rangle$ and $S_{N}|1\rangle = \exp (2\pi i /N) |1\rangle$. This representation is a direct sum of $\Gamma_{0}$ and $\Gamma_{1}$, and the invariant subspaces, as before, are spanned by $|0\rangle$ and by $|1\rangle$. For two qubits, the group is $\mathbb{Z}_{N} \times \mathbb{Z}_{N}$. The vectors to be measured are generated by applying $S_{N}^{j}\otimes S_{N}^{k}$, for $j,k \in \{ 0,1,2,\ldots N-1 \}$ to $|+\theta\rangle \otimes|+\theta\rangle$, and the POVM elements are generated by applying these same operators to the vector $|X\rangle$, which is the same, up to a factor, as above. In particular, the factor of $1/2$ in Eq.\ (\ref{XPBR}) will be replaced by $1/N$, since the group now has $N^{2}$ rather than $4$ elements. Therefore, we can see that the group theory gives us an elimination measurement for a larger set of states with almost no additional work. Note that the POVM elements are proportional to projections onto entangled states.
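The same check goes through for the $\mathbb{Z}_{N}\times\mathbb{Z}_{N}$ construction; the following sketch (ours) verifies the $N=3$ case, with nine states and nine POVM outcomes, reusing the $\pi\pm\phi$ phase Ansatz from the two-state case:

```python
# Check (ours) of the Z_N x Z_N generalisation for N = 3: nine product states
# S^j|+theta> (x) S^k|+theta>, and nine POVM outcomes generated from |X> with
# the overall factor 1/2 replaced by 1/N.
import numpy as np

N, theta = 3, np.pi / 6
t = np.tan(theta)
phi = np.arccos((1 - t**2) / (2 * t))

w = np.exp(2j * np.pi / N)
S = np.diag([1.0, w])                    # S_N |1> = exp(2 pi i / N) |1>
plus = np.array([np.cos(theta), np.sin(theta)])

ph = {(0, 0): 0.0, (0, 1): np.pi + phi, (1, 0): np.pi - phi, (1, 1): np.pi}
e = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
X = (1 / N) * sum(np.exp(1j * p) * np.kron(e[j], e[k]) for (j, k), p in ph.items())

povm, resid = [], 0.0
for j in range(N):
    for k in range(N):
        G = np.kron(np.linalg.matrix_power(S, j), np.linalg.matrix_power(S, k))
        v = G @ X
        povm.append(np.outer(v, v.conj()))
        psi = G @ np.kron(plus, plus)    # the state this outcome eliminates
        resid = max(resid, abs(psi.conj() @ povm[-1] @ psi))

print(np.allclose(sum(povm), np.eye(4)), resid < 1e-12)   # True True
```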
\section{A non-abelian group}
Up until now we have only made use of abelian groups, so now let us look at a non-abelian group. A simple non-abelian group is the dihedral group $D_{3}$, which consists of rotations and reflections in the plane that leave an equilateral triangle invariant. It has six elements, $\{ e,r,r^{2}, s, rs, r^{2}s \}$, where $r^{3}=e$ and $s^{2}=e$. The dihedral group $D_3$ is isomorphic to the symmetric group~$S_3$, i.e., the group of permutations of three elements. The mapping is defined by $s\mapsto(12)$, $r\mapsto(123)$.
The group has three conjugacy classes $C_{e}=\{ e\}$, $C_{r}=\{ r,r^{2}\}$, and~$C_{s}=\{ s, rs, r^{2}s \}$. It has three irreducible representations, $\Gamma_p$ for $p=1,2,3$, where $\Gamma_1$ and $\Gamma_2$ are one-dimensional and $\Gamma_3$ is two-dimensional. The character table for the group is given in Table~\ref{t-1}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|} \hline
& $C_{e}$ & $C_{r}$ & $C_{s}$ \\ \hline $\Gamma_1$ & $1$ & $1$& $1$ \\ \hline $\Gamma_2$ & $1$ & $1$ & $-1$ \\ \hline $\Gamma_3$ & $2$ & $-1$ & $0$ \\ \hline
\end{tabular}
\caption{\label{t-1}Character table for $D_{3}$.}
\end{table}
The one-dimensional representations are the trivial representation, $\Gamma_1(g)=1$ for all $g\in D_3$, and the so-called sign or alternate representation, defined by $
\Gamma_2(r)=1$ and $\Gamma_2(s)=-1$ for the generators of the group $r$ and $s$.
For the representation $\Gamma_3$, we can take the matrices
\begin{equation}
\Gamma_3(r)=\left( \begin{array}{cc} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{array} \right),\ \Gamma_3(s)=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right) ,
\end{equation}
expressed in the computational basis $\{ |0\rangle ,|1\rangle \}$.
Suppose we have two qubits, which transform according to the representation $\Gamma_3\otimes \Gamma_3$, that is $\Gamma (g) = \Gamma_{3}(g) \otimes \Gamma_{3}(g)$ for $g\in D_{3}$. For $|\psi_{e}\rangle$ we will choose $|0\rangle \otimes |+x\rangle$, where $|\pm x\rangle = (|0\rangle \pm |1\rangle )/\sqrt{2}$, and the application of $\Gamma (g)$ to this state for the different possible values of $g$ yields a set of $6$ product states in a four dimensional space. We now want to find a POVM that eliminates one of these states, which means we want to find a suitable vector $|X\rangle$.
The product representation, $\Gamma$ can be decomposed into irreducible representations,
\begin{equation}
\Gamma_3\otimes \Gamma_3=\Gamma_1 \oplus \Gamma_2 \oplus \Gamma_3 .
\label{D3rep}
\end{equation}
For the invariant subspaces, we find that $|v_{1}\rangle = (|00\rangle + |11\rangle )/\sqrt{2}$ transforms as $\Gamma_{1}$, $|v_{2}\rangle = (|01\rangle - |10\rangle )/\sqrt{2}$ transforms as $\Gamma_{2}$, and the subspace that transforms as $\Gamma_{3}$ is spanned by $|v_{3}\rangle = (|00\rangle - |11\rangle )/\sqrt{2}$ and $|v_{4}\rangle = (|01\rangle + |10\rangle )/\sqrt{2}$. In terms of these states, we have
\begin{equation}
|0\rangle \otimes |+x\rangle = \frac{1}{2} \sum_{j=1}^{4} |v_{j}\rangle .
\end{equation}
The vector $|X\rangle$ must be orthogonal to this vector and satisfy $\|X_{1}\|^{2}=\|X_{2}\|^{2}=1/6$ and $\|X_{3}\|^{2}=1/3$. We find that
\begin{equation}
|X\rangle = \frac{2}{\sqrt{6}} |0\rangle |-x\rangle = \frac{1}{\sqrt{6}} \sum_{j=1}^{4} (-1)^{j+1} |v_{j}\rangle ,
\end{equation}
satisfies these conditions. Consequently, the POVM given by $\{ \Gamma (g)|X\rangle\langle X|\Gamma (g)^{-1} \, | \, g\in D_{3} \}$ will eliminate one of the six states $\{ \Gamma (g) |\psi_{e}\rangle \, | \, g\in D_{3} \}$.
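Again purely as a numerical check (a Python/NumPy sketch; the variable names are ours), one can verify the defining relations of $D_{3}$ in this representation, the completeness of the six-outcome POVM, and that each outcome annihilates the corresponding state:

```python
import numpy as np

r = np.array([[-0.5, -np.sqrt(3) / 2],
              [np.sqrt(3) / 2, -0.5]])     # Gamma_3(r): rotation by 120 degrees
sref = np.diag([1.0, -1.0])                # Gamma_3(s): reflection
I2 = np.eye(2)

# Defining relations of D_3: r^3 = e, s^2 = e, s r s = r^{-1} (= r^T here)
relations = (np.allclose(np.linalg.matrix_power(r, 3), I2)
             and np.allclose(sref @ sref, I2)
             and np.allclose(sref @ r @ sref, r.T))

group = [I2, r, r @ r, sref, r @ sref, r @ r @ sref]   # e, r, r^2, s, rs, r^2 s

psi_e = np.kron([1.0, 0.0], np.array([1.0, 1.0]) / np.sqrt(2))  # |0> (x) |+x>
X = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(3)                # (2/sqrt 6)|0>|-x>

states = [np.kron(g, g) @ psi_e for g in group]
povm = [np.outer(np.kron(g, g) @ X, np.kron(g, g) @ X) for g in group]

completeness = np.allclose(sum(povm), np.eye(4))
eliminated = [float(st @ P @ st) for st, P in zip(states, povm)]
```

The completeness check confirms the norm conditions $\|X_{1}\|^{2}=\|X_{2}\|^{2}=1/6$ and $\|X_{3}\|^{2}=1/3$ numerically, via Schur averaging over the group.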
\section{Eliminating more than one state}
So far, we have only explored measurements that eliminate one state, and now we would like to find one that eliminates sets of larger size. Suppose we can find a vector $|X\rangle$, satisfying $\|X_{p}\|^{2}=d_{p}/|G|$, which is orthogonal to the states in $S_{e} = \{ |\psi_{e}\rangle , \Gamma (g_{1})|\psi_{e}\rangle , \ldots \Gamma (g_{n}) |\psi_{e}\rangle \}$. We will then obtain a POVM whose outcomes correspond to eliminating the sets $S_{g} = \{ \Gamma (g)|\psi_{e}\rangle , \Gamma (gg_{1})|\psi_{e}\rangle , \ldots \Gamma (gg_{n}) |\psi_{e}\rangle \}$. In general these sets may not be disjoint, and some may be identical. If the group elements $\{ e, g_{1}, \ldots g_{n}\}$ form a subgroup, $H$, then the sets $S_{g}$ correspond to left cosets of $H$. Any two cosets are either identical or disjoint, and there are $|G|/|H|$ of them, and that means that our POVM will be able to eliminate one of $|G|/|H|$ disjoint sets. We will now look at two examples.
Let us consider three qubits, each of which is in one of the states $|\pm \theta\rangle$. These eight states are generated by the group $\mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2}$ using the same representation of $\mathbb{Z}_{2}$ as before. We will choose a vector $|X\rangle$ of the form
\begin{equation}
|X\rangle = \frac{1}{2\sqrt{2}} \sum_{j,k,l=0,1} e^{i\phi_{jkl}}|j\rangle\otimes |k\rangle \otimes |l\rangle ,
\end{equation}
and choose the phases so that $|X\rangle$ is orthogonal to both $|+\theta\rangle^{\otimes 3}$ and $|-\theta\rangle^{\otimes 3}$. This will be true if
\begin{eqnarray}
0 & = & e^{i\phi_{000}} + (e^{i\phi_{011}} + e^{i\phi_{101}} + e^{i\phi_{110}} )\tan^{2}\theta \nonumber \\
0 & = & (e^{i\phi_{001}} + e^{i\phi_{010}} + e^{i\phi_{100}} ) + e^{i\phi_{111}} \tan^{2}\theta .
\end{eqnarray}
If we choose $\phi_{011} = \alpha$, $\phi_{101}=0$, $\phi_{110}=-\alpha$, $\phi_{001}=\beta$, $\phi_{010}=0$, and $\phi_{100} = - \beta$, and both $\phi_{000}$ and $\phi_{111}$ equal to $\pi$, then these conditions become
\begin{eqnarray}
1 & = & (1+2\cos\alpha ) \tan^{2}\theta \nonumber \\
\tan^{2}\theta & = & 1+2\cos\beta .
\end{eqnarray}
The first condition can be satisfied if $\tan^{2}\theta \geq 1/3$ and the second if $\tan^{2}\theta \leq 3$. For $3 \geq \tan^{2}\theta \geq 1/3$ they can both be satisfied, and this determines a vector $|X\rangle$ that is orthogonal to both $|+\theta\rangle^{\otimes 3}$ and $|-\theta\rangle^{\otimes 3}$. Note that $|-\theta\rangle^{\otimes 3} = R^{\otimes 3} |+\theta\rangle^{\otimes 3}$ and that $\{ I^{\otimes 3}, R^{\otimes 3} \}$ is a subgroup. That means the POVM will eliminate one of four sets, which in this case are pairs of states. The elements of the pairs will differ in all three slots, that is, if, for example, the first element is $|+\theta , -\theta, -\theta\rangle$, the second element will be $|-\theta, +\theta , +\theta\rangle$. So far, in our construction, we have $4$ pairs but $8$ POVM elements. The reason for this is that each pair is eliminated by two POVM elements. For example, the pair $\{ |+\theta\rangle^{\otimes 3} , |-\theta\rangle^{\otimes 3} \}$ is eliminated by both $|X\rangle\langle X|$ and $(R\otimes R\otimes R)|X\rangle\langle X|(R\otimes R\otimes R)$, because $R\otimes R\otimes R$ maps the pair $\{ |+\theta\rangle^{\otimes 3} , |-\theta\rangle^{\otimes 3} \}$ into itself. We can then combine these two rank one POVM elements into a rank two POVM element that eliminates the pair. Doing the same thing with the remaining pairs, we finally have a $4$ element POVM, consisting of rank two operators, each of whose elements corresponds to eliminating a pair. Note that there are $28$ possible pairs of states, so this POVM only eliminates a subset of the possible pairs. Also, due to the assumptions we made on $|X\rangle$, it is not clear whether the constructed measurement is optimal, or if it is possible to eliminate pairs also in other ranges of $\theta$.
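The conditions above are easy to test numerically; the following Python/NumPy sketch (the value $\tan^{2}\theta = 1/2$ is our choice, and anything in $[1/3,3]$ works) confirms that $|X\rangle$ is orthogonal to both $|+\theta\rangle^{\otimes 3}$ and $|-\theta\rangle^{\otimes 3}$, and that the eight rank-one operators sum to the identity:

```python
import numpy as np

t2 = 0.5                                    # tan^2(theta), our choice in [1/3, 3]
theta = np.arctan(np.sqrt(t2))
alpha = np.arccos((1.0 / t2 - 1.0) / 2.0)   # from 1 = (1 + 2 cos a) tan^2(theta)
beta = np.arccos((t2 - 1.0) / 2.0)          # from tan^2(theta) = 1 + 2 cos b

phases = np.zeros(8)                        # index jkl read as a 3-bit integer
phases[0b000] = np.pi
phases[0b111] = np.pi
phases[0b011], phases[0b101], phases[0b110] = alpha, 0.0, -alpha
phases[0b001], phases[0b010], phases[0b100] = beta, 0.0, -beta
X = np.exp(1j * phases) / (2 * np.sqrt(2))

c, s = np.cos(theta), np.sin(theta)
def triple(s1, s2, s3):                     # |s1 theta>(x)|s2 theta>(x)|s3 theta>
    out = np.array([1.0])
    for sg in (s1, s2, s3):
        out = np.kron(out, np.array([c, sg * s]))
    return out

orth_ppp = abs(np.vdot(triple(+1, +1, +1), X))
orth_mmm = abs(np.vdot(triple(-1, -1, -1), X))

R = np.diag([1.0, -1.0])
povm_sum = np.zeros((8, 8), dtype=complex)
for g in range(8):
    U = np.array([[1.0]])
    for bit in (g >> 2, (g >> 1) & 1, g & 1):
        U = np.kron(U, np.linalg.matrix_power(R, bit))
    povm_sum += np.outer(U @ X, (U @ X).conj())
completeness = np.allclose(povm_sum, np.eye(8))
```

Since $|X\rangle$ is orthogonal to both members of the pair $\{ |+\theta\rangle^{\otimes 3}, |-\theta\rangle^{\otimes 3}\}$, the element $|X\rangle\langle X|$ indeed eliminates the whole pair, as described above.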
This construction can be easily extended to $N$ states for each qubit, for $N$ even ($N$ needs to be even to guarantee that $|-\theta\rangle^{\otimes 3}$ is one of the states generated), by replacing $\mathbb{Z}_{2}$ by $\mathbb{Z}_{N}$, using the same representation for $\mathbb{Z}_{N}$ we used previously, and using the same vector $|X\rangle$, but with the factor $1/2^{3/2}$ replaced by $1/N^{3/2}$. The result is an $N^{3}/2$ element POVM each of whose elements corresponds to eliminating a pair of states.
Moving on to four qubits, we can find a measurement that eliminates sets of four states. The derivation is similar to those previously, so we will just state the results. The vector $|X\rangle$ is chosen to be
\begin{equation}
|X\rangle = \sum_{j,k,l,m=0}^{1} z_{jklm} |jklm\rangle ,
\end{equation}
where $z_{1000}$, $z_{0010}$, $z_{0111}$, $z_{1101}$, $z_{0101}$, $z_{0110}$, $z_{1111}$ are equal to one, $z_{0011}=e^{i\alpha}$, $z_{1100} = e^{-i\alpha}$, and the remaining coefficients are equal to $-1$. The angle $\alpha$ is given by
\begin{equation}
\cos\alpha = \frac{1 - \tan^{4}\theta}{2\tan^{2}\theta} ,
\end{equation}
and this can be satisfied if $1 \geq \tan^{2}\theta \geq \sqrt{2}-1$. This state is orthogonal to $|\theta\rangle^{\otimes 4}$, $|-\theta\rangle^{\otimes 4}$, $|\theta\rangle^{\otimes 2}|-\theta\rangle^{\otimes 2}$, and $|-\theta\rangle^{\otimes 2} |\theta\rangle^{\otimes 2}$. The operators $\{ I^{\otimes 4}, R^{\otimes 4}, I^{\otimes 2}R^{\otimes 2}, R^{\otimes 2}I^{\otimes 2} \}$, which generate these states from $|\theta\rangle^{\otimes 4}$, form a subgroup. Therefore, the sets that are eliminated are the four-element cosets of this subgroup, and there are four of them (the group is $\mathbb{Z}_{2}^{\times 4}$). Each POVM element is four-dimensional and corresponds to eliminating one of the cosets.
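The stated orthogonalities can again be checked numerically; here is a Python/NumPy sketch (the value $\tan^{2}\theta = 0.7$ and the overall $1/4$ normalization of $|X\rangle$, left implicit in the text, are our choices):

```python
import numpy as np

t2 = 0.7                                   # tan^2(theta), inside [sqrt(2)-1, 1]
theta = np.arctan(np.sqrt(t2))
alpha = np.arccos((1.0 - t2 ** 2) / (2.0 * t2))

z = -np.ones(16, dtype=complex)            # the remaining coefficients are -1
for idx in ['1000', '0010', '0111', '1101', '0101', '0110', '1111']:
    z[int(idx, 2)] = 1.0
z[int('0011', 2)] = np.exp(1j * alpha)
z[int('1100', 2)] = np.exp(-1j * alpha)
X = z / 4.0                                # normalization, implicit in the text

c, s = np.cos(theta), np.sin(theta)
def state(signs):                          # tensor product of |+-theta> qubits
    v = np.array([1.0])
    for sg in signs:
        v = np.kron(v, np.array([c, sg * s]))
    return v

subgroup = [(+1, +1, +1, +1), (-1, -1, -1, -1),
            (+1, +1, -1, -1), (-1, -1, +1, +1)]
overlaps = [abs(np.vdot(state(sg), X)) for sg in subgroup]   # all ~ 0

R = np.diag([1.0, -1.0])
povm_sum = np.zeros((16, 16), dtype=complex)
for g in range(16):
    U = np.array([[1.0]])
    for bit in (g >> 3, (g >> 2) & 1, (g >> 1) & 1, g & 1):
        U = np.kron(U, np.linalg.matrix_power(R, bit))
    povm_sum += np.outer(U @ X, (U @ X).conj())
completeness = np.allclose(povm_sum, np.eye(16))
```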
Thus we see that we can use group theoretic methods to construct exclusion measurements that eliminate more than one state. We next want to find a constraint on the kinds of exclusion measurements that are possible.
\section{Entropic bound}
Suppose Alice sends to Bob one of $N$ possible states, $|\psi_{z}\rangle$, $z\in \{ 1,2,\ldots N\}$, with all states being equally probable. This set of states is divided into $M$ non-intersecting sets of size $K$, where $MK=N$. Let $X\in \{ 1,2,\ldots M \}$ be the random variable corresponding to the set to which the state Alice sent belongs. We denote the set of states corresponding to $X=x$ by $S_{x}$. Bob performs a measurement on the state with $M$ possible outcomes, and $Y\in \{ 1,2,\ldots M\}$ is the random variable corresponding to Bob's outcome. Each result of the measurement that Bob performs corresponds to eliminating one of the sets of size $K$ of the set of states, in particular, one of the $M-1$ sets to which the state that Alice sent does not belong. This can be viewed as Alice sending one of the states
\begin{equation}
\rho_{x} = \frac{1}{K} \sum_{z\in S_{x}} |\psi_{z}\rangle\langle \psi_{z}| ,
\end{equation}
where each $\rho_{x}$ is sent with a probability of $1/M$, and Bob's measurement yielding a result $y$, such that $\rho_{y}$ is not the state that Alice sent. This scenario describes the two measurements in the previous section.
The mutual information between $X$ and $Y$ is (logarithms are base 2)
\begin{equation}
I(X:Y) = \sum_{x=1}^{M} \sum_{y=1}^{M} p(x,y) \log \left[ \frac{p(x,y)}{p_{X}(x) p_{Y}(y)} \right] ,
\end{equation}
where $p(x,y)$ is the joint distribution between $X$ and $Y$, and $p_{X}(x)$ and $p_{Y}(y)$ are its marginals. The measurement of this type that will provide the least information about which set the state that was sent belonged to (for a further discussion of this point see Appendix C) will be the one for which each measurement result that can occur is equally probable, that is
\begin{equation}
\label{condmin}
p(y|x) = \left\{ \begin{array} {cc} 0 & y=x \\ 1/(M-1) & y \neq x\end{array} \right. ,
\end{equation}
where $p(y|x)$ is the conditional probability of $y$ given $x$. We already have that $p_{X}(x) = 1/M$, and from this and the above equation we have that $p_{Y}(y) = 1/M$. The mutual information is then
\begin{equation}
\label{infbnd}
I(X:Y) = \log \left( \frac{M}{M-1} \right) .
\end{equation}
Any measurement that eliminates the same sets will have a greater mutual information than that given in Eq.\ (\ref{infbnd}). Further, any measurement that eliminates the same sets will satisfy the Holevo bound \cite{nielsen}, which implies that $I(X:Y) \leq S(\rho ) - \sum_{x=1}^{M} p_{X}(x) S(\rho_{x})$. Here, if the state $\rho_{x}$ is sent with probability $p_{X}(x)$, then $\rho = \sum_{x=1}^{M} p_{X}(x) \rho_{x}$ (in our case we have that $p_{X}(x) = 1/M$). Consequently, we have that
\begin{equation}
\label{condition}
\log \left( \frac{M}{M-1} \right) \leq S(\rho ) - \frac{1}{M} \sum_{x=1}^{M} S(\rho_{x}) .
\end{equation}
This places a constraint on the sets of states for which it is possible to create an elimination measurement that will eliminate one of $M$ non-overlapping sets.
In the case that the states are generated by a group, $\rho$ becomes quite simple, again with the caveat that each irreducible representation appears at most once,
\begin{equation}
\rho = \frac{1}{|G|} \sum_{g\in G} \Gamma (g) |\psi_{e}\rangle\langle \psi_{e}| \Gamma (g)^{-1} = \sum_{p} \frac{1}{d_{p}} \| \psi_{ep}\|^{2} P_{p} ,
\end{equation}
where $|\psi_{ep}\rangle = P_{p}|\psi_{e}\rangle$. We then have that
\begin{equation}
S(\rho ) = - \sum_{p} \| \psi_{ep}\|^{2} \log (\| \psi_{ep}\|^{2}/d_{p}) .
\end{equation}
In the case of our $4$-qubit example, we have that
\begin{equation}
S(\rho ) = - \sum_{n=0}^{4} \left(\begin{array}{c} 4 \\ n \end{array} \right) s^{n} (1-s)^{4-n} \log [s^{n} (1-s)^{4-n}] ,
\end{equation}
where $s=\sin^{2}\theta$.
In our $4$-qubit example, the sets $S_{x}$ correspond to cosets of a subgroup, so the density matrices $\rho_{x}$ are related to each other by unitary operators, i.e.\ $\rho_{x^{\prime}} = \Gamma (g)\rho_{x}\Gamma (g)^{-1}$ for some $g\in G$. This implies that they all have the same entropy. Therefore we just have to find the entropy of the density matrix corresponding to the subgroup itself
\begin{eqnarray}
\rho_{e} & = & \frac{1}{4} [ ( |\theta\rangle\langle \theta |)^{\otimes 4} +( |\theta\rangle\langle \theta |)^{\otimes 2} (|-\theta\rangle\langle -\theta |)^{\otimes 2} \nonumber \\
& & + ( |-\theta\rangle\langle -\theta |)^{\otimes 2}( |\theta\rangle\langle \theta |)^{\otimes 2}
+ ( |-\theta\rangle\langle -\theta |)^{\otimes 4} ].
\end{eqnarray}
This density matrix can be diagonalized, and its entropy is
\begin{eqnarray}
S(\rho_{e}) & = & -\frac{1}{2} (1-v^{4}) \log (1-v^{4}) -\frac{1}{4} (1+v^{2})^{2} \log (1+v^{2})^{2}
\nonumber \\
& & -\frac{1}{4} (1-v^{2})^{2} \log (1-v^{2})^{2} +2 ,
\end{eqnarray}
where $v=1-2s$. With $M=4$, the condition in Eq.\ (\ref{condition}) becomes
\begin{equation}
\log \left( \frac{4}{3} \right) \leq S(\rho ) - S(\rho_{e}) .
\end{equation}
\begin{figure} [h]
\includegraphics[scale=.15]{Figure5d.jpg}
\caption{Plot of the quantities appearing in Eq.\ (34) versus $s=\sin^{2}\theta$. The green curve is the entropy difference $S(\rho )-S(\rho_{e})$ and the red line is $\log (4/3)$.}
\label{fig1}
\end{figure}
We plot the quantities in this inequality versus $s$ in Fig.~\ref{fig1}. We see that for there to be a state elimination measurement that eliminates one of four sets, we must have $s>0.08$. This implies that $\theta$ must be greater than $16$ degrees.
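This threshold is easy to reproduce; the following Python sketch (function names are ours) evaluates both sides of the inequality and locates the crossing by bisection, giving $s_{\ast} \approx 0.08$:

```python
import numpy as np
from math import comb, log2

def S_rho(s):
    """Entropy of the full ensemble for the 4-qubit example."""
    return -sum(comb(4, n) * s**n * (1 - s)**(4 - n)
                * log2(s**n * (1 - s)**(4 - n)) for n in range(5))

def S_rho_e(s):
    """Entropy of the subgroup density matrix rho_e."""
    v = 1 - 2 * s
    h = lambda x: x * log2(x)
    return (-0.5 * h(1 - v**4) - 0.25 * h((1 + v**2)**2)
            - 0.25 * h((1 - v**2)**2) + 2.0)

bound = log2(4 / 3)
diff = lambda s: S_rho(s) - S_rho_e(s)

lo, hi = 0.05, 0.15          # the entropy difference is increasing here
for _ in range(60):          # bisect for the threshold where the bound holds
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if diff(mid) < bound else (lo, mid)
s_star = 0.5 * (lo + hi)
```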
\section{Conclusion}
We have shown how group theory can be used to find measurements that eliminate states of qubit sequences. We first looked at cases where one state is eliminated, a situation also known as anti-discrimination. This was extended to cases in which more than one state is eliminated. Finally, we developed a constraint on the construction of elimination measurements.
As was noted in the Introduction, elimination measurements have proven useful in a number of areas of quantum information. By making it easier to find such measurements, we believe that the techniques presented here will increase the areas of applicability of these measurements.
\section*{Appendix A}
So far, we have only considered the situation in which the measurement always
eliminates one of the states unambiguously. This may not be possible in general. We can extend the set of states for which elimination measurements are possible by allowing the measurement to sometimes fail, and telling us when it does. As an example, let's go back to the case of two qubits, each in one of the states $|\pm \theta\rangle$, and see what we can do when $\tan\theta < \sqrt{2}-1$. This will require a failure operator, that is, a POVM element that will give us the probability of the measurement failing. This case was studied in \cite{Crickmore}, but here we would like to consider it from a slightly different point of view and include it for completeness.
We now set
\begin{equation}
|X\rangle = \sum_{j,k=0}^{1} c_{jk}|j\rangle\otimes |k\rangle ,
\end{equation}
and we still want the condition $\langle +\theta, +\theta |X\rangle = 0$, which is
\begin{equation}
\label{orthog}
c_{00}\cos^{2}\theta +(c_{01}+c_{10})\sin\theta \cos\theta + c_{11}\sin^{2}\theta = 0 .
\end{equation}
We will no longer have the condition that $|c_{jk}|$ is independent of $j$ and $k$, because the above equation cannot be satisfied if it holds. The POVM operators that eliminate a state are still given by $\Pi_{g} = \Gamma (g) |X\rangle\langle X| \Gamma(g)^{-1}$, and the failure operator $\Pi_{f}$ is given by
\begin{equation}
\Pi_{f}= I - \sum_{g} \Pi_{g} =I - 4\sum_{j,k=0}^{1} |c_{jk}|^{2} |j\rangle\langle j| \otimes |k\rangle \langle k|.
\end{equation}
For $\Pi_{f}$ to be a positive operator, we see from the above equation that we must have $|c_{jk}| \leq 1/2$. Assuming the states are equally likely, the failure probability is
\begin{eqnarray}
P_{f} & = & \frac{1}{4} \sum_{j,k=\pm \theta} \langle j,k|\Pi_{f}|j,k\rangle \nonumber \\
& = & 1- 4( |c_{00}|^{2} \cos^{4}\theta + ( |c_{01}|^{2} + |c_{10}|^{2}) \sin^{2}\theta \cos^{2}\theta \nonumber \\
& & + |c_{11}|^{2} \sin^{4}\theta ) .
\end{eqnarray}
We want to minimize $P_{f}$, which means we want to maximize the expression in parentheses in the above equation. The coefficient multiplying $|c_{00}|^{2}$ is the largest, so we would like to make $|c_{00}|$ as large as possible consistent with the condition in Eq.\ (\ref{orthog}). Now if we choose $c_{00}$ real and positive, looking at Eq.\ (\ref{orthog}), we see it will be maximized if we choose $c_{01}=c_{10}=c_{11}= -1/2$. This then gives us
\begin{equation}
c_{00}=\tan\theta + \frac{1}{2} \tan^{2}\theta ,
\end{equation}
and the condition $\tan\theta < \sqrt{2}-1$ guarantees that $|c_{00}| < 1/2$. This, then, specifies the POVM elements, and the failure probability is given by
\begin{equation}
P_{f}=1- 2 \sin^{2}\theta [ 1+2\cos\theta\, (\cos\theta + \sin\theta )] .
\end{equation}
Note that this expression holds only for $\theta \leq \pi /8$, and $P_{f}=0$ for $\theta \geq \pi /8$. Again, we have not proven that this is the optimal success probability, but it turns out that it is \cite{Crickmore}. Finally, if one goes from $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$ to $\mathbb{Z}_{N} \times \mathbb{Z}_{N}$ using the same representation as previously, the expressions for the failure operator and the failure probability remain the same.
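For completeness, a Python/NumPy check of this appendix (the value $\theta = \pi/12 < \pi/8$ is our choice): it verifies the orthogonality condition, the positivity of $\Pi_{f}$, and that the average failure probability matches the closed form:

```python
import numpy as np

theta = np.pi / 12                          # below pi/8: exact elimination fails
t = np.tan(theta)
c00 = t + 0.5 * t ** 2                      # < 1/2 since tan(theta) < sqrt(2) - 1
X = np.array([c00, -0.5, -0.5, -0.5])       # (c00, c01, c10, c11)

c, s = np.cos(theta), np.sin(theta)
plus = np.array([c, s])
orth = abs(np.kron(plus, plus) @ X)         # the orthogonality condition, ~ 0

R = np.diag([1.0, -1.0])
povm = []
for j in range(2):
    for k in range(2):
        U = np.kron(np.linalg.matrix_power(R, j), np.linalg.matrix_power(R, k))
        povm.append(np.outer(U @ X, U @ X))
Pi_f = np.eye(4) - sum(povm)                # failure operator
psd = np.linalg.eigvalsh(Pi_f).min() > -1e-12

# average failure probability over the four equally likely states
states = [np.kron(np.array([c, s1 * s]), np.array([c, s2 * s]))
          for s1 in (+1, -1) for s2 in (+1, -1)]
Pf_num = float(np.mean([st @ Pi_f @ st for st in states]))
Pf_formula = 1 - 2 * s ** 2 * (1 + 2 * c * (c + s))
```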
\section*{Appendix B}
So far, we have only considered two-, three-, and four-qubit states, but in \cite{PBR} measurements that exclude a single $n$-qubit state were found. It is useful to study this case from the group theory point of view. Let us consider the set of $n$-qubit states where each qubit is in either the state $|+\theta\rangle$ or the state $|-\theta\rangle$. We will denote a member of this set as $|\Psi_{x}\rangle$, where $x$ is an $n$-digit binary number, and a $0$ in the $j^{\rm th}$ place corresponds to the $j^{\rm th}$ qubit being in the state $|+\theta\rangle$ and a $1$ corresponds to the $j^{\rm th}$ qubit being in the state $|-\theta\rangle$. We want to find a measurement that will eliminate one of the states $|\Psi_{x}\rangle$.
The relevant group here is $\mathbb{Z}_{2}^{\times n}$ and the representation for $\mathbb{Z}_{2}$ is the same one we used before. The invariant subspaces for the irreducible representations are just the basis vectors $|x\rangle = \prod_{j=0}^{n-1} |x_{j}\rangle$, with each $x_{j}$ being either $0$ or $1$. That means that the vector $|X\rangle$ is given by
\begin{equation}
|X\rangle = \frac{1}{2^{n/2}} \sum_{x=0}^{2^{n}-1} e^{i\phi_{x}} |x\rangle .
\end{equation}
The condition $\langle \Psi_{0} | X\rangle = 0$ gives us
\begin{equation}
\sum_{x=0}^{2^{n}-1} e^{i\phi_{x}} \langle \Psi_{0}|x\rangle = 0.
\end{equation}
Let us now make the Ansatz $\phi_{0} = \alpha$ and $\phi_{x} = |x|\beta$ for $x\neq 0$, where $|x|$ is the number of ones (Hamming weight) in the sequence $x$. We then have
\begin{equation}
e^{i\alpha} \cos^{n}\theta + \sum_{k=1}^{n} \left[ \left(\begin{array}{c} n \\ k \end{array}\right) e^{ik\beta}\cos^{n-k}\theta \sin^{k}\theta \right] = 0,
\end{equation}
or, factoring out $\cos^{n}\theta$,
\begin{equation}
e^{i\alpha} + (1 + e^{i\beta} \tan\theta )^{n} -1 = 0.
\end{equation}
This is the condition derived in \cite{PBR}, and it was shown there that it is possible to satisfy it for $\arctan (2^{1/n} -1) \leq \theta \leq \pi /4$.
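This condition can be solved numerically; the following Python/NumPy sketch (with $n=3$ and $\theta = \pi/6$, our choices) finds $\beta$ by bisection, reads off $\alpha$, and confirms that the resulting $|X\rangle$ is orthogonal to $|\Psi_{0}\rangle$:

```python
import numpy as np

n = 3
theta = np.pi / 6              # above arctan(2**(1/3) - 1), so a solution exists
t = np.tan(theta)

# Solve |1 - (1 + exp(i b) t)^n| = 1 for beta, then read off alpha.
f = lambda b: abs(1 - (1 + np.exp(1j * b) * t) ** n) - 1
lo, hi = np.pi / 2, np.pi      # f(lo) > 0 > f(hi) at these parameters
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
beta = 0.5 * (lo + hi)
alpha = np.angle(1 - (1 + np.exp(1j * beta) * t) ** n)

# The Ansatz: phi_0 = alpha, phi_x = |x| beta (Hamming weight) otherwise
phases = np.array([alpha if x == 0 else bin(x).count('1') * beta
                   for x in range(2 ** n)])
X = np.exp(1j * phases) / 2 ** (n / 2)

c, s = np.cos(theta), np.sin(theta)
psi0 = np.array([1.0])
for _ in range(n):
    psi0 = np.kron(psi0, np.array([c, s]))   # |Psi_0> = |+theta>^{(x) n}
overlap = abs(np.vdot(psi0, X))
```

Since all components of $|X\rangle$ have equal modulus, completeness of the resulting $2^{n}$-element POVM follows exactly as in the two-qubit case.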
As we did with two qubits, we can increase the number of states for each qubit to $N$, and the group to $\mathbb{Z}_{N}^{\times n}$, using the same two-dimensional representation of $\mathbb{Z}_{N}$ as before. The same vector $|X\rangle$, which we just found, with $1/2^{n/2}$ replaced by $1/N^{n/2}$, can be used to form the POVM. Thereby we obtain a POVM that will eliminate one state from sequences of $n$ qubits, where each qubit can be in one of $N$ states.
\section*{Appendix C}
Here we want to show that the conditional probability given in Eq.\ (\ref{condmin}) does lead to a minimum of the mutual information. We assume that we have a measurement that eliminates one of $M$ non-overlapping sets, with each state being equally probable, but that
\begin{equation}
p(y|x) = \left\{ \begin{array} {cc} 0 & y=x \\ q_{yx} & y \neq x \end{array} \right. ,
\end{equation}
where $0\leq q_{yx} \leq 1$ and they satisfy the $M$ constraints
\begin{equation}
\sum_{ \{ y| y \neq x\} } q_{yx} = 1 .
\end{equation}
We then find that
\begin{equation}
\label{pY}
p_{Y}(y) = \frac{1}{M} \sum_{x \neq y} q_{yx} ,
\end{equation}
and
\begin{eqnarray}
I(X:Y) & = & -\frac{1}{M} \sum_{x=1}^{M} \sum_{ y\neq x } q_{yx} \log q_{yx} \nonumber \\
& & + \sum_{y=1}^{M} p_{Y}(y) \log p_{Y}(y) .
\end{eqnarray}
For each constraint we have a Lagrange multiplier, $\lambda_{x}$, and we then have the equations
\begin{equation}
\frac{\partial}{\partial q_{yx}} \left[ I(X:Y) - \sum_{x^{\prime} = 1}^{M} \sum_{ y^{\prime} \neq x^{\prime}}\lambda_{x^{\prime}} q_{y^{\prime}x^{\prime}} \right] = 0 .
\end{equation}
This gives
\begin{equation}
\log q_{yx} = M\lambda_{x} + \log p_{Y}(y) ,
\end{equation}
or $q_{yx}= e^{M\lambda_{x}} p_{Y}(y)$. If we now insert this expression for $q_{yx}$ into Eq.\ (\ref{pY}), we get the consistency condition
\begin{equation}
\frac{1}{M} \sum_{x\neq y} e^{M\lambda_{x}} = 1 .
\end{equation}
For this to hold for each value of $y$, it must be the case that $e^{M\lambda_{x}}$ is a constant, independent of $x$, and, therefore, equal to $M/(M-1)$. If we now insert this into the constraint equations, we find, for each $x$
\begin{equation}
\frac{M}{M-1} \sum_{y\neq x} p_{Y}(y) = 1 .
\end{equation}
For this to hold, $p_{Y}(y)$ must be constant and equal to $1/M$. This then yields $q_{yx}= 1/(M-1)$ for $y \neq x$.
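This stationarity argument can be probed numerically; the following Python sketch (function names are ours) checks that the uniform conditional attains $\log_2 [M/(M-1)]$ and that random zero-diagonal channels never do better (the mutual information is convex in the channel for fixed input, so the stationary point is a minimum):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4

def mutual_info(q):
    """I(X:Y) for p_X = 1/M and conditional p(y|x) = q[y, x]."""
    p_joint = q / M
    p_y = p_joint.sum(axis=1)
    total = 0.0
    for y in range(M):
        for x in range(M):
            if p_joint[y, x] > 0:
                total += p_joint[y, x] * np.log2(p_joint[y, x]
                                                 / (p_y[y] * (1 / M)))
    return total

uniform = (np.ones((M, M)) - np.eye(M)) / (M - 1)
I_min = mutual_info(uniform)                    # should be log2(M/(M-1))

# random channels with the same zero-diagonal support
worse = []
for _ in range(200):
    q = rng.random((M, M))
    np.fill_diagonal(q, 0.0)
    q /= q.sum(axis=0, keepdims=True)           # columns sum to 1: valid p(y|x)
    worse.append(mutual_info(q))
```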
\acknowledgments
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) under EP/M013472/1 and EP/L015110/1.
\section*{Introduction}
\setcounter{equation}{0}
The cohomology of quotients arising from
geometric invariant theory (GIT)
has been the object of much study.
In \cite{kir1} Kirwan used
the analyses of instability in the previous works of Hesselink
\cite{hes1}, Kempf \cite{kem1}, and Kempf-Ness \cite{kem-nes1} to
explore the
structure of GIT quotients from both the algebraic and symplectic
perspectives, finding formulas to compute
Hodge numbers.
Five years later, Ellingsrud and Str\o mme \cite{ell-str1} began a
study of
the relationship
between the Chow rings of the two GIT quotients
$\P^n /\mspace{-6.0mu}/ G$ and $\P^n/\mspace{-6.0mu}/ T$, for
a reductive group $G$ with maximal torus $T \subseteq G$,
and used this to provide a presentation of the Chow ring
$A^\ast(\P^n /\mspace{-6.0mu}/ G)_{\mathbb{Q}}$ in terms of explicit generators and
relations.
Brion \cite{bri2} then expanded
this relationship to smooth, projective varieties $X$ over the complex
numbers and proved that the $G$-equivariant cohomology of
$\semistable X G$ is
isomorphic to the submodule of
$H^\ast_T(\semistable X T; \mathbb{Q})$
comprising those classes anti-invariant under the action of the Weyl group:
\begin{equation}\label{brion-semi-stable-cohomology-result}
\phi: H^\ast_G(\semistable X G;\mathbb{Q}) \cong H^\ast_T (\semistable X T;\mathbb{Q}) ^a.
\end{equation}
Later
Brion and Joshua \cite{bri-jos1} extended these results further to the case of
singular $X$, but with equivariant \emph{intersection} cohomology used as a suitable
replacement for the standard theory.
Brion's construction of the isomorphism $\phi$ is as follows (cf. \cite{bri2}).
Since
$\semistable X G$ is a $G$-variety and $T \subseteq G$ is a subgroup,
there is an injective homomorphism
$\pi^\ast: H_G^\ast(\semistable X G;\mathbb{Q}) \to H_T^\ast(\semistable X
G;\mathbb{Q})$ that induces an
isomorphism onto the submodule of $W$-invariant elements
$H_T^\ast(\semistable X G;\mathbb{Q})^W \subseteq H_T^\ast(\semistable X
G;\mathbb{Q})$. Moreover, there is a
$W$-equivariant isomorphism
\begin{equation}\label{equation-equivariant-cohomology-isomorphism}
H_T^\ast(\semistable X G; \mathbb{Q}) \cong S \otimes_{S^W}
H_G^\ast(\semistable X G;\mathbb{Q}),
\end{equation}
where $S := H_T^\ast(\Spec \mathbb{C};\mathbb{Q})$ is the
$T$-equivariant cohomology of a
point, and under this identification $\pi^\ast$ is equal to $1 \otimes
\id$.
The open inclusion $i: \semistable X G \hookrightarrow \semistable X T$ induces a
surjective homomorphism $i^\ast: H_T^\ast(\semistable X T; \mathbb{Q}) \to
H_T^\ast(\semistable X G; \mathbb{Q})$, and Brion's key observation is that
$i^\ast$ is an isomorphism on the $W$-anti-invariant
submodules:
$$i^\ast: H_T^\ast(\semistable X T; \mathbb{Q})^a \cong
H_T^\ast(\semistable X G; \mathbb{Q})^a.$$
The anti-invariant elements $S^a \subseteq S$ form a free
$S^W$ module of rank $1$, generated by the element defined as the
product of the positive roots, $\sqrt{c_{\text{top}}} :=
\prod_{\alpha \in {\Phi^+}} \alpha \in S$; here $S$ is identified
with $\Sym^\ast_\mathbb{Q}(\chargp{T}_\mathbb{Q})$, with $\chargp T$ denoting
the character group of $T$.
Combining these facts,
$\phi:= (i^\ast)^{-1} \circ ( \sqrt{c_{\text{top}}} \frown \pi^\ast)$ is
the desired isomorphism. If $\tilde \alpha \in H_T^\ast(\semistable X
T; \mathbb{Q})^W$ denotes some $W$-invariant lift of the class
$\alpha \in H_G^\ast(\semistable X G; \mathbb{Q})$, that is, if $i^\ast \tilde
\alpha = \pi^\ast \alpha$, then $\phi$ can be described
explicitly as
$$\phi: \alpha \mapsto \sqrt{c_{\text{top}}} \frown \tilde \alpha.$$
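As a concrete and entirely elementary illustration of the anti-invariance used here (a numerical check, not part of the argument; we take the type $\mathbf{A}_2$ root system with $W = S_3$ permuting coordinates, so that $\sqrt{c_{\text{top}}}$ becomes the Vandermonde product, and the helper names are ours), one can verify that $\sqrt{c_{\text{top}}}$ transforms by the sign character and that multiplying it by a $W$-invariant polynomial preserves anti-invariance:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
pts = rng.random((50, 3))              # random evaluation points (x1, x2, x3)

def delta(v):                          # product of positive roots x_i - x_j, i < j
    return ((v[..., 0] - v[..., 1]) * (v[..., 0] - v[..., 2])
            * (v[..., 1] - v[..., 2]))

def sign(perm):                        # sign of a permutation via inversions
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if perm[i] > perm[j]:
                s = -s
    return s

# delta transforms by the sign character: w . delta = sign(w) delta
anti = all(np.allclose(delta(pts[:, list(p)]), sign(p) * delta(pts))
           for p in permutations(range(3)))

def e2(v):                             # a W-invariant polynomial (elementary symmetric)
    return v[..., 0] * v[..., 1] + v[..., 0] * v[..., 2] + v[..., 1] * v[..., 2]

# an invariant times delta is again anti-invariant: S^a is an S^W-module
prod_anti = all(np.allclose(e2(pts[:, list(p)]) * delta(pts[:, list(p)]),
                            sign(p) * e2(pts) * delta(pts))
                for p in permutations(range(3)))
```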
In this paper we address the question of how
this isomorphism $\phi$ interacts with the integration pairings on $X/\mspace{-6.0mu}/
G$ and $X /\mspace{-6.0mu}/ T$.
When $\semistable X G = \stable X G$, there is a natural identification
between the equivariant cohomology groups of the semi-stable locus and
the ordinary cohomology groups
of the GIT quotient,
$$H_G^\ast(\semistable X G;\mathbb{Q}) \cong H^\ast(X/\mspace{-6.0mu}/ G;\mathbb{Q}),$$
(and similarly with $T$ replacing $G$).
Hence, for any $\alpha_1, \alpha_2 \in H^\ast(X/\mspace{-6.0mu}/ G; \mathbb{Q})$,
we can compare the integrals
$$\int_{X/\mspace{-6.0mu}/ G} \alpha_1 \frown \alpha_2~~ \stackrel ? \leftrightarrow
~~\int_{X/\mspace{-6.0mu}/ T} \phi(\alpha_1) \frown \phi(\alpha_2).$$
Because $\phi(\alpha_1)
\frown \phi(\alpha_2) = (\sqrt{c_{\text{top}}} \frown \sqrt{c_{\text{top}}})\frown
({\tilde\alpha_1 \frown \tilde \alpha_2})$ and $i^\ast(\tilde \alpha_1 \frown
\tilde \alpha_2) =\pi^\ast(\alpha_1 \frown
\alpha_2)$, we may
simplify notation by defining $\alpha := \alpha_1
\frown \alpha_2$. Moreover, it will prove to be more
natural to consider the class $c_{\text{top}} := \prod_{\alpha
\in \Phi} \alpha$ instead of $\sqrt{c_{\text{top}}} \frown \sqrt{c_{\text{top}}}$, which
just differs from the former by the sign $(-1)^{|\Phi^+|}$.
After these substitutions, the question becomes the comparison of the
integrals $\int_{X/\mspace{-6.0mu}/ G} \alpha$ and $\int_{X /\mspace{-6.0mu}/ T} c_{\text{top}} \frown
\tilde \alpha$ for $\alpha \in H^{\ast}(X/\mspace{-6.0mu}/ G;\mathbb{Q})$.
Martin answered this question in the context of symplectic
geometry (cf. \cite{mar1}). There he
proved the following formula
for Hamiltonian actions of compact Lie groups $G$ on
symplectic manifolds $X$:
\begin{equation}
\int_{X/\mspace{-6.0mu}/ G}\alpha = \frac 1 {|W|} \int_{X/\mspace{-6.0mu}/ T} c_{\text{top}}
\frown \tilde \alpha. \label{martins-integration-formula}
\end{equation}
Here $X/\mspace{-6.0mu}/ G$ and $X/\mspace{-6.0mu}/ T$ denote symplectic reductions.
Furthermore, he used this to give a
presentation of $H^\ast_G(\semistable X G)$ as the quotient of the
$W$-invariant elements of $H_T^\ast(\semistable X T)$ by the elements
annihilated by $c_{\text{top}}$:
$$H_G^\ast(\semistable X G) \cong H_T^\ast(\semistable X T)^W/ \Ann(c_{\text{top}}).$$
Martin's method of proof is analytic, with the crux of his argument
relying on properties of moment maps, while the
methods in the works of Brion, Ellingsrud-Str\o mme, and
Brion-Joshua mentioned above are algebraic.
The purpose of this note is to generalize Martin's results
to the algebraic setting of varieties $X$
over an arbitrary field $k$. Let $G$ be a reductive group over
$k$ with
a split maximal torus $T \subseteq G$ and Weyl group $W$.
Let $X$ be a
$G$-linearized
(possibly singular) variety over $k$ for which $\stable X T =
\semistable X T \neq \emptyset$. Denote by the
symbol $\int_{Y} \sigma$ the degree of the Chow class $\sigma
\in A_0(Y)_{\mathbb{Q}}$ of a proper variety $Y \to k$ given by proper
push-forward, and by $c_{\text{top}} := \prod_{\alpha \in
\Phi} \alpha \in A^\ast(BT)$ the top Chern class in the Chow
ring of the vector bundle $\mathfrak g/\mathfrak t$ on the classifying
space $BT$.
We are led to define, for
projective linearized $G$-varieties $X$,
the \emph{GIT integration ratio}:
$$r_{G,T}^{X,\alpha} := \frac{\int_{X/\mspace{-6.0mu}/ T} c_{\text{top}} \frown \tilde \alpha}
{\int_{X/\mspace{-6.0mu}/ G} \alpha} \in \mathbb{Q},$$
for a Chow $0$-cycle $\alpha \in A_0(X/\mspace{-6.0mu}/ G)_{\mathbb{Q}}$ that does not integrate
to $0$ and a lift $\tilde
\alpha \in A_\ast(X/\mspace{-6.0mu}/ T)_{\mathbb{Q}}$ ~(cf. Defn. \ref{definition-of-lift}).
Understanding
the invariance properties of this ratio will guide us to the proper
generalization of Martin's theorem.
The ratio $r_{G,T}^{X,\alpha}$ may depend on the choice of
lift $\tilde \alpha$ or --- as seems \emph{a priori} more likely --- the
variety $X$, but
neither is the case.
This is our main theorem (proved
\newcounter{sectionSafety}
\setcounter{sectionSafety}{\value{equation}}
in \S
\ref{section-independence-of-GIT-ratio}):
\setcounter{equation}{\value{sectionSafety}}
\begin{theorem}\label{GIT-integral-ratio-is-invariant-of-G-theorem}
If $G$ is a reductive group over a field $k$ and $T \subseteq G$ a
split maximal torus, then the
GIT integration ratio $r_{G,T}^{X,\alpha}$, defined above for a
$G$-linearized projective
$k$-variety $X$ and a Chow class $\alpha \in A_0(X/\mspace{-6.0mu}/ G)$ satisfying
$\semistable X T = \stable X T$ and $\int_{X/\mspace{-6.0mu}/ G} \alpha \neq
0$,
does not depend on the choice of $T$, $X$, or $\alpha$.
That is, $r_G := r_{G,T}^{X,\alpha}$ is an invariant of the group
$G$.
\end{theorem}
The GIT integration ratio is multiplicative under direct products of
groups and invariant under central extensions. This is
the content of our second theorem (proved in
\S\setcounter{sectionSafety}{\value{equation}}\ref{section-functorial-properties}):
\setcounter{equation}{\value{sectionSafety}}
\begin{theorem}\label{GIT-integral-ratio-decomposes-multiplicatively}
If $G$ is a reductive group over a field $k$ that, up to central
extensions, is the product of simple groups $G_1 \times \cdots
\times G_n$, then
$$r_G = \prod_{i =1}^n r_{G_i}.$$
\end{theorem}
As a result of these theorems, the determination of the
value $r_G$ for all reductive groups $G$ is reduced to the computation
of $r_G$
just for the simple groups. We are able to do this
explicitly in
\S\setcounter{sectionSafety}{\value{equation}}\ref{section-calculation}\setcounter{equation}{\value{sectionSafety}}
for the simple group
$G= PGL(n)$, where we verify $r_G = n! = |W|$.
\begin{corollary}\label{main-theorem}
Let $G$ be a reductive group over a field $k$ and $T \subseteq G$ a
split maximal torus. If the
root system of $G$ decomposes into irreducible root systems of type
$\mathbf A_{n}$, for various $n \in \mathbb{N}$, then for any
$G$-linearized
projective $k$-variety $X$ for which $\stable X T = \semistable X
T$ and any
Chow class $\alpha \in A_0(X /\mspace{-6.0mu}/ G)_\mathbb{Q}$
with lift $\tilde \alpha \in A_{\ast}(X/\mspace{-6.0mu}/ T)_\mathbb{Q}$,
\begin{equation}\label{martins-integration-formula-for-Chow-groups-equation}
\int_{X/\mspace{-6.0mu}/ G}\alpha = \frac{1}{|W|} \int_{X/\mspace{-6.0mu}/ T} c_{\text{top}} \frown
\tilde \alpha.
\end{equation}
\end{corollary}
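To see how the pieces fit together, note that if the root system of $G$
decomposes into irreducible root systems of types $\mathbf A_{n_1},
\ldots, \mathbf A_{n_m}$, then up to central extensions (which, by
Theorem \ref{GIT-integral-ratio-decomposes-multiplicatively}, do not
affect the ratio) $G$ is a product of simple groups of type
$\mathbf A_{n_i}$, so
$$r_G = \prod_{i=1}^m r_{PGL(n_i+1)} = \prod_{i=1}^m (n_i+1)! =
\prod_{i=1}^m |W(\mathbf A_{n_i})| = |W|,$$
since the Weyl group of a product is the product of the Weyl groups.
Equation \eqref{martins-integration-formula-for-Chow-groups-equation}
is precisely the statement $r_G = |W|$.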
For general reductive groups, one can apply Theorem
\ref{GIT-integral-ratio-is-invariant-of-G-theorem} and make use of
the theory of
relative GIT (cf. \cite{sesh1}) and specialization
(cf. \cite[\S20.3]{ful1})
to remove the
root system condition from the above corollary by reducing
the proof to a calculation over the complex numbers, where we may
apply Martin's result (cf. \cite[Thm. B]{mar1}); this is discussed in
\S \ref{section-final-remarks}.
An entirely algebraic
proof of the general case still eludes us.
\vspace{.2cm}
\noindent {\bf \large Acknowledgments.}
I thank my thesis advisor Johan de Jong for teaching me
algebraic geometry and for his charitable guidance and endless
patience that led me through the discovery of these results.
I also thank Burt Totaro for suggesting improvements to an earlier
draft.
\section*{Notation}
\begin{itemize}
\renewcommand{\labelitemi}{$\cdot$}
\item $k$ denotes a field and $\bar k$ its algebraic closure.
\item $G$ denotes a smooth, reductive group over $k$, with identity
element $e \in G$ and split
maximal torus $T \subseteq G$.
\item $N_T \subseteq G$ denotes the normalizer of $T$ in $G$.
\item $\chargp T$ denotes the character group of $T$ and $\cochargp T$
the group of $1$-parameter subgroups.
\item $V$ denotes a finite dimensional $G$-representation over $k$.
\item $\P(V)$ denotes the projective space of hyperplanes in $V$, so that $\Gamma(\P(V), \mbox{$\mathcal{O}$}(1)) = V$.
\item $\mathfrak g$ and $\mathfrak t$ denote respectively the Lie algebras of $G$
and $T$.
\item $W$ denotes the Weyl group of $G$.
\item $\Phi$ (resp. $\Phi^+$ or $\Phi^-$) denotes the root system
(resp. set of positive or negative roots) of $G$.
\item $X$ denotes a projective variety on which $G$ acts and $\scr{L}$
denotes an ample $G$-linearized line bundle.
\item $\semistable X G, \stable X G$, and $\unstable X G$ denote the GIT loci of
$G$-semi-stable, $G$-stable, and $G$-unstable points of $X$,
respectively. We also use the analogous notations for $T$ in place of $G$.
\item $[S/H]$ denotes the quotient stack of a scheme $S$ by a
group $H$.
\item $BT := [\Spec k/T]$ denotes the Artin stack that is the
algebraic classifying space of $T$.
\item $A_\ast(-)$ (resp. $A_\ast(-)_\mathbb{Q}$) denotes the Chow group with coefficients
in $\mathbb{Z}$ (resp. $\mathbb{Q}$), graded by dimension.
\item $A^\ast(-)$ (resp. $A^\ast(-)_\mathbb{Q}$) denotes the operational Chow ring with
coefficients in $\mathbb{Z}$ (resp. $\mathbb{Q}$).
\item $\sqrt{c_{\text{top}}} := \prod_{\alpha \in \Phi^{+}} \alpha \in A^\ast(BT)$
and $c_{\text{top}} := \prod_{\alpha \in \Phi} \alpha \in A^\ast(BT)$.
\item $\scr{L} \boxtimes \scr{M}$ denotes the line bundle $\pi_Y^\ast \scr{L}
\otimes \pi_Z^\ast \scr{M}$ on $Y \times Z$ when $\scr{L}$ and $\scr{M}$ are line
bundles on varieties $Y$ and $Z$, respectively.
\item $\int_Y \alpha$ denotes the degree of a Chow class $\alpha \in
A_0(Y)$ on a proper variety $Y$ over $k$, computed via proper
push-forward by the structure morphism.
\end{itemize}
\section{Analysis of $\sqrt{c_{\text{top}}}$ on $\semistable X T$ for smooth $X$}\label{section-rootctop-on-smooth-x}
The goal of this section is to prove that, when $X$ is smooth
over an arbitrary field $k$, the class $\sqrt{c_{\text{top}}} \frown \tilde
\alpha$ is independent of the choice of lift. From this the
independence of $c_{\text{top}} \frown \tilde \alpha$ follows immediately,
as $c_{\text{top}} =
(-1)^{|\Phi^+|} \sqrt{c_{\text{top}}} \cdot \sqrt{c_{\text{top}}}$.
To treat the case of singular $X$, one could
attempt a push-forward along a closed immersion into
projective space. Unfortunately, complications arise due to the possible
introduction of strictly semi-stable points; we postpone a
discussion of these
technicalities until \S \ref{strictly-semi-stable-points-section}.
\subsection{A review of GIT}
We give a brief summary of geometric invariant
theory, mainly to set conventions. See \cite{GIT} or \cite{kir1} for
more detailed expositions.
\begin{definition}
Let $X$ be a projective variety with a linearized action of a
reductive group $G$, i.e. an ample
line bundle $\pi:\scr{L} \to X$ and $G$-actions on $\scr{L}$ and $X$ for which
$\pi$ is
an equivariant map. We call such an $\scr{L}$ a \emph{$G$-linearization} of $X$.
\begin{itemize}
\renewcommand{\labelitemi}{$\diamond$}
\item The \emph{semi-stable locus} is defined to be
$$\semistable X G := \{x \in X: \exists ~n > 0 \textrm{
and some } \phi \in \Gamma(X, \scr{L}^{\otimes n})^G \textrm{
satisfying } \phi(x) \neq 0\};$$
\item The \emph{stable}\footnote{Often in the literature, this is
referred to as ``properly stable''.} \emph{locus} is defined to be
$$\stable X G := \{ x \in \semistable X G: G\cdot x \subseteq
\semistable X G \textrm{ is a
closed subscheme and } |\stab_G x| < \infty \}.$$
\item The \emph{unstable locus} is defined to be
$\unstable X G := X \setminus \semistable X G.$
\item The \emph{strictly semi-stable locus} is defined to be
$\strictlysemistable X G := \semistable X G \setminus \stable X G.$
\end{itemize}
\end{definition}
\noindent The following theorem justifies the making of the above
definitions:
\begin{theorem}[Mumford]
The open locus $\semistable X G \subseteq X$ of semi-stable points
admits a uniform categorical quotient $\pi: \semistable X G \to X/\mspace{-6.0mu}/
G$, called the \emph{GIT quotient of $X$ by $G$.}
Moreover, the ample line bundle $\scr{L}$ descends to an ample line bundle
on the projective scheme $X/\mspace{-6.0mu}/ G$,
and
the restriction $\pi|_{\stable X G}:\stable X G \to \pi(\stable X
G)$ of $\pi$ to the open stable locus is a geometric quotient.
\end{theorem}
\begin{proof}
See \cite[Thm. 1.10]{GIT}.
\end{proof}
We conclude this review by describing how to compute the stable and
semi-stable loci in practice.
Let $T\subseteq G$ be a split maximal torus with
character group $\chargp{T}$. Equivariantly embed $X$
into $\P(V)$ for some $G$-representation $V$; e.g. via
some tensor power of
the $G$-linearized line bundle $\scr{L}$, with $V := \Gamma(X,
\scr{L}^{\otimes n})$. The $G$-representation structure on $V$ endows
$\P(V)$ with a
naturally linearized
$G$-action,
for which $\semistable{ \P(V)} G \cap X =
\semistable X G$ (and similarly for the stable loci).
As a $T$-module, $V$ decomposes
as the direct sum of weight spaces $V = \oplus_{\chi
\in \chargp T} V_\chi$.
\begin{definition}
For any $x \in X \subseteq \P(V)$ as above, we define the
\emph{state of $x$} to be
$$\Xi(x) := \{ \chi \in \chargp T: \exists~ v \in V_\chi
\textrm{ such that } v(x) \neq 0 \}.$$
\end{definition}
\begin{theorem}[Hilbert-Mumford criterion]
\label{hilbert-mumford-criterion}
A point $x$ is semi-stable for the induced linearized $T$-action
if and only if $0$ is in the convex hull of $\Xi(x)$ in $\chargp T
\otimes \mathbb{Q}$. Moreover, $x$
is stable for the $T$-action if and only if $0$ is in the interior
of the convex hull of $\Xi(x)$. Furthermore,
$$\semistable X G = \bigcap_{g \in G} g\cdot \semistable X T,~~
\textrm{ and }~~ \stable X G = \bigcap_{g \in G} g\cdot \stable X T.$$
\end{theorem}
\begin{proof}
See \cite[Thm. 2.1]{GIT}.
\end{proof}
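To make the criterion concrete, consider the simplest nontrivial
example: let $T = \mathbb{G}_m$ act on a two-dimensional representation
$V = V_{-1} \oplus V_{1}$ with weights $-1$ and $1$, so that $\P(V)
\cong \P^1$. The two $T$-fixed points have states $\{-1\}$ and
$\{1\}$ (in some order), while every other point has state
$\{-1,1\}$. The origin lies in the interior of the convex hull of
$\{-1,1\}$ but in the convex hull of neither singleton, so
$$\semistable {\P(V)} T = \stable {\P(V)} T = \P(V) \setminus \P(V)^T,$$
and the GIT quotient $\P(V) /\mspace{-6.0mu}/ T$ is a single point.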
\subsection{Lifting Chow classes between GIT quotients}
We now define what it precisely means to lift a Chow class $\alpha
\in A_\ast(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$ to a class $\tilde \alpha \in A_\ast(X /\mspace{-6.0mu}/
T)_\mathbb{Q}$.
We make use of the notion of Chow groups of quotient stacks,
a review of which may be
found in the appendix of this paper.
The main result one needs to recall is that when $\semistable X G =
\stable X G$, the
quotient $\gitstack X G$ is a proper Deligne-Mumford stack with coarse
moduli space $\phi^G: \gitstack X G \to X /\mspace{-6.0mu}/ G$, and there is an
induced isomorphism
\begin{equation}\label{equation-vistoli-equality}
\phi^G_\ast: A_\ast(\gitstack X G)_{\mathbb{Q}} \cong A_\ast(X/\mspace{-6.0mu}/ G)_\mathbb{Q}.
\end{equation}
The upshot is that via the identification $\phi^G_\ast$, we may think
of a Chow class $\alpha \in A_\ast(X /\mspace{-6.0mu}/ G)_\mathbb{Q}$ equivalently as a Chow
class $\alpha \in A_\ast(\gitstack X G)_\mathbb{Q}$.
\begin{definition}\label{definition-of-lift}
Let $\alpha \in A_\ast(\gitstack X G)$. We say that the class $\tilde
\alpha \in A_{\ast + g - t}(\gitstack X T)$ is a \emph{lift of $\alpha$}
provided that $i^\ast(\tilde \alpha) = \pi^\ast(\alpha)$, where $g :=
\dim G$, $t:= \dim T$,
\begin{itemize}
\renewcommand{\labelitemi}{$\diamond$}
\item $i:
[\semistable X G / T] \hookrightarrow \gitstack X T$
is the open immersion, and
\item $\pi: [\semistable X G / T] \to \gitstack X G$
is the flat fibration with fibre $G/T$.
\end{itemize}
Furthermore, we say $\tilde \alpha \in A_\ast(X/\mspace{-6.0mu}/ T)_\mathbb{Q}$ is a
\emph{lift} of $\alpha \in A_\ast(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$ provided
that $(\phi^T_\ast)^{-1}(\tilde \alpha)$
is a lift, in the above sense, of the Chow class
$(\phi^G_\ast)^{-1}(\alpha)$.
\end{definition}
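\begin{remark}
The degree shift $g - t$ in Definition \ref{definition-of-lift} is
exactly what makes the GIT integration ratio of the introduction
well-posed: since $g - t = \dim \mathfrak g/\mathfrak t = |\Phi|$, a
$0$-cycle $\alpha \in A_0(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$ lifts to a class $\tilde
\alpha \in A_{|\Phi|}(X/\mspace{-6.0mu}/ T)_\mathbb{Q}$, and capping with the operational
class $c_{\text{top}} = \prod_{\alpha \in \Phi} \alpha$ of degree $|\Phi|$
produces a $0$-cycle $c_{\text{top}} \frown \tilde \alpha \in A_0(X/\mspace{-6.0mu}/
T)_\mathbb{Q}$, whose degree is the numerator of $r_{G,T}^{X,\alpha}$.
\end{remark}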
\begin{remark}\label{remark-ctop-killing}
By the right exact sequence of Chow groups
$$A_\ast([\unstable X G \cap\semistable X T / T]) \to A_\ast(
\gitstack X T) \to A_\ast([\semistable X G / T]) \to 0,$$
any two lifts of $\alpha$ differ
by the push-forward of an element of $A_\ast([\unstable X G \cap
\semistable X T / T])$.
\end{remark}
\subsection{$\sqrt{c_{\text{top}}}$ is zero on each Kirwan
stratum}\label{rootctop-is-zero-on-each-stratum-section}
We pause a moment to clarify the definition of $\sqrt{c_{\text{top}}}$.
Recall that
$A^\ast(BT) \cong \Sym^\ast(\chargp{T})$, where $\chargp{T}$ is the
character group of $T$, hence any polynomial in the roots
$\alpha \in \Phi$ can be viewed as a Chow
class in $A^\ast(BT)$.
Moreover, for any $T$-variety $Y$ (e.g. $Y = \semistable X T$), there
is a flat morphism from the quotient stack $[Y/T]$ to the
classifying space $BT$, so one may pull back classes from $A^\ast(BT)$
to the operational Chow ring of $[Y/T]$. In this way, $\sqrt{c_{\text{top}}}$ defines an
operational Chow class acting on $A_\ast([Y/T])$ for any $T$-variety $Y$.
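For instance, if $G = SL_2$ and $T$ is the diagonal torus, then
$A^\ast(BT) \cong \mathbb{Z}[t]$, where $t$ corresponds to the standard
character $\mathrm{diag}(s, s^{-1}) \mapsto s$, and $\Phi = \{\alpha,
-\alpha\}$ with $\alpha = 2t$. Hence
$$\sqrt{c_{\text{top}}} = \alpha = 2t \qquad \textrm{ and } \qquad c_{\text{top}} = \alpha
\cdot (-\alpha) = -4t^2 = (-1)^{|\Phi^+|}\, \sqrt{c_{\text{top}}} \cdot
\sqrt{c_{\text{top}}},$$
illustrating the sign appearing in the relation between $c_{\text{top}}$ and
the square of $\sqrt{c_{\text{top}}}$.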
In light of Remark \ref{remark-ctop-killing}, to show that
$\sqrt{c_{\text{top}}} \frown \tilde \alpha$ is independent of the choice of lift,
it suffices to show that $\sqrt{c_{\text{top}}}$ kills all elements in the image
of $A_\ast([\unstable X G \cap\semistable X T / T])$. This will be
accomplished by the end of \S\ref{section-rootctop-on-smooth-x}. In
this subsection, we
assume that $X$ is a
smooth, $G$-linearized projective variety over $k = \bar k$, and
we decompose the locus $\unstable X G \cap \semistable X T$ into strata
on which to analyze $\sqrt{c_{\text{top}}}$.
The stratification we use is due to Kirwan, but relies on
the previous work of Hesselink, Kempf, and Ness (cf. \cite{hes1},
\cite{kem1}, \cite{kem-nes1}).
We summarize the relevant properties:
\begin{theorem}[Kirwan]\label{theorem-kirwan-strata-decomposition}
Let $X$ be a $G$-linearized projective variety over an algebraically
closed field $\bar k$, with $T \subseteq G$ a choice of maximal torus.
The unstable locus $\unstable X G$ admits a
finite $G$-equivariant stratification
$$\unstable X G = \bigcup_{\beta \in \mathbf B} S_\beta$$ with the following
properties:
\begin{enumerate}
\item $S_\beta \subseteq \unstable X G$ is a locally closed
$G$-equivariant subscheme.
\item $S_\beta \cap S_{\beta'} = \emptyset$ for $\beta \neq \beta'$.
\item There exist parabolic subgroups $P_\beta \subseteq G$, containing $T$,
and locally closed
$P_\beta$-invariant subschemes $Y_\beta \subseteq S_\beta \cap
\unstable X T$,
and a surjective $G$-equivariant morphism
$$\phi: Y_\beta \times_{P_\beta} G \to
S_\beta$$ \label{item-isomorphism-when-smooth}
induced by the multiplication morphism $Y_\beta \times G \to S_\beta$.
\end{enumerate}
If moreover $X$ is smooth, then each $S_\beta$ is smooth and the morphism $\phi$ in
\eqref{item-isomorphism-when-smooth} is an isomorphism.
\end{theorem}
\begin{proof}
See \cite[\S 13]{kir1}.
\end{proof}
As a step toward showing that $\sqrt{c_{\text{top}}}$ vanishes on $\unstable X G
\cap \semistable X
T$, we prove that
$\sqrt{c_{\text{top}}}$ vanishes on each individual stratum $S_\beta
\cap \semistable X T$. Since $X$ is smooth, the
stratum $S_\beta$ is fibred over a flag variety $G/P_\beta$,
so we first study $\sqrt{c_{\text{top}}}$ on $G/P_\beta$.
To do so,
we require some notation related to elements
of the Weyl
group. For a parabolic subgroup $P \subseteq G$,
denote by $W_P$ the subgroup $(N_T \cap P)/T \subseteq W$.
Let the symbol $\dot w$ denote a
choice of a lift to $N_T$ of an element $w \in W = N_T/T$, and
let the symbol $\bar w$ denote the image of $w$ in $W/W_P$.
\begin{lemma}\label{calculating-localization-of-point-lemma}
Let $P \subseteq G$ be a parabolic subgroup containing
the maximal torus $T$, and let
\makebox{$i: W/W_P \to G/P$} be the inclusion defined by
$\bar w \mapsto \dot w P$.
Then $i$ is simply the inclusion of the $T$-fixed points of $G/P$,
and the Gysin pull-back
of the $T$-equivariant class $[eP] \in A_\ast^T(G/P)$ is given by
$$i^\ast([eP]) = \left(\prod_{\alpha \in \Phi(\mathfrak g/\mathfrak
p)} \alpha\right) \cdot [ \bar e] \in A^T_\ast(W/W_P),$$
where $A^T_\ast(-)$ denotes the $T$-equivariant Chow group and
$\Phi(\mathfrak g/\mathfrak p)$ is the subset of roots consisting of
the weights corresponding to the $T$-action on the tangent space
$T_{eP}(G/P)$.
\end{lemma}
\begin{proof}
It is well-known that the $T$-invariant points of $G/P$ are
precisely $W/W_P$.
The Chow group $A^T_\ast(W/W_P)$ is a free
$A^\ast(BT)$-module with basis given by the elements of $W/W_P$.
The element $eP \in G/P$ is an isolated, nonsingular fixed point,
disjoint from all other fixed points $wP \neq eP$. Hence, $i^\ast
([eP])$ equals the self-intersection of $[eP]$. This equals the
product of $[\bar e]$ and the $T$-equivariant top Chern class of the
normal bundle $T_{eP}(G/P)$, which is clearly the product of the
roots in $\Phi(\mathfrak g/ \mathfrak p)$.
\end{proof}
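In the simplest case $G = SL_2$ and $P = B$ a Borel subgroup, the
lemma recovers a familiar computation: $G/B \cong \P^1$, the fixed
locus $W/W_B = W = \{e, s\}$ consists of the two $T$-fixed points, and
$\Phi(\mathfrak g/\mathfrak b) = \{-\alpha\}$ is the weight of the
tangent space $T_{eB}(G/B)$, so
$$i^\ast([eB]) = -\alpha \cdot [\bar e],$$
the $T$-equivariant self-intersection of a fixed point on $\P^1$.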
\begin{lemma}\label{rootctop-is-zero-from-localization-in-G/P-lemma}
Let $P \subseteq G$ be a parabolic subgroup containing
the maximal torus $T$, let
$\Phi(\mathfrak g/\mathfrak p)$ be the collection of weights of the
induced $T$-action on the tangent space $T_{eP}(G/P)$,
and let $U \subseteq G/P$ denote the open complement of the finite
set $W/W_P \hookrightarrow G/P$.
As an element
of the $T$-equivariant operational Chow group,
$$\sqrt{c_{\text{top}}} = 0 \in A^\ast_T(U).$$
\end{lemma}
\begin{proof}
The variety $U$ is smooth, so by Poincar\'e duality (Theorem
\ref{poincare-duality-theorem}) it suffices to prove that
$\sqrt{c_{\text{top}}} \frown [U] = 0 \in A_\ast^T(U)$.
Let $X := G/P$ denote the flag variety,
and let $i: X^T = W/W_P \to X$ denote the inclusion of the
$T$-fixed points. By the right-exact sequence of Chow groups
$$A_\ast^T(X^T) \stackrel{i_\ast} \to A_\ast^T(X) \to A_\ast^T(U) \to 0,$$
it suffices to show that $\sqrt{c_{\text{top}}} \frown [X]$ is in the image of
$i_\ast$.
The variety $X$ is smooth and projective, so
by the localization theorem
(Theorem \ref{brions-localization-theorem}) there is an injective
$A^\ast(BT)$-algebra
homomorphism,
$$i^\ast:A^\ast_T(X) \to A^\ast_T(X^T).$$
Thus, it suffices to prove that $i^\ast (\sqrt{c_{\text{top}}} \frown [X])$ is in the
image of
$i^\ast \circ i_\ast$.
The ring $A^\ast_T(X^T)$ is a free $A^\ast(BT)$-module with basis
given by $\{ [\bar w] : \bar w \in W/W_P\}$, with
multiplication defined by the rule
$$[\bar w] \cdot [\bar w'] = \left \{
\begin{array}{l l}
[\bar w] &: \textrm{ if } \bar w = \bar w' \in W/W_P\\
0 &: \textrm{ otherwise,}
\end{array}
\right.$$
and in terms of this basis,
$$i^\ast(\sqrt{c_{\text{top}}} \frown [X]) = \sum_{\bar w \in W/W_P}
\sqrt{c_{\text{top}}} \cdot [\bar w].$$
By Lemma \ref{calculating-localization-of-point-lemma},
$$i^\ast \circ i_\ast([\bar e]) = (\prod_{\alpha \in \Phi(\mathfrak g /
\mathfrak p)} \alpha )\cdot [\bar e].$$
Since $i$ is a $W$-equivariant inclusion, the
homomorphisms $i^\ast$ and $i_\ast$ are also compatible with
$W$-action, hence
$$i^\ast \circ i_\ast([\bar w]) = (\prod_{\alpha \in \Phi(\mathfrak g /
\mathfrak p)} w\alpha ) \cdot [\bar w].$$
Notice that $\Phi(\mathfrak g/\mathfrak p)$ is a subset of the
negative roots $\Phi^-$, so for
$\beta_w := \prod_{\alpha \in \Phi^- \setminus \Phi(\mathfrak g
/ \mathfrak p)} w\alpha$,
\begin{align*}
i^\ast \circ i_\ast( \beta_w \cdot [\bar w]) & = (\prod_{\alpha \in \Phi^-}
w\alpha )\cdot [\bar w]\\
& = \det(w)\cdot \sqrt{c_{\text{top}}} \cdot [\bar w].
\end{align*}
Therefore, $\sum \sqrt{c_{\text{top}}} \cdot [\bar w] = i^\ast \circ i_\ast( \sum
\det(w) \cdot \beta_w \cdot [\bar w])$.
\end{proof}
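The case $G = SL_2$ over $k = \bar k$ gives a direct check of the
lemma: here $U = \P^1 \setminus \{0, \infty\} \cong \mathbb{G}_m$ is a
single $T$-orbit with stabilizer $\mu_2$, so $[U/T] \cong B\mu_2$ and
$A^\ast_T(U) \cong A^\ast(B\mu_2) \cong \mathbb{Z}[t]/(2t)$. The class
$\sqrt{c_{\text{top}}} = \alpha = 2t$ visibly vanishes there; note that the
vanishing is a torsion phenomenon, as $A^\ast_T(U)_\mathbb{Q}$ is already
trivial in positive degrees.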
We conclude the subsection by proving that $\sqrt{c_{\text{top}}}$ is zero on
each stratum $S_\beta \cap \semistable X T$.
\begin{lemma}\label{rootctop-is-zero-on-S-beta-semistable-lemma}
The Chow class $\sqrt{c_{\text{top}}}$ is zero as an element of the
$T$-equivariant operational
Chow group
$$\sqrt{c_{\text{top}}} = 0 \in A^\ast_T(S_\beta \cap \semistable X T).$$
\end{lemma}
\begin{proof}
Since $S_\beta \cap \semistable X T$ is smooth, by Poincar\'e duality (Thm.
\ref{poincare-duality-theorem}) it is enough to show that
$\sqrt{c_{\text{top}}} \frown [S_\beta \cap \semistable X T] = 0 \in A_\ast^T(S_\beta \cap
\semistable X T)$. By Theorem
\ref{theorem-kirwan-strata-decomposition}, there is a $G$-equivariant morphism
$\pi:S_\beta \to G/P$ with $\pi^{-1}(eP) = Y_\beta \subseteq
\unstable X T$. Moreover, for any element $\dot w \in N_T$, we
still have $\dot w Y_\beta \subseteq \unstable X T$. By
$G$-equivariance, the restriction of $\pi$ to $S_\beta \cap \semistable
X T$ therefore factors through the open subset $G/P - WP/P$. We finish by
noting
that $\sqrt{c_{\text{top}}}$ on
$S_\beta \cap \semistable X T$ is the
pull-back of the class $\sqrt{c_{\text{top}}}$ in $A_T^\ast(G/P -
WP/P)$, which is $0$ by Lemma
\ref{rootctop-is-zero-from-localization-in-G/P-lemma}.
\end{proof}
\begin{remark}
The arguments above in \S
\ref{rootctop-is-zero-on-each-stratum-section}
are directly analogous to those used by Brion in \cite{bri2} for
equivariant cohomology, but the arguments to follow in \S
\ref{integral-vanishing-of-rootctop-section} and \S
\ref{rootctop-is-zero-generally-section} are original and yield
marginally stronger results than what can be found in the literature
for equivariant cohomology: when possible we use
$\mathbb{Z}$-coefficients instead of $\mathbb{Q}$-coefficients.
\end{remark}
\subsection{$\sqrt{c_{\text{top}}}$ is zero on $\unstable X G \cap
\semistable X T$}
\label{integral-vanishing-of-rootctop-section}
We continue to assume that $X$ is smooth over an algebraically
closed field $\bar k$, and we extend the vanishing of $\sqrt{c_{\text{top}}}$ on a
stratum to vanishing over
the entire locus $\unstable X G \cap \semistable X T$, proving
$\sqrt{c_{\text{top}}}$ acts as $0$ on
$A_\ast^T(\unstable X G \cap \semistable X T)_\mathbb{Z}.$
We recall a presentation of $T$-equivariant Chow groups given
by Brion:
\begin{proposition}[Brion]\label{brions-presentation-of-T-equivariant-Chow-proposition}
Let $X$ be a $T$-scheme. The $T$-equivariant Chow group
$A_\ast^T(X)$ is generated as an $A^\ast(BT)$-module by the classes $[Y]$
associated to $T$-invariant closed subschemes $Y \hookrightarrow X$.
\end{proposition}
\begin{proof}
See \cite[Thm. 2.1]{bri1}.
\end{proof}
Our first step is to extend the vanishing of $\sqrt{c_{\text{top}}}$ to
the closure of each Kirwan stratum in $\unstable X G \cap \semistable
X T$. From this, the result on the entire space follows quickly.
\begin{lemma}\label{rootctop-vanishes-on-S-beta-closure-lemma}
Let $\overline{S_\beta}$ be the closure of an unstable Kirwan
stratum. The Chow class $\sqrt{c_{\text{top}}}$ annihilates every class in
$A_\ast^T(\overline{S_\beta} \cap \semistable X T)_\mathbb{Z}$.
\end{lemma}
\begin{proof}
We proceed by induction on the dimension of the strata $S_\beta$.
The result is clear for closed strata $S_\beta$ by Lemma
\ref{rootctop-is-zero-on-S-beta-semistable-lemma}. Assume that
$S_\beta$ is a stratum and that all strata in
its closure satisfy the conclusion of the lemma. By Proposition
\ref{brions-presentation-of-T-equivariant-Chow-proposition}, it suffices
to show that $\sqrt{c_{\text{top}}} \frown [Y] = 0 \in
A_\ast^T(\overline{S_\beta}\cap \semistable X T)$ for any $T$-invariant
subvariety $Y \hookrightarrow \overline{S_\beta}\cap \semistable X T$. If $Y \cap
S_\beta = \emptyset$, then $Y$ is contained in some $\overline{S_{\beta'}}
\cap \semistable X T$
for a stratum
$S_{\beta'}$ in the closure
of $S_\beta$, and therefore is killed by the operator $\sqrt{c_{\text{top}}}$,
as implied by the inductive hypothesis.
Thus, the only case we need to consider is when $Y$ intersects
$S_\beta$ nontrivially. We now resolve the birational map
$\overline{S_\beta} \dashrightarrow G/P_\beta$ defined on $S_\beta$ by
$S_\beta \cong Y_\beta \times_{P_\beta} G \to G/P_\beta$
(cf. Theorem \ref{theorem-kirwan-strata-decomposition}).
Our strategy will be to partially resolve the locus of
indeterminacy in the following manner:
$$\xymatrix{ G \times_{P_\beta} \overline{Y_\beta} \ar[d]_\pi
\ar[rd]^{\tilde f} & \\
\overline{S_\beta} \ar@{-->}[r]^f & G/P_\beta.}$$
The morphism $\pi$ is proper, since $G/P_\beta$ is projective, and it
restricts to an isomorphism over the dense open subset $S_\beta \subseteq
\overline{S_\beta}$, by Theorem
\ref{theorem-kirwan-strata-decomposition}(3).
The morphism
$\tilde f$ is a $\overline {Y_\beta}$-fibration, hence flat, and
both morphisms $\tilde f$ and $\pi$ are
$T$-equivariant. Since the morphisms are
$G$-equivariant, and $N_T$ preserves $\unstable X T$, the above
diagram factors as
$$\xymatrix{ \pi^{-1}(\overline{S_\beta} \cap \semistable X T) \ar[d]_\pi
\ar[rd]^{\tilde f} & \\
\overline{S_\beta} \cap \semistable X T \ar@{-->}[r]^f & U,}$$
where $U$ is the open complement of the fixed-point locus,
$U := G/P_\beta - WP_\beta/P_\beta \subseteq G/P_\beta$.
If $\tilde Y$ denotes the strict transform of $Y$ under the birational
morphism $\pi$, then the projection formula implies
$\pi_\ast(\sqrt{c_{\text{top}}} \frown [\tilde Y]) = \sqrt{c_{\text{top}}}
\frown [Y]$. The former equals $0$,
since $\sqrt{c_{\text{top}}}$ is the pull-back via $\tilde f$ of $\sqrt{c_{\text{top}}}$ from
$(G/P_\beta)\setminus (WP_\beta/P_\beta)$, and this is $0$ as an
element of the operational Chow groups by Lemma
\ref{rootctop-is-zero-from-localization-in-G/P-lemma}. Therefore,
$\sqrt{c_{\text{top}}} \frown [Y] = 0$, as desired.
\end{proof}
\begin{proposition}\label{rootctop-is-zero-proposition}
Let $X$ be a smooth $G$-linearized projective variety over $\bar k$.
The Chow class $\sqrt{c_{\text{top}}}$ annihilates every class in
$A_\ast^T(\unstable X G \cap \semistable X T)_\mathbb{Z}$.
\end{proposition}
\begin{proof}
Since the stratification in Theorem
\ref{theorem-kirwan-strata-decomposition} is finite, any closed
($T$-invariant) subscheme $Y \subseteq \unstable X G$ must be contained in
some $\overline {S_\beta}$, so the projection formula and Lemma
\ref{rootctop-vanishes-on-S-beta-closure-lemma}
guarantee $\sqrt{c_{\text{top}}} \frown [Y] = 0$.
The result then follows from
Proposition
\ref{brions-presentation-of-T-equivariant-Chow-proposition}.
\end{proof}
\subsection{$\sqrt{c_{\text{top}}}$ vanishes for $\mathbb{Q}$-coefficients when
$k \neq \bar k$}\label{rootctop-is-zero-generally-section}
We relax our previous assumption on $k$, allowing $k$ to be an
arbitrary field.
Theorem \ref{theorem-kirwan-strata-decomposition} is proven over
algebraically closed field, so our previous arguments do not
immediately apply.
If we weaken our statements by ignoring torsion, considering only
Chow groups with rational coefficients, we can easily extend our
previous results to this case.
We outline the proof of the following well-established lemma
(cf. \cite[Lem. 1A.3]{blo1}) for the reader's convenience.
\begin{lemma}\label{algebraic-field-extension-lemma}
If $X$ is a variety over a field $k$, then any field extension $K/k$
induces an injective morphism between Chow groups with
rational coefficients: $A_\ast(X)_\mathbb{Q} \hookrightarrow A_\ast(X_K)_\mathbb{Q}$.
\end{lemma}
\begin{proof}
If a field extension $E/k$ is the union of a directed system of sub-extensions
$E_i/k$,
then $A_\ast(X_E) =
\varinjlim A_\ast(X_{E_i})$. We may apply this result to
the extension $K/k$ to reduce the proof
to the two cases: $K/k$ is finite; or $K= k(x)$ for a transcendental
element $x$.
If $K/k$ is finite
then the morphism $\phi: X_K \to X$ is
proper and the composition $\phi_\ast \circ \phi^\ast$ is simply
multiplication by $[K:k]$, which is an
isomorphism since coefficients are rational. Therefore, $\phi^\ast$
is injective.
If $K = k(x)$, then $X_K$ is the generic fibre of $\pi: X \times
\P^1_k \to \P^1_k$. Let $\phi: X \times \P^1_k \to X$ denote the
other projection. There is an isomorphism of Chow groups
$A_i(X \times \P^1_k)_\mathbb{Q} \cong A_{i-1}(X)_\mathbb{Q} \oplus A_i(X)_\mathbb{Q}
\cdot t$, where $t$ is the class associated to a fibre of $\pi$ and
the morphism $\phi^\ast: A_i(X)_\mathbb{Q} \to A_{i+1}(X \times
\P^1_k)_\mathbb{Q}$ is identified
with $\id \oplus 0$ (cf. \cite[Thm. 3.3]{ful1}).
As schemes,
$$X_K = \varinjlim_{\emptyset \neq U \subseteq \P^1} X \times U,$$
and there is an induced isomorphism on the level of Chow groups:
$$A_i(X_K)_\mathbb{Q} \cong \varinjlim_{\emptyset \neq U \subseteq \P^1}
A_{i+1}(X\times U)_\mathbb{Q} \cong A_{i}(X)_\mathbb{Q} \oplus 0.$$
From this description, it is clear that the induced morphism $\phi^\ast:
A_\ast(X)_\mathbb{Q} \to A_\ast(X_K)_\mathbb{Q}$ is an isomorphism, ergo injective.
\end{proof}
\begin{proposition}\label{rootctop-is-torsion-over-any-field-proposition}
Let $X$ be a $G$-linearized smooth projective variety over
$k$. Then $\sqrt{c_{\text{top}}}$ annihilates every class in
$A_\ast^T(\unstable X G \cap \semistable X T)_\mathbb{Q}$.
\end{proposition}
\begin{proof}
Lemma \ref{algebraic-field-extension-lemma} reduces to the
algebraically closed case, which is proved in Proposition
\ref{rootctop-is-zero-proposition}.
\end{proof}
As a corollary, we obtain
\begin{corollary}\label{rootctop-is-numerically-zero-for-smooth-X}
Let $X$ be a smooth $G$-linearized projective variety over
a field $k$ so that $\semistable X T = \stable
X T$.
If $\tilde \alpha_0, \tilde \alpha_1 \in A_\ast(\gitstack X T)_\mathbb{Q}$
are two lifts of $\alpha \in A_\ast(\gitstack X G)_\mathbb{Q}$, then
$$\int_{X/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}} \frown \tilde \alpha_0 =
\int_{X/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}} \frown \tilde \alpha_1.$$
\end{corollary}
\section{Dealing with strictly semi-stable points}\label{strictly-semi-stable-points-section}
One approach to extend the results of \S
\ref{section-rootctop-on-smooth-x}
to the case
of singular $X$ is to
study a $G$-equivariant closed immersion $j: X \hookrightarrow
\P(V)$ associated to the linearization. Regrettably,
I see no reason to believe
that the push-forward map $j_\ast: A_\ast( \gitstack X T)_\mathbb{Q} \to
A_\ast( \gitstack {\P(V)} T)_\mathbb{Q}$ is injective, thus preventing an easy
reduction of the proof for $X$ to the known case of $\P(V)$ (cf. Prop.
\ref{rootctop-is-torsion-over-any-field-proposition}).
Our solution is to restrict to the case $\semistable X T = \stable
X T$ and to limit our ambitions to showing that for any $\alpha \in
A_\ast(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$, the Chow class $\sqrt{c_{\text{top}}} \frown
\tilde \alpha$ has a well-defined \emph{numerical} equivalence
class, independent of the choice of lift $\tilde \alpha$.
Numerical equivalence behaves better than algebraic equivalence
under closed immersions, but we encounter
the unfortunate possibility that
$\semistable{\P(V)} T \neq \stable{\P(V)} T$,
in which case there is no well-defined notion of numerical equivalence on the
Artin stack $\gitstack{(\P(V))}T$.
We circumvent
this problem by building an auxiliary smooth
$G$-linearized variety $Y \to \P(V)$, for
which $\semistable Y {T} = \stable Y {T}$ and $\semistable Y G =
\stable Y G \neq \emptyset$,
and then relating
integration on $X/\mspace{-6.0mu}/ G$ and $X/\mspace{-6.0mu}/ T$ to integration on $Y /\mspace{-6.0mu}/ G$ and
$Y /\mspace{-6.0mu}/ T$.
If one is guaranteed $G$-equivariant resolutions of
singularities (e.g. if $\Char k = 0$), then a result of
Reichstein \cite{rei1} generalizes the partial desingularizations
of Kirwan \cite{kir2} and produces such an auxiliary variety $Y$. As
resolution of singularities is still an open area of research in
positive characteristic, we
provide an independent construction.
We will find this construction again useful in
\S\ref{section-independence-of-GIT-ratio},
when the existence of strictly semi-stable points would otherwise impede
our efforts to show that the GIT integration ratio $r_{G,T}^{X,\alpha}$ is
independent of $X$.
As a corollary of our method, we prove in \S
\ref{subsection-rootctop-is-numerically-zero}:
\begin{corollary}\label{rootctop-is-numerically-zero-for-singular-X}
Let $G$ be a reductive group over $k$, $T \subseteq G$ a split maximal
torus, and $X$ a $G$-linearized projective variety satisfying
$\semistable X T = \stable X T$.
If $\tilde \alpha_0, \tilde \alpha_1 \in A_\ast(\gitstack X T)_\mathbb{Q}$
are two lifts of $\alpha \in A_\ast(\gitstack X G)_\mathbb{Q}$, then
$$\int_{X/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}} \frown \tilde \alpha_0 =
\int_{X/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}} \frown \tilde \alpha_1.$$
\end{corollary}
\subsection{The existence of semi-stable points}
To construct the auxiliary space $Y$, we must understand the
stabilizers of strictly semi-stable points $x \in \strictlysemistable
X T.$ We review some of their key properties in the next
set of lemmas.
\begin{lemma}\label{positive-dimensional-stabilizer-lemma}
Let $X$ be a $T$-linearized projective variety. If $\semistable X T \neq
\stable X T$, then there exists a strictly semi-stable point $x \in
\strictlysemistable X T$ with positive-dimensional stabilizer.
\end{lemma}
\begin{proof}
If $\semistable X T \neq \stable X T$, then by definition there is
either a semi-stable point with positive-dimensional stabilizer, or a
non-closed orbit in $\semistable X T$. In the latter case, the closure
of the non-closed orbit contains
an orbit of strictly smaller dimension, whose points then must have
positive-dimensional stabilizers.
\end{proof}
\begin{lemma}\label{trivial-linearization-lemma}
Let $(X, \mbox{$\mathcal{O}$}_X(1))$ be a projective variety $T$-linearized by a very
ample line bundle $\mbox{$\mathcal{O}$}_X(1)$. If $x
\in \strictlysemistable X T$ is strictly semi-stable and $T'
\subseteq T$ is a subtorus
stabilizing $x$, then $T'$ acts trivially on the fibre
$\mbox{$\mathcal{O}$}_X(1)|_x$.
\end{lemma}
\begin{proof}
By the Hilbert-Mumford criterion, $x$ must also be $T'$
semi-stable. Since $T'$ fixes $x$, the $T'$-state of $x$ can
only consist of a single weight. By the Hilbert-Mumford criterion,
this weight must be $0$, so $T'$ acts trivially on the fibre
$\mbox{$\mathcal{O}$}_X(1)|_x$.
\end{proof}
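The following standard example, which plays no role in the sequel,
illustrates the two lemmas above. Let $T = \mathbb{G}_m$ act on $\P^2$
with weights $-1, 0, 1$ on the homogeneous coordinates, linearized on
$\mbox{$\mathcal{O}$}_{\P^2}(1)$. The fixed point $x = [0:1:0]$ has state
$\Xi(x) = \{0\}$, so by the Hilbert-Mumford criterion it is
semi-stable but not stable. Its stabilizer is all of $T$, which is
positive dimensional, and which acts with weight $0$ on the fibre
$\mbox{$\mathcal{O}$}_{\P^2}(1)|_x$, as Lemma
\ref{trivial-linearization-lemma} predicts.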
\begin{lemma}
Let $X \hookrightarrow \P(V)$ be a $T$-linearized projective variety. There
exist only finitely many subtori $T' \subseteq T$ occurring as $(\stab x)^0$ for
$x \in \strictlysemistable X T$.
\end{lemma}
\begin{proof}
Embed $X \hookrightarrow \P(V)$ via some high tensor power of the
$T$-linearization.
A rank $r$ subtorus $T' \subseteq T$
stabilizes $x \in X$ if and only if each element $\lambda \in
\cochargp {T'} \subseteq \cochargp T$ is constant when paired with the
elements of $\Xi(x) \subseteq \chargp T$. Therefore, $(\stab x)^0$
is the subtorus of $T$ corresponding
to the subgroup of $1$-parameter subgroups that are constant on
$\Xi(x)$. Since $V$ is a finite-dimensional representation, only
finitely many states $\Xi(x)$ occur as $x$ ranges over $X$, and
therefore the tori that appear as $(\stab x)^0$
form a finite collection.
\end{proof}
\subsection{Constructing $Y$}
The trick will be to define $Y := \P(V) \times (G/B)^{\rank G}$,
the iterated product of $\P(V)$ with a flag variety. The only delicate
point is the choice of a suitable linearization.
Let $B \subseteq G$ be a choice of Borel subgroup
containing the split maximal torus $T$.
\begin{lemma}
Let $\chi \in \chargp T$ be in the interior of the positive Weyl chamber. The
$G$-equivariant line bundle $L(\chi) := G \times_{B,\chi} \mathbb{A}^1$ on
$G/B$ is ample and hence a $G$-linearization of $G/B$.
For any nontrivial subtorus $T' \subseteq T$ that fixes
a point $gB \in G/B$, the
induced $T'$-action on the fibre $L(\chi)|_{gB}$
is by weight $(w \cdot \chi )|_{T'}$,
where $w \in W$ corresponds to the Bruhat cell
containing $gB$.
\end{lemma}
\begin{proof}
$L(\chi)$ is ample on $G/B$ by the Borel-Weil-Bott theorem. By the Bruhat
decomposition, $G/B = \coprod_{w \in W} UwB$, for $U \subseteq B$ the
maximal unipotent subgroup. Moreover, denoting by \makebox{$\dot w
\in N_T$}
a lift of
the element $w \in W$, there is an isomorphism $\phi:U_w
\times B \to U \dot wB \subseteq G$ sending $(u, b)
\mapsto u\dot wb$, where $U_w := U \cap (U^{-})^w$ is the
intersection of $U$ with the $w$-conjugate
of the opposite unipotent subgroup (cf. \cite[Thm. 14.12]{bor1}). Note that $U_w$ is
normalized by $T$ because both $U$ and $(U^-)^w$ are; the latter being
so because for any $t \in T$, $\dot w \in N_T$, and $u \in
U^-$,
$$t (\dot w^{-1} u \dot w) t^{-1} = \dot w^{-1} ( t^{\dot w} u (t^{\dot
w})^{-1} ) \dot w.$$
Assume $gB = u \dot wB$ for some $u \in U_w$ and $\dot w \in N_T$.
Let $t \in T'$ be
an element fixing $gB$ and observe
$$t u \dot w = tut^{-1} \dot w t^{ \dot w^{-1}},$$
with $t u t^{-1} \in U_w$ and $t^{\dot w ^{-1}} = \dot w ^{-1} t \dot
w \in T \subseteq B$. Since $t$ fixes $gB$, there exists some $b \in
B$ such that $u\dot w b = t u t^{-1}
\dot w t^{\dot w ^{-1}}$, and since $\phi$ is an isomorphism,
$u = t u t^{-1}$ and $b = t^{\dot w ^{-1}}$.
Therefore $t u \dot w = u \dot w t^{\dot w ^{-1}}$, and so $t$ acts on the
fibre by multiplication by $\chi(t^{\dot w^{-1}}) = (w \cdot \chi)(t)$.
\end{proof}
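For example, take $G = SL(2)$, so that $G/B \cong \P^1$ and $W = \{1,
s\}$. The $T$-fixed points of $\P^1$ are $eB$ and $\dot s B$, lying in
the two Bruhat cells, and the lemma says that $T$ acts on the fibre
$L(\chi)|_{eB}$ with weight $\chi$ and on $L(\chi)|_{\dot s B}$ with
weight $s \cdot \chi = -\chi$. Up to the sign conventions implicit in
the definition of $L(\chi)$, this recovers the familiar opposite
weights of a $\mathbb{G}_m$-action on the fibres of
$\mbox{$\mathcal{O}$}_{\P^1}(n)$ at $0$ and $\infty$.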
The following lemma will be the inductive step in the argument showing
that for a $G$-linearized variety $Z$ (e.g. $Z = \P(V)$), the variety
$Y := Z \times
(G/B)^{\rank G}$ admits a linearization such that $\semistable Y T =
\stable Y T$ and all points $y \in Y$ projecting to a stable $z
\in Z$ are stable in $Y$.
\begin{lemma}\label{inductive-step-lemma}
Let $\scr{L}$ be a $G$-linearization of a projective variety $Z$.
If the subtori of a split maximal torus $T \subseteq G$
occurring as $(\stab_T z)^0$ for $z \in
\strictlysemistable Z T$ have rank at most $r$, where $r > 0$, then
there is a
character $\chi \in \chargp T$ such that the $G$-equivariant line
bundle $L(\chi)
:= G \times_{B, \chi} \mathbb{A}^1$
is very ample on $G/B$, and for some $N \gg 0$, the induced
$G$-linearized action on the
line bundle $\scr{L}^{\otimes N} \boxtimes L(\chi)$ on $Z \times G/B$
has the properties:
\begin{enumerate}
\item The subtori of $T$ of the form $(\stab_T p)^0$ for
$p \in \strictlysemistable{(Z \times G/B)}{T}$
have rank at most $r -1$;
\item A point $p \in Z \times G/B$ is $G$- or $T$-stable
(resp. $G$- or $T$-unstable)
whenever $\pi(p)$ is so,
where $\pi: Z \times G/B \to Z$ is projection onto the first factor.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $T_1,\ldots , T_n$ denote the positive-dimensional subtori
of $T$ occurring as $(\stab_T z)^0$ for
$z \in \strictlysemistable Z T$.
Let $H_i \subset \chargp T$ be the annihilator of the subgroup $\cochargp {T_i} \subseteq
\cochargp T$, i.e. the subgroup of characters restricting trivially to
$T_i$. Since each $T_i$ is positive dimensional, all of the $H_i$
are proper subgroups of $\chargp T$. Choose $\chi$ in the interior of
the positive Weyl
chamber of $\chargp{T}$ avoiding the $W$-orbit of any $H_1, \ldots, H_n$,
and large enough so that $L(\chi)$ is very ample on $G/B$.
The state $\Xi(p)$ of a point $p = (z,gB) \in Z \times G/B$ with
respect to the linearization $\scr{L} ^{\otimes N} \boxtimes L(\chi)$ consists of weights of
the form $N \cdot \chi_z + \chi_{gB}$, with $\chi_z \in \Xi(z)$ and
$\chi_{gB} \in \Xi(gB)$.
By choosing $N$ large enough, the Hilbert-Mumford criterion
shows that the
induced linearized action on
$\scr{L}^{\otimes N} \boxtimes L(\chi)$ satisfies:
\begin{itemize}
\renewcommand{\labelitemi}{$\diamond$}
\item $p \in Z \times G/B$ is stable if $\pi(p)$ is
stable;
\item $p \in Z \times G/B$ is unstable if $\pi(p)$ is unstable.
\end{itemize}
This shows (2), and moreover that any strictly semi-stable point $p \in
Z \times G/B$ sits above a strictly semi-stable point $\pi(p) \in Z$.
With the prescribed
$G$-linearized action on $\scr{L}^{\otimes N} \boxtimes L(\chi)$, let $T' :=
(\stab_T p)^0$ for some point
$p := (z,gB) \in \strictlysemistable{(Z \times G/B)} {T}$. As we
just noted, $z$ is strictly semi-stable in $Z$ and
so $T'$ is contained in $T_i$ for some $i$.
Recall that $\chi$ was chosen
so that $(w \cdot \chi)|_{T_i} \neq 0 \in \chargp{T_i}$ for any $
w\in W$. Since $z$ is strictly semi-stable in $Z$ and $T'$ fixes $z$,
Lemma \ref{trivial-linearization-lemma} shows that $T'$ acts on the
fibre $\scr{L}^{\otimes N}|_z$ with weight $0$, while by the previous
lemma the weight of the $T'$-action on $L(\chi)|_{gB}$ is $(w \cdot
\chi)|_{T'}$ for some $w \in W$. Since $p$ is itself strictly
semi-stable and fixed by $T'$, Lemma
\ref{trivial-linearization-lemma} forces the total weight $0 + (w
\cdot \chi)|_{T'}$ to vanish; hence $T' \subseteq \ker( (w \cdot
\chi)|_{T_i} )$, and
therefore the rank of $T'$ is at most $r - 1$.
\end{proof}
\begin{proposition}\label{trivial-flag-variety-bundle-prop}
If $Z$ is a $G$-linearized projective variety
then $Y := Z \times (G/B)^{r}$ for $r := \rank G$ admits a
$G$-linearization for which:
\begin{enumerate}[(i)]
\item $\semistable Y {} = \stable Y {}$; and
\item $\stable Z {} \times (G/B)^{r} \subseteq \semistable Y {}
\subseteq \semistable Z {} \times (G/B)^{r}$,
\end{enumerate}
for both $T$- and $G$- (semi-)stability.
\end{proposition}
\begin{proof}
We prove the result for $T$-stability, and then as $\semistable Y G =
\cap_{g \in G} \semistable Y {gTg^{-1}}$ and $\stable Y G = \cap_{g \in G}
\stable Y {gTg^{-1}}$, the results for $G$-stability will follow (cf. Theorem
\ref{hilbert-mumford-criterion}).
Recursively applying Lemma \ref{inductive-step-lemma}, we obtain a
$G$-linearization of $Z \times (G/B)^{\rank G}$ for which no
$T$-strictly
semi-stable points have positive dimensional $T$-stabilizers. By Lemma
\ref{positive-dimensional-stabilizer-lemma}, there are no
$T$-strictly semi-stable points, proving (i). Lemma
\ref{inductive-step-lemma}(2) implies (ii).
\end{proof}
\subsection{$\sqrt{c_{\text{top}}}$ on singular $X$}\label{subsection-rootctop-is-numerically-zero}
\begin{lemma}\label{surjectivity-of-chow-groups-lemma}
If $\pi: Y \to X$ is a surjective proper morphism between varieties, then the
push-forward map $\pi_\ast: A_\ast(Y)_\mathbb{Q} \to A_\ast(X)_\mathbb{Q}$ is a
surjective map between Chow groups with rational coefficients.
\end{lemma}
\begin{proof}
Take a subvariety $Z \hookrightarrow X$ and let $\xi_Z$ be its generic point.
Then let $Z' \hookrightarrow Y$ be the scheme-theoretic closure of any
closed point in the fibre $Y \times_X \kappa(\xi_Z)$. The scheme
$Z'$ sits generically finitely over $Z$, and the class
$\frac{1}{[K(Z'):K(Z)]}[Z']$ pushes forward to $[Z]$. Therefore
$\pi_\ast$ is surjective.
\end{proof}
We have developed enough theory to prove the main result of this section:
\begin{proposition}\label{prop-comparing-integrals-on-auxiliary-space}
Let $j:X \hookrightarrow Z$ be a $G$-equivariant inclusion of varieties with
compatible $G$-linearizations such that $\semistable X T =
\stable X T$. For $\pi: Y := Z \times (G/B)^r \to Z$
with the $G$-linearization of Proposition
\ref{trivial-flag-variety-bundle-prop}, and any $\alpha \in
A_\ast(X /\mspace{-6.0mu}/ G)_\mathbb{Q}$ with lift $\tilde \alpha \in A_\ast( X /\mspace{-6.0mu}/
T)_\mathbb{Q}$, there exists a class $\beta \in A_\ast(Y /\mspace{-6.0mu}/ G)_\mathbb{Q}$ with lift
$\tilde \beta \in A_\ast(Y /\mspace{-6.0mu}/ T)_\mathbb{Q}$ so that
\begin{enumerate}
\item $\int_{X/\mspace{-6.0mu}/ G} \alpha = \int_{Y/\mspace{-6.0mu}/ G} \beta;$
\item $\int_{X/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}} \frown \tilde \alpha =
\int_{Y/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}} \frown \tilde \beta.$
\end{enumerate}
\end{proposition}
\begin{proof}
Let $X'_H := \semistable
X {H} \times_Z \semistable Y {H}$, where $H = T$ or $H = G$.
Proposition
\ref{trivial-flag-variety-bundle-prop} is enough to guarantee that
$X'_H$ embeds $H$-equivariantly as a closed subvariety
of $\semistable Y {H}$ and maps $H$-equivariantly and surjectively onto
$\semistable X {H}$. The following commutative diagram of proper
Deligne-Mumford stacks,
$$\xymatrix{[X'_H/H] \ar[r]^j \ar[d]_\pi
& \gitstack Y H \ar[d]\\
\gitstack X H \ar[r] & \Spec k,}$$
induces an analogous diagram between the coarse moduli spaces.
By Lemma \ref{surjectivity-of-chow-groups-lemma}, $\pi_\ast \alpha' =
\alpha$ for some $\alpha ' \in A_\ast( [X'_G/G])_\mathbb{Q}$. Because $X'_H$
is a fibre product, $\pi_\ast(\tilde \alpha')$ is a lift of $\alpha$
for any lift $\tilde \alpha'$ of $\alpha'$.
Let $\beta := j_\ast (\alpha') \in A_\ast(Y /\mspace{-6.0mu}/ G)_\mathbb{Q}$ and $\tilde
\beta := j_\ast (\tilde \alpha') \in A_\ast(Y /\mspace{-6.0mu}/ T)_\mathbb{Q}$. By the
commutativity of the diagram, the degrees of the
classes $\alpha$ and $\beta$ agree, proving (1). The equality in (2)
follows similarly.
\end{proof}
Finally, we prove the result advertised in the introduction to this
section.
\begin{proof}[Proof of Corollary
\ref{rootctop-is-numerically-zero-for-singular-X}]
Embed the singular $X$ into the smooth variety $\P(V)$ via some high
tensor power of the given $G$-linearization. Construct the smooth
$G$-linearized $Y$ sitting over $\P(V)$. By Proposition
\ref{prop-comparing-integrals-on-auxiliary-space}, any two lifts
$\tilde \alpha_0$ and $\tilde \alpha_1$ of a class $\alpha \in
A_\ast(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$ have analogues $\tilde \beta_0$ and $\tilde
\beta_1$ both lifting a class $\beta \in A_\ast(Y /\mspace{-6.0mu}/ G)_\mathbb{Q}$ and satisfying
$$\int_{X/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}} \frown \tilde \alpha_i = \int_{Y/\mspace{-6.0mu}/ T} \sqrt{c_{\text{top}}}
\frown \tilde \beta_i,$$
for $i = 0,1$. The result immediately follows from the smooth case
(cf. Corollary \ref{rootctop-is-numerically-zero-for-smooth-X}).
\end{proof}
\section{Independence of GIT integral ratio}\label{section-independence-of-GIT-ratio}
The goal of this section is to prove Theorem
\ref{GIT-integral-ratio-is-invariant-of-G-theorem}: for a $G$-linearized
projective variety
$X$ over a field $k$ with no strictly $T$-semi-stable points,
the ratio $r_{G,T}^{X,\alpha}$
(defined below in \S \ref{subsection-independence-on-chow-class}) is
an invariant of the group $G$.
We do this in stages, first showing that it is independent of the
rational equivalence class $\alpha \in A_0(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$ and of the
choice of split maximal torus, before showing that it is independent
of the variety $X$.
\subsection{Independence of the Chow class}
\label{subsection-independence-on-chow-class}
By Corollary \ref{rootctop-is-numerically-zero-for-singular-X}, the
following ratio is well-defined:
\begin{definition}\label{definition-GIT-integral-ratio}
Assume $X$ is a $G$-linearized projective variety,
$T\subseteq G$ is
a split maximal torus for which $\semistable X T = \stable X
T$, and $\alpha \in A_0(X /\mspace{-6.0mu}/ G)$ is a $0$-cycle
satisfying $\int_{X/\mspace{-6.0mu}/ G} \alpha \neq 0$. We define the
\emph{GIT integral ratio} to be
$$r_{G,T}^{X,\alpha} :=
\frac{ \int_{X/\mspace{-6.0mu}/ T} c_{\text{top}} \frown \tilde \alpha}
{ \int_{X/\mspace{-6.0mu}/ G} \alpha},$$
where $\tilde \alpha$ is some lift of the class $\alpha$.
\end{definition}
\begin{lemma}\label{independent-of-class-DM-case-lemma}
The GIT integral ratio $r^{X}_{G,T} := r^{X,\alpha}_{G,T}$ is
independent of the choice of Chow class $\alpha\in A_0(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$.
\end{lemma}
\begin{proof}
The definition of $r^{X,\alpha}_{G,T}$ is
independent of the algebraic equivalence class of $\alpha$, since
numerical equivalence is coarser than algebraic equivalence. Let
$B_\ast(-)$ denote the quotient of the Chow group $A_\ast(-)$ by the
relation of algebraic equivalence (cf. \cite[\S 10.3]{ful1}).
Since $\gitstack
X G$ is a Deligne-Mumford stack, $B_0(\gitstack X G)_\mathbb{Q} = B_0(X/\mspace{-6.0mu}/
G)_\mathbb{Q}$. All connected projective schemes are algebraically
connected, i.e. there is a connected chain of (possibly singular)
curves connecting any two closed points. Therefore,
$B_0(X/\mspace{-6.0mu}/ G)_\mathbb{Q} = \mathbb{Q}$, and the result follows since
$r^{X,\alpha}_{G,T}$ is invariant under the scaling of
$\alpha$.
\end{proof}
\subsection{Independence of the split maximal torus}
\begin{lemma}\label{independent-of-maximal-torus-lemma}
The GIT integral ratio $r_G^X := r_{G,T}^X$ does not depend on the
choice of split maximal torus $T$.
\end{lemma}
\begin{proof}
Fix two split maximal tori $T, T' \subseteq G$. All split maximal tori
are conjugate,
and so $T'$ is of the form $T' =
gTg^{-1}$ for some $g \in G$. By assumption, $G$ acts linearly on
the projective variety $X$.
Consider the map $\phi:T \to T'$ given by
$t \mapsto g t g^{-1}$, and the map $\Phi:X \to X$ given by $x
\mapsto x\cdot g^{-1}$. The pair of maps $(\phi,\Phi)$ shows that
the linearized actions $\sigma: X \times T \to X$ and $\sigma': X
\times T' \to X$ are isomorphic: $\Phi(x)\cdot \phi(t) = \Phi(x\cdot
t)$. By the Hilbert-Mumford numerical
criterion, $\semistable X {T'} = \Phi(\semistable X T)$, and
furthermore there is an induced isomorphism $\bar \Phi: \gitstack X {T}
\cong \gitstack X {T'}$.
Since the following square is commutative,
$$\xymatrix { \semistable X G \ar[r]^{\bar \Phi} \ar[d]_{i_{T}} & \semistable
X G \ar[d]_{i_{T'}}\\
\semistable X T \ar[r]^{\bar \Phi} & \semistable X {T'}, }$$
the push-forward $\bar \Phi_\ast \tilde \alpha$ is a lift of $\bar
\Phi_\ast \alpha$ for any lift $\tilde \alpha$ of $\alpha \in
A_0(X/\mspace{-6.0mu}/ G)_\mathbb{Q}$.
We use the pairs
$(\alpha, \tilde \alpha)$ and $( \bar \Phi_\ast \alpha, \bar \Phi_\ast
\tilde \alpha)$ to
compute the GIT integral ratios. Since $\bar \Phi$ is an
isomorphism, for any class $\beta$, the class $\bar \Phi_\ast
\beta$ has the same degree. Therefore, the ratios
$r_{G,T}^{X,\alpha}$ and $r_{G,T'}^{X, \bar \Phi_\ast \alpha}$ are
equal.
\end{proof}
\subsection{Independence of the linearized variety}
\begin{lemma}
The GIT integral ratio $r_G := r_{G}^X$ does not depend on the
choice of $G$-linearized variety $X$.
\end{lemma}
\begin{proof}
Let $X_i$ for $i = 1,2$ be two $G$-linearized
projective varieties for which $\semistable {(X_i)} T = \stable
{(X_i)} T$.
Some high tensor powers of these linearizations define
$G$-equivariant embeddings $j_i: X_i \hookrightarrow \P(V_i)$ for
$G$-representations $V_i$, $i = 1,2$.
For $i=1,2$ there are embeddings $X_i \hookrightarrow \P(V_1
\oplus V_2)$, defined from the embeddings $j_i$ by setting the
extraneous coordinates to 0.
By Proposition \ref{prop-comparing-integrals-on-auxiliary-space},
for any classes $\alpha_i \in A_0(X_i/\mspace{-6.0mu}/ G)_\mathbb{Q}$, there are classes
$\beta_i \in A_0(Y/\mspace{-6.0mu}/ G)_\mathbb{Q}$, for a smooth $G$-linearized $Y$ over
$\P(V_1 \oplus V_2)$ satisfying $\semistable Y T = \stable Y T \neq \emptyset$,
such that
$r_{G}^{Y,\beta_i} = r_{G}^{X_i,\alpha_i}$.
By Lemma \ref{independent-of-class-DM-case-lemma},
$r_{G}^{Y,\beta_1} = r_{G}^{Y,\beta_2}$ is an invariant of the
$G$-linearized space $Y$, and therefore $r_{G}^{X_1, \alpha_1} =
r_G^{X_2,\alpha_2}$.
\end{proof}
\begin{proof}[Proof of Theorem
\ref{GIT-integral-ratio-is-invariant-of-G-theorem}]
The above lemmas combine to prove $r_G$ is an invariant of the group $G$
for any
reductive group $G$ with a split maximal torus $T$ over a field $k$.
\end{proof}
\section{Functorial properties of the GIT integration ratio}
\label{section-functorial-properties}
In this section, we prove that the GIT integration ratio behaves well
with respect to the group operations of direct product and central
extension.
\subsection{Field extensions and direct products}
\begin{lemma}\label{lemma-independent-under-field-extension}
If $G$ is a reductive group
over $k$ and $G_K$ its base change by a field extension $K/k$, then
the GIT integration ratios for $G$ and $G_K$ are equal:
$$r_G = r_{G_K}.$$
\end{lemma}
\begin{proof}
If $X$ is a projective variety over $k$ with a $G$-linearization,
then there is an induced
$G_K$-linearized action on $X_K$. By \cite[Prop. 1.14]{GIT},
$\semistable X G \times_k K = \semistable {(X_K)} {G_K}$ (and
similarly for $T$).
It follows
that $(X/\mspace{-6.0mu}/ G)_K \cong X_K/\mspace{-6.0mu}/ G_K$ and $(X/\mspace{-6.0mu}/ T)_K \cong X_K/\mspace{-6.0mu}/ T_K$.
The result can be deduced from the facts that the degree of a Chow
class is invariant
under field extension \cite[Ex. 6.2.9]{ful1} and that $c_{\text{top}}$ pulls back
to $c_{\text{top}}^K$ by the natural morphism $BT_K \to BT$.
\end{proof}
\begin{lemma}\label{product-of-groups-GIT-ratio-lemma}
If $G_1, G_2$ are two reductive groups over a field $k$, then
$r_{G_1\times G_2} = r_{G_1} \cdot r_{G_2}$.
\end{lemma}
\begin{proof}
For each $i = 1,2$, choose a projective $X_i$ on which $G_i$ acts
linearly, and let $T_i \subseteq G_i$ denote split maximal tori.
Clearly $G_1 \times G_2$ acts on $X_1 \times X_2$ linearly, and
the stability loci are just the products of the corresponding loci from
the factors.
Let $\alpha_i \in A_0^{G_i}( \semistable {(X_i)} {G_i})$, and consider
$\alpha:=\alpha_1\times \alpha_2 \in A_0^{G_1 \times G_2}(
\semistable{(X_1)} {G_1} \times \semistable{(X_2)}{G_2}).$ Also,
take $\tilde \alpha := \tilde \alpha_1 \times \tilde \alpha_2$,
where each $\tilde \alpha_i$ is a lift of $\alpha_i$ to the
$T_i$-semi-stable locus of $X_i$.
We may
calculate the GIT integration ratio for $G_1 \times G_2$ using these
classes, since the ratio is independent of such choices (cf. Theorem
\ref{GIT-integral-ratio-is-invariant-of-G-theorem}). The degree of
a product of two classes is the product of the degrees, and so
the result follows since the isomorphism
$[\Spec k/T] \cong [\Spec k /T_1]
\times_k [\Spec k/T_2]$, for $T := T_1 \times T_2$, identifies
$c_{\text{top}}(T_1) \times c_{\text{top}}(T_2) = c_{\text{top}}(T)$.
\end{proof}
\subsection{Central extensions}
In this section we prove that $r_G$ is invariant under central
extensions, completing the proof of Theorem
\ref{GIT-integral-ratio-decomposes-multiplicatively}.
Throughout the course of the discussion, we will be working with
several types of quotients.
If $G$ acts on the right on a variety $X$, when the
respective quotients exist, they will be denoted as follows:
the stack-theoretic quotient by $[X/G]$, the GIT quotient by
$X /\mspace{-6.0mu}/ G$, and the uniform categorical quotient by $X / G$.
\begin{lemma}\label{line-bundle-quotient-lemma}
Let $\scr{L}$ be a $G$-linearization of a variety $X$ for which
the GIT quotient $\pi: \semistable X G \to X /\mspace{-6.0mu}/ G$ is nonempty.
There exists some $n > 0$ such that $\scr{L}^{\otimes n}$ descends to a
line bundle $\widehat{ \scr{L}}$ (i.e. $\pi^\ast \widehat{ \scr{L}} = \scr{L}^{\otimes
n}|_{\semistable X G}$), and moreover, $\widehat{\scr{L}}$ is
the uniform categorical quotient of the induced linearization $G
\times \scr{L}^{\otimes n} \to \scr{L}^{\otimes n}$.
\end{lemma}
\begin{proof}
From \cite[Thm. 1.10]{GIT}, we see that $\semistable X G \to X/\mspace{-6.0mu}/ G$
is a uniform categorical quotient and that some power $\scr{L}^{\otimes
n}$ descends to $\widehat{\scr{L}}$. Since $\scr{L} \to X/\mspace{-6.0mu}/ G$ is flat, the
base change morphism $\scr{L}^{\otimes n} \to \widehat{\scr{L}}$ is also a uniform
categorical quotient.
\end{proof}
\begin{lemma}\label{composition-of-uniform-categorical-quotient-lemma}
Let $1 \to S \to \tilde G \to G \to 1$ be a central extension of
reductive groups, $X$ a scheme,
$\pi_{\tilde G} : X \to X
/ \tilde G$ and $\pi_S: X \to X / S$
uniform categorical quotients by $\tilde G$ and $S$ respectively,
and $\pi_G: X / S \to X / \tilde G$ the induced
morphism.
If $X/{\tilde G}$ is covered by finitely
many affine open subschemes $U$ for
which both $\pi_{\tilde G}^{-1}(U)$
and $\pi_{G}^{-1}(U)$ are affine,
then $\pi_G$ is a
uniform categorical quotient by the induced action of $G$ on $X/S$.
\end{lemma}
\begin{proof}
Consider the action $\sigma: X \times \tilde G \to X$.
Compose with
$\pi_S$ to obtain $X \times \tilde G \to X/S$. This is an
$S \times S$-invariant morphism and hence
descends to an action $X/S \times G \to X/S$.
All that remains to be shown is that $\pi_{G}: X/S \to X / \tilde G$
is a uniform categorical quotient for this action.
By \cite[Rem. 0.2(5)]{GIT}, it suffices to show for each $U$
described in the lemma statement that
the restriction $\pi_G: \pi_G^{-1}(U) \to U$ is a uniform
categorical quotient. This boils down to the
easy fact that for an affine ring $R$ on which $\tilde G$ acts,
the rings of invariants satisfy
$R^{\tilde G} = (R^S)^G$.
\end{proof}
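As a toy illustration of this equality of invariants: let $\tilde G =
\mathbb{G}_m \times \mathbb{G}_m$ act on $R = k[x,y]$ with the first
factor scaling $x$ and the second scaling $y$, and let $S$ be the
(central) first factor. Then $R^S = k[y]$, the quotient $G = \tilde
G/S \cong \mathbb{G}_m$ acts on $k[y]$ by scaling $y$, and indeed
$(R^S)^G = k = R^{\tilde G}$.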
\begin{lemma}\label{composition-of-GIT-quotient-lemma}
Let $1 \to S \to \tilde G \to G \to 1$ be a central extension of
reductive groups. Let $\tilde G$ act linearly on a
variety $X$, and $\pi: \semistable X S \to X/\mspace{-6.0mu}/ S$ denote the induced
GIT quotient. Then there is an induced $G$-linearized
action on $X/\mspace{-6.0mu}/ S$ and the semi-stable loci satisfy
\begin{equation}\label{equality-of-semistable-loci-equation}
\semistable X {\tilde G} = \pi^{-1}( \semistable
{(X/\mspace{-6.0mu}/ S)} G ).
\end{equation}
Moreover, this yields a canonical isomorphism between GIT quotients
$(X/\mspace{-6.0mu}/ S)/\mspace{-6.0mu}/ G \cong X /\mspace{-6.0mu}/ \tilde G$.
\end{lemma}
\begin{proof}
Choose $n \in \mathbb{N}$ as in
Lemma \ref{line-bundle-quotient-lemma} so that
$\scr{L}^{\otimes n}$ descends to the ample line bundle
$\widehat \scr{L}$ on $X /\mspace{-6.0mu}/ S$ that is a uniform
categorical quotient of $\scr{L}^{\otimes n}$ by $S$. Lemma
\ref{composition-of-uniform-categorical-quotient-lemma} results in
compatible $G$-actions on $X/\mspace{-6.0mu}/ S$ and $\scr{L}^{\otimes n}$, i.e. a
$G$-linearized action on $X/\mspace{-6.0mu}/ S$. Therefore, we can take the GIT
quotient $(X/\mspace{-6.0mu}/ S)/\mspace{-6.0mu}/ G$, which by
Lemma \ref{composition-of-uniform-categorical-quotient-lemma}
is a uniform categorical
$\tilde G$-quotient of $\pi^{-1}( \semistable {(X/\mspace{-6.0mu}/ S)} G)$.
Since $\widehat \scr{L}$ pulls back to a tensor power of $\scr{L}$, and
$G$-equivariant sections of $\widehat {\scr{L}}^{\otimes m}$ pull back to $\tilde
G$-equivariant sections of $\scr{L}^{\otimes mn}$, we have
$\pi^{-1} \semistable {(X/\mspace{-6.0mu}/ S)} G \subseteq \semistable X
{\tilde G}$.
Conversely,
if $\sigma \in \Gamma(X, \scr{L}^{\otimes n})^{\tilde G}$ is a $\tilde
G$-equivariant section, then due to its $S$-equivariance, it descends to
a $G$-equivariant section $\bar \sigma \in \Gamma(X/\mspace{-6.0mu}/ S, \widehat{\scr{L}})^{G}$
since $\widehat{\scr{L}}$ is a quotient of $\scr{L}^{\otimes n}$ by
$S$. Therefore, the inclusion is
full: \makebox{$\pi^{-1}(\semistable {(X/\mspace{-6.0mu}/ S)} G) = \semistable X
{\tilde G}$.}
Combining this with Lemma
\ref{composition-of-uniform-categorical-quotient-lemma},
we obtain the equalities
$$X/\mspace{-6.0mu}/ \tilde G = (\semistable X {\tilde G}/S)/G
= \semistable{(X/\mspace{-6.0mu}/ S)} {G} /G = (X/\mspace{-6.0mu}/ S)/\mspace{-6.0mu}/ G.$$
\end{proof}
\begin{lemma}\label{proper-map-from-categorical-and-stack-quotient-lemma}
Let $1 \to S \to \tilde G \twoheadrightarrow G \to 1$ be a central extension of
reductive groups. If $\tilde G$ acts linearly on $X$ so
that $\semistable X {\tilde G} = \stable X {\tilde G} \neq
\emptyset$,
then there is a canonical morphism of stacks
$\phi: [\semistable X {\tilde G}/ \tilde G] \to [ (\semistable X
{\tilde G} / S) / G]$.
Moreover, the morphism $\phi$ is proper and makes the
following diagram commute:
$$\xymatrix{ [\semistable X {\tilde G}/ \tilde G] \ar[d] \ar[r]^\phi &
[(\semistable X {\tilde G} / S) / G] \ar[d] \\
[\Spec k/ \tilde G] \ar[r] & [\Spec k/ G].}$$
\end{lemma}
\begin{proof}
We describe the functor $\phi$ on objects, omitting the description
of the functor on morphisms.
Let $\underline E := (B \stackrel \pi \leftarrow E \stackrel f \to
\semistable X {\tilde G})$
be an object of $[\semistable X {\tilde G}/ \tilde G]$, i.e.
$\pi: E \to B$ is a $\tilde G$-torsor in the \'etale topology, and
$f: E \to \semistable X {\tilde G}$ is a $\tilde G$-equivariant
morphism.
Consider the scheme $E \times_{\tilde G} G$, which exists
by the descent of affine morphisms in the \'etale topology.
Furthermore, by the descent of morphisms, it is clear that this is the
uniform categorical quotient $E/ S$, and hence maps to
$\semistable X {\tilde G} / S$.
This results in an object
$\underline E \times_{\tilde G} G := (B \stackrel \pi \leftarrow E\times_{\tilde G} G \stackrel f \rightarrow \semistable{X}{\tilde G}/S)$
of the stack $[ (\semistable X {\tilde G} / S) /G]$.
The construction $\underline E \mapsto \underline E\times_{\tilde G}
G$ is clearly
functorial. Therefore $\phi$ is a functor between categories
fibred in groupoids and hence a morphism of stacks. Moreover, since
the morphism of stacks $[\Spec k/\tilde G] \to [\Spec k/G]$ is
defined on $B$-points by $(E \stackrel \pi \to B) \mapsto (E \times_{\tilde G} G
\stackrel \pi \to B)$, the
above square commutes.
To see that $\phi$ is proper, consider the morphisms to the coarse
moduli spaces $f: \gitstack X {\tilde G} \to X/\mspace{-6.0mu}/ \tilde G$ and $g:
[(\semistable X {\tilde G}/S)/G] \to X/\mspace{-6.0mu}/ \tilde G$.
By Lemma \ref{composition-of-GIT-quotient-lemma}, these morphisms are
well-defined and $f = g\circ \phi$. Since $f$ and $g$ are coarse moduli
morphisms, $f$ is proper and $g$ is separated, and consequently
$\phi$ is proper.
\end{proof}
\begin{proposition}\label{prop-central-extension}
If $\tilde G \twoheadrightarrow G$ is a central extension of reductive groups,
then $r_{\tilde G} = r_G$.
\end{proposition}
\begin{proof}
The case of a finite central extension is trivial, as we may use the
same $G$-variety $X$ on which to calculate both ratios $r_G$ and
$r_{\tilde G}$.
This then allows us to reduce to the case where
the kernel $S$ of the central extension is connected.
Since $S$ centralizes the maximal torus
$\tilde T \subseteq \tilde
G$, it
must be contained within $\tilde T$; hence there is also an
analogous exact sequence
involving the maximal
tori, $1 \to S \to \tilde T \to T \to 1$. Also, notice that the
class $c_{\text{top}}(G) \in A^\ast(BT)$ pulls back via $B\tilde T \to BT$ to
$c_{\text{top}}(\tilde G) \in A^\ast(B\tilde T)$, since the map $\tilde G
\twoheadrightarrow G$ induces an isomorphism of root systems.
Let $X$ be a projective $\tilde G$-linearized variety such that
$\semistable X {\tilde T} = \stable X {\tilde T}$ and
$\semistable X {\tilde G}\neq \emptyset$.
By Lemma
\ref{composition-of-GIT-quotient-lemma}, the projective scheme
$X/\mspace{-6.0mu}/ S$ has an induced $G$-linearization for which it is easy to
see $\semistable {(X/\mspace{-6.0mu}/ S)} {T} = \stable {(X /\mspace{-6.0mu}/ S)} T$ and
$\semistable {(X/\mspace{-6.0mu}/ S)} G \neq \emptyset$.
We use $X$ to compute $r_{\tilde G}$ and $X/\mspace{-6.0mu}/ S$ to compute
$r_G$.
There is a commutative diagram:
$$\xymatrix{
[\semistable X {\tilde T}/ \tilde T] \ar[r]^\phi & [\semistable {(X/\mspace{-6.0mu}/ S)} T/T] \\
[\semistable X {\tilde G}/ \tilde T] \ar@{_(->}[u]^i \ar[r]^\phi \ar[d]^\pi & [\semistable
{(X/\mspace{-6.0mu}/ S)} G / T] \ar@{_(->}[u]^i \ar[d]^\pi \\
\gitstack {X} {\tilde G} \ar[r]^{\phi} & \gitstack{(X/\mspace{-6.0mu}/ S)}{G}. \\
}$$
\noindent Since Lemma \ref{composition-of-GIT-quotient-lemma} applies equally
well to quasi-projective varieties $X$, combining with
Lemma \ref{proper-map-from-categorical-and-stack-quotient-lemma}, we
see that each morphism $\phi$
induces an isomorphism $\phi_\ast$ on rational Chow groups.
Moreover, the commutative diagram of Lemma
\ref{proper-map-from-categorical-and-stack-quotient-lemma} implies
that
$\phi_\ast(c_{\text{top}} \frown \tilde \alpha) = c_{\text{top}} \frown \phi_\ast
\tilde \alpha$. The proposition follows once we check that
$\phi_\ast(\tilde \alpha)$ is a lift of the class $\phi_\ast
\alpha$, for then the equality of the ratios
$$\frac{ \int_{X/\mspace{-6.0mu}/ {\tilde T}} c_{\text{top}} \frown \tilde
\alpha}{\int_{X/\mspace{-6.0mu}/ {\tilde G}} \alpha} = \frac{\int_{(X/\mspace{-6.0mu}/ S)/\mspace{-6.0mu}/ T}
c_{\text{top}} \frown \widetilde{\phi_\ast \alpha}}{\int_{(X/\mspace{-6.0mu}/ S)/\mspace{-6.0mu}/ G} \phi_\ast
\alpha}$$
follows immediately from the functoriality of Chow groups under
proper push-forwards. The fact that $\phi_\ast \tilde \alpha$ is a lift of
$\phi_\ast \alpha$ will follow from the equality
$$ \phi_\ast(\pi^\ast \alpha) =
\pi^\ast(\phi_\ast(\alpha)),$$
which follows from the standard push-pull argument once we prove
that the lower square is a fibre square of DM stacks.
Since the lower square of the diagram commutes, there is a functor from
$[\semistable {X} {\tilde
G}/\tilde T]$ to the fibre product. All that remains is to construct an
inverse functor that would demonstrate an equivalence of categories.
We do so, explicitly describing the functor on objects, but again
omitting the details of the definition on morphisms.
Given
a $\tilde G$-torsor $(B \leftarrow \tilde E \to \semistable X
{\tilde G})$, a $T$-torsor $(B \leftarrow E \to
\semistable{(X/\mspace{-6.0mu}/ S)}G)$, and an isomorphism of $G$-torsors $\tilde
E/S \cong E \times_T G$, we must construct a $\tilde T$-torsor and a
$\tilde T$-equivariant morphism to $\semistable X {\tilde G}$. This
can be accomplished by taking $E \times_{\tilde E/S} \tilde E$,
where one of the structure morphisms is the composition $E \to E
\times \{e\} \to E \times_T G \cong E/S$ and the other is the
quotient $\tilde E \to \tilde E/S$. The
$\tilde T$-equivariant morphism $E \times_{\tilde E/S} \tilde E \to
\semistable X {\tilde G}$ is the composition of the projection onto
$\tilde E$
with the $\tilde G$-equivariant morphism to $\semistable X {\tilde
G}$. It is
routine to verify that this is indeed the inverse functor.
\end{proof}
These results combine to prove Theorem
\ref{GIT-integral-ratio-decomposes-multiplicatively}.
\begin{proof}[Proof of Theorem \ref{GIT-integral-ratio-decomposes-multiplicatively}]
Immediate from the results of this section.
\end{proof}
\section{$r_G = |W|$ for groups of type $A_n$}\label{section-calculation}
We compute the GIT integration ratio $r_G$
for $G = PGL(n)$, and use this to prove Corollary \ref{main-theorem}.
\begin{proposition}\label{proposition-calculation-for-pgln}
The GIT integration ratio for $G = PGL(n)$ over any field is $r_G =
|W| = n!$.
\end{proposition}
\begin{proof}
Let $SL(n)$ act on $\mathbf M_{n}$, the vector space of $n \times n$
matrices with $k$-valued entries, via left multiplication of
matrices.
This induces a dual representation of $SL(n)$ on $\mathbf M_n
^\ast$ and hence actions of $SL(n)$ and $PGL(n)$ on $\P(\mathbf
M_n)$, the projective space of lines in $\mathbf M_{n}^\ast$.
Choose the $PGL(n)$-linearization
on $\mbox{$\mathcal{O}$}_{\P(\mathbf M_{n})}(n)$ induced from
these representations of $SL(n)$.
Let $T \subseteq PGL(n)$ and $\tilde T
\subseteq SL(n)$ denote the diagonal maximal tori. In this case, the
$PGL(n)$-stability loci (resp. $T$-stability loci) are equal to
the analogous $SL(n)$-stability loci (resp. $\tilde T$-stability loci),
which we now describe.
A basis of $\mathbf M_n$ is given by the matrices $e_{ij}$,
each defined by its unique nonzero entry of $1$ in the $(i,j)$th
position.
Moreover, $e_{ij}$ is a weight vector of weight $\chi_i
\in \chargp{\tilde T}$, where $\chi_i$ is defined by the rule
$$\begin{bmatrix} t_1 & & & \\
& t_2 & & \\
& & \ddots & \\
& & & t_n\\
\end{bmatrix} \mapsto t_i.$$
Notice that $\chi_1,\ldots, \chi_{n-1}$ is a basis of the
character group $\chargp {\tilde T}$ and $\chi_n = - \sum_{i=1}^{n-1} \chi_i$, so that
the characters $\chi_1,\ldots, \chi_n$ form the vertices of a
simplex centered at the origin in $\chargp{\tilde T}_\mathbb{Q}$.
From the Hilbert-Mumford criterion (cf. Theorem
\ref{hilbert-mumford-criterion}), one quickly concludes that
the $\tilde T$-unstable locus in $\P(\mathbf M_{n})$
is the set of all points $x \in \P(\mathbf M_n)$ such that the
matrix $e_{ij}(x)$
has a row with all entries $0$; all
other points are $\tilde T$-stable. Thus, there are no
strictly semi-stable points for the $\tilde T$-action, and hence
none for the
$SL(n)$-action either.
The $SL(n)$-stable locus
then comprises the set of $x \in \P(\mathbf M_n)$ such that the
matrix $e_{ij}(x)$ is of full rank; this set comprises a
dense $PGL(n)$-orbit.
The stabilizer of this orbit is trivial,
so $\gitstack {\P(\mathbf M_{n})}{PGL(n)} \cong
\P(\mathbf M_{n})/\mspace{-6.0mu}/ PGL(n)
\cong \Spec k$.
The $T$-quotient is clearly
$\gitstack {\P(\mathbf M_n)}{T} \cong \P(\mathbf M_n)/\mspace{-6.0mu}/ T \cong
(\P^{n-1})^n$.
The rational Chow ring of the $T$-quotient is
$$A^\ast(\P(\mathbf M_n)/\mspace{-6.0mu}/ T )_\mathbb{Q} \cong
\mathbb{Q}[t_1,\ldots, t_n]/(t_1^n,\ldots, t_n^n).$$
In this ring, the class of a point is clearly $\prod_{i=1}^n
t_i^{n-1}$. The class $c_{\text{top}} \in A^\ast(BT)$ is the product of all
the roots, which are of the form $\alpha_{ij} := \chi_i - \chi_j
\in \Sym^\ast \chargp{T} \cong A^\ast(BT)$, for $1 \leq i \neq
j\leq n$. One can easily check that the pull-back of $\chi_i - \chi_j$
to $A^\ast( \P(\mathbf M_n) /\mspace{-6.0mu}/ T)_\mathbb{Q}$ is $t_i - t_j$, and therefore
$c_{\text{top}} = \prod_{i \neq j} (t_i - t_j)$. Let $\alpha \in
A_0(\P(\mathbf M_{n}) /\mspace{-6.0mu}/ G)_\mathbb{Q} \cong \mathbb{Q}$
denote the fundamental class; i.e.
$\int_{\P(\mathbf M_n) /\mspace{-6.0mu}/ PGL(n)} \alpha = {1}$.
Therefore, the GIT
integration ratio $r_G$ is just $\int_{\P(\mathbf M_n)/\mspace{-6.0mu}/ T} c_{\text{top}}$,
which equals the coefficient of the monomial
$\prod_{i=1}^n t_i^{n-1}$ in the expansion of $\prod_{i \neq j} (t_i -
t_j)$, since all other monomials of degree $n^2 - n$ are $0$ in the
ring $\mathbb{Q}[t_1,\ldots, t_n]/(t_1^n,\ldots, t_n^n)$.
Notice that $\prod_{i\neq j}(t_i - t_j) = (-1)^{n(n-1)/2}
(\det M_V)^2$,
where $\det M_V$ is the determinant of the Vandermonde matrix
$$
M_V :=
\begin{bmatrix}
1 & t_1 & t_1^2 & \cdots & t_1^{n-1}\\
1 & t_2 & t_2^2 & \cdots & t_2^{n-1}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & t_n & t_n^2 & \cdots & t_n^{n-1}\\
\end{bmatrix}.
$$
By definition, $\det M_V = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma)
\prod_{i=1}^{n} t_i^{\sigma(i)-1}$. In the
ring $A^\ast((\P^{n-1})^n)_\mathbb{Q}$, we compute the products of
monomials of the form $m_\sigma := \prod_{i=1}^n t_i^{\sigma(i) - 1}$
for $\sigma \in S_n$:
$$m_\sigma \cdot m_{\sigma'} =
\left \{
\begin{array}{l l}
\prod_{i=1}^n t_i^{n-1} & : \sigma(j) + \sigma'(j) = n+1; ~\forall~1 \leq j
\leq n;\\
0 & : \textrm{ otherwise.}
\end{array}\right.$$
If $w_0 := (1~n)(2~n-1)\cdots(\lceil n/2 \rceil ~ \lceil (n+1)/2
\rceil) \in S_n$ denotes the longest element of the Weyl group $W$,
then for each
$\sigma \in S_n$, the permutation $\sigma'$ defined as the composition
$\sigma' := w_0 \circ \sigma$ is the
unique element of $S_n$ for which $m_\sigma \cdot m_{\sigma'}\neq 0$.
For such pairs $(\sigma, \sigma')$, the product of the signs satisfies
$\mathrm{sgn}(\sigma)\cdot \mathrm{sgn}(\sigma') = \mathrm{sgn}(w_0) = (-1)^{(n^2 - n)/2}$.
Therefore,
\begin{align*}
c_{\text{top}} & = (-1)^{(n^2 - n)/2} \cdot \sum_{\sigma \in S_n} (-1)^{(n^2 -
n)/2} \prod_{i=1}^n t_i^{n-1}\\
& = n! \cdot \prod_{i=1}^n t_i^{n-1}.
\end{align*}
Thus, $r_G = n! = |W|$.
\end{proof}
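As an independent numerical sanity check (not part of the argument itself), one can verify for small $n$ that the coefficient of $\prod_{i=1}^n t_i^{n-1}$ in the expansion of $\prod_{i \neq j}(t_i - t_j)$ equals $n!$. The following Python sketch is an illustration of ours; it represents polynomials as dictionaries mapping exponent tuples to integer coefficients.

```python
# Sanity check: the coefficient of t_1^{n-1} * ... * t_n^{n-1}
# in prod_{i != j} (t_i - t_j) equals n!.
# Polynomials are dicts {exponent tuple: integer coefficient}.
import math

def top_coefficient(n):
    poly = {(0,) * n: 1}
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # multiply poly by the linear factor (t_i - t_j)
            new = {}
            for exp, c in poly.items():
                for k, sign in ((i, 1), (j, -1)):
                    e = list(exp)
                    e[k] += 1
                    e = tuple(e)
                    new[e] = new.get(e, 0) + sign * c
            poly = new
    return poly.get((n - 1,) * n, 0)

for n in range(2, 6):
    assert top_coefficient(n) == math.factorial(n)
```

For $n = 2$ one recovers by hand $(t_1 - t_2)(t_2 - t_1) = -t_1^2 + 2\,t_1 t_2 - t_2^2$, whose coefficient of $t_1 t_2$ is $2 = 2!$, in agreement with the proof above.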
The proof of Corollary \ref{main-theorem} is now anticlimactic:
\begin{proof}[Proof of Cor. \ref{main-theorem}]
Combine Theorems \ref{GIT-integral-ratio-is-invariant-of-G-theorem} and
\ref{GIT-integral-ratio-decomposes-multiplicatively},
and Proposition \ref{proposition-calculation-for-pgln}.
\end{proof}
\section{Final Remarks and Questions}
\label{section-final-remarks}
We conclude the paper with a discussion of how to generalize Corollary
\ref{main-theorem} to arbitrary reductive groups.
\begin{remark}\label{remark-stating-Martins-theorem}
One can prove that $r_G = |W|$ for any reductive group $G$ over a
field $k$
admitting a split maximal torus, but the proof is no
longer independent of Martin's theorem
\eqref{martins-integration-formula}.
\end{remark}
We outline a justification of this remark, pointing the reader to
\cite{sesh1} as a reference on geometric invariant theory relative
to a base; the base we will use is $\mathbb{Z}_{(p)}$, the localization of $\mathbb{Z}$ at
the characteristic $p$ of the base field $k$.
By Theorems \ref{GIT-integral-ratio-is-invariant-of-G-theorem} and
\ref{GIT-integral-ratio-decomposes-multiplicatively}, it suffices to
verify $r_G^X = |W|$ on a single
$G$-linearized projective variety $X$ for each simple Chevalley group
$G$.
Each Chevalley group $G$ admits
a model $G_{\mathbb{Z}}$ over the integers, with a split maximal torus $T_\mathbb{Z}
\subseteq G_\mathbb{Z}$.
Moreover, there is a
smooth projective $\mathbb{Z}_{(p)}$-scheme $X_{(p)}$ on which
$G_{(p)} := G_\mathbb{Z} \times_\mathbb{Z} {\mathbb{Z}_{(p)}}$ acts linearly
and for which
all $G_{(p)}$- (resp. $T_{(p)}$-) semi-stable points are stable and
comprise an open locus that nontrivially intersects the closed fibre
over $\mathbb{F}_p$.
We justify this assertion
briefly:
Proposition \ref{trivial-flag-variety-bundle-prop} reduces the problem
to finding some $\mathbb{Z}_{(p)}$-scheme for which there exist
$G_{(p)}$-stable points in the closed fibre over $\mathbb{F}_p$;
with the aid of the Hilbert-Mumford criterion, one discovers that
many such schemes exist (e.g. take $\P(V_{\mathbb{Z}_{(p)}}^{\oplus n})$ with
$V_{\mathbb{Z}_{(p)}}^{\oplus n}$ a large multiple of a general irreducible
$G_{(p)}$-representation).
Having chosen such an $X_{(p)}$, the technique of specialization
(cf. \cite[\S20.3]{ful1}) implies that the
integral of relative
$0$-cycles on $X_{(p)} /\mspace{-6.0mu}/ G_{(p)}$
and $X_{(p)} /\mspace{-6.0mu}/ T_{(p)}$ restricted to the generic fibre over $\mathbb{Q}$ is
equal to the integral restricted to any closed fibre over $\mathbb{F}_p$.
The ratio $r_G$ is
invariant under field extension by Lemma
\ref{lemma-independent-under-field-extension}, and so this reduces
the calculation of $r_G$ over the field $k$ to the
computation of $r_{G_{\mathbb{C}}}$, where $G_{\mathbb{C}} := G_{\mathbb{Z}} \times_{\mathbb{Z}}
\mathbb{C}$.
The Kirwan-Kempf-Ness theorem
(cf. \cite[\S8]{kir1} or \cite[\S8.2]{GIT}) shows that over $\mathbb{C}$,
the GIT quotient $X_\mathbb{C} /\mspace{-6.0mu}/ G_\mathbb{C}$
is equivalent to the symplectic reduction, and so Martin's theorem
\cite[Thm. B]{mar1}
proves $r_{G_\mathbb{C}} = |W|$.
\begin{question}\label{question-purely-aglgebraic-proof}
What is a purely algebraic proof that $r_G = |W|$ for a general
reductive group $G$ admitting a split maximal torus?
\end{question}
In light of Theorems
\ref{GIT-integral-ratio-is-invariant-of-G-theorem} and
\ref{GIT-integral-ratio-decomposes-multiplicatively}, to answer
Question \ref{question-purely-aglgebraic-proof} it suffices to verify
$r_G = |W|$ for all simple groups $G$. Such a verification was done
in \S \ref{section-calculation} for simple groups of type $\mathbf
A_{n}$. Can $r_G$ be calculated (algebraically) for any other simple
groups $G$?
\begin{question}\label{question-combinatoric-applications}
Are there any interesting combinatorial applications of Corollary \ref{main-theorem}?
\end{question}
The proof of Corollary \ref{main-theorem} boils down to a
calculation involving the symmetric group
(cf. \S \ref{section-calculation}), so perhaps
interpreting Corollary \ref{main-theorem} in the context of another
explicit example would have interesting combinatorial consequences.
\section*{Appendix: Chow groups and quotient
stacks}\label{appendix-equivariant-Chow-group}
\setcounter{equation}{0}
\setcounter{subsection}{0}
\renewcommand{\theequation}{A.\arabic{subsection}.\arabic{equation}}
\renewcommand{\thesubsection}{A.\arabic{subsection}}
Here we recall the basic properties of Chow groups for schemes and
quotient stacks.
\subsection{Chow groups}
For a scheme $X$ defined over a field $k$, let $A_i(X)$ denote the
$\mathbb{Z}$-module generated by $i$-dimensional subvarieties over $k$ modulo rational
equivalence (see \cite{ful1}). We call $A_\ast(X) := \oplus_i A_i(X)$ the
\emph{Chow group} of $X$. To indicate rational
coefficients, we write $A_\ast(X)_\mathbb{Q} := A_\ast(X) \otimes \mathbb{Q}$.
For a scheme $X$ over a field $k$ and an algebraic group $G$
acting on $X$, the Chow group of the quotient stack $[X/G]$ is defined
by Edidin and Graham in \cite{edi-gra1} to be the limit of Chow
groups using Totaro's \cite{tot1} finite approximation construction:
$$A_i([X/G]) := A_{i-g+n}(X \times_G U),$$
where $g = \dim G$ and $U$ is an open subset of an $n$-dimensional
$G$-representation $V$ on which $G$ acts freely and whose
complement
$V\setminus U$ has sufficiently large codimension. It is a result
of Edidin and Graham that this is well-defined and independent of the
presentation of the stack $[X/G]$ as a quotient
(see \cite[Prop. 16]{edi-gra1}). Equivalently,
we may think of $A_i([X/G])$ as the $G$-equivariant Chow group of $X$,
and we make use of the notation $A_\ast^G(X) := A_\ast([X/G])$ when
convenient.
The Chow groups of quotient stacks are functorial with respect to the
usual operations (e.g. flat pull-back, proper
push-forward), and when $X$ is smooth, there is an intersection
product that endows these groups with the structure of a commutative
ring with identity, graded by codimension.
Hence the Chow group $A_\ast([X/G])$ of the stack $[X/G]$ is
naturally a module over the ring $A^\ast(BG)$, where $BG = [\Spec
k/G]$ is the trivial quotient. In the case $T = \mathbb{G}_m^n$ is a rank
$n$ torus, we denote
$$S := A^\ast(BT) = \Sym \chargp T \cong \mathbb{Z}[\chi_1,\ldots, \chi_n].$$
A character $\chi \in \chargp T$ corresponds to a line bundle $L_\chi$
over $BT$ with first Chern class $c_1(L_\chi) = \chi \in S$.
For a $G$-scheme $X$, the relationship between the rational Chow
groups of $[X/G]$ and $[X/T]$ is simple to state.
The following result follows originally from the work
of Vistoli \cite[Thm. 3.1]{vis2}, but
the formulation we
require is taken from \cite{bri1}:
\begin{theorem}\label{relation-between-equivariant-cohomologies-theorem}
Let $G$ be a connected reductive group with maximal torus $T$,
acting on a $k$-scheme $X$. The homomorphism
$$\gamma: S_\mathbb{Q} \otimes_{S_\mathbb{Q}^W} A_\ast^G(X)_\mathbb{Q} \to A_\ast^T(X)_\mathbb{Q}$$
defined by $u \otimes v \mapsto u \frown \pi^\ast(v)$, where $\pi:
[X/T] \to [X/G]$ is the natural surjection, is a $W$-equivariant
isomorphism.
\end{theorem}
\begin{proof}
See \cite[Thm. 6.7]{bri1}.
\end{proof}
As usual, the case of the action of a torus $T$ is especially
well-understood (see \cite{bri1}). In particular, there is a
localization theorem useful for making calculations in $T$-equivariant
Chow groups. The following version of the localization theorem will
suffice for our purposes:
\begin{theorem}[Localization]\label{brions-localization-theorem}
Let $X$ be a smooth projective scheme with a $T$-action, and let
$i: X^T \to X$ denote the inclusion of the scheme of $T$-fixed
points. Then the morphism
$$i^\ast: A_\ast^T(X)_\mathbb{Q} \to A_\ast^T(X^T)_\mathbb{Q}$$
is an injective $S$-algebra morphism.
Furthermore, if $X^T$ consists of finitely many points, then the
morphism
$$i^\ast: A_\ast^T(X) \to A_\ast^T(X^T)$$
of Chow groups with integer coefficients is injective as well.
\end{theorem}
\begin{proof}
See \cite[Cor. 3.2.1]{bri1}.
\end{proof}
\subsection{Operational Chow groups}
We define the $i$th operational Chow group $A^i(X)$ to be the group of
``operations'' $c$ that comprise a system
of group homomorphisms $c_f:A_\ast(Y) \to A_{\ast -i}(Y)$, for
morphisms of schemes $f:Y \to X$, compatible
with proper push-forward, flat pull-back, and the refined Gysin map
(cf. \cite[\S17]{ful1}). Similarly, Edidin and Graham
define equivariant operational Chow groups $A^i_G(X)$ via systems of
group homomorphisms
$c_{f}^G
: A_\ast^G(Y) \to A_{\ast-i}^G(Y)$ compatible with the $G$-equivariant
analogues of the above maps (see \cite[\S 2.6]{edi-gra1}). The most
obvious examples of equivariant operational Chow classes are
equivariant Chern classes $c_i(\scr{E})$ of $G$-linearized vector
bundles $\scr{E}$ (i.e. Chern classes of vector bundles on $[X/G]$).
Moreover, $A_G^\ast(X)$ equipped with composition
forms an associative, graded ring with identity.
When $X$ is smooth, there is a Poincar\'e
duality between the equivariant operational Chow group and the
usual equivariant Chow group.
\begin{theorem}[Poincar\'e duality]\label{poincare-duality-theorem}
If $X$ is a smooth $n$-dimensional variety, then the map $A^i_G(X)
\to A_{n-i}^G(X)$ defined by $c \mapsto c \frown [X]$ is an
isomorphism.
\end{theorem}
\begin{proof}
See \cite[Prop. 4]{edi-gra1}.
\end{proof}
\begin{remark}
When $X$ is a smooth $n$-dimensional variety, this allows us to
write $A^k_G(X)$ to denote the
codimension $k$ Chow group $A_{n-k}^G(X)$. Furthermore, this
induces an isomorphism of rings $A_G^\ast(X) \cong
A_{n-\ast}^G(X)$, with the multiplication structure on
$A_{n-\ast}^G(X)$ given by the intersection product.
\end{remark}
It is well established that the presence of structured surfaces can have a profound impact on the phase behaviour and particularly, the freezing transition of atomic, molecular, and colloidal fluids.
Typical effects are shifts of the freezing transition with respect to the corresponding bulk transition \cite{Alba_Simionesco_2006}, and a significant impact on the fluid's structure close to the walls \cite{schoen2007nanoconfined, Seemann1848}.
Examples include water at the inner surfaces of silica nanopores \cite{B010086M,doi:10.1002/cphc.200800616} or at graphene sheets \cite{PhysRevB.95.195414},
atoms between the structured surfaces of a surface force apparatus \cite{doi:10.1063/1.466668}, but also wetting of crystalline phases of colloids close to patterned substrates \cite{Esztermann_2005} and active Janus particles at chemically decorated surfaces~\cite{doi:10.1063/1.5091760}. In some (yet not all) cases, structured surfaces actually assist the adjacent fluid in developing a solid-like structure, that is, freezing is supported (relative to the bulk system) rather than suppressed.
Here we are interested in a seemingly ``old'' example of the first scenario, that is, the
freezing of a two-dimensional (2D) system of colloidal particles on a one-dimensional (1D) periodic substrate. This phenomenon, commonly denoted as laser-induced freezing (LIF), was first discovered experimentally by Chowdhury, Ackerson, and Clark~\cite{Chowdhury1985} in a 2D monolayer of charged particles subject to a 1D periodic laser field. At low light intensities, i.e., low potential barriers $V_0$, and not too high average densities, the suspension forms a modulated liquid (ML) phase, characterized by an oscillatory density profile perpendicular to the stripes but full translational symmetry along the stripes. This changes at large light intensities, i.e., large $V_0$, where a ``locked floating solid'' (LFS) emerges. Here, the colloids are positionally locked perpendicular to the minima, but unlocked along them (thereby allowing the solid to ``float'' in one direction). This discovery motivated a
series of studies by theory~\cite{Chakrabarti1994, Das1998, Das1999a, Frey1999, Radzihovsky2001, Rasmussen2002, Chaudhuri2004, Nielaba2004, Chaudhuri2006, Luo2009}, computer simulations~\cite{Loudiyi1992b, Chakrabarti1995, Das1999a, Das1999b, Das2001, Strepp2001, Strepp2002, Strepp2003, Chaudhuri2004,Chaudhuri2005, Chaudhuri2006, Buerzle2007, Luo2009} and experiments~\cite{Loudiyi1992a, Wei1998, Bechinger2000, Bechinger2001, Baumgartl2004}. From the theoretical side it turned out that mean-field like approaches (which are characterized by incorrect treatment of fluctuations) fail to predict the complete phenomenology \cite{Frey1999, Radzihovsky2001}.
A major step towards a theoretical understanding of the full LIF scenario was provided by the work of Frey, Nelson, and Radzihovsky (FNR)~\cite{Frey1999, Radzihovsky2001}, who extended the concept of dislocation-mediated melting in 2D described by KTHNY theory~\cite{Kosterlitz1973, Halperin1978, Nelson1979, Young1979} towards the presence of 1D periodic substrates~\cite{Bechinger2007}. Depending on the so-called commensurability parameter $p$, which is set by the substrate periodicity $L_s$ and determines how the potential minima are populated by particles, different phases with partial symmetry-breaking may arise.
Extensive numerical (Monte-Carlo) simulation studies~\cite{Strepp2001, Strepp2002, Strepp2003, Buerzle2007} later confirmed their results.
Until recently, studies of LIF were restricted to systems of particles with strongly repulsive interactions, although it is nowadays possible not only to fabricate soft particle-particle interactions~\cite{Liz-Marzan1996, Hoffmann2010, Ramli2013, Hayes2014}, but also to investigate their interaction with a substrate~\cite{Zaidouny2013, Schoch_Langmuir2014, Schoch_SoftMatter2014}.
This motivated us in an earlier study based on classical density functional theory \cite{Kraft2020a} to investigate the phenomenon of LIF in a system of particles interacting via an ultra-soft potential characterized by a finite value at zero separation, thus allowing for overlap. We studied this system on cosine and Gaussian substrates, focusing on the case $p=1$ (where each potential minimum is equally populated). Despite a mean-field like treatment, we could establish the occurrence of LIF and provide full phase diagrams in the planes spanned by the average density $\bar{\rho}$ and the parameters controlling the fluid-substrate potential. We also showed that LIF can be understood as a density-driven transition, thereby complementing the more traditional view where the control parameter is $V_0$.
In the present paper, we extend the methodology of \cite{Kraft2020a} to systems with commensurability parameter $p=2$, for which, in the ordered phase, the particle distribution in the direction perpendicular to the substrate minima has periodicity $2L_s$. At $p=2$, the theory of Frey, Nelson, and Radzihovsky~\cite{Frey1999, Radzihovsky2001} predicts that the (re-entrant) melting of the LFS (with $p=2$) upon increasing $V_0$ occurs through two phase transitions with successive unbinding of dislocation pairs. First, unbinding of dislocation pairs with Burgers vectors parallel to the minima leads to a ``locked smectic'' (LSm) phase, which is liquid-like along the minima but still breaks the discrete symmetry of the substrate by populating only every second minimum equally. This is followed by an unbinding of dislocation pairs with Burgers vectors perpendicular to the minima, which eventually leads to a phase transition from the LSm into the ML phase.
Within the latter, the discrete substrate symmetry is restored, that is, the density profile displays modulations with periodicity $L_s$.
The emergence of a LSm phase upon melting the LFS was observed experimentally in a 2D colloidal system of charged polystyrene spheres~\cite{Baumgartl2004}, and it was
found in Monte Carlo simulations of hard discs~\cite{Buerzle2007}. It also appeared in a theoretical study of vortex systems in superconductors~\cite{Hu2005}.
In the present study we demonstrate that a LSm phase appears in systems of ultra-soft colloids. In particular, according to our mean-field density functional study, the LSm appears as an intermediate phase between the ML and the LFS upon increasing either the potential barrier or the average density, or upon decreasing the available space within one minimum by manipulating the fluid-substrate potential.
We show this by analysing two-dimensional density profiles obtained by minimization of the grand canonical functional. The LSm is then identified by homogeneity along the minima, and periodicity $2L_s$ perpendicular to them. Due to the mean-field character of our approach, the occurrence of a LSm is indeed not an obvious result.
In contrast to other studies, however, we do not see re-entrant melting~\cite{Wei1998}; this is consistent with our previous study of systems at $p=1$~\cite{Kraft2020a}, but contrasts with observations for charged polystyrene spheres~\cite{Baumgartl2004} and for hard discs~\cite{Buerzle2007}.
The rest of this manuscript is organized as follows:
In Section~\ref{SEC:Theory} we introduce our 2D model system of ultra-soft particles, the two types of 1D periodic substrates considered, and the density functional treatment on which our work is based.
Numerical results from minimization of the density functional are presented in Section~\ref{SEC_Numerical_Results}.
In Section~\ref{SEC:Conclusion_and_Outlook}, we summarize and give an outlook on future investigations, including preliminary results for new phases emerging at larger substrate periodicities.
\section{Theoretical framework \label{SEC:Theory}}
Our present study is based on the same type of model (ultra-soft colloids) and same method of investigation (classical density functional theory in mean-field approximation) as our earlier study on systems with commensurability parameter $p=1$~\cite{Kraft2020a}. Therefore, we summarize only briefly the main points and refer the reader for details to Ref.~\cite{Kraft2020a}.
\subsection{Model system}
We consider a 2D colloidal system (on the $x$-$y$ plane of the coordinate system) exposed to two variants of 1D periodic substrate potentials. The most simple variant is a harmonic (cosine) substrate potential,
\begin{align}
V_{\text{ext}}(\bs{r} ) = \frac{V_0}{2} \cos\left( \frac{2 \pi}{L_{s}} x \right),
\label{Eq_cosine_substrate}
\end{align}
with periodicity $L_{s}$ and amplitude~$V_0$, and the position vector $\bs{r} = (x,y) \in \mathbb{R}^2$.
Furthermore, we consider the Gaussian substrate
\begin{align}
V_{\text{ext}}(\bs{r} ) = \sum_{m \in \mathbb{Z}} V_0 \exp\left( - \left( \frac{ x-m {L_{s}} }{ R_{g}} \right)^2 \right),
\label{Eq_Gaussian_substrate}
\end{align}
where $R_{g}$ is a measure of the range of the Gaussian, and $m$ runs over all integer numbers.
Our motivation to introduce the Gaussian substrate is its tunability:
It allows us to effectively reduce the available space around the potential minima through the range $R_g$ of the Gaussian maxima. A comparison of both substrates can be found in Ref.~\cite{Kraft2020a}.
As for the colloidal system, we consider (for reasons outlined below and, in more detail, in Ref.~\cite{Kraft2020a}) a 2D system of ultra-soft particles, with the interaction given by the generalized exponential model of index $n$ (GEM-$n$),
\begin{align}
\label{Eq_Interaction_Potential}
V( |\bs{r}_1 - \bs{r}_2| ) = \epsilon \, \exp\left( - \left(\frac{ |\bs{r}_1 - \bs{r}_2| }{R}\right)^n \right).
\end{align}
In Equation~\eqref{Eq_Interaction_Potential}, $\bs{r}_1$ and $\bs{r}_2$ are the particle positions, $\epsilon$~is the interaction strength, and $R$ represents the range of the interaction.
As in Ref.~\cite{Kraft2020a}, we fix $n=4$ in Equation~\eqref{Eq_Interaction_Potential} throughout this work, and denote all length scales in units of $R$, the range of the particle interaction.
The particle interaction strength is set to $\beta \epsilon = 1$, where $\beta = 1/ k_B T$ (with $k_B$ being Boltzmann's constant and $T$ being the temperature).
An important parameter in the context of LIF is the commensurability parameter $p$~\cite{Bechinger2007}, given as the ratio
\begin{equation}
p = \frac{a^{\prime}_{\vec{m}}}{L_s} = \frac{ |\bs{K}| }{ | \bs{G}_{\vec{m}} | }
\label{Eq_Def_commensurability_parameter_p}
\end{equation}
between the spacing $a^{\prime}_{\vec{m}}$ of the lattice planes with Miller indices $\vec{m} =(m_1,m_2)$ (with $m_i \in \mathbb{Z}$) of the arising solid phase, and the substrate periodicity $L_s$.
The second member of Equation~\eqref{Eq_Def_commensurability_parameter_p} expresses this ratio in Fourier space using reciprocal lattice vectors. Specifically, $\bs{K}$ denotes the dominant wave vector of the substrate potential (with $|\bs{K}| = 2 \pi / L_s$) and $\bs{G}_{\vec{m}} = m_1 \bs{G}_1 + m_2 \bs{G}_2$ denotes the reciprocal lattice vectors (with $|\bs{G}_{\vec{m}}| = 2 \pi / a^{\prime}_{\vec{m}}$).
Commensurability requires that the wave vector $\bs{K}$ of the substrate is equal to one of the reciprocal lattice vectors $\bs{G}_{\vec{m}}$~\cite{Bechinger2007}.
We here rather focus on the real space representation, where commensurability requires that one of the lattice planes of the arising solid coincides with the substrate minima.
The case $p = N$ (with $N$ being a natural number) then corresponds to a situation where each ($p=1$) or every $p$-th ($p>1$) minimum is equally populated.
In Ref.~\cite{Kraft2020a}, we have considered the case $p=1$, where the locked floating solid into which the (modulated) liquid freezes is characterized by lattice sites in \textit{each} minimum of the periodic substrate.
This situation occurs most likely when the formed (locked floating) solid has the same lattice constant $a$ as the bulk solid, and the relative orientation with respect to the substrate is a primary orientation~\cite{Bechinger2007}.
For the present GEM-4 potential, the (2D) bulk lattice constant is $a/R \approx 1.4$ (see e.g. Ref.~\cite{Kraft2020a}), yielding $L_s/R = \sqrt{3}a/2R \approx 1.2$ as an optimal value for $p=1$.
Phases with commensurability parameter $p=2$, in particular the LSm ($p=2$) and the LFS ($p=2$), are expected at substrate periodicity $L_s = a \sqrt{3}/4$~\cite{Bechinger2007}.
This value was also used in the studies of the LSm ($p=2$) for charged polystyrene spheres~\cite{Baumgartl2004} and for hard discs \cite{Buerzle2007}.
For the present GEM-4 system, we obtain with $a/R \approx 1.4$, a dimensionless value of $L_s /R \approx 0.6$ as an optimal choice for the case $p=2$.
\subsection{Density functional theory \label{SUBSEC:Model_and_density_functional_theory_subsection_DFT}}
To calculate the equilibrium density profile, $\rho_{\text{eq}}(\bs{r})$, we use classical DFT~\mbox{\cite{Evans1979,Evans1992}}.
The main idea is that $\rho_{\text{eq}}(\bs{r})$ minimizes the grand potential functional
\begin{align}
\Omega[\rho] &= F[\rho] + \int d\bs{r} \rho(\bs{r}) V_{\text{ext}}(\bs{r}) - \mu \int d\bs{r} \rho(\bs{r})
\label{Eq_grand_potential_functional}
\end{align}
with chemical potential~$\mu$, external potential~$V_{\text{ext}}(\bs{r})$, and the intrinsic Helmholtz free energy functional \mbox{$F[\rho] = F_{\text{id}}[\rho] + F_{\text{exc}}[\rho]$}.
The ideal gas contribution of $ F[\rho]$ is known exactly,
\begin{subequations}
\begin{align}
F_{\text{id}}[\rho] &= k_B T \int d\bs{r} \rho(\bs{r}) \left[ \ln(\Lambda^2 \rho(\bs{r}))-1 \right],
\label{Eq_ideal_gas_free_energy_functional} \\
%
\intertext{where $\Lambda$ is the de Broglie wavelength.
The excess free energy~$F_{\text{exc}}$ describes the impact of the interactions between particles, and has to be approximated for most types of interactions.
Consistent with our earlier study~\cite{Kraft2020a}, we use the mean-field (MF) approximation for $F_{\text{exc}}$ that is well established for the description of ultra-soft particles at high density~\cite{Likos2001},}
%
F_{\text{exc}}[\rho] &= \frac{1}{2} \int d\bs{r} \int d\bs{r}' \big[ \rho(\bs{r}) V(\bs{r}-\bs{r}') \rho(\bs{r}') \big].
\label{Eq_excess_free_energy_functional}
\end{align}
\label{Eq_intrinsic_free_energy_functional}
\end{subequations}
The high accuracy of the mean-field approximation for different types of ultra-soft particles was frequently demonstrated
\cite{Lang2000, Louis2000PhysRevE,Archer2004,Likos2007, Archer2014,Mladek2006, Mladek2007,Likos2007} (see Ref.~\cite{Kraft2020a} for a more detailed description).
Apart from the direct connection to particle interactions, a further major benefit of the DFT approach from a practical point of view is the possibility of an unconstrained (numerical) minimization, in which no \textit{a priori} information of the spatial form of $\rho_{\text{eq}}(\bs{r})$ is assumed.
All of the results in the present paper are based on such an unconstrained minimization.
We note that using a constrained minimization (using, e.g., arrays of Gaussian peaks to describe the density profile in solid-like phases) one could potentially not only miss details of the phases, but even entirely miss phases which are not covered by an \textit{a priori} prescribed ansatz.
While this is true in general, it seems particularly relevant for the system at hand.
Indeed, in the outlook (see Section~\ref{SEC:Conclusion_and_Outlook}), we show an example of a phase that we probably would have missed when using a prescribed ansatz rather than unconstrained minimization.
The minimization of Equation~\eqref{Eq_grand_potential_functional} leads to the Euler-Lagrange equation,
\begin{align}
\rho_{\text{eq}} (\bs{r})=\Lambda^{-2}
\exp\left[\beta \mu -\beta V_{\text{ext}}(\bs{r}) - \beta \left.\frac{\delta F_{\text{exc}}[\rho]}{\delta \rho(\bs{r})}\right|_{\rho_{\text{eq}}} \right].
\label{Eq_Euler_Lagrange}
\end{align}
Similar to our previous work~\cite{Kraft2020a}, we solve Equation~\eqref{Eq_Euler_Lagrange} self-consistently using (numerical) fixed-point iteration~\cite{Hughes2014} at given temperature, interaction parameters, and given average density~$\bar{\rho} = \langle N \rangle / (L_x L_y)$ (where $\langle N \rangle$ is the average particle number related to the chemical potential~$\mu$), and with periodic boundary conditions in both directions.
The numerical minimization closely follows the general scheme as e.g. presented in Ref.~\cite{Hughes2014}: The density profile $\rho(\bs{r})$ is discretized on a set of grid points with discretization $dx$ and $dy$, which yields a discretized density profile $\rho_{\bs{i}}$ with grid indices $\bs{i} = (i_x, i_y)$.
The (discretized) Equation \eqref{Eq_Euler_Lagrange} is then used as a fixed-point equation $\rho_{\bs{i}}^{n+1} = f[\rho_{\bs{i}}^{n}]$, where $f$ denotes the right-hand side of Equation~\eqref{Eq_Euler_Lagrange} and $n$ is the iteration index; iteration steps are performed until the density profile $\rho_{\bs{i}}$ converges.
For numerical stability reasons~\cite{Hughes2014}, the previous density profile $\rho_{\bs{i}}^{n}$ is mixed with the new density profile (as obtained from $f[\rho_{\bs{i}}^{n}]$) with a mixing parameter~$\alpha$ (typically $\alpha \in [0.01:0.1]$) to obtain the next iteration step $\rho_{\bs{i}}^{n+1} = \alpha f[\rho_{\bs{i}}^{n}] + (1-\alpha) \rho_{\bs{i}}^{n}$.
Other technical details such as choice of discretization and initial condition are described in Appendix A of Ref.~\cite{Kraft2020a}.
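This mixed fixed-point scheme can be sketched as follows. The Python snippet below is a schematic illustration only: the toy map $\rho \mapsto e^{-\rho}$ stands in for the actual right-hand side of Equation~\eqref{Eq_Euler_Lagrange}, and the function name and numerical values are our own choices.

```python
import numpy as np

def picard_minimize(f, rho0, alpha=0.05, tol=1e-10, max_iter=100_000):
    """Solve the fixed-point equation rho = f(rho) by damped (mixed)
    Picard iteration: rho_{n+1} = alpha*f(rho_n) + (1-alpha)*rho_n."""
    rho = np.asarray(rho0, dtype=float).copy()
    for _ in range(max_iter):
        rho_new = alpha * f(rho) + (1.0 - alpha) * rho
        if np.max(np.abs(rho_new - rho)) < tol:
            return rho_new
        rho = rho_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy check: rho = exp(-rho) has the unique fixed point W(1) ~ 0.567143
rho_star = picard_minimize(lambda r: np.exp(-r), np.full(8, 0.3), alpha=0.5)
```

Small mixing parameters $\alpha$ slow the iteration down but stabilize it, which is why the adaptive scheme described below pays off.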
From the density profile $\rho_\text{eq}(\bs{r})$ that we numerically obtain at given $\bar{\rho}$, the associated chemical potential $\mu$ is given through integration of Equation~\eqref{Eq_Euler_Lagrange} as
\begin{align}
\beta \mu = \ln\left(\Lambda^2 \bar{\rho} \right) - \ln\left( \frac{\int d\bs{r} \exp\left[-\beta V_{\text{ext}}(\bs{r}) - \beta \left.\frac{\delta F_{\text{exc}}[\rho]}{\delta \rho(\bs{r})}\right|_{\rho_{\text{eq}}} \right]}{L_x L_y}\right).
\label{Eq_beta_mu_from_density_profile}
\end{align}
For the current study, we made a simple extension to speed up the numerical minimization.
Instead of mixing the density profiles with a single mixing parameter $\alpha$ (constant and fixed throughout the calculation, see e.g. Ref.~\cite{Hughes2014} for an introduction), we used two mixing parameters $\alpha_1$ and $\alpha_2 > \alpha_1$. Specifically, after 1000 iterations with mixing parameter~$\alpha_1$, the mixing alternates between the two values $\alpha_1$ and $\alpha_2$, where $\alpha_1$ remains fixed and $\alpha_2$ is adjusted automatically. The initially provided value of $\alpha_2$ serves as a maximum value for this mixing parameter and is never exceeded.
The mixing parameter $\alpha_2$ is increased by a factor of 1.05, if the value of the grand potential $\Omega[\rho]$ [see Equation~\eqref{Eq_grand_potential_functional}] decreased over the last 500 iterations, and $\alpha_2$ is decreased by a factor of $0.8$ otherwise. We found that this simple extension greatly accelerated the numerical calculations, while the results were unchanged compared to those with simple mixing.
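A minimal sketch of this adaptive update rule (the factors 1.05 and 0.8 and the 500-iteration window are those quoted above; the function name, calling convention, and history handling are our own illustrative choices):

```python
def adapt_alpha2(alpha2, omega_history, alpha2_max, up=1.05, down=0.8, window=500):
    """Adaptive update of the second mixing parameter alpha_2.

    omega_history : recorded grand-potential values, one entry per iteration.
    alpha2 is increased by the factor `up` (capped at alpha2_max) if Omega
    decreased over the last `window` iterations, and decreased by the
    factor `down` otherwise.
    """
    if len(omega_history) <= window:
        return alpha2  # not enough history yet
    if omega_history[-1] < omega_history[-1 - window]:
        return min(up * alpha2, alpha2_max)
    return down * alpha2
```

The controller thus accelerates the iteration while the grand potential is still decreasing and automatically backs off when the larger steps would destabilize the convergence.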
\section{Numerical Results\label{SEC_Numerical_Results}}
In this section we present our numerical results obtained by minimization of the grand potential $\Omega$ [see Equation~\eqref{Eq_grand_potential_functional}] at various average densities~$\makeAverageSystemDensitySymbol$ and various parameters of the external potential $V_0$ or $R_g$ (see Equations~\eqref{Eq_cosine_substrate} and \eqref{Eq_Gaussian_substrate} for the cosine and Gaussian substrate, respectively).
We observe three types of phases: the modulated liquid (ML), the locked smectic (LSm), and the locked floating solid (LFS).
In Section~\ref{Sec:Characteristics_of_the_ML_LSm_LFS_phases}, we first discuss characteristic features of each phase as reflected by the density distribution.
In Section~\ref{Sec:Phase_Diagrams}, we then present full phase diagrams involving different control parameters.
\subsection{Characteristics of the different phases\label{Sec:Characteristics_of_the_ML_LSm_LFS_phases}}
In earlier studies of LIF, the different phases have often been identified via pair correlation functions (see e.g. Refs.~\cite{Buerzle2007, Baumgartl2004, Bechinger2007}) or the Fourier-transformed density~\cite{Buerzle2007}. In the present mean-field DFT study, we instead investigate directly the density distributions in real space, which are a direct result of our calculations.
We start by discussing the density profiles on the cosine substrate~[see Equation~\eqref{Eq_cosine_substrate}] at fixed average system density $\makeAverageSystemDensitySymbol R^2= 4.5$ (i.e., far below the bulk freezing threshold) and various potential amplitudes $V_0$.
Given that the present substrate varies along the $x$-direction, we categorize the density profiles according to two criteria:
(i)~Whether they show the discrete symmetry of the substrate with periodicity $L_s$ along the $x$-direction, or rather twice $L_s$, and~(ii) whether they are homogeneous or inhomogeneous along the $y$-direction.
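For a discretized profile, these two criteria can be checked directly. The following sketch (with an illustrative function name and the default tolerances of \texttt{numpy.allclose}) assigns the phase labels under the assumption that only the three situations described in this section occur:

```python
import numpy as np

def classify_profile(rho, Ls_pts):
    """Classify a discretized profile rho[ix, iy] by the two criteria:
    (i) periodicity L_s vs. 2*L_s along x (Ls_pts = grid points per L_s),
    (ii) homogeneity along y."""
    has_Ls_period = np.allclose(rho, np.roll(rho, Ls_pts, axis=0))
    homogeneous_y = np.allclose(rho, rho[:, :1])
    if homogeneous_y:
        return "ML" if has_Ls_period else "LSm"
    return "LFS"
```

Such a check is convenient for automatically labelling the many profiles obtained when scanning phase diagrams.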
Representative density profiles are shown in Figure~\ref{Fig_exemplary_density_profiles_rho=4.5}:
At low values of $V_0$, the modulated liquid phase (ML) arises [see Figure~\ref{Fig_exemplary_density_profiles_rho=4.5}(a)], where $\rho(\bs{r})$ varies only along~$x$ and displays the substrate periodicity~$L_s$.
At intermediate $V_0$, the obtained density profiles reflect a symmetry-breaking of the discrete substrate symmetry [see~Figure~\ref{Fig_exemplary_density_profiles_rho=4.5}(b)].
They exhibit a periodicity $2 L_s$ along the $x$-direction, but are still constant along the $y$-direction.
Given these features, we identify this state as a LSm phase, which has been previously observed in other colloidal systems, such as charged polystyrene spheres~\cite{Baumgartl2004} and hard discs~\cite{Buerzle2007}.
Moreover, we find that the LSm phase that we observe in our calculations is characterized by different densities (i.e., different population of particles) in adjacent minima.
Upon further increase of $V_0$, the obtained density profiles are not only symmetry-broken (with respect to the substrate) in the $x$-direction, but are also inhomogeneous in the $y$-direction. Specifically, one observes hexagonal order.
Due to this, we identify this phase as the locked floating solid (LFS) phase~\cite{Bechinger2007} at $p=2$ [see~Figure~\ref{Fig_exemplary_density_profiles_rho=4.5}(c)].
Furthermore, we observe that the lattice sites of the LFS are located in the highly populated minima of the former LSm phase, and that the low density regions are nearly depleted of particles.
Taken together, we observe (at fixed $\bar{\rho} R^2$) the sequence ML-LSm-LFS upon increasing $V_0$ on a substrate with $p=2$. We note that while the very appearance of the LSm is consistent with earlier studies~\cite{Baumgartl2004,Buerzle2007}, the order of the sequence of transitions upon increasing $V_0$ is somewhat different.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Fig_1.png}
\caption{Representative density profiles $\rho(x,y)$ for different values of the potential amplitude $V_0$ on the cosine substrate [see Equation~\eqref{Eq_cosine_substrate}].
Figure parts show (a)~the modulated liquid phase ($\beta V_0 = 1.5$), (b) the locked smectic phase ($\beta V_0 = 3$), and (c)~the locked floating solid phase ($\beta V_0 = 5$).
In all parts, the average density is $\bar{\rho} \, R^2= 4.5$, and the substrate periodicity is $L_s / R = 0.6$.
}
\label{Fig_exemplary_density_profiles_rho=4.5}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Fig_2}
\caption{Average density $\bar{\rho}_{m}$ in adjacent minima (say $x_{\text{min}}$ and $x_{\text{min}} + L_s$) [see Equation~\eqref{Eq_def_rho_modulation}] as a function of $V_0$ on the cosine substrate ($p=2$).
In the modulated liquid phase (ML) both values of $\bar{\rho}_{m}$ in adjacent minima are identical (at given $V_0$).
For the locked smectic phase (LSm) and the locked floating solid phase (LFS), two distinct values of $\bar{\rho}_{m}$ in the adjacent minima are obtained (at given $V_0$) corresponding to the alternating high and low density regions of the LSm and LFS in Figures~\ref{Fig_exemplary_density_profiles_rho=4.5}(b) and (c).
The substrate periodicity is $L_s / R = 0.6$ for both figure parts.
%
%
In part (a) the average density is $\bar{\rho} \, R^2= 4.5$ as in Figure~\ref{Fig_exemplary_density_profiles_rho=4.5}.
The left black solid line corresponds to the numerically obtained phase transition from the ML to the LSm ($p=2$) at $\beta V_0 = 2.0$.
The right black solid line corresponds to the numerically obtained phase transition from the LSm ($p=2$) to the LFS ($p=2$) at $\beta V_0 = 3.1$.
%
%
%
Part (b) shows the same analysis as in (a) but for a slightly higher average density $\bar{\rho} \, R^2= 4.6$.
The black solid lines have the same meaning as in (a) and are at $\beta V_0 = 1.8$ and $\beta V_0 = 2.6$.
}
\label{Fig_rhoBar_m_vs_V_0}
\end{figure}
After demonstrating the existence of the intermediate LSm phase (as exemplarily presented in Figure~\ref{Fig_exemplary_density_profiles_rho=4.5}), we proceed by analysing our DFT data quantitatively.
In particular, to further investigate the symmetry breaking of the discrete substrate symmetry, we consider the average density $\bar{\rho}_{m}$ in one modulation of the substrate,
\begin{align}
\bar{\rho}_{m} = \frac{1}{L_y L_s} \int\limits_{-L_y/2}^{L_y/2} dy \int\limits_{x_{\text{min}} - \frac{L_{s}}{2} }^{x_{\text{min}} + \frac{L_{s}}{2} } dx\, \rho(x,y),
\label{Eq_def_rho_modulation}
\end{align}
and compare values of $\bar{\rho}_{m}$ in adjacent minima (say $x_{\text{min}}$ and $x_{\text{min}} + L_s$).
Thus, in addition to direct visual inspection of the density profile (see Figure~\ref{Fig_exemplary_density_profiles_rho=4.5}), we identify a broken translational symmetry between neighbouring minima based on the average density $\bar{\rho}_{m}$ in one modulation of the substrate.
When the discrete symmetry is broken, this leads to two distinct values of $\bar{\rho}_{m}$ in adjacent minima $x_{\text{min}}$ for the LSm and the LFS.
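On a uniform grid, the normalized integral in Equation~\eqref{Eq_def_rho_modulation} reduces to a plain average over the grid points of each modulation. This can be sketched as follows (assuming the $x$-axis of the grid covers an integer number of substrate periods aligned with the minima; the function name is illustrative):

```python
import numpy as np

def modulation_densities(rho, n_mod):
    """Average density rho_bar_m in each of the n_mod substrate modulations.

    rho is the discretized profile rho[ix, iy] on a uniform grid whose
    x-axis covers exactly n_mod substrate periods, aligned with the minima.
    On a uniform grid the normalized double integral reduces to a plain
    average over the grid points of each block.
    """
    blocks = np.split(rho, n_mod, axis=0)  # one block per modulation
    return np.array([b.mean() for b in blocks])
```

Comparing the returned values for adjacent modulations then directly reveals the broken discrete symmetry.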
Figure~\ref{Fig_rhoBar_m_vs_V_0} shows values of $\bar{\rho}_{m}$ for various values of the potential amplitude~$V_0$ (at two values of the average density).
We clearly observe that a transition from a ML (characterized by one value of $\bar{\rho}_m$) to a LSm phase is accompanied by a splitting of $\bar{\rho}_m$ into two values related to adjacent minima.
The splitting arises without notable jumps, which indicates a continuous phase transition.
A more detailed view of the transition region ML-LSm is given in Figure~\ref{Fig_rhoBar_m_vs_V_0_Zoom_in_on_ML_LSm_transition}.
We found that the data points for the LSm are well represented by a fitting function $\bar{\rho} + C (\beta V_0 - \beta V_{0,c})^\nu$, where $C$ is a proportionality constant and $\nu$ denotes the critical exponent.
In particular, we obtained $\nu \approx 0.49$, $\beta V_{0,c} \approx 1.915$, and $C R^2\approx 2.7$ (with fitting errors below 1 \% for all quantities and numerical values slightly depending on the exact details of the fit).
We thus find that the critical exponent $\nu$ is close to the value of 1/2, as is typical for a mean-field system~\cite{HansenMcDonald, Reichl1998modern}. %
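The fit can be illustrated by the following simplified sketch, which treats $V_{0,c}$ as known and determines $\nu$ and $C$ by linear regression in log-log coordinates (in practice all three parameters would be fitted simultaneously, e.g. with a nonlinear least-squares routine; the synthetic data below merely use the values quoted above):

```python
import numpy as np

def fit_critical_exponent(V0, rho_m, rho_bar, V0c):
    """Fit rho_m = rho_bar + C*(V0 - V0c)**nu by linear regression of
    log(rho_m - rho_bar) against log(V0 - V0c); returns (nu, C)."""
    slope, intercept = np.polyfit(np.log(V0 - V0c), np.log(rho_m - rho_bar), 1)
    return slope, np.exp(intercept)

# Synthetic check with nu = 0.5, C = 2.7, beta*V_{0,c} = 1.915
V0 = np.linspace(1.92, 2.0, 20)
rho_m = 4.5 + 2.7 * (V0 - 1.915) ** 0.5
nu, C = fit_critical_exponent(V0, rho_m, rho_bar=4.5, V0c=1.915)
```

For clean power-law data the regression recovers the exponent essentially exactly; for noisy data the result depends on the chosen fit window, consistent with the slight parameter dependence noted above.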
In contrast, upon increasing $V_0$, we see that the transition from the LSm to the LFS is accompanied by a discontinuous jump in the $\bar{\rho}_m$ values.
We note in passing that the low density region in the LFS is not completely depleted of particles, that is, $\bar{\rho}_m$ is clearly non-zero. Thus one may imagine that the low density regions mediate the interaction between the neighbouring high density regions in which the lattice sites are located.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Fig_3}
\caption{
Same as Figure \ref{Fig_rhoBar_m_vs_V_0}(a), but close to the modulated liquid (ML) - locked smectic (LSm) phase transition. The solid curve is a fit to the data points for the locked smectic phase as described in the main text.
}
\label{Fig_rhoBar_m_vs_V_0_Zoom_in_on_ML_LSm_transition}
\end{figure}
Furthermore, it is interesting to consider the width of the region where the LSm appears as an intermediate phase between the ML and the LFS. In particular, we are interested in the sensitivity with respect to the density $\bar{\rho}$. We have thus repeated the same analysis for a slightly higher average density $\bar{\rho} \, R^2= 4.6$ than the previously chosen value $\bar{\rho} \, R^2= 4.5$. Results for $\bar{\rho}_m$ are shown in Figure~\ref{Fig_rhoBar_m_vs_V_0}(b). Despite the minor change in density~$\makeAverageSystemDensitySymbol$ (an increase by around 2.2\%), the range of $V_0$ values in which the LSm arises is drastically reduced (a decrease by roughly 27\%). In summary, we find that the closer the density $\makeAverageSystemDensitySymbol$ is to the bulk freezing density $\makeAverageSystemDensitySymbol_f$ (with $\bar{\rho}_f R^2 = 5.48$ according to Ref.~\cite{Archer2014}), the smaller the width of the intermediate LSm.
We will also see this explicitly in the phase diagrams presented in Section~\ref{Sec:Phase_Diagrams}.
To close this section, we turn our attention to the transition region between the LSm and LFS which (as seen in Figure~\ref{Fig_rhoBar_m_vs_V_0}) is characterized by a discontinuous behaviour of $\bar{\rho}_m$.
We repeatedly calculated density profiles in the vicinity of the transition (for slightly perturbed initial conditions) and observed a bistability, in the sense that the calculated density profiles converged either to the LSm or to the LFS phase.
The numerical DFT calculations thus hint at a coexistence between the LSm and LFS, and therefore a first order phase transition between these two phases.
We note that the data in the transition region in Figure~\ref{Fig_rhoBar_m_vs_V_0} was obtained by manually selecting the profile with minimal grand potential $\Omega$.
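This selection of the thermodynamically stable branch can be sketched as follows (`minimize` stands for the full fixed-point iteration described above; names and calling convention are illustrative):

```python
import numpy as np

def select_stable_branch(initial_profiles, minimize, grand_potential):
    """Restart the minimization from several perturbed initial profiles and
    keep the converged solution with the lowest grand potential Omega."""
    candidates = [minimize(rho0) for rho0 in initial_profiles]
    omegas = [grand_potential(rho) for rho in candidates]
    return candidates[int(np.argmin(omegas))]
```
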
This procedure, however, becomes infeasible when scanning entire phase diagrams.
We will return to this question of the order of the phase transitions in Section~\ref{Sec:Phase_Diagrams}.
\subsection{Phase diagrams\label{Sec:Phase_Diagrams}}
In this section, we present an overview of the phase behaviour of the GEM-4 system on the cosine and the Gaussian substrate [see Equations~\eqref{Eq_cosine_substrate} and \eqref{Eq_Gaussian_substrate}] with $p=2$.
To this end, we have scanned large portions of the phase diagram on both substrates.
We indeed found the same sequence of ML-LSm-LFS phase transitions for different physical scenarios, as the phase diagrams in Figure~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60} reveal:
(i)~At constant substrate amplitude $V_0$, upon increasing the average density $\bar{\rho}$ of the system.
(ii)~At constant average density $\bar{\rho}$, upon increasing the substrate amplitude $V_0$.
(iii)~At constant $\bar{\rho}$ and constant $V_0$, upon reducing the available space for particles (that is, by increasing the range of the Gaussian maxima $R_g$ on a Gaussian substrate).
We observe this same sequence of ML-LSm-LFS phase transitions for two different types of substrates (cosine and Gaussian), thus demonstrating that it is not a peculiarity of the substrate.
A common feature of all three diagrams in Figure~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60} is that the range of control parameters (i.e., $V_0$ or $R_g$) where the LSm arises becomes narrower with increasing average density (as already indicated at the end of Section \ref{Sec:Characteristics_of_the_ML_LSm_LFS_phases}).
We now focus in more detail on the diagrams obtained upon variation of $V_0$ [see Figures ~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60}(a) and (b)].
Here, the LSm phase appears either in between the ML and LFS phase (high densities) or as the (only) stable phase at large $V_0$ in the range considered by us (low densities).
It is indeed unclear whether the LSm phase in the present system will eventually freeze into a LFS at sufficiently large values of $V_0$. In fact, a similar observation has been made in Monte Carlo simulations of hard discs~\cite{Buerzle2007}. There, the phase diagram shows that for values of the potential amplitude $V_0$ as large as $\beta V_0 = 10000$, there is a range of densities for which the LSm phase remains the stable phase and does not freeze into a LFS.
Furthermore, the possibility of a LSm remaining the stable phase at large $V_0$ is also in agreement with the theoretical prediction of Frey, Nelson, and Radzihovsky~\cite{Frey1999, Radzihovsky2001} (see, in particular, Figure 3(a) in \cite{Frey1999}).
Interestingly, we do not observe re-entrant melting, i.e., a transition from the more ordered LFS phase to the less ordered LSm or ML phase upon increase of $V_0$.
In this regard, our results differ from experimental results for charged polystyrene spheres~\cite{Baumgartl2004} and MC data for hard discs \cite{Buerzle2007} at $p=2$, where re-entrant melting occurs and causes an "up-bending" of the transition curves at large $V_0$ (see e.g. Ref.~\cite{Buerzle2007}).
We observed a similar discrepancy in our previous mean-field DFT study of the case $p=1$~\cite{Kraft2020a}, where we did not find re-entrant melting of the LFS ($p=1$) upon increase of $V_0$, contrary to corresponding findings for more repulsive systems at $p=1$ in the literature.
Whether or not the absence of re-entrant melting is a feature of the ultra-soft system considered here, or an artefact of our mean-field treatment, remains to be explored.
Turning now to the phase diagram obtained through variation of $R_g$ [see Figure ~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60}(c)], we indeed find a re-entrant melting
(very similar to the same system at $p=1$ \cite{Kraft2020a}).
However, here the physical reason is different: upon increasing $R_g$ to large values, the overlap of neighbouring Gaussian maxima of the substrate becomes more and more significant. This causes a reduction of the effective barrier felt at a potential minimum, which eventually leads to the up-bending visible in Figure~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60}(c).
So far we have focused on one value of the substrate periodicity.
It is also interesting to study the influence of the substrate periodicity being slightly away from the value $L_s/R = 0.6$, where the LSm and the LFS fit perfectly on the substrate and are thus, to some extent, expected~\cite{Bechinger2007}.
To this end, we show in Figure~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.55} results for a slightly smaller value $L_s/R = 0.55$.
We find the same phenomenology as described in cases (i)-(iii) (see first paragraph of Section~\ref{Sec:Phase_Diagrams}), but with all transitions being shifted to higher values of the average density $\bar{\rho}$.
We also performed some calculations for $L_s/R= 0.65$, but did not observe a LSm ($p=2$) phase between the ML and the LFS ($p=2$). These exploratory calculations rather indicated that the ML first transforms into a LFS with each minimum being equally populated ($p=1$), followed by a transition into a LFS ($p=2$) phase.
However, we did not pursue this further, as it was not the focus of our current work.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Fig_4}
\caption{Phase diagrams obtained from DFT minimization for various average densities~$\bar{\rho}$ on
(a) the cosine substrate for varying potential amplitude~$V_0$,
(b) the Gaussian substrate for varying $V_0$ and fixed range $R_{g}$ ($R_{g}/L_s = 0.2$), and
(c) the Gaussian substrate for varying $R_g$ and fixed $V_0$ ($\beta V_0 = 10$).
The symbol type encodes whether the obtained phase is a modulated liquid (ML), a locked smectic (LSm) or a locked floating solid (LFS).
The substrate periodicity is $L_s / R= 0.6$.
In the parameter ranges $\beta V_0 < 2$ and $R_g/L_s < 0.05$ [i.e. in the ranges left of the data points in (a)-(c)], we have performed test calculations suggesting that the stable phase is a ML. We also note that the bulk coexistence densities for the liquid and solid phase are $\bar{\rho}_l R^2 = 5.48$ and $\bar{\rho}_s R^2 = 5.73$~\cite{Archer2014}.
}
\label{Fig_CompareParameterScanWithPrediction_L_s=0.60}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Fig_5}
\caption{Same as Figure~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60}, but for substrate periodicity $L_s / R= 0.55$.
}
%
\label{Fig_CompareParameterScanWithPrediction_L_s=0.55}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Fig_6}
\caption{
Grand potential $\Omega$ versus the chemical potential $\mu$ on the cosine substrate at $\beta V_0 = 2$. The symbol encoding for the phases is the same as in Figure~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60}(a).
The substrate periodicity is $L_s / R= 0.60$.
In the inset, we provide a closer view of the ML-LSm transition.
}
\label{Fig_investigate_order_of_phase_transitions_cosine}
\end{figure*}
Finally, we return to the question of the order of the ML-LSm-LFS phase transitions (within the present mean-field DFT approach) at the optimal substrate periodicity $L_s/R = 0.6$.
In Figure~\ref{Fig_investigate_order_of_phase_transitions_cosine}, we plot the grand potential $\Omega$ versus chemical potential $\mu$ on the cosine substrate. For this we make a cut along Figure~\ref{Fig_CompareParameterScanWithPrediction_L_s=0.60}(a) at fixed value of the potential amplitude ($\beta V_0 = 2$) and vary the density $\bar{\rho}$.
The grand potential~$\Omega$ is obtained through minimization of Equation~\eqref{Eq_grand_potential_functional} and the chemical potential $\mu$ follows as the associated Lagrange parameter for given system density~$\bar{\rho}$ through Equation~\eqref{Eq_beta_mu_from_density_profile}.
The slope of the curve $\Omega(\mu)$, which corresponds to the overall density, appears to be constant at the ML-LSm transition (see inset of Figure~\ref{Fig_investigate_order_of_phase_transitions_cosine}) suggesting that this transition is continuous with respect to $\bar{\rho}$.
At this point it is also worth recalling the results in Figures~\ref{Fig_rhoBar_m_vs_V_0} and \ref{Fig_rhoBar_m_vs_V_0_Zoom_in_on_ML_LSm_transition}, where we found that the order parameter $\bar{\rho}_m$ also changes continuously at the ML-LSm transition.
Regarding the LSm-LFS transition, the results in Figure~\ref{Fig_investigate_order_of_phase_transitions_cosine} (and additional calculations not shown here) indicate a slight change of slope of the curve, and furthermore there is some overlap of the two branches related to the LSm and the LFS phases.
This suggests that there are metastable regions, supporting the picture of a first order LSm-LFS phase transition.
In summary, our results from mean-field DFT point to a continuous ML-LSm phase transition and a first order LSm-LFS phase transition.
\section{Conclusion and Outlook\label{SEC:Conclusion_and_Outlook}}
In this work, we studied the phase behaviour of a colloidal model system of ultra-soft particles subjected to two variants of one-dimensional periodic substrates.
We here focused on systems characterized by a commensurability parameter $p=2$, thereby supplementing our previous analysis for $p=1$~\cite{Kraft2020a}.
Our results are based on classical density functional theory in the mean-field approximation, and we obtained the density profiles $\rho(\bs{r})$ by (unconstrained) minimization of the grand potential $\Omega$.
Most importantly, we found an intermediate locked smectic phase ($p=2$) between a modulated liquid and a locked solid phase ($p=2$). Such a phase was predicted theoretically based on an elastic Hamiltonian~\cite{Frey1999, Radzihovsky2001}, but has been observed, so far, only in experiments~\cite{Baumgartl2004} and MC simulations~\cite{Buerzle2007} of more repulsive systems.
A closer investigation of the locked smectic phase revealed that the breaking of the substrate periodicity is accompanied by a splitting of the density distribution into alternating high and low-density regions, thus creating a periodicity of $2 L_s$.
At sufficiently high potential amplitudes $V_0$, the system freezes, and the former high density regions (of the locked smectic phase) contain lattice sites with hexagonal order; indicating a locked floating solid phase with $p=2$.
Performing extensive calculations for both cosine and Gaussian substrates, we demonstrated that the appearance of the locked smectic phase is not a peculiarity of a specific shape of the substrate potential.
Rather, the observed sequence of transitions is robust in the sense that it appears through variation of different control parameters:
(i)~upon increase of $\bar{\rho}$ at constant $V_0$,
(ii)~upon increase of $V_0$ at constant $\bar{\rho}$,
and (iii)~upon reducing the available space for particles in the case of the Gaussian substrate.
Interestingly, we did not observe re-entrant melting~\cite{Wei1998}, that is, a transition from an ordered to a less ordered phase upon increase of $V_0$.
This is different from previous results for hard discs~\cite{Buerzle2007} and charged particles~\cite{Baumgartl2004}, but consistent with our earlier results for $p=1$~\cite{Kraft2020a}.
Regarding the order of transitions, our mean-field DFT results indicate a continuous transition between the modulated liquid and the locked smectic, and a first order transition between the locked smectic and the locked floating solid.
We also studied, for a few cases, the influence of the substrate periodicity.
For a value slightly smaller than the optimal one, we found the same phenomenology, but a shift towards higher values of the average density $\bar{\rho}$.
More dramatic changes (with disappearance of the locked smectic phase) appear at a slightly larger value of $L_s/R$ compared to the optimal one. Here we have only briefly touched upon this issue, which would be an interesting aspect for future investigations.
Further, one could explore whether the approach suggested by us in Ref.~\cite{Kraft2020b}, which relies solely on bulk quantities, could be extended towards prediction of the phase boundaries at $p=2$.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Fig_7}
\caption{Exemplary density profiles $\rho(x,y)$ obtained at large values of the substrate periodicity $L_s$ on the Gaussian substrate (at fixed $\beta V_0 = 10$, $R_g / L_s = 0.2$) [see Equation~\eqref{Eq_Gaussian_substrate}].
Figure parts at the top (bottom) row are obtained at average system density $\bar{\rho} \, R^2= 5$ ($\bar{\rho} \, R^2=3$).
Considering different substrate periodicities $L_s$, the following phases are obtained:
(a) locked floating solid ($p=1$) ($L_s/R= 1.2$),
(b) dumbbell solid ($L_s/R= 2.0$),
(c) two hexagonal lattice planes per minimum ($L_s/R= 2.2$),
(d) modulated liquid with one maximum at the centre ($L_s/R= 1.2$),
(e) modulated liquid with two off-centre maxima ($L_s/R= 2.0$),
(f) as (e) but now with clearly pronounced off-centre maxima in the density profile ($L_s/R= 2.2$).
}
\label{Fig_outlook_on_dumbbell_solid}
\end{figure*}
Yet another potentially interesting extension of the present work would be a numerical DFT study of much larger values of $L_s/R$.
Indeed, preliminary calculations for Gaussian substrates with $L_s/R$ in the range $1.2-2.2$ revealed a variety of rather exotic phases with density profiles shown in Figure~\ref{Fig_outlook_on_dumbbell_solid}. An intriguing example is the "dumbbell solid" in Figure~\ref{Fig_outlook_on_dumbbell_solid}(b), which is characterized by a double-peaked density distribution around each lattice site.
Other new solid phases (for different substrate periodicities $L_s$) at high density $\bar{\rho} R^2 = 5$ are shown in Figures~\ref{Fig_outlook_on_dumbbell_solid}(a-c).
Moreover, already in the modulated liquid phase (at lower density $\bar{\rho} R^2 = 3$), we observe that more than one maximum in the density distribution can arise when $L_s/R$ is large [see Figures~\ref{Fig_outlook_on_dumbbell_solid}(d-f)].
The "dumbbell solid" seems to arise out of a modulated liquid which is close to the border between having one and two maxima in the density distribution (when it freezes upon increase of $\bar{\rho}$).
If more space is available in the vicinity of the substrate minimum such that two maxima fit easily into it, the freezing results in a solid with two hexagonal lattice planes per substrate minimum [compare Figures~\ref{Fig_outlook_on_dumbbell_solid}(c) and (f)].
Given these results, it seems very interesting to further investigate the combined effect of confinement and periodicity on freezing in ultra-soft systems, which are known to exhibit complex structures ("cluster crystals") already in the bulk~\cite{Likos1998, Likos2001, Mladek2005, Mladek2006, Mladek2007, Archer2014, Prestipino2014}.
Experimentally, such systems could be realized e.g., by the methods used in Ref.~\cite{Zaidouny2013} for the creation of the substrate potential.
It could also be of interest to investigate this combined effect upon the adsorption onto periodically corrugated substrates for particles composed of hard particles with a soft shell or with flexible polymeric "hairs"~\cite{Schoch_Langmuir2014, Schoch_SoftMatter2014}.
\section*{Acknowledgments}
S.H.L.K. would like to thank Gerhard Findenegg for many enjoyable discussions and collaboration within the DFG-funded Collaborative Research Center 448 "Mesoscopically structured composites" and the International Research Training Group~1524 "Self-assembled soft matter nanostructure at interfaces".
\subsection{Motivations}
The Coordinated Attack Problem (also known in the literature as
the two generals or the two armies problem) is a long-standing problem in
the area of distributed computing.
or not on a common enemy that is between them and might capture any of
their messengers.
Informally, it represents the difficulty of agreeing in the
presence of communication faults. The design of a solution is
difficult, sometimes impossible, as it has
to address a possibly infinite chain of lost mutual acknowledgments.
It has important applications for the Distributed Databases commit for
two processes, see \cite{Gray78}. It was one of the first
impossibility results in the area of fault tolerance and distributed
computing \cite{akkoyunlu_constraints_1975,Gray78}.
In the vocabulary of more recent years, this problem can now be stated
as the Uniform Consensus Problem for 2 synchronous processes communicating by
message passing in the presence of omission
faults. It is then a simple instance of a problem that has been very widely
studied \cite{AT99,CBGS00,MR98}.
See for example
\cite{RaynalSynchCons} for a
recent survey about Consensus on synchronous systems with some
emphasis on the omissions fault model.
Moreover, given that the impossibility of reaching an agreement is
obvious if any message can be lost, one may wonder why this problem
is important enough to have a name of its own, and why it has been
studied in the first place.
The idea is that one has usually to restrict the way the
messages are lost in order to keep this problem relevant.
We call an arbitrary pattern of
failure by loss of messages a \emph{message adversary}. It will
formally describe the fault environment in which the system evolves.
For example, the message adversary where any message can be lost at any
round except that all messages cannot be lost indefinitely is a
special message adversary (any possibility of failure \emph{except one}
scenario), for which it is still impossible to solve the
Coordinated Attack Problem, but the proof might be less trivial.
Given a message adversary, a natural question is whether the
Coordinated Attack Problem is solvable against this environment. More
generally, the question that arises now is to describe exactly for
which message adversaries the Coordinated Attack Problem admits a
solution, and for which ones there is no solution. These latter
message adversaries will be called \emph{obstructions}.
\subsection{Related Works}
The Coordinated Attack Problem is a kind of folklore problem for
distributed systems.
It seems to appear first in \cite{akkoyunlu_constraints_1975} where it is a
problem of gangsters plotting for a big job. It is usually attributed to
\cite{Gray78}, where Jim Gray coined the name ``Two Generals Paradox''
and put the emphasis on the infinite recursive need for
acknowledgments in the impossibility proof.
In textbooks it is often given as an example; however, the drastic
conditions under which this impossibility result holds are never
really discussed, even though for relevancy purposes they are often
slightly modified.
In \cite{LynchDA}, a different problem of Consensus (with a
weaker validity condition) is used.
In \cite{DADA}, the possibility of losing every message forever is
explicitly ruled out, as it would otherwise yield a trivial
impossibility proof.
This shows that the way the messages {may} be lost is an important
part of the problem definition,
hence it is interesting to characterize when the pattern of
loss makes the consensus problem solvable, \emph{i.e.}
whether the fault environment is an obstruction for the Coordinated
Attack Problem or not.
To our knowledge, this is the first time this problem has been investigated
for arbitrary patterns of omission failures, even in the simple case
of only two processes.
Most notably, it has been addressed for an arbitrary number
of processes and for special (quite regular) patterns in
\cite{CHLT00,GKP03,RaynalSynchCons}.
A message adversary is oblivious if the set of possible communication
patterns is the same at each step. The complete characterization of
oblivious message adversaries for which Consensus is solvable was
given in \cite{CGPcarac}. Here we also consider non-oblivious
adversaries.
Note that while the model has been around for decades
\cite{timeisnotahealer}, the name ``message adversary'' was
coined only recently by Afek and Gafni in \cite{messadv}.
\subsection{Scope of Application and Contributions}
Impossibility results in distributed computing are all the more
interesting when they are tight, \emph{i.e.} when they give an exact
characterization of when a problem is solvable and when it is not.
There are a lot of results regarding distributed tasks
when the underlying network is a complete graph and the patterns are simply described by faults (namely in the context
of Shared Memory systems), see for example the works in
\cite{HS99,SZ,BoGa93} where exact topologically-based
characterizations are given for the wait-free model.
There are also more recent results where the underlying
network can be an arbitrary graph. The results given in \cite{SW07} by
Santoro and Widmayer are almost tight. It is worth noting that
the general theorems of \cite{HS99} could not be directly used in that study,
for the very reason that the failure model for communication networks
cannot be meaningfully expressed in the fault model of systems with
one-to-one communication.
Following~\cite{CS09}, we are not interested in the exact
cause of a message not being sent or received; we are interested in models that are as general as possible. See Section~\ref{faultmetric} for a more detailed discussion.
We underline that the omission failures we
are studying here encompass networks with \emph{crash failures}, see
Example~\ref{ex:crash}.
It should also be clear that message adversaries can also be studied
in the context of a problem that is not the Consensus Problem.
Moreover, as we do not endorse any pattern of failures as being, say,
more ``realistic'', our technique can be applied to any
new pattern of failures.
\medskip
On the way to a thorough
characterization of all obstructions, we
address the Consensus problem for a particular but important
subclass of failure patterns,
namely the ones where no two messages can be lost in the same round.
It has long been known that the Coordinated Attack Problem is unsolvable if at most one
message may be lost at each round \cite{CHLT00,GKP03}, but what happens with strictly weaker patterns was unknown.
Our contribution is the following. %
In the large subclass of message adversaries where the %
double simultaneous omission can never happen, we characterize which
ones are obstructions for the Coordinated Attack Problem. We give two
alternative proofs of this result. One is based on traditional
combinatorial techniques, extended to the more involved
setting of arbitrary message adversaries. The second presents an
extension of topological techniques suited to arbitrary message adversaries.
The combinatorial proof was
presented in \cite{FG11}. The topological proof is an improved extract
from the Master thesis of one of the authors \cite{eloiM2} where the notion of terminating subdivision from \cite{GKM14} is applied to the setting of the Coordinated Attack Problem.
More interestingly, the topological characterization gives a nice,
unified explanation of the result, which is split into four different
cases in the combinatorial presentation
from \cite{FG11}. This result is a convincing illustration of the
power of topological tools for distributed computability. Topological
tools for distributed computing have sometimes been criticized for being
overly mathematically involved. The result presented in this
paper shows that distributed computability is inherently linked to
topological properties.
But the paper also illustrates some pitfalls of such tools, as we were
able to use the given characterization to uncover an error in the main
theorem of a paper generalizing the Asynchronous Computability Theorem
to arbitrary message adversaries \cite{GKM14}. Mathematically, this
error can be traced to the subtle differences between simplicial
complexes and abstract simplicial complexes of infinite size; see
Remark~\ref{subtle}.
\bigskip
The outline of the paper is the following.
We describe the models and define our problem in Section~\ref{def}. We
present numerous examples of application of our terminology and
notation in Section~\ref{examples}. We then address the
characterization of message adversaries without simultaneous faults
that are obstructions for the Coordinated Attack Problem in
Theorem~\ref{thm:FG11}.
We give the proof of the necessary condition (impossibility result) in
Section~\ref{CN} and of the sufficient condition (explicit algorithm)
in Section~\ref{CS}, using classical bivalency techniques.
We then prove the same results in a topological way in Section~\ref{sec:topo}.
Finally, the two results are compared, and we show how the topological
explanation gives more intuition about the result.
\section{Models and Definitions}
\label{def}
\subsection{The Coordinated Attack Problem}
\subsubsection{A folklore problem}
Two generals have gathered forces on top of two facing hills. In between, in the
valley, their common enemy is entrenched. Every day each general
sends a messenger to the other through the valley. However this
is risky as the enemy may capture them. Now they need to get
the last piece of information: are they \emph{both} ready to attack?
This two-army problem was introduced
in~\cite{akkoyunlu_constraints_1975} and revisited by Gray~\cite{Gray78}
when modeling distributed database commit.
It corresponds to binary consensus with
two processes in the omission model. If there is no restriction on the
fault environment, then any messenger may be captured.
And if any messenger may be captured, then consensus is obviously
impossible: the enemy can succeed in capturing all of them, and
without communication there is no distributed algorithm.
\subsubsection{Possible Environments}
Before trying to single out what may be, in some sense, the most relevant
environments, we describe
several environments in which the enemy is not
powerful enough to capture every messenger.
This is a list of possible environments, using the same military
analogy. The generals' names are \emph{White} and \emph{Black}.
\label{7cases}
\begin{enumerate}[(1)]
\item
no messenger is captured
\item
messengers from \emph{General White} may be captured
\item
messengers from \emph{General Black} may be captured
\item
messengers from one general are at risk, and if one of them is captured,
all the following will also be captured (the enemy got the secret ``Code
of Operations'' for this general from the first captured messenger)
\item
messengers from one general are at risk (the enemy could manage to infiltrate
a spy in one of the armies) %
\item
at most one messenger may be captured each day (the enemy can't closely
watch both armies on the same day)
\item
any messenger may be captured
\end{enumerate}
Which ones are obstructions, and which are not? Among the
obstructions, which are trivial? And what about more complicated
environments?
\subsection{The Binary Consensus Problem}
\label{defconsensus}
A set of synchronous processes wishes to agree on a binary
value. This problem was first identified and formalized by Lamport,
Shostak and Pease \cite{LSP}. Given a set of processes, a consensus
protocol must satisfy the following
properties for any combination of initial values~\cite{LynchDA}:
\begin{itemize}
\item \emph{Termination}: every process decides some value.
\item \emph{Validity}: if all processes initially propose the same value $v$,
then every process decides $v$.
\item \emph{Agreement}: if a process decides $v$, then every process decides
$v$.
\end{itemize}
Consensus with such a termination and decision requirement for every
process is more precisely referred to as the \emph{Uniform Consensus},
see \cite{RaynalSynchCons} for example for a discussion.
Given a fault environment, the natural questions are: is
Consensus solvable? And if so, what is the minimal complexity?
\subsection{The Communication Model}
In this paper,
the system we consider is a set $\Pi$ of only $2$ processes named \emph{white}
and \emph{black}, $\Pi = \{\blanc,\noir\}$.
The processes evolve in \emph{synchronized} rounds.
In each round $r,$ every process \proc executes the following
steps: \proc sends a message to the other process, receives a
message $M$ from the other process, and then updates its state
according to the received message.
The messages sent are, or are not, delivered according to the
environment in which the distributed computation takes place. This
environment is described by a \emph{message adversary}. Such an adversary
models exactly when some
messages sent by a process to the other process may be lost
at some rounds.
We will consider arbitrary message adversaries; they will be represented
as arbitrary sets of infinite sequences of per-round communication patterns.
Following~\cite{CS09}, we are not interested in the exact
cause of a message not being sent or received. We only refer to the phenomenon:
the content of messages being transmitted or not.
The fact that the adversaries are arbitrary means that we do not endorse
any particular metric to count the number of ``failures'' in the
system. Some metrics count the number of lost messages, others
count the number of processes that can lose messages (in send or
receive actions). Other metrics count the same parameters but only
within a single round of the system. The drawback of this metric-centric
approach is that, even when restricted only to omission faults, it
can happen that some results obtained
on complete networks are not usable on arbitrary networks: in,
say, a ring
network, such a metric states in some sense that every node is
faulty. See e.g. \cite{SW07} for a very similar discussion, where
Santoro and Widmayer, trying to solve generalizations of
agreement problems in general networks, could not directly use the
known results for complete networks.
\label{faultmetric}
As said in the introduction, the Coordinated Attack Problem is
nowadays stated as the Uniform Consensus Problem for 2 synchronous
processes communicating by message passing in the presence of omission
faults. Nonetheless, we will use Consensus and Uniform Consensus
interchangeably in this paper. We emphasize that, because we do not
assign omission faults to any process (see previous discussion), there
are only correct processes.
\subsection{Message Adversaries}
\label{subsec:def}
We introduce and present here our notation.
\begin{definition}
We denote by $\mathcal G_2$ the set of directed graphs with vertices
in $\Pi$.
$$\mathcal G_2=\{\lok,\lblanc,\lnoir,\lall\}$$
We denote by $\Gamma$ the following subset of $\mathcal G_2$
$$\Gamma=\{\lok,\lblanc,\lnoir\}.$$
\end{definition}
At a given round, there are only four possible communication patterns.
These elements describe what can happen at a given round, with the
following straightforward semantics:
\begin{itemize}
\item
$\lok,$ no process loses messages
\item
$\lblanc,$ the message of process \blanc, if any, is not transmitted
\item
$\lnoir,$ the message of process \noir, if any, is not transmitted
\item
$\lall,$ both messages, if any, are not transmitted\footnote{In order
to increase the readability, we note \lall
instead of $\blanc\;\;\noir$, the double-omission case.}.
\end{itemize}
The terminology \emph{message adversary} was introduced in \cite{messadv}, but the concept is much older.
\begin{definition}
A \emph{message adversary} over $\Pi$ is a set of infinite
sequences of elements of $\mathcal G_2$.
\end{definition}
We will use the following standard notation to describe our message
adversaries more easily \cite{PPinfinite}. An (infinite) sequence is
seen as an (infinite) word over the alphabet $\mathcal G_2$.
The empty word is denoted by $\varepsilon$.
\begin{definition}
Given $A\subset\mathcal G_2$, $A^*$ is the set of all finite
sequences of elements of $A$, $A^\omega$ is the set
of all infinite ones and $A^\infty = A^* \cup A^\omega.$
\end{definition}
An adversary of the form $A^\omega$ is called an \emph{oblivious adversary}.
A word of a message adversary $L\subset\mathcal G_2^\omega$ is called a \emph{communication
scenario} (or \emph{scenario} for short) of $L$.
A word $w\in\mathcal G_2^*$ is called a \emph{partial scenario}, and
$len(w)$ denotes its length.
Intuitively, the letter at position $r$ of the word describes whether
a message sent at round $r$, if any, is transmitted.
A formal definition of an execution under a scenario will be given in
Section~\ref{execution}.
\medskip
We recall now the definition of the prefix of words and languages.
A word $u\in\mathcal G_2^*$ is a prefix of $w\in\mathcal G_2^*$
(resp. $w'\in\mathcal G_2^\omega$) if there exists $v\in\mathcal G_2^*$
(resp. $v'\in\mathcal G_2^\omega$) such that $w=uv$ (resp. $w'=uv'$).
Given
$w\in\mathcal G_2^\omega$ and $r\in\N$, $w_{|r}$ is the prefix of size $r$ of $w$.
\begin{definition}
Let $w\in\mathcal G_2^*$, then $Pref(w)=\{u\in\mathcal G_2^*|u \mbox{ is a
prefix of } w\}$. Let $L\subset\mathcal G_2^*$, let $r\in\N$,
$Pref_r(L)=\{w_{|r}\mid w\in L\}$ and
$Pref(L)=\mathop\bigcup\limits_{w\in L}Pref(w)=\mathop\bigcup\limits_{r\in \N}Pref_r(L)$.
\end{definition}
\subsection{Examples}
\label{examples}
We do not restrict our study to regular languages; however, all the
message adversaries we are aware of are regular, as can be seen in the
following examples, where rational expressions prove very
convenient.
\medskip
We show how standard fault environments are conveniently described in
our framework. %
\begin{example}
Consider a system where, at each round, up to $2$ messages can be
lost. The associated message adversary is $\mathcal G_2^\omega$.
\end{example}
\begin{example}
Consider a system where, at each round, only one message can be
lost. The associated message adversary is
$\{\lok,\lblanc,\lnoir\}^\omega=\Gamma^\omega$.
\end{example}
\begin{example}
Consider a system where at most one of the processes can lose
messages. The
associated adversary is the following:
$$S_1 = \Sun$$
\end{example}
\begin{example}\label{ex:crash}
Consider a system where at most one of the processes can crash.
From the phenomenological point of view, this is equivalent to the
fact that, at some point, no message from a particular process will be
transmitted. The associated adversary is the following:
$$C_1=\{\lok^\omega\}\cup\{\lok\}^*(\{\lblanc^\omega, \lnoir^\omega\})$$
\end{example}
\begin{example} \label{ex7cases}
Finally, the seven environments described in Section~\ref{7cases} are
formally written as follows:
\begin{eqnarray}
\label{S_0}
S_0&=&\Szero \\
\label{T_blanc}
T_\blanc&=&\Tblanc \\
\label{T_noir}
T_\noir&=&\Tnoir \\
\label{C_1}
C_1&=&\{\lok^\omega\}\cup\{\lok\}^*(\{\lblanc^\omega, \lnoir^\omega\})\\
\label{S_1}
S_1&=&\Sun = T_\blanc \cup T_\noir \\
\label{R_1}
R_1&=&\Go \\%
\label{S_2}
S_2&=&\mathcal G_2^\omega
\end{eqnarray}
Note that the fourth and first cases correspond respectively to the
synchronous crash-prone model~\cite{LynchDA} and to the
$1$-resilient model~\cite{Adagio03,GKP03}. Even though, by our definition, sets of possible scenarios may be arbitrary,
it seems that all standard models (including crash-based models) can be described using only regular expressions.
\end{example}
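These descriptions are easy to turn into executable membership tests on finite prefixes. Below is a small Python sketch (the string encoding of the letters and the function names are our own, not from the paper): a prefix is \emph{consistent} with an adversary if it is the prefix of at least one of its scenarios.

```python
# A round is one of "ok" (no loss), "w" (White's message lost),
# "b" (Black's message lost).  Double omissions are not used here.

def consistent_with_S1(prefix):
    """S_1 = {ok,w}^omega U {ok,b}^omega: at most one process loses messages."""
    return set(prefix) <= {"ok", "w"} or set(prefix) <= {"ok", "b"}

def consistent_with_C1(prefix):
    """C_1 = ok^omega U ok^*(w^omega U b^omega): at most one crash."""
    # After the first lost message, every later round must lose the
    # same process's message (the crashed process stays silent).
    faults = [a for a in prefix if a != "ok"]
    if not faults:
        return True
    first = prefix.index(faults[0])
    return all(a == faults[0] for a in prefix[first:])

# A prefix losing White's message at round 1 and Black's at round 2
# fits neither adversary:
print(consistent_with_S1(["w", "b"]))        # False
print(consistent_with_C1(["w", "b"]))        # False
print(consistent_with_C1(["ok", "b", "b"]))  # True
```

Such prefix tests are enough to compare adversaries round by round, which is exactly how the lower bounds of Section~\ref{backto} are obtained.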
\subsection{Execution of a Distributed Algorithm}
\label{execution}
Given a message adversary $L$, we define a run of
a given algorithm \algo subject to $L$.
An execution, or run, of an algorithm \algo under a scenario $w\in L$
is the following. At each round $r\in\N$, messages are sent (or not) by the
processes. Whether the corresponding receive actions succeed depends
on $a$, the $r$-th letter of $w$:
\begin{itemize}
\item if $a = \lok,$ then all messages, if any, are correctly delivered,
\item if $a = \lblanc,$ then the message of process \blanc is not
transmitted (the receive call of \noir, if any at this round,
returns $null$),
\item if $a = \lnoir,$ then the message of process \noir is not
  transmitted (the receive call of \blanc, if any at this round,
  returns $null$),
\item if $a = \lall,$ no message is transmitted.
\end{itemize}
Then, both processes update their states according to \algo and the
values received.
An execution is a (possibly infinite) sequence of such
messages exchanges and corresponding local states.
\medskip
Given $u\in Pref(w)$, we denote by $s^\proc(u)$ the
state of process \proc at the $len(u)$-th round of the algorithm \algo
under scenario $w$. This means in particular that
$s^\proc(\varepsilon)$ represents the initial state of \proc, where
$\varepsilon$ denotes the empty word.
Finally and classically,
\begin{definition}
An algorithm \algo solves the Coordinated Attack Problem for the
message adversary $L$ if for any
scenario $w\in L$, there exists $u\in Pref(w)$ such that the states
of the two processes ($s^\blanc(u)$ and $s^\noir(u)$) satisfy the
three conditions of Section~\ref{defconsensus}.
\end{definition}
\begin{definition}
A message adversary $L$ is said to be \emph{solvable} if there exists an
algorithm that solves the Coordinated Attack Problem for $L$. It
is said to be an \emph{obstruction} otherwise.
An obstruction $L$ is \emph{(inclusion-)minimal} if every
$L'\varsubsetneq L$ is solvable.
\end{definition}
\subsection{Index of a Scenario}
\label{sub:ind}
We will use the following integer function on scenarios, which will
prove to be a useful encoding of all the important properties of
a given message adversary, mapping partial scenarios in $\Gamma^r$ to
integers in $[0, 3^r-1]$. The intuition behind this function will become
clearer from the topological point of view in Section~\ref{sec:topo}.
By induction, we define the following integer index for
$w\in\Gamma^*$. First, we define $\mu$ on $\Gamma$ by
\begin{itemize}
\item $\mu(\lnoir)=-1$,
\item $\mu(\lok)=0$,
\item $\mu(\lblanc)=1$.
\end{itemize}
\begin{definition}
Let $w\in\Gamma^*$. We define $ind(\varepsilon)=0$.
If $len(w)\geq1$, then we have $w=ua$ where $u\in\Gamma^*$ and
$a\in\Gamma$. In this case, we define
$$ind(w):=3ind(u)+(-1)^{ind(u)}\mu(a)+1.$$
\end{definition}
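This recursion is straightforward to implement. A small Python sketch (encoding the letters by their $\mu$ values is our own convention) reproduces the indexes of Figure~\ref{fig:index}:

```python
# Letters of Gamma encoded by their mu values.
B, OK, W = -1, 0, 1   # \lnoir, \lok, \lblanc

def ind(word):
    """Index of a partial scenario, following the inductive definition."""
    i = 0
    for a in word:
        sign = 1 if i % 2 == 0 else -1   # (-1)^{ind(u)}
        i = 3 * i + sign * a + 1
    return i

print(ind([B]), ind([OK]), ind([W]))              # 0 1 2
print(ind([OK, B]), ind([OK, OK]), ind([OK, W]))  # 5 4 3
```

In particular, one can check numerically that $ind(\lnoir^r)=0$ and $ind(\lblanc^r)=3^r-1$ for every $r$.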
Let $n\in\N$. Define $ind_n\colon\Gamma^n\to[0,1]$ and $\overline{ind}\colon\Go\to[0,1]$ to
be, respectively, the normalization of $ind$ and the limit index of
infinite scenarios:
\begin{itemize}
\item $\forall w\in\Gamma^n \quad ind_n(w) = \frac{ind(w)}{3^{n}}$
\item $\forall w\in\Go \quad \overline{ind}(w) = \lim\limits_{n\to+\infty} ind_n(w_{\mid n})$
\end{itemize}
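Numerically, the normalized indexes settle quickly. For instance, $ind(\lok^n)=(3^n-1)/2$, so $\overline{ind}(\lok^\omega)=1/2$: the all-delivered scenario sits exactly in the middle of $[0,1]$. A self-contained Python check (with our own integer encoding of the letters):

```python
B, OK, W = -1, 0, 1   # letters of Gamma encoded by their mu values

def ind(word):
    i = 0
    for a in word:
        i = 3 * i + (1 if i % 2 == 0 else -1) * a + 1
    return i

# ind_n(ok^n) = ind(ok^n) / 3^n converges to 1/2; consecutive
# normalized indexes differ by at most 2 / 3^(n+1), so the limit
# always exists (the sequence is Cauchy).
for n in (1, 2, 5, 10):
    print(n, ind([OK] * n) / 3 ** n)
```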
The convergence of $\overline{ind}$ is straightforward: by definition of $ind$, $|ind_{n+1}(w_{\mid n+1})-ind_n(w_{\mid n})|\leq 2\cdot3^{-(n+1)}$, so the sequence is Cauchy. The following lemma shows that $ind$ is, moreover, a bijection at every length.
\begin{lemma}\label{bijindex}
Let $r\in\N$. The application $ind$ is a bijection from $\Gamma^r$
to $\llbracket 0,3^r-1\rrbracket.$
\end{lemma}
\begin{proof}
The lemma is proved by a simple induction. If $r=0$, then the
property holds.
Let $r>0$. Suppose the property is satisfied for $r-1$. Given
$w\in\Gamma^r$, we have $w=ua$ with $u\in\Gamma^{r-1}$ and
$a\in\Gamma$. From $ind(w)=3ind(u)+(-1)^{ind(u)}\mu(a)+1$ and
the induction hypothesis, we get immediately that $0\leq ind(w)\leq
3^r - 1$.
Now, we only need to prove injectivity. Suppose there are
$w,w'\in\Gamma^r$ such that $ind(w)=ind(w').$ So there are
$u,u'\in \Gamma^{r-1}$ and $a,a'\in \Gamma$ such that
$$3ind(u)+(-1)^{ind(u)}\mu(a)+1 =
3ind(u')+(-1)^{ind(u')}\mu(a')+1.$$
Then $$3(ind(u) - ind(u')) = (-1)^{ind(u')}\mu(a') -
(-1)^{ind(u)}\mu(a).$$
Remarking that the right hand side of this integer equality has an
absolute value that can be at most 2, we finally get
\begin{eqnarray*}
ind(u) &=& ind(u')\\
(-1)^{ind(u)}\mu(a) &=& (-1)^{ind(u')}\mu(a')
\end{eqnarray*}
By induction hypothesis, we get that $u=u'$ and $a=a'$. Hence
$w=w'$, and $ind$ is injective, therefore bijective from $\Gamma^r$
onto $\llbracket 0,3^r-1\rrbracket.$
\end{proof}
Two easy calculations give
\begin{proposition}
Let $r\in\N$, $ind(\lnoir^r)=0$ and $ind(\lblanc^r)=3^r-1.$
\end{proposition}
In Figure~\ref{fig:index}, the indexes for words of length at most 2
are given.
\begin{figure}
\centering
\begin{tabular}[c]{|r|c|c|c|}
\hline
word of length $1$&\lnoir&\lok&\lblanc\\
\hline
index&0&1&2\\
\hline
\end{tabular}
\medskip
\begin{tabular}[c]{|r|c|c|c|}
\hline
word of length $2$&\lnoir\lnoir&\lnoir\lok&\lnoir\lblanc\\
\hline
index&0&1&2\\
\hline
\end{tabular}
\medskip
\begin{tabular}[c]{|r|c|c|c|}
\hline
word of length $2$&\lok\lnoir&\lok\lok&\lok\lblanc\\
\hline
index&5&4&3\\
\hline
\end{tabular}
\medskip
\begin{tabular}[c]{|r|c|c|c|}
\hline
word of length $2$&\lblanc\lnoir&\lblanc\lok&\lblanc\lblanc\\
\hline
index&6&7&8\\
\hline
\end{tabular}
\caption{Indexes for some short words}
\label{fig:index}
\end{figure}
We now describe precisely the words whose indexes
differ by exactly 1. There are two cases: either they share the same prefix and differ in their last letter,
or they have different prefixes and the same last letter.
\begin{lemma}\label{diff1}
  Let $r\in\N$, and $v,v'\in\Gamma^r$. Then $ind(v')=ind(v)+1$ if
  and only if one of the following conditions holds:
  \begin{theoenum}
  \item $ind(v)$ is even and
    \begin{itemize}
    \item either there exists $u\in\Gamma^{r-1}$ such that
      $\{v,v'\}=\{u\lnoir,u\lok\}$ ($v=u\lnoir$, $v'=u\lok$ when
      $ind(u)$ is even, and $v=u\lok$, $v'=u\lnoir$ when $ind(u)$ is odd),
    \item or there exist $u,u'\in\Gamma^{r-1}$ such that $v=u\lblanc$,
      $v'=u'\lblanc$, and $ind(u')=ind(u)+1.$
    \end{itemize}
  \item $ind(v)$ is odd and
    \begin{itemize}
    \item either there exists $u\in\Gamma^{r-1}$ such that
      $\{v,v'\}=\{u\lok,u\lblanc\}$ ($v=u\lok$, $v'=u\lblanc$ when
      $ind(u)$ is even, and $v=u\lblanc$, $v'=u\lok$ when $ind(u)$ is odd),
    \item or there exist $u,u'\in\Gamma^{r-1}$ such that $v=u\lnoir$,
      $v'=u'\lnoir$, and $ind(u')=ind(u)+1.$
    \end{itemize}
  \end{theoenum}
\end{lemma}
\begin{proof}
  The lemma is proved by induction. If $r=0$, then the
  property holds vacuously. Let $r\in\N^*$ and suppose the property is
  satisfied for $r-1$.
  Suppose there are
  $w,w'\in\Gamma^r$ such that $ind(w')=ind(w)+1.$ So there are
  $u,u'\in \Gamma^{r-1}$ and $a,a'\in \Gamma$ such that
  $$3ind(u)+(-1)^{ind(u)}\mu(a)+2 =
  3ind(u')+(-1)^{ind(u')}\mu(a')+1.$$
  Then $$3(ind(u) - ind(u')) = (-1)^{ind(u')}\mu(a') -
  (-1)^{ind(u)}\mu(a) - 1.$$
  Remarking again that this is an integer equality whose right-hand
  side lies between $-3$ and $1$, we then have
  \begin{itemize}
  \item either $ind(u)=ind(u')$ and $(-1)^{ind(u')}\mu(a') =
    (-1)^{ind(u)}\mu(a) + 1$,
  \item or $ind(u')=ind(u)+1$ and $(-1)^{ind(u')}\mu(a') =
    (-1)^{ind(u)}\mu(a)-2.$
  \end{itemize}
  This yields the following cases:
  \begin{enumerate}
  \item $ind(u)=ind(u')$ is even, $a=\lnoir$ and $a'=\lok$ (then
    $ind(w)=3ind(u)$ is even),
  \item $ind(u)=ind(u')$ is even, $a=\lok$ and $a'=\lblanc$ (then
    $ind(w)=3ind(u)+1$ is odd),
  \item $ind(u)=ind(u')$ is odd, $a=\lok$ and $a'=\lnoir$ (then
    $ind(w)=3ind(u)+1$ is even),
  \item $ind(u)=ind(u')$ is odd, $a=\lblanc$ and $a'=\lok$ (then
    $ind(w)=3ind(u)$ is odd),
  \item $ind(u')=ind(u)+1$, $ind(u)$ is even, $a=\lblanc=a'$ (then
    $ind(w)=3ind(u)+2$ is even),
  \item $ind(u')=ind(u)+1$, $ind(u)$ is odd, $a=\lnoir=a'$ (then
    $ind(w)=3ind(u)+2$ is odd).
  \end{enumerate}
  Putting all the pieces together, and checking conversely that each
  case does yield $ind(w')=ind(w)+1$, we get the statement of the lemma.
\end{proof}
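The case analysis can also be cross-checked by brute force on short words. The following Python sketch (with our own integer encoding of the letters) verifies, for every length up to 5, that $ind$ is a bijection (Lemma~\ref{bijindex}) and checks how two words with consecutive indexes are related: either they share their prefix, with last letters taken from $\{\lok,\lnoir\}$ when the smaller index is even and from $\{\lok,\lblanc\}$ when it is odd, or they share their last letter and have prefixes of consecutive indexes.

```python
from itertools import product

B, OK, W = -1, 0, 1   # \lnoir, \lok, \lblanc encoded by their mu values

def ind(word):
    i = 0
    for a in word:
        i = 3 * i + (1 if i % 2 == 0 else -1) * a + 1
    return i

for r in range(1, 6):
    by_ind = {ind(w): w for w in product((B, OK, W), repeat=r)}
    assert len(by_ind) == 3 ** r                  # ind is a bijection
    for i in range(3 ** r - 1):
        v, v2 = by_ind[i], by_ind[i + 1]
        if i % 2 == 0:  # even index: last letters involve Black's message
            assert (v[:-1] == v2[:-1] and {v[-1], v2[-1]} == {OK, B}) \
                or (v[-1] == v2[-1] == W and ind(v2[:-1]) == ind(v[:-1]) + 1)
        else:           # odd index: last letters involve White's message
            assert (v[:-1] == v2[:-1] and {v[-1], v2[-1]} == {OK, W}) \
                or (v[-1] == v2[-1] == B and ind(v2[:-1]) == ind(v[:-1]) + 1)
print("checked all lengths up to 5")
```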
Given an algorithm \algo, we have the following fundamental corollary,
which makes explicit the uncertainty a process can experience between
two executions whose indexes differ by only 1: one of the processes is
in the same state in both cases. Which process it is depends on the
parity: when the first index is even, \noir cannot distinguish the two
executions; when it is odd, it is \blanc that cannot distinguish them.
\begin{corollary}\label{state}
Let $v,v'\in\Gamma^r$ such that $ind(v')=ind(v)+1$. Then,
\begin{theoenum}
\item if $ind(v)$ is even then $s^\noir(v)=s^\noir(v')$,
\item if $ind(v)$ is odd then $s^\blanc(v)=s^\blanc(v')$.
\end{theoenum}
\end{corollary}
\begin{proof}
  The result follows from Lemma~\ref{diff1} by induction on $r$. In
  the same-prefix cases, $v$ and $v'$ differ only in whether the last
  message of the considered process itself reaches the other one; the
  considered process therefore receives the same messages in both
  executions and ends in the same state. In the same-last-letter
  cases, the considered process receives no message at the last round
  in either execution (the other process's message is lost in both),
  and by the induction hypothesis it was already in the same state
  after $u$ and $u'$.
\end{proof}
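Corollary~\ref{state} can also be checked empirically by simulating a full-information protocol, in which each process's state is simply everything it has received so far (the state of any algorithm is a function of this view). A Python sketch with our own encoding of the letters:

```python
from itertools import product

B, OK, W = -1, 0, 1   # \lnoir, \lok, \lblanc encoded by their mu values

def ind(word):
    i = 0
    for a in word:
        i = 3 * i + (1 if i % 2 == 0 else -1) * a + 1
    return i

def views(word):
    """Full-information states of (white, black) after a partial scenario."""
    white, black = ("w0",), ("b0",)              # initial states
    for a in word:
        msg_from_white = white if a != W else None   # lost when a = \lblanc
        msg_from_black = black if a != B else None   # lost when a = \lnoir
        white, black = (white, msg_from_black), (black, msg_from_white)
    return white, black

for r in range(1, 7):
    by_ind = {ind(w): w for w in product((B, OK, W), repeat=r)}
    for i in range(3 ** r - 1):
        white1, black1 = views(by_ind[i])
        white2, black2 = views(by_ind[i + 1])
        if i % 2 == 0:
            assert black1 == black2   # \noir cannot distinguish
        else:
            assert white1 == white2   # \blanc cannot distinguish
```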
\subsection{Characterization}
\label{sub:comb_charac}
We prove that a message adversary $L\subset\Gamma^\omega$ is solvable
if and only if it excludes some fair scenario, or one of
$\lnoir^\omega,\lblanc^\omega$, or both elements of some special pair
of unfair scenarios. We define the following notions to help describe
these special unfair pairs.
\begin{definition}
A scenario $w\in\mathcal G_2^\omega$ is \emph{unfair} if
$w\in\mathcal G_2^*(\{\lall,\lblanc\}^\omega\cup\{\lall,\lnoir\}^\omega)$.
The set of fair scenarios of $\Gamma^\omega$ is denoted by
$Fair(\Gamma^\omega)$.
\end{definition}
In words, in an \emph{unfair} scenario, there is a process whose
messages are all lost from some round on; in a \emph{fair}
scenario, infinitely many messages from each process are delivered.
\begin{definition}\label{pair} We define \emph{special pairs}
as $SPair(\Gamma^\omega)=\{(w,w')\in\Gamma^\omega\times\Gamma^\omega
  \mid w\neq w' \mbox{ and }
  \forall r\in\N,\ |ind(w_{\mid r})-ind(w'_{\mid r})|\leq1\}.$
\end{definition}
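For example, the pair $(\lnoir\lblanc^\omega,\lok\lblanc^\omega)$ is special: the indexes of the two prefixes stay at distance exactly $1$ at every round. A quick Python check on prefixes (with our own integer encoding of the letters):

```python
B, OK, W = -1, 0, 1   # \lnoir, \lok, \lblanc encoded by their mu values

def ind(word):
    i = 0
    for a in word:
        i = 3 * i + (1 if i % 2 == 0 else -1) * a + 1
    return i

# Prefixes of the two scenarios black.white^omega and ok.white^omega:
# their indexes remain consecutive at every length.
for r in range(1, 20):
    low = [B] + [W] * (r - 1)
    high = [OK] + [W] * (r - 1)
    assert ind(high) - ind(low) == 1
```

Both scenarios of this pair are unfair (all of \blanc's messages are eventually lost), matching the role special pairs play in the characterization below.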
\begin{theorem}\label{thm:FG11}
  Let $L\subset\Gamma^\omega$, then Consensus is solvable for message
  adversary $L$ if and only if
  $L\in\F_1\cup\F_2\cup\F_3\cup\F_4$ where
  \begin{theoenum}
  \item $\F_1 = \{L\subset\Go \mid \exists f\in Fair(\Go),\ f\notin L\}$
  \item $\F_2 = \{L\subset\Go \mid \exists (w,w')\in SPair(\Go),\
    w\notin L \wedge w'\notin L\}$
  \item $\F_3 = \{L\subset\Go \mid \lnoir^\omega\notin L\}$
  \item $\F_4 = \{L\subset\Go \mid \lblanc^\omega\notin L\}$
%
%
%
%
\end{theoenum}
\end{theorem}
We have split the set of solvable adversaries into four families for
better readability, even though they clearly overlap.
$\F_1$ contains every adversary in which at least one fair scenario cannot
occur; in $\F_2$, \emph{both} elements of some special pair cannot occur;
finally, $\F_3$ and $\F_4$ contain the adversaries in which, in every
scenario, at least one message from \noir (respectively \blanc) is
delivered.
We present two proofs of Theorem~\ref{thm:FG11} in the following
sections.
\subsection{Application to the Coordinated Attack Problem}
\label{backto}
We now consider our question for the seven environments of
Example~\ref{ex7cases}.
Solvability is obvious for the first and last cases.
In the first three cases, $S_0$, $\Tblanc$ and $\Tnoir$,
Consensus can be reached in one day (by deciding the initial value of \blanc, \noir, and \blanc respectively).
The fourth and fifth
cases are a bit more difficult but within reach of
Theorem~\ref{thm:FG11}: the scenario
$\lblanc\lnoir\lok^\omega$ is a fair scenario that belongs neither to
$C_1$ nor to $S_1$. Therefore, those cases are solvable as well.
In the last
case, consensus cannot be achieved~\cite{Gray78,LynchDA}, as said before.
The following observation is also a way to derive lower bounds from computability results.
\begin{proposition}
\label{complexity}
  Let $O\subset \Gamma^\omega$ be an obstruction for the Consensus
  problem, and let $L\subset \Gamma^\omega$ and $r\in\N$ be such that
  $Pref_r(O)\subset Pref_r(L)$. Then Consensus cannot be solved in $L$
  within $r$ rounds.
\end{proposition}
Indeed, every prefix of length $r$ of a scenario of $O$ is also a
prefix of a scenario of $L$. Since the decisions of an algorithm
running for at most $r$ rounds depend only on these prefixes, an
algorithm solving Consensus in $L$ within $r$ rounds would also solve
it in $O$, a contradiction.
Now, using Proposition~\ref{complexity} with $O=\Gamma^\omega$ and
$r=1$ for the fourth and fifth cases of Example~\ref{ex7cases} (their
first rounds are exactly the same as those of $\Gamma^\omega$) yields
the following summary:
\begin{enumerate}
\item $S_0$ is solvable in $1$ round,
\item $T_\blanc$ is solvable in $1$ round,
\item $T_\noir$ is solvable in $1$ round,
\item $C_1$ is solvable in exactly $2$ rounds,
\item $S_1$ is solvable in exactly $2$ rounds.
\end{enumerate}
\subsection{About Minimal Obstructions}
Theorem~\ref{thm:FG11} shows that, even in the simpler subclass where
no double omission is permitted, the structure of obstructions is not
simple. Indeed, there exists a sequence of unfair scenarios
$(u_i)_{i\in\N}$ such that, for all $i,j$, $(u_i,u_j)$ is not a
special pair. Therefore $L_n=\Gamma^\omega \setminus
\{u_i\mid 0\leq i\leq n\}$ defines an infinite
strictly decreasing sequence of obstructions for the Coordinated Attack
Problem.
Considering the set of words in $SPair(\Gamma^\omega)$,
it is possible, by picking only one member of each special pair,
to obtain an infinite set $U$ of
unfair scenarios such that, by
Theorem~\ref{thm:FG11}, the adversary
$\Gamma^\omega\setminus U$ is a minimal obstruction. Since the choice
of which member of each pair to remove is arbitrary, there is no
minimum obstruction.
As a partial conclusion, we note that the well-known adversary
$\Gamma^\omega$, while not formally a minimal obstruction, can
informally be considered the smallest
example of a \emph{simple obstruction}, as it is far more
straightforward to describe than, say, the adversaries
$\Gamma^\omega\setminus U$ above.
This probably explains why
the other obstructions we present here had never been investigated
before.
\section{Combinatorial Characterization of Solvable Adversaries}
In this part, we consider the adversaries without double omission,
that is, the adversaries $L\subset\Gamma^\omega$, and we characterize
exactly which ones are solvable. When consensus is achievable, we also
give an effective algorithm. In this section, we rely on the classical
combinatorial bivalency technique of \cite{FLP85}.
\subsection{Necessary Condition: a Bivalency Impossibility Proof}
\label{CN}
We will use a standard, self-contained bivalency proof technique.
We suppose from now on that there is an algorithm \algo solving
Consensus on $L$, and we proceed by contradiction: we suppose that
none of the conditions of Theorem~\ref{thm:FG11} holds, \emph{i.e.} that
$Fair(\Gamma^\omega)\cup
\{\lnoir^\omega,\lblanc^\omega\} \subset L,$
and that for all $(w,w')\in SPair(\Gamma^\omega), w\notin L
\Longrightarrow w'\in L.$
\begin{definition}
Given an initial configuration, let $v\in Pref(L)$ and
$i\in\{0,1\}$. The partial scenario $v$ is
said to be \emph{$i$-valent} if, for every scenario $w\in L$ such that $v\in
Pref(w)$, \algo decides $i$ at the end. If $v$ is neither $0$-valent nor
$1$-valent, then it is said to be \emph{bivalent}.
\end{definition}
By hypothesis, $\lblanc^\omega,\lnoir^\omega\in L$. By the Validity
property, when both processes start with input $0$, \algo terminates
and outputs $0$ under any scenario; similarly, both processes output
$1$ when both start with input $1$.
From now on, we consider the initial configuration $I$: $0$ on process
\blanc and $1$ on process \noir. Under scenario $\lnoir^\omega$,
\blanc receives no message and thus cannot distinguish $I$ from the
all-$0$ configuration; hence \algo outputs $0$ under scenario
$\lnoir^\omega$. Similarly, under scenario $\lblanc^\omega$, \noir
cannot distinguish $I$ from the all-$1$ configuration; hence \algo
outputs $1$ under that scenario.
Hence $\varepsilon$ is bivalent for initial configuration $I$. In the
following, valency will always be implicitly defined with respect to
the initial configuration $I$.
\begin{definition}
Let $v\in Pref(L)$, $v$ is \emph{decisive} if
\begin{theoenum}
\item $v$ is bivalent,
\item For any $a\in\Gamma$ such that $va\in Pref(L)$, $va$ is not
bivalent.
\end{theoenum}
\end{definition}
\begin{lemma}
There exists a decisive $v\in Pref(L).$
\end{lemma}
\begin{proof}
Suppose this is not true. Then, as $\varepsilon$ is bivalent, it is
possible to construct $w\in\Gamma^\omega$ such that, $Pref(w)\subset
Pref(L)$ and for any $v\in Pref(w)$, $v$ is bivalent. Bivalency for
a given $v$ means in particular that the algorithm \algo has not
stopped yet. Therefore $w\notin L$.
This means that $w$ is unfair, since, by the initial assumption, $Fair(\Gamma^\omega)\subset L$. Hence,
w.l.o.g.\ there is $u\in\Gamma^+$ such that $w=u\lblanc^\omega$ and
$ind(u)$ is even. We set
$w'=ind^{-1}(ind(u)-1)\lblanc^\omega$.
The couple $(w,w')$ is a special pair. Therefore $w'$ belongs to
$L$, so \algo halts at some round $r_0$ under scenario $w'$.
By Corollary~\ref{state}, $s_\noir(w|_{r_0})=s_\noir(w'|_{r_0})$.
This means that $w|_{r_0}$ is not bivalent. As $w|_{r_0}\in Pref(L)$
this gives a contradiction.
\end{proof}
We can now end the impossibility proof.
\begin{proof}
Consider a decisive $v\in Pref(L)$. By definition, there
exist $a,b\in\Gamma$, with $a\neq b$, such that $va,vb\in
Pref(L)$, $va$ and $vb$ are not bivalent, and they have different
valencies. Clearly, some extension has a valency different from that of $v\lok$,
so w.l.o.g.\ we choose $b$ to be $\lok$. Therefore $a=\lblanc$ or $a=\lnoir$.
We conclude with a case-by-case analysis.
Suppose that $a=\lblanc$ and that $ind(v)$ is even.
By Corollary~\ref{state}, \noir is in the same state
after $va$ and $vb$. Consider the scenarios $va\lblanc^\omega$
and $vb\lblanc^\omega$: they form a special pair. So at least one of them belongs to $L$
by hypothesis, and both processes must halt at some point under
this scenario. However, the state of \noir is always the same
under the two scenarios, because it receives no message at all;
so if it halts and decides some value, we get
a contradiction with the different valencies.
The other cases are treated similarly, using the other cases of Corollary~\ref{state}.
\end{proof}
\subsection{A Consensus Algorithm}
\label{CS}
Given a word $w$ in $\Gamma^\omega$, we define the following algorithm
$\algo_w$ (see Algorithm~\ref{consalgo}).
Its messages are all of the same type: they
have two components, the first one being the initial bit, named
$init$, and the second an integer named $ind$. Given a message $msg$,
we denote by $msg.init$ (resp.\ $msg.ind$) the first (resp.\ the second)
component of the message.
\begin{algorithm}
\KwData{$w\in\Gamma^\omega$}
\KwIn{$init\in\{0,1\}$}
r=0\;
initother=null\;
\eIf{$\proc=\blanc$}{ind=0\;}{ind=1\;}
\While{$|ind - ind(w|_r)| \leq 2$}{
msg = (init,ind)\;
send(msg)\;
msg = receive()\;
\eIf(// message was lost){msg == null}
{$ind = 3*ind$\;}
{$ind = 2*msg.ind+ind$\;
initother = msg.init\;}
r=r+1\;
}
\eIf{$\proc=\blanc$}{%
\eIf{$ind < ind(w|_r)$}
{\KwOut{init}}
{\KwOut{initother}}
}{
\eIf{$ind > ind(w|_r)$}
{\KwOut{init}}
{\KwOut{initother}}
}
\caption{\label{consalgo}Consensus Algorithm $\algo_w$ for Process \proc:}
\end{algorithm}
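To make the update rule concrete, the following Python sketch (ours, not part of the formal development) simulates one round of the $ind$ updates for both processes at once. We encode the letters $\lok$, $\lblanc$ and $\lnoir$ by the strings "ok", "w" and "b", assuming, as in the proof of Proposition~\ref{indexes} below, that $\lblanc$ (resp.\ $\lnoir$) means that the message from \blanc (resp.\ \noir) is lost.

```python
# One round of the ind updates in Algorithm A_w, seen globally.
# Letters: "ok" = both messages delivered, "w" = white's message lost,
# "b" = black's message lost (encoding assumed from the proofs below).

def round_step(ind_white, ind_black, letter):
    """Return the pair of ind values after one round under `letter`."""
    # white receives black's message unless it is lost (letter "b")
    new_white = 3 * ind_white if letter == "b" else 2 * ind_black + ind_white
    # black receives white's message unless it is lost (letter "w")
    new_black = 3 * ind_black if letter == "w" else 2 * ind_white + ind_black
    return new_white, new_black

# Initially white has ind = 0 and black has ind = 1; one "ok" round:
print(round_step(0, 1, "ok"))  # (2, 1)
```

Each process of course only updates its own component; the global view is convenient for checking the invariants of the next proposition.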
We prove that the $ind$ values
computed by the two processes in the algorithm differ by exactly one.
Moreover, we show that the actual index is equal to the minimum of the two $ind$ values.
More precisely, with $sign(n)$ being $+1$ (resp. $-1$) when $n\in\Z$ is
positive (resp. negative), we have
\begin{proposition}\label{indexes}
For any round $r$ of an execution of Algorithm $\algo_w$ under
scenario $v\in\Gamma^{r}$, such that no process has already halted,
$$\left\{\mbox{
\begin{minipage}[c]{0.6\linewidth}
$|ind^\noir_r-ind^\blanc_r| = 1$,\\
$sign(ind^\noir_r-ind^\blanc_r) = (-1)^{ind(v)}$,\\
$ind(v) = \min\{ind^\blanc_r,ind^\noir_r\}.$
\end{minipage}}\right.
$$
\end{proposition}
\begin{proof}
We prove the result by induction over $r\in\N$.
For $r=0$, the equations are satisfied.
Suppose the property is true for $r-1$, and consider one more round of the
algorithm. Let $u\in\Gamma^{r-1}$ and $a\in\Gamma$, and consider
an execution under scenario $v=ua$. There are
exactly three cases to consider.
Suppose $a=\lok$. Then it means both messages are received and
$ind^\blanc_r=2ind^\noir_{r-1}+ind^\blanc_{r-1}$ and
$ind^\noir_r=2ind^\blanc_{r-1}+ind^\noir_{r-1}.$ Hence
$ind^\noir_r-ind^\blanc_r = ind^\blanc_{r-1}-ind^\noir_{r-1}$.
Thus, using the recurrence property, we get
$|ind^\noir_r-ind^\blanc_r| = 1.$
Moreover, by construction
\begin{eqnarray*}
ind(v)&=&ind(ua)\\
&=&3ind(u)+(-1)^{ind(u)}\mu(\lok)+1\\
&=&3ind(u)+1
\end{eqnarray*}
The two indices $ind(u)$ and $ind(v)$ are therefore of
opposite parity. Hence, by induction property,
$sign(ind^\noir_r-ind^\blanc_r) = (-1)^{ind(v)}$.
And remarking that $\min\{ind^\blanc_r,ind^\noir_r\} =$
\begin{eqnarray*}
&&
\min\{2ind^\noir_{r-1}+ind^\blanc_{r-1},2ind^\blanc_{r-1}+ind^\noir_{r-1}\}\\
&=&ind^\noir_{r-1}+ind^\blanc_{r-1}+\min\{ind^\blanc_{r-1},ind^\noir_{r-1}\}\\
&=&2\min\{ind^\blanc_{r-1},ind^\noir_{r-1}\}+|ind^\noir_{r-1}-ind^\blanc_{r-1}|\\
& & +\min\{ind^\blanc_{r-1},ind^\noir_{r-1}\}\\
&=&3\min\{ind^\blanc_{r-1},ind^\noir_{r-1}\}+1
\end{eqnarray*}
we get that the third equality also holds at round $r$.
Consider now the case $a=\lnoir$. Then \blanc gets no message from
\noir and \noir gets a message from \blanc. So we have that
$ind^\blanc_r=3ind^\blanc_{r-1}$ and
$ind^\noir_r=2ind^\blanc_{r-1}+ind^\noir_{r-1}.$
So we have that $ind^\noir_r-ind^\blanc_r = ind^\noir_{r-1}-ind^\blanc_{r-1}$.
The first equality is satisfied.
We also have that $ind(v)=3ind(u)+(-1)^{ind(u)}\mu(\lnoir)+1=
3ind(u)+\alpha,$ with $\alpha=0$ if $ind(u)$ is even and $\alpha=2$
otherwise. Hence, $ind(u)$ and $ind(v)$ are of the same parity, and
the second equality is also satisfied.
Finally, we have that $\min\{ind^\blanc_r,ind^\noir_r\}=
\min\{3ind^\blanc_{r-1},2ind^\blanc_{r-1}+ind^\noir_{r-1}\}$. If
$ind(u)$ is even, then $ind(u)=ind^\blanc_{r-1}$ and the equality
holds. If $ind(u)$ is odd, then $ind(u)=ind^\noir_{r-1}$ and the
equality also holds.
The case $a=\lblanc$ is symmetric and is proved similarly.
\end{proof}
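As a sanity check, the proposition can be verified exhaustively on short scenarios. The sketch below is ours: the values $\mu(\lok)=0$, $\mu(\lnoir)=-1$ and $\mu(\lblanc)=1$ are reconstructed from the recurrences used in the proof, and the letters are encoded as "ok", "b" and "w" as before.

```python
from itertools import product

MU = {"ok": 0, "b": -1, "w": 1}   # mu values reconstructed from the proof's recurrences

def ind_word(word):
    """Index of a finite scenario: ind(ua) = 3*ind(u) + (-1)**ind(u)*mu(a) + 1."""
    i = 0
    for a in word:
        i = 3 * i + (-1) ** i * MU[a] + 1
    return i

def run(word):
    """ind values of white and black after running the loop body of A_w on `word`."""
    iw, ib = 0, 1
    for a in word:
        iw, ib = (3 * iw if a == "b" else 2 * ib + iw,
                  3 * ib if a == "w" else 2 * iw + ib)
    return iw, ib

# Exhaustively check the three equalities on all scenarios of length <= 6.
for n in range(7):
    for word in product(["ok", "b", "w"], repeat=n):
        iw, ib = run(word)
        i = ind_word(word)
        assert abs(ib - iw) == 1                      # indices differ by one
        assert (1 if ib > iw else -1) == (-1) ** i    # sign matches parity
        assert i == min(iw, ib)                       # index = minimum
print("invariants verified")
```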
\subsection{Correctness of the Algorithm}
Given a message adversary $L$, we suppose that one of the following holds.
\begin{itemize}
\item $\exists f\in Fair(\Gamma^\omega), f\notin L,$
\item $\exists (u,u')\in SPair(\Gamma^\omega), u,u'\notin L,$
\item $\lnoir^\omega\notin L,$
\item $\lblanc^\omega\notin L.$
\end{itemize}
In particular, $L\subsetneq\Gamma^\omega$, and we denote by $w$ a scenario
in $\Gamma^\omega\backslash L$. If $w$ can be chosen fair, we choose such a $w$.
Otherwise, $w$ is unfair, and we take it to be
either $\lnoir^\omega$ or $\lblanc^\omega$, or a member of a special
pair that is disjoint from $L$.
We consider the algorithm $\algo_w$ with parameter $w$
as defined above.
\begin{lemma}\label{diff2}
Let $v\in L$. There exists $r\in\N$ such that $|ind(v|_r)-ind(w|_r)|\geq3.$
\end{lemma}
\begin{proof}
Given $w\notin L$, we have $v\neq w$ and at some round $r$, $w|_r\neq v|_r.$
Therefore $|ind(v|_r)-ind(w|_r)|\geq1$.
From Lemma~\ref{diff1} and Definition~\ref{pair}, it can be seen
that the only way to remain indefinitely at a difference of one is
exactly that $w$ and $v$ form a special pair. Given the way we
have chosen $w$, and that $v\in L$, this is impossible.
So at some round $r'$, the difference is at least $2$:
$|ind(v|_{r'})-ind(w|_{r'})|\geq2$.
Then by definition of the index we have that
$|ind(v|_{r'+1})-ind(w|_{r'+1})|\geq3$.
\end{proof}
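For a concrete illustration (our sketch, with the $\mu$ values reconstructed from the proofs of Section~\ref{CS}): take $w=\lnoir^\omega$, whose prefixes all have index $0$, against $v=\lok^\omega$; the index difference leaves the band $\{0,1,2\}$ after two rounds and then grows without bound.

```python
MU = {"ok": 0, "b": -1, "w": 1}  # reconstructed from the proofs of the previous subsection

def ind_word(word):
    """ind(ua) = 3*ind(u) + (-1)**ind(u)*mu(a) + 1, with ind(empty) = 0."""
    i = 0
    for a in word:
        i = 3 * i + (-1) ** i * MU[a] + 1
    return i

# Prefix-index differences between v = (lok)^omega and w = (lnoir)^omega:
diffs = [abs(ind_word(["ok"] * r) - ind_word(["b"] * r)) for r in range(5)]
print(diffs)  # [0, 1, 4, 13, 40]
```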
Now we prove the correctness of the algorithm under the message adversary $L$.
\begin{proposition}
The algorithm $\algo_w$ is correct for every $v\in L$.
\end{proposition}
\begin{proof}
First we show Termination. This is a corollary of
Lemma~\ref{diff2} and Proposition~\ref{indexes}.
Consider the execution $v\in L$.
From Lemma~\ref{diff2}, there exists a round $r\in\N$ such that $|ind(v|_r)-ind(w|_r)|\geq3.$
Denote by $r$ the first round at which this is
satisfied for one of the processes. By the condition of the while loop, this process stops at round $r$.
If the $ind$ value of the other process \proc at round $r$ is also at distance $3$ or more from the index of $w|_r$,
then we are done. Otherwise,
from Proposition~\ref{indexes}, we have
$|ind^\proc_r-ind(w|_r)| = 2.$
In the following round, \proc
receives no message (the other process has halted) and
$|ind^\proc_{r+1}-ind(w|_{r+1})| \geq3$ holds.
Note also that, even though the final comparisons are strict, the output value is well defined at the end,
since $ind_r\neq ind(w|_r)$ at this stage.
The validity property also holds: the output values are among
the initial values, provided no process ever outputs $null$. Consider the case of \noir (the case of \blanc is symmetric).
The only case where
$initother^\noir$ is $null$ is when \noir has
not received any message from \blanc, \emph{i.e.} when
$v|_r=\lblanc^r$. In that case $ind^\noir_r=3^r$, while $ind(\lblanc^r)=3^r-1$ is the maximal possible
index for a scenario of length $r$; since \noir outputs $initother^\noir$ only if
$ind^\noir < ind(w|_r)$, it never has to output
$initother^\noir$ when it is $null$.
Arguing similarly for \blanc, this proves that $null$ is never output by any process.
We now prove the agreement property. Given that
$|ind^\blanc_r-ind^\noir_r|=1$ by Proposition~\ref{indexes}, when
the processes halt, from Lemma~\ref{diff2}, the $ind$ values are on the same side of
$ind(w|_r)$. This means that one of the processes outputs $init$ and the
other outputs $initother$. By construction, they
output the same value.
\end{proof}
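To illustrate the proof, one may run the algorithm in simulation. The following Python sketch is ours and not part of the paper; the letter encoding ("ok", "w", "b") and the $\mu$ values are reconstructed from the proofs above.

```python
MU = {"ok": 0, "b": -1, "w": 1}   # mu values reconstructed from the proofs above

def ind_word(word):
    """Index of a finite scenario: ind(ua) = 3*ind(u) + (-1)**ind(u)*mu(a) + 1."""
    i = 0
    for a in word:
        i = 3 * i + (-1) ** i * MU[a] + 1
    return i

def run_consensus(w_prefix, scenario, init_white, init_black):
    """Simulate A_w for both processes under a (long enough) finite `scenario`.
    `w_prefix(r)` yields the length-r prefix of the excluded word w.
    Letters: "ok" both delivered, "w" white's message lost, "b" black's lost."""
    iw, ib = 0, 1                   # ind values of white / black
    oth_w = oth_b = None            # initother values
    out_white = out_black = None
    for r, a in enumerate(scenario):
        t = ind_word(w_prefix(r))   # ind(w|_r)
        if out_white is None and abs(iw - t) > 2:   # white leaves the loop
            out_white = init_white if iw < t else oth_w
        if out_black is None and abs(ib - t) > 2:   # black leaves the loop
            out_black = init_black if ib > t else oth_b
        if out_white is not None and out_black is not None:
            break
        # message exchange: lost letters and halted senders deliver nothing
        white_gets = a != "b" and out_black is None
        black_gets = a != "w" and out_white is None
        new_iw = iw if out_white is not None else (2 * ib + iw if white_gets else 3 * iw)
        new_ib = ib if out_black is not None else (2 * iw + ib if black_gets else 3 * ib)
        if white_gets:
            oth_w = init_black
        if black_gets:
            oth_b = init_white
        iw, ib = new_iw, new_ib
    return out_white, out_black

# Excluded word w = (lnoir)^omega; run under the fair scenario (lok)^omega:
print(run_consensus(lambda r: ["b"] * r, ["ok"] * 10, 0, 1))  # (1, 1)
```

Here both processes decide \noir's initial value: under $(\lok)^\omega$ the local indices drift above those of $w=\lnoir^\omega$, so \blanc outputs the value it received.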
\section{Topological approach}
\label{sec:topo}
In this section, we provide a topological characterization of solvable message
adversaries for the Consensus Problem.
First, we will introduce some basic topological
definitions in Section~\ref{sub:def_topo}, then we will explain the link
between topology and distributed computability in
Section~\ref{sub:topo_rep_of_dist_syst} in order to formulate our result in
Section~\ref{sub:topo_carac}.
We then show in the following
Section~\ref{sub:link_with_the_combinatorial_characterization} how this new
characterization matches the combinatorial one described by
Theorem~\ref{thm:FG11}.
We also discuss a similar characterization given in \cite{GKM14}, and we show
that our result indicates a flaw in the statement of
Theorem 6.1 of \cite{GKM14}. If no restriction is put on the kind of adversaries addressed by this theorem,
then, in the case of 2 processes, it would imply that $\Gamma^\omega\backslash\{w\}$ is solvable for any $w$.
By our result, this is incorrect when $w$ belongs to a special pair.
The authors have confirmed \cite{K15} that the statement has to be corrected by restricting it to adversaries $L$ that are closed for special pairs (if $\{w,w'\}$ is a special pair, then $w\in L \Leftrightarrow w'\in L$).
Moreover, even though the present work deals only with 2 processes,
the approach taken here might help correct the general statement of \cite{GKM14}.
\subsection{Definitions}
\label{sub:def_topo}
The following definitions
are standard definitions from algebraic topology \cite{Munkres84}.
We fix an integer $N\in\N$ for this part.
\begin{definition}
Let $n\in\N$.
A finite set $\sigma=\{v_0,\dots,v_n\}\subset\R^N$ is called a \emph{simplex} of dimension $n$ if the vector space generated by
$\{v_1-v_0,\dots,v_n-v_0\}$ is of dimension $n$.
We denote by $|\sigma|$ the convex hull of $\sigma$ that we call
the \emph{geometric realization} of $\sigma$.
\end{definition}
\begin{definition}
A \emph{simplicial complex} is a collection $C$ of \emph{simplices}
such that~:
\begin{enumerate}[(a)]
\item If $\sigma\in C$ and $\sigma'\subseteq\sigma$, then $\sigma'\in C$,
\item If $\sigma,\tau\in C$ and $|\sigma|\cap|\tau|\neq\emptyset$ then there exists $\sigma'\in C$ such that
\begin{itemize}
\item $|\sigma|\cap|\tau|=|\sigma'|$,
\item $\sigma'\subset\sigma, \sigma'\subset\tau.$
\end{itemize}
\end{enumerate}
\end{definition}
The simplices of dimension 0 (singletons) of $C$ are called vertices;
we denote by $V(C)$ the set of vertices.
The \emph{geometric realization} of $C$, denoted $|C|$, is the union
of the geometric realization of the simplices of $C$.
Let $A$ and $B$ be simplicial complexes. A map $f\colon V(A)\to V(B)$ is called
\emph{simplicial} (in which case we write $f\colon A\to B$) if it preserves the
simplices, \emph{i.e.} for each simplex $\sigma$ of $A$, the image $f(\sigma)$ is a
simplex of $B$.
In this paper, we also work with colored simplicial complexes. These
are simplicial complexes $C$ together with
a function $c:V(C)\to \Pi$ such that the restriction of $c$ on any
maximal simplex of $C$ is a bijection. A simplicial map that preserves colors is called chromatic.
As a final note, since we only deal with two processes, our simplicial complexes will be of
dimension 1. The only simplices are edges (sometimes called \emph{segments})
and vertices, and the latter are colored with $\Pi=\{\b,\n\}$.
\begin{remark}\label{subtle}
The combinatorial part of simplicial complexes (that is the sets of
vertices and the inclusion relationships they have) is usually
referred as abstract simplicial complexes. Abstract simplicial complex can be equivalently defined as a
collection $C$ of sets that are closed by inclusion, that is if
$S\in C$ and if $S'\subset S$ then $S'\in C$.
For finite complexes, the
topological and combinatorial notions are equivalent, including
regarding geometric realizations.
But it should be noted that infinite abstract simplicial complex might not
have a unique (even up to homeomorphisms) geometric realization.
However, here the infinite complexes we deal with
are derived from the
subdivision, see below, of a finite initial complex, hence they have a unique
realization in that setting. So in the context of this paper, we
will talk about ``the'' realization of such infinite complexes.
\end{remark}
The elements of $|C|$ can be expressed as convex combinations of the
vertices of $C$, \emph{i.e.} $\forall x\in|C|\quad x=\underset{v\in V(C)}{\sum}\alpha_v
v$ such that $\underset{v\in V(C)}{\sum}\alpha_v=1$ and
$\{v\mid \alpha_v\neq0\}$ is a simplex of $C$.
The geometric realization of a simplicial map $\delta:A\to B$ is
$|\delta|:|A|\to|B|$, obtained by
$|\delta|(x)=\underset{v\in V(A)}{\sum}\alpha_v\delta(v)$.
A \emph{subdivision} of a simplicial complex $C$ is a simplicial
complex $C'$ such that~:
\begin{enumerate}[(1)]
\item the vertices of $C'$ are points of $|C|$,
\item for any $\sigma'\in C'$, there exists $\sigma\in C$ such that
$|\sigma'|\subset|\sigma|$;
\item $|C|=|C'|$.
\end{enumerate}
Let $C$ be a chromatic complex of dimension 1; its \emph{standard chromatic subdivision}
$\text{Chr~}C$ is obtained by replacing each simplex $\sigma\in C$ by its
chromatic subdivision. See \cite{HKRbook} for the general definition
of the chromatic subdivision of a simplex; we present here only
the chromatic subdivision of a segment $[0,1]\subset\R$ whose vertices are colored $\b$
and $\n$ respectively.
The subdivision is defined
as the chromatic complex consisting of 4 vertices at position
$0$, $\frac{1}{3}$, $\frac{2}{3}$ and $1$; and colored $\b$, $\n$, $\b$, $\n$ respectively. The edges are the 3 segments
$[0,\frac{1}{3}]$, $[\frac{1}{3}, \frac{2}{3}]$ and $[\frac{2}{3},1]$.
The geometric realization of the chromatic subdivision of the segment $[0,1]$
is identical to the segment's.
If we iterate this
process $m$ times we obtain the $m$\textsuperscript{th} chromatic subdivision
denoted $\text{Chr}^m~C$.
Figure~\ref{fig:sub_segment}
shows $\text{Chr~}[0,1]$ to $\text{Chr}^3~[0,1]$.
\begin{figure}[ht]
\centering
\includestandalone{figures/sub_segment}
\caption{Chromatic subdivision of the segment.\\
The correspondence with executions (bolder simplices) is explained at the end of section
\ref{sub:topo_rep_of_dist_syst}.
\label{fig:sub_segment}}
\end{figure}
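The subdivision rule above is easy to implement; here is a small Python sketch (ours) that iterates it on the colored segment $[0,1]$, representing an edge by its pair of (position, color) endpoints.

```python
from fractions import Fraction

def chr_subdivide(edge):
    """Chromatic subdivision of a colored edge ((x0,c0),(x1,c1)): 4 vertices
    at the thirds, colored c0, c1, c0, c1, giving 3 edges."""
    (x0, c0), (x1, c1) = edge
    step = (x1 - x0) / 3
    pts = [(x0 + k * step, (c0, c1, c0, c1)[k]) for k in range(4)]
    return [(pts[k], pts[k + 1]) for k in range(3)]

def chr_m(m):
    """m-th chromatic subdivision of [0,1] with endpoints colored b and n."""
    edges = [((Fraction(0), "b"), (Fraction(1), "n"))]
    for _ in range(m):
        edges = [e for edge in edges for e in chr_subdivide(edge)]
    return edges

print(len(chr_m(3)))   # 27 edges, matching Chr^3 [0,1]
print(chr_m(1)[2])     # the edge from 2/3 (colored b) to 1 (colored n)
```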
\subsection{Topological representation of distributed systems}
\label{sub:topo_rep_of_dist_syst}
As shown by the celebrated Asynchronous Computability Theorem
\cite{HS99}, it is possible to encode the global state of a
distributed system in simplicial complexes. First we give the
intuition for the corresponding abstract simplicial complex.
We can represent a process in a given state by a vertex composed of a color and a value~:
the \emph{identity} and the \emph{local state}.
A \emph{global configuration} is an edge whose
vertices are colored $\b$ and $\n$.
The set of input vectors of a distributed task can thus be represented by a
colored simplicial complex $\I$ (associating all possible global initial configurations).
\begin{remark}
The vertices that belong to more than one edge illustrate
the uncertainty of a process about the global state of a distributed system. In other words, a local state can be common
to multiple global configurations, and the process does not know in which
configuration it is. We thus have a topological (and geometrical)
representation of these uncertainties.
For example, Figure~\ref{fig:ex_incertitude} shows a very simple graph where
each colored vertex is associated with a value (0 or 1). The vertex in the
middle is common to both edges; it represents the uncertainty of the process
$\n$ concerning the value of $\b$, \emph{i.e.} $\n$ does not know whether it is in the global
configuration $(0,0)$ or $(0,1)$.
\begin{figure}[ht]
\centering
\includestandalone{figures/ex_incertitude}
\caption{Example of a simplicial complex with uncertainty}
\label{fig:ex_incertitude}
\end{figure}
\end{remark}
In the same way as for $\I$, we construct (for a distributed task) the output
complex $\O$ that contains all possible output configurations of the processes.
For a given problem, it is possible to construct a relation $\Delta\subset\I\times\O$ that is chromatic and relates the input
edges with the associated possible output edges. So, any task can
be topologically represented by a triplet $(\I,\O,\Delta)$.
\begin{figure}[ht]
\centering
\includestandalone[width=0.8\textwidth]{figures/cons_bin_2proc}
\caption{Representation of the Consensus Task with 2 processes}
\label{fig:cons_bin_2proc}
\end{figure}
For example, the
Binary Consensus task with two processes, denoted $(\I_{2gen},\O_{2gen},\Delta_{2gen})$,
is shown in Figure~\ref{fig:cons_bin_2proc}.
The input complex $\I_{2gen}$, on the left-hand side, consists of a square. Indeed, there are only four
possible global configurations, given that each of the two processes can only be in two
different initial states (proposed value 0 or 1).
The output complex, on the right-hand side, has only two edges
corresponding to the valid configurations for the Consensus (all 0 or all 1).
Finally, $\Delta$ maps the input configurations to the possible output ones,
according to the \emph{validity} property of the Consensus.
\bigskip
Any protocol can be encoded as a full information protocol\footnote{Since the full information
protocol sends all available information,
the computations can be emulated, provided it is allowed to send that much information.}:
any local state is then related to what is called the view of the execution.
A protocol simplex is a global configuration such that there exists an
execution of the protocol in which the processes end with these states. The
set of all these simplices forms the \emph{protocol complex} associated with an
input complex, a set of executions and an algorithm. Given any algorithm, it
can be redefined using a full-information protocol; the protocol
complex thus depends only on the input complex and the set of executions.
Given an input complex $\I$, we construct the \emph{protocol complex at step $r$}
by applying $r$ times to each simplex $\sigma$
the chromatic subdivision shown in
Section~\ref{sub:def_topo} and Figure~\ref{fig:sub_segment}.
\bigskip
The input complex of the Consensus and the
first two steps of the associated protocol complex are shown
in Figure~\ref{fig:cpx_proto_cons32}.
\begin{figure}[ht]
\centering
\begin{subfigure}{.32\textwidth}
\includestandalone[width=\linewidth]{figures/cpx_init_2gen}
\caption{Initial complex}
\end{subfigure}%
\hfill
\begin{subfigure}{.32\textwidth}
\includestandalone[width=\linewidth]{figures/cpx_proto1_2gen}
\caption{First step}
\end{subfigure}
\hfill
\begin{subfigure}{.32\textwidth}
\includestandalone[width=\linewidth]{figures/cpx_proto2_2gen}
\caption{Second step}
\end{subfigure}
\caption{Protocol complex of the initial, first and second rounds}
\label{fig:cpx_proto_cons32}
\end{figure}
For example, let $w_0=\lblanc\lok\lnoir^\omega$. $w_0$ corresponds to the following
execution~:
\begin{itemize}
\item[--] first, the message from $\b$ is lost;
\item[--] then, both messages are delivered;
\item[--] next, the message from $\n$ is lost;
\item[--] \ldots\ (this last round is repeated indefinitely)
\end{itemize}
This can be represented as a sequence of simplices.
Each infinite sequence of edges ($\sigma_0,\sigma_1,\ldots$) with
$\sigma_{i+1}\in \text{Chr}\sigma_i$, and $\sigma_0$ the initial configuration,
corresponds to a unique scenario and vice versa.
Consider Figure~\ref{fig:sub_segment}: at $[\frac{2}{3},1]$,
the thick red simplex corresponds to $\lblanc$.
Then, at $[\frac{7}{9},\frac{8}{9}]$, it corresponds to $\lblanc\lok$.
Finally, at $[\frac{23}{27},\frac{24}{27}]$
the thick red simplex of $\text{Chr}^3~\sigma$
corresponds to the execution $\lblanc\lok\lnoir$.
For a given finite execution, the embedding is exactly
given by the normalized index $ind_n$.
Thus, when the initial simplex is $[0,1]$ ($\b$'s initial value is 0, $\n$'s
is 1), the sequence of simplices
corresponding to an infinite execution converges to a point of $[0,1]$.
At the limit, the vertices' convergence point is given by $\overline{ind}$.
For example, $w_0$ converges to $1/9$.
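The correspondence between prefixes and simplices can be checked programmatically. In the following sketch (ours; letters encoded as "ok", "b", "w" and the $\mu$ values reconstructed from the proofs of Section~\ref{CS}), the edge of $\text{Chr}^r~[0,1]$ associated with a length-$r$ prefix $v$ is $[ind(v)/3^r,(ind(v)+1)/3^r]$.

```python
from fractions import Fraction

MU = {"ok": 0, "b": -1, "w": 1}   # reconstructed from the proofs of Section "CS"

def ind_word(word):
    """ind(ua) = 3*ind(u) + (-1)**ind(u)*mu(a) + 1, with ind(empty) = 0."""
    i = 0
    for a in word:
        i = 3 * i + (-1) ** i * MU[a] + 1
    return i

def edge_of(word):
    """Edge of Chr^r [0,1] carrying the length-r prefix `word`."""
    r, i = len(word), ind_word(word)
    return Fraction(i, 3 ** r), Fraction(i + 1, 3 ** r)

# The prefixes of w0 = lblanc . lok . lnoir^omega discussed in the text:
for r in range(1, 4):
    print(edge_of(["w", "ok", "b"][:r]))
```

The printed endpoints match the simplices $[\frac{2}{3},1]$, $[\frac{7}{9},\frac{8}{9}]$ and $[\frac{23}{27},\frac{24}{27}]$ highlighted in Figure~\ref{fig:sub_segment}.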
Without loss of generality, in the rest of the paper, we describe
what happens on the segment $[0,1]$ instead of on the whole
complex. Indeed, the behaviour of the subdivisions of the initial segments is
identical (through a straightforward isometry for each segment), as
this behaviour does not depend on the initial values. The ``gluing'' at
the ``corners'' is always preserved, since it corresponds to the state obtained
by a process when it receives no message at all.
Given $x\in [0,1], i_\blanc,i_\noir\in\{0,1\}$, we denote by
$geo(x,i_\blanc,i_\noir)$ the point in the geometrical realization
that corresponds to $x$ in the simplex
$[(\blanc,i_\blanc),(\noir,i_\noir)]$, namely $x(\blanc,i_\blanc) + (1-x)(\noir,i_\noir)$.
Given $L\subset\Gamma^\omega$, we denote $|\mathcal
C^L|=\{geo(\overline{ind}(w),i_\blanc,i_\noir)\mid w\in L,\
i_\blanc,i_\noir\in\{0,1\}\}$. Note that it is possible to define the
limit of the geometric realizations this way, but that there is no
sensible way to define a geometric realization directly for the corresponding abstract simplicial
complex. See Section~\ref{contrex}.
\bigskip
\subsection{Topological Characterization}
The following definition is inspired by \cite{GKM14}.
\begin{definition}
Let $C$ be a colored simplicial complex.
A \emph{terminating subdivision} $\TS$ of $C$ is a (possibly
infinite) sequence of colored simplicial complexes
$(\Sigma_k)_{k\in\N}$ such that $\Sigma_0=\emptyset$, and
for all $k\geq1$ $\Sigma_k\subset Chr^k C$, and
$\cup_{i\leq k} \Sigma_i$ is a simplicial complex for all $k$.
\end{definition}
The intuition behind this definition is as follows. It is well known
that (non-terminated) runs of length $r$ in $\Gamma^\omega$ with
initial values encoded as $C$ are represented by a protocol complex
that is the chromatic subdivision $Chr^r C$. This definition refines
the known correspondence by looking at the actual runs of a given
algorithm. When the algorithm stops, the protocol complex should
not be refined any further. For a given algorithm, we end up with
protocol complexes of a special form: the
level of chromatic subdivision is possibly not the same everywhere. We
prove later that those resulting complexes are exactly terminating
subdivisions. To say it differently, terminating subdivisions are
the form taken by the protocol complexes of non-uniformly terminating
algorithms (that is, algorithms that do not stop at the same round
in all executions).
From the correspondence between words and simplices of the chromatic
subdivision, we can see that the corresponding set of words is an
anti-chain for the prefix order relation.
An edge of $\Sigma_k$ for some $k$ is called a \emph{stable
edge} in the subdivision $\TS$.
The union
$\cup_k\Sigma_k$ of stable edges in $\TS$ forms a
colored simplicial complex, and a stable edge can only intersect
another given stable edge at an extremity (a single vertex).
For $r\in\N$, we denote by $K_r(\TS)$ the complex $\cup_{k\leq r}\Sigma_k$.
We denote by $K(\TS)$ the complex $\cup_k\Sigma_k$; it possibly has infinitely many
simplices. Observe that the geometric realization $|K(\TS)|$ can be
identified with a subset of $|C|$.
For example, Figure~\ref{fig:ex_sub_ter} shows a terminating subdivision of
$[0,1]$ up to round $3$.
\begin{figure}[ht]
\centering
\includestandalone[width=1\textwidth]{figures/ex_sub_ter}
\caption{ \label{fig:ex_sub_ter}Example of a terminating subdivision.\\
The bolder simplices show the stable edges (colored red at first appearance). For the correspondence of simplices with executions, see section \ref{sub:topo_rep_of_dist_syst}}
\end{figure}
In fact, Fig.~\ref{fig:ex_sub_ter} gives an example of a terminating subdivision for $\Gamma^\omega$.
The following definition expresses when a terminating subdivision
covers all considered scenarios of a given $L$. The intuition is that
every scenario of $L$ should eventually land into a simplex of $\TS$.
In terms of words, this means that all words of $L$ have a prefix
among the words corresponding to the simplices of $\TS$.
\begin{definition}
A terminating subdivision $\TS$ of $C$ is \emph{admissible} for
$L\subseteq\Go$ if, for any scenario $\rho\in L$, the corresponding
sequence of edges $\sigma_0,\sigma_1,\ldots$ is such that there exists
$r>0$ for which $\sigma_r$ is a stable edge of $K(\TS)$.
\end{definition}
\label{sub:topo_carac}
We can now state and prove our new characterization theorem. First, for
simplicity, notice that the output complex of Consensus
$\O_{2gen}$ has two connected components, representing 0 and 1. Thus we can
identify it with the set $\{0,1\}$ and define the relation
$\Delta'_{2gen}\subset\I_{2gen}\times\{0,1\}$ analogous to $\Delta_{2gen}$.
\begin{theorem}
The task $T_{2gen}=(\I_{2gen},\O_{2gen},\Delta_{2gen})$ is solvable in a
sub-model $L\subseteq\Go$ if and only if there exist a terminating
subdivision $\Phi$ of $\I_{2gen}$ and a simplicial function
$\delta\colon K(\Phi)\to\{0,1\}$ such that~:
\begin{theoenum}
\item $\Phi$ is admissible for $L$;
\item \label{thii}
For every simplex $\sigma\in\I_{2gen}$, if $\tau\in K(\Phi)$ is such
that $|\tau|\subset|\sigma|$, then $\delta(\tau)\in\Delta'_{2gen}(\sigma)$;
\item $|\delta|$ is continuous.
\end{theoenum}
\label{thm:GACT'2gen}
\end{theorem}
\begin{proof}
\textbf{Necessary condition.}
Suppose we have an algorithm $\mathscr{A}$ solving the Binary
Consensus task. We
construct a terminating subdivision $\Phi$ admissible for $L$ and a function
$\delta$ that satisfy the conditions of the theorem.
When running $\mathscr{A}$, we can establish which nodes of the protocol
complex decide a value by considering the associated state of the process.
Intuitively, $\Phi$ and $\delta$ are built as follows~:
at each level $r$, we consider all runs of length $r$ in $L$.
Each run yields a simplex; if the two nodes of the simplex have
decided a (unique) value $v$, we add this simplex to $\Sigma_{r}$ and set
$\delta(\sigma)=v$.
Formally, let $\mathscr{C}^L(r)$ be the protocol complex of a full
information protocol subject to the language $L$ at round $r$ and
$V(\mathscr{C}^L(r))$ its set of vertices.
Given a vertex $x$,
let $val(x)$ be the value decided by the corresponding process in
the execution that leads to state $x$. For all $r\geq0$, define
\begin{align*}
\Sigma_{r}&=\{\{x,y\}\in \mathscr{C}^L(r)\mid
\text{$x$ and $y$ have both decided and at least one has just
decided
in round }r\}\\
%
%
%
%
%
\delta(x)&=val(x)\quad\forall x\in V(\Sigma_r)\\
\end{align*}
The function $\delta$ is well defined since it depends only on the
state encoded in $x$, and in $\Sigma_r$ all vertices have decided.
For all $\{x,y\}\in\Sigma_r$, note that $val(x)=val(y)$ because $\mathscr{A}$
satisfies the \emph{agreement} property.
%
By construction, $\Phi$ is a terminating subdivision.
Furthermore, $\Phi$ is admissible for $L$:
since $\mathscr{A}$ terminates subject to $L$,
all processes decide a value in any run of $L$, so all
nodes of the protocol complex restricted to $L$ will be in $\Phi$
(through one of their adjacent simplices).
The condition~\ref{thii} is also satisfied by
construction because $\mathscr{A}$ satisfies the \emph{validity} property,
\emph{i.e.} $val(x)\in\Delta'_{2gen}(\sigma)$.
We still have to prove that $|\delta|$ is continuous, in other words for all
$x\in|K(\Phi)|$, we must have
$$\forall\varepsilon>0\quad\exists\eta_x>0\quad\forall y\in|K(\Phi)|\quad
|x-y|\leq\eta_x\Rightarrow ||\delta|(x)-|\delta|(y)|<\varepsilon$$
This continuity property says that when two points $x,y\in|K(\Phi)|$ are
close, their values under $|\delta|$ are close. When $\varepsilon$ is small (\emph{e.g.}
$\varepsilon<1/2$), $||\delta|(x)-|\delta|(y)|<\varepsilon$ implies that
$|\delta|(x)=|\delta|(y)$, because the co-domain of $|\delta|$ is discrete.
When $\varepsilon$ is large, the property is trivially satisfied and thus of
little interest. We can therefore reformulate it without considering
$\varepsilon$~: we must show that $\forall x\in|K(\Phi)|$
\begin{eqnarray}\label{continu}
\exists\eta_x>0\quad\forall y\in|K(\Phi)|\quad |x-y|\leq\eta_x\Rightarrow
|\delta|(x)=|\delta|(y)
\end{eqnarray}
Defining $\eta_x$ as follows, we obtain the continuity condition for all
$x\in V(K(\Phi))$~:
$$\eta_x=\min\left\{\frac{1}{3^{r+1}}\;\middle|\; r\in\N,\ \exists y,\ \{x,y\}
\in \Sigma_r\right\}.$$
Since there is at least one such $y$ ($\mathscr{A}$ terminates for any
execution in $L$) and at most two, the minimum is well
defined for all $x\in V(K(\Phi))$. Moreover, we remark that since
the geometric realizations of the simplices of $\Sigma_r$ are of size
$\frac{1}{3^r}$, the ball centered at $x$ and of diameter $\eta_x$
is included in the stable simplices, and the function $\delta$ is
constant on this ball (by the agreement property).
Let $x,y\in V(K(\Phi))$ and let $z\in[x,y]$; we define
$\eta_z=\min(d(x,z),d(y,z))$.
By construction, using such a function $\eta$,
Property~(\ref{continu}) is satisfied for all $z\in|K(\Phi)|$.
%
%
%
%
\bigskip
\textbf{Sufficient condition.}
Given a terminating subdivision $\Phi$ admissible for $L$ and a
function $\delta$ that satisfies the conditions of the theorem, we present an
algorithm $\mathscr{A}$ that solves Consensus.
First, we describe how to obtain a function $\eta$ from the
continuity of $|\delta|$. For any $x\in V(K(\Phi))$, there exists
$\eta(x)$ such that for any $y\in|K(\Phi)|$, when $|x-y|\leq\eta(x)$, we
have $|\delta|(y)=|\delta|(x)$.
We then extend the definition to the whole of
$|K(\Phi)|$: consider $w\in |K(\Phi)|\backslash V(K(\Phi))$ and denote by $x,y$
the vertices of $K(\Phi)$ that define the segment to which $w$
belongs; we define $\eta(w)=\min\{\eta(x),\eta(y)\}$.
Notice that $\eta$ can be chosen such that $\forall x\in|\I|\ \forall y_1,y_2\in|K(\Phi)|,
|y_1-x|\leq\eta(y_1)\wedge|y_2-x|\leq\eta(y_2) \Rightarrow |\delta|(y_1)=|\delta|(y_2).$
\bigskip
We recall that $B(z,t)$ (resp. $\bar{B}(z,t)$) is the open (resp. closed) ball of center $z$ and radius $t$.
We define a boolean function $Finished(r,x)$ for $r\in\N$ and $x\in |\I|$ that is true when
$\exists y\in V(Chr^r(\I)), y\in |K_r(\Phi)|, \bar{B}(x,\frac{1}{3^r})\subset B(y,\eta(y))$.
The Consensus algorithm is described in
Algorithm~\ref{alg:algo_cons_bin}, using the function $Finished$ we just
defined from $\eta$.
Notice that $\eta$ is fully defined on $|K(\Phi)|$ and that the existential condition at line \ref{algline:if} is over a finite subset of $|K(\Phi)|$.
As in Algorithm~\ref{consalgo},
its messages are always of the same type. They
have two components: the first one is the initial bit, named
$init$; the second one is an integer named $ind$. Given a message $msg$,
we denote by $msg.init$ (resp. $msg.ind$) the first (resp. the second)
component of the message. The computation of the index is similar.
We maintain the current round in an auxiliary variable $r$ (line \ref{diamsub}).
The halting condition is now based upon the position of the geometric
realization $geo$ of the current round with respect to the terminating
subdivision $K(\Phi)$ and the function $\eta$: we wait until the
realization is close enough to a vertex in $|K(\Phi)|$ and the
corresponding open ball of radius $\eta$ contains a neighbourhood of
the current simplex. Note that when $initb$ or $initw$ is still
$null$, this means that we are at the corners of the square $\I$.
\begin{algorithm}
\KwData{function $\eta$}
\KwIn{$init\in\{0,1\}$}
$r=0$\;
\eIf{$\proc=\noir$}{ind=1\;
initw=null\;initb=init\;}{ind=0\;initw=init\;initb=null\;}
%
\Repeat{$\exists y\in V(Chr^r(\I)), y\in |K_r(\Phi)|, \bar{B}(geo(ind/3^r,initw,initb),\frac{1}{3^r})\subset B(y,\eta(y))$}{\label{algline:if}
msg = (init,ind)\;
send(msg)\;
msg = receive()\;
\eIf(// message was lost){msg == null}
{$ind = 3*ind$\;}
{$ind = 2*msg.ind+ind$\;
\eIf{$\proc=\noir$}{$initw = msg.init$\;}{$initb = msg.init$\;}}
$r=r+1$\;
}
\KwOut{$|\delta|(y)$\label{diamsub}}
\caption{Algorithm $\mathscr{A}_{\eta}$ for the binary consensus with two
processes for process $p$ where $\eta$ is a function $[0,1]\to[0,1]$.
geo(x,w,b) is the embedding function into the geometric realization.
\label{alg:algo_cons_bin}
%
}
\end{algorithm}
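For intuition, the index-update rule of Algorithm~\ref{alg:algo_cons_bin} (multiply $ind$ by 3 when the message is lost, fold in the peer's index otherwise) is easy to simulate. The Python sketch below uses our own encoding of rounds as pairs of booleans stating whether the message sent to each process was delivered (we assume at least one delivery per round, as in $\Go$); the function and variable names are ours, not the paper's:

```python
import random

def run_indices(deliveries):
    """Simulate the index updates of the algorithm for one execution.

    `deliveries` is a list of (to_black, to_white) booleans saying, for
    each round, whether the message sent *to* that process was delivered.
    """
    ind_b, ind_w = 1, 0          # initial indices of the black/white process
    history = [(ind_b, ind_w)]
    for to_black, to_white in deliveries:
        sent_b, sent_w = ind_b, ind_w          # both send their current index
        ind_b = 2 * sent_w + sent_b if to_black else 3 * sent_b
        ind_w = 2 * sent_b + sent_w if to_white else 3 * sent_w
        history.append((ind_b, ind_w))
    return history

# After r rounds the indices live in [0, 3^r], and one can check that the two
# processes' indices never differ by more than 1, so ind/3^r converges to a
# single point of [0,1] -- the same bound that appears for special pairs.
random.seed(0)
letters = [(True, True), (True, False), (False, True)]  # >= one delivery
for _ in range(100):
    hist = run_indices([random.choice(letters) for _ in range(15)])
    assert all(abs(b - w) <= 1 for b, w in hist)
```

This is only an illustrative sketch of the update rule, not a full implementation of the halting condition, which additionally requires the function $\eta$.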
We show that the algorithm solves Consensus. Consider an execution
$w$; we will prove that it terminates. The fact that the output does not
depend on $y$ comes from the choice of $\eta$. The
Agreement and Validity properties then come immediately from condition
\ref{thii} on $\delta$ and $\Delta'_{2gen}$.
First, we prove termination for at least one process. Assume that none halts.
The admissibility of $\Phi$ shows that there is a round $r_0$ for
which the simplex corresponding to the current partial scenario
$w_{\mid r_0}$ is a simplex $\sigma$ of $\Phi$. Let $r_1\geq r_0$ be an
integer such that $\frac{1}{3^{r_1}}<\eta(y)$ with $y\in|\sigma|$.
Since, from round $r_0$ on, all future geometric realizations remain in
$|\sigma|$, we have, at round $r_1$, that $Finished$ is
true. A contradiction.
Now, from the moment the first process halts, with an index of geometric
realization $x$, because of some $y$: if the other process has not halted
at that moment, then it will receive no further message from
the halted process, and the geometric
realization $z$ of its $ind$ variable will remain constant forever. The closed ball
centered at $x$ has a neighbourhood inside the open ball centered at
$y$; therefore there exists $r_2$ such that $\frac{1}{3^{r_2}}$ is
small enough and $Finished(r_2,z)$ is true.
%
%
%
%
%
%
%
%
%
%
%
%
\end{proof}
We say that the corners of the squares in Fig.~\ref{fig:cpx_proto_cons32} are \emph{corner
nodes}. If we compare the two algorithms, we can see that the
differences lie in the halting condition and the decision value. In
Algorithm 1, once a forbidden execution $w$ has been chosen, the
algorithm runs until it is far enough away from a prefix of $w$ to
conclude by selecting the initial value of the corner node which is on
the same side as the prefix.
Algorithm 2 is based on the same idea but is somewhat more flexible
(more general). If there are many holes in the geometric realization,
it is possible to choose different output values for two
different connected components. Of course, the connected components
that contain the corner nodes have no freedom in choosing the decision value.
\section{About the two Characterizations}
\label{sub:link_with_the_combinatorial_characterization}
In this section, we explain how the combinatorial and the topological
characterizations match. This is of course convenient, but the more important
point is that the topological characterization permits us to
derive a richer intuition regarding the characterization, in
particular about the status of the special pairs.
We will also illustrate how Theorem 6.1 of \cite{GKM14} does not
handle the special pairs correctly, by showing that in these
cases combinatorial and geometric simplicial protocol complexes
differ in their ability to capture Consensus computability.
\smallskip
Any adversary $L\subseteq\Go$ is either an
obstruction or solvable for Consensus. In other words, the set of
sub-models $\mathcal{P}(\Go)$ is partitioned into two subsets: obstructions and
solvable languages. Theorem~\ref{thm:FG11} explicitly describes the
solvable languages and classifies them into four families. In contrast,
Theorem~\ref{thm:GACT'2gen} gives necessary and sufficient conditions for a
sub-model to be an obstruction or not.
We first give an equivalent version of Theorem~\ref{thm:GACT'2gen}.
\begin{theorem}
The task $T_{2gen}$ is solvable in $L\subseteq\Go$ if and only if
$|\mathcal C^L|$ is not connected.
\label{thm:cons_ssi_k_deco}
\end{theorem}
\begin{proof}
($\Rightarrow$) If $T_{2gen}$ is solvable in $L$, let $\Phi$ and $\delta$
be as described in Theorem~\ref{thm:GACT'2gen}.
From admissibility, we have that $|\mathcal C^L|\subset|K(\Phi)|.$
Now $|\delta|$ is a continuous surjective function from
$|\mathcal C^L|$ to $\{0,1\}$; this implies that the domain $|\mathcal C^L|$ is not
connected, since the image of a connected space under a continuous
function is always connected.
%
%
%
%
%
%
%
%
($\Leftarrow$) With $|\mathcal C^L|$ disconnected, we can associate
output values with connected components in such a way that $\Delta'_{2gen}$ is
satisfied. Consider the segment $[0,1]$: there exists $z\in[0,1]$
with $z\notin |\mathcal C^L|$.
We define $\Phi$ in the following way:
$$\Sigma_k=\{S\mid
S=[\overline{ind}(w),\overline{ind}(w)+\frac{1}{3^k}],\ w\in\Gamma^k,
\exists w'\in \Gamma^\omega, ww'\in L
\mbox{ and } \forall i_w,i_b\in\{0,1\}\;geo(z,i_w,i_b)\notin |S|\}$$
We now denote by $\delta$ the function $K(\Phi)\to \{0,1\}$ such that,
for $v\in[0,1]$ and $X=geo(v,i_w,i_b)$, $\delta(X)=i_w$ if $v<z$
and $\delta(X)=i_b$ otherwise.
We now check that these satisfy the conditions of
Theorem~\ref{thm:GACT'2gen}. Admissibility comes from the fact
that, by the choice of $z$, there are no runs in $L$ that converge
to $z$; therefore, for any run $w\in L$, at some point
$\frac{1}{3^k}$ is small enough that there is an edge in
$\Sigma_k$ that contains $\overline{ind}(w_{|k})$ and not $z$.
The function $|\delta|$ is continuous and has been defined such that condition~\ref{thii} is also clearly satisfied.
\end{proof}
\bigskip
We recall the definitions of $Fair$ and $SPair$ languages, and the admissible language
families defined in Section~\ref{sub:comb_charac}.
\begin{align*}
Fair(\Go)&=\Go\backslash\{xy\mid x\in\Gamma^*\quad
y\in\{\lall,\lblanc\}^\omega\cup\{\lall,\lnoir\}^\omega\} \\
SPair(\Go)&=\{(w,w')\in\Go\times\Go \mid w\neq w', \quad \forall r\in\N \quad
|ind(w_{|r})-ind(w'_{|r})|\leq1\}
\end{align*}
A message adversary is solvable if it belongs to one of the following (non-exclusive)
families
\begin{enumerate}
\item $\F_1 = \{L \mid \exists f\in Fair(\Go), f\notin L\}$
\item $\F_2 = \{L \mid \exists (w,w')\in SPair(\Go), w,w'\notin L\}$
\item $\F_3 = \{L \mid \lnoir^\omega\notin L\}$
\item $\F_4 = \{L \mid \lblanc^\omega\notin L\}$
\end{enumerate}
We give now a topological interpretation of this description.
We start with a topological characterization of special pairs.
\begin{lemma}
Let $w,w'\in \Gamma^\omega$ with $w\neq w'$. Then
$\overline{ind}(w)=\overline{ind}(w')$ if and only if $(w,w')\in SPair(\Go)$.
\end{lemma}
\begin{proof}
This comes from Lemma~\ref{diff2} and the very definition of special pairs.
\end{proof}
In other words, special pairs are exactly the runs that have another
run converging to the same geometric realization.
So removing only one member of a special pair is not enough
to disconnect $|\mathcal C^L|$.
It is also straightforward to see that removing a fair run implies
disconnection, and that removing $\lnoir^\omega$ or $\lblanc^\omega$
implies disconnecting at the corners.
\subsection{Counter-Example}\label{contrex}
Now we look at Theorem~6.1 from \cite{GKM14}. The only difference is that in \cite{GKM14} $\delta$ is only required to be chromatic, whereas we have shown that the stronger condition of
continuity of $|\delta|$ is needed. We explain why this is strictly stronger, i.e., how it is possible to have a
simplicial mapping while not having continuity of the geometric
realization.
Consider $L=\Gamma^\omega\backslash \{w_0\}$, with
$w_0=\lok\lblanc^\omega$. We define $\Phi$ as follows:
$\Sigma_1=\{[0,\frac{1}{3}],[\frac{2}{3},1]\}$, and,
for $r\geq 2$, $\Sigma_r=\{[\frac{2}{3}-\frac{1}{3^{r-1}},\frac{2}{3}-\frac{2}{3^{r}}],[\frac{2}{3}-\frac{2}{3^{r}},\frac{2}{3}-\frac{1}{3^{r}}]\}$.
The terminating subdivision $\Phi$ is admissible for $L$, and there
exists a simplicial mapping from $K(\Phi)$ to $\{0,1\}$: we can set
$\delta(x)=0$ when $x<\frac{2}{3}$ and $\delta(x)=1$ otherwise.
$K(\Phi)$ has two connected components: $[\frac{2}{3},1]$ on one side,
and all the other segments on the other side. The function $\delta$ is
therefore simplicial, since there is no interval $[z,\frac{2}{3}]$ in
$K(\Phi)$.
Such an $L$ therefore satisfies the assumptions of Theorem~6.1 of
\cite{GKM14}. To see that it does not satisfy the assumptions of
Theorem~\ref{thm:GACT'2gen}, we remark that $|K(\Phi)|$ is
$[0,1]$: the function $\delta$ defined above does not have a continuous geometric realization,
and moreover there is no way to define a continuous surjective
function from $[0,1]$ to $\{0,1\}$. So the statements of the two theorems
are not equivalent. To correct Theorem~6.1 of \cite{GKM14}, one
needs to add the assumption that $L$ is ``closed'' for
special pairs: either both members of a pair belong to $L$ or none does. This has
been confirmed by the authors \cite{K15}.
The simplicial complex $K(\Phi)$ has two connected components when
seen as an abstract simplicial complex; however, the
geometric realization of $K(\Phi)$ is the entire interval $[0,1]$, which
has one connected component.
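The two sides of this discrepancy can be verified with exact rational arithmetic. The short Python sketch below (helper names are ours) checks that no edge of $K(\Phi)$ straddles $\frac{2}{3}$, so $\delta$ is simplicial, while the edges left of $\frac{2}{3}$ accumulate at $\frac{2}{3}$, where $|\delta|$ jumps:

```python
from fractions import Fraction

two_thirds = Fraction(2, 3)

def sigma(r):
    """Edges of Sigma_r for the counter-example, with exact endpoints."""
    if r == 1:
        return [(Fraction(0), Fraction(1, 3)), (two_thirds, Fraction(1))]
    a = two_thirds - Fraction(1, 3 ** (r - 1))
    b = two_thirds - Fraction(2, 3 ** r)
    c = two_thirds - Fraction(1, 3 ** r)
    return [(a, b), (b, c)]

edges = [e for r in range(1, 30) for e in sigma(r)]

# delta is simplicial: no edge straddles 2/3 ...
assert all(hi <= two_thirds or lo >= two_thirds for lo, hi in edges)
# ... yet the left edges accumulate at 2/3, where delta jumps from 0 to 1,
# so the geometric realization of delta cannot be continuous:
gap = two_thirds - max(hi for lo, hi in edges if hi < two_thirds)
assert gap == Fraction(1, 3 ** 29)
```

The gap shrinks as $\frac{1}{3^r}$, so $\frac{2}{3}$ lies in the closure of $|K(\Phi)|$ even though it is a vertex of no edge.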
\section{Conclusion}
To conclude, the two Theorems~\ref{thm:FG11} and
\ref{thm:GACT'2gen} are indeed equivalent, even though their formulations
are very different. We emphasize that the topological characterization,
with Theorem~\ref{thm:cons_ssi_k_deco}, gives a better explanation of
the results originally obtained in \cite{FG11}: the different cases of
Theorem~\ref{thm:FG11} are unified when considered topologically.
Note also that, in the general case, the topological reasoning should
be done on the continuous version of simplicial complexes, not on the
abstract simplicial complexes. In Section \ref{contrex}, we exhibited
a simplicial complex $K(\Phi)$ that is disconnected when seen as
an abstract simplicial complex, but whose embedding (geometric
realization) makes for a connected space.
This study of the solution to the Consensus problem in the general
case for two processes is another argument in favor of topological
methods in Distributed Computability.
We are aware of \cite{consensus-epistemo} that appeared between
revisions of this paper. We underline that Theorem~\ref{thm:GACT'2gen}
cannot be obtained in a straightforward way from
\cite{consensus-epistemo}.
\input{2generals-journal.arxiv.bbl}
\end{document}
The first step of planet formation is the growth of \micron-sized dust particles into km-sized planetesimals.
Planetesimals are believed to form through the collisional growth of dust particles into larger aggregates \citep[e.g.,][]{Okuzumi2012Rapid-Coagulati,Windmark2012Planetesimal-fo,Kataoka2013Fluffy-dust-for,Arakawa2016Rocky-Planetesi} and/or the gravitational and streaming instabilities of the aggregates \citep[e.g.,][]{Goldreich1973The-Formation-o,Johansen2007Rapid-planetesi,Youdin2011On-the-Formatio,Ida2016Formation-of-du,Drc-azkowska2017Planetesimal-fo}.
For both scenarios,
it is crucial to understand how large the aggregates grow in protoplanetary disks.
One of the major obstacles against dust growth is the fragmentation of dust aggregates upon high-velocity collisions.
In protoplanetary disks, macroscopic dust aggregates can collide at several tens of $\rm m~s^{-1}$~\citep[e.g.,][]{Johansen2014The-Multifacete,Birnstiel2016Dust-Evolution-}.
However, a number of laboratory experiments~\citep{Blum2000Experiments-on-,Guttler2010The-outcome-of-} and numerical simulations~\citep{Dominik1997The-Physics-of-,Wada2009Collisional-Gro,Wada2013Growth-efficien} have shown that aggregates made of 0.1--1~\micron-sized silicate grains are unable to stick at such a high velocity.
Therefore, it is widely believed that planetesimals do not form through
the direct coagulation of silicate grains
(but see \citealt{Kimura2015Cohesion-of-Amo,Arakawa2016Rocky-Planetesi,Steinpilz2019Sticking-Proper} for the possibility that silicate grains can actually be sticky).
Aggregates including water ice grains would be stickier \citep{Wada2009Collisional-Gro,Wada2013Growth-efficien,Gundlach2015The-Stickiness-}, but would only form icy planetesimals like comets.
It is still worth asking if there are any materials that can act as a glue holding poorly sticky silicate grains together in the inner part of protoplanetary disks.
One candidate for such a material is organic matter.
One piece of evidence for the importance of organic matter comes from the observations of chondritic porous interplanetary dust particles (CP IDPs).
They are particle aggregates of likely cometary origin and represent the most primitive materials in the solar system~\citep[e.g.,][]{Ishii2008Comparison-of-C}.
Individual CP IDPs primarily consist of submicron-sized mineral and glass grains and, importantly, are bound together by organic matter mantling the individual grains \citep[e.g.,][]{Flynn1994Interplanetary-}.
Such organic-mantled grains can form when icy grain aggregates lose ice by sublimation \citep[e.g.,][]{Poch2016Sublimation-of-}.
Another piece of evidence comes from the viscoelastic measurements of interstellar and molecular-cloud organic matter analogs
\citep{Kudo2002The-role-of-sti,Piani2017Evolution-of-Mo}.
At temperatures above $\sim 200$~K, the analogs have low elasticity compared to silicates and also high viscosity, both of which can result in a high stickiness of the organic matter.
In fact, \citet{Kudo2002The-role-of-sti} demonstrated that a millimeter-sized sphere of velocity $\sim 1~\rm m~s^{-1}$ can stick to a sheet of the organic matter analog at $\ga 250~\rm K$.
The organic mantle can survive up to 400 K \citep{Kouchi2002Rapid-Growth-of,Gail2017Spatial-distrib}, and therefore in principle can be present on rocky dust grains residing interior to the snow line in protoplanetary disks.
Therefore, one can anticipate a scenario in which silicate grains with organic mantles, which we call {\it Organic-Mantled Grains} (OMGs), grow into planetesimals through mutual collisions.
Although such a scenario was already proposed by \citet{Kouchi2002Rapid-Growth-of} and more recently by \citet{Piani2017Evolution-of-Mo}, detailed modeling based on the theory of protoplanetary dust evolution has never been done so far.
In this study, we explore the possibility that OMG aggregates form planetesimals through mutual collisions in the inner part of the protoplanetary disks, in two steps.
Firstly, we model the adhesion of OMGs to study how the maximum collision velocity for sticking of two OMG aggregates depends on the grain size, temperature, and mantle thickness.
Secondly, we simulate the global collisional evolution of OMGs in a protoplanetary disk to demonstrate that OMG aggregates can grow into planetesimals under favorable conditions.
This paper is organized as follows.
In Section \ref{s:adhesion}, we describe our simple model for the adhesion of OMGs and explore
the parameter dependence of the fragmentation threshold of OMG aggregates.
Section \ref{s:simulation} presents global simulations of the growth of OMG aggregates in a disk.
In Section \ref{ss:validity}, we discuss the validity and limitations of our model; implications for terrestrial planet formation are discussed in Sections \ref{ss:imp} and \ref{ss:how}.
Section~\ref{s:summary} presents a summary.
\section{Modeling the Adhesion of Organic-mantled Grains}\label{s:adhesion}
The goal of this section is to evaluate the stickiness of aggregates made of OMGs (see Figure~\ref{f:2l} for a schematic illustration of an OMG aggregate).
In general, a collision of grain aggregates can lead to sticking, bouncing, or fragmentation depending on their collision velocity. For aggregates with porosities $\la 0.1$--0.3, sticking and fragmentation are the dominant collision outcomes \citep{Dominik1997The-Physics-of-,Blum2000Experiments-on-,Langkowski2008The-Physics-of-,Wada2011The-Rebound-Con,Meru2013Growth-and-frag}, and the collision velocity above which fragmentation dominates over sticking is called the fragmentation threshold $v_{\rm frag}$.
The fragmentation threshold is one of the key parameters that determine the fate of dust evolution and planetesimal formation \citep{Brauer2008Coagulation-fra}.
\begin{figure*}[ht]
\centering
\resizebox{13cm}{!}{\includegraphics{aggregate3.pdf}}
\caption{Schematic illustration of the contact of two OMGs in a dust aggregate. The gray circles and white layers represent the silicate cores and organic mantles, respectively. In the limit where the organic mantles are hard or thick (panel (a)), the silicate cores have no effect on the adhesion of the grains. In the opposite limit (panel (b)), the hard silicate cores beneath the mantle limits the size of the contact area and hence the breakup energy.} \label{f:2l}
\end{figure*}
In this section, we construct a simple model that provides $v_{\rm frag}$ for OMG aggregates.
Our model is based on the results of previous aggregates collision simulations by \citet{Wada2013Growth-efficien}, which shows that the fragmentation threshold can be estimated as
\begin{eqnarray}\label{vfrag}
v_{\rm frag} \approx 20 \sqrt{\frac{E_{\rm break}}{m}},
\end{eqnarray}
where $m$ is the mass of individual grains (called monomers) that constitute the aggregates and $E_{\rm break}$ is the energy needed to break the contact of two monomers.
Equation~\eqref{vfrag} implies that $v_{\rm frag}$ scales with the threshold velocity for monomer--monomer sticking, $\sim \sqrt{E_{\rm break}/m}$, which reflects the fact that both the impact energy and total binding energy of the aggregates scale with the number of the monomers \citep[see, e.g.,][]{Dominik1997The-Physics-of-,Blum2000Experiments-on-,Wada2009Collisional-Gro}.
Thus, the breakup energy $E_{\rm break}$ is the fundamental quantity that determines the stickiness of grain aggregates.
Below we describe how we estimate
$E_{\rm break}$ for OMGs.
In the following, we approximate each OMG as a sphere consisting of a silicate core and an organic mantle. We denote the radius of an OMG by $R$, and the thickness of the organic mantle by $\Delta R_{\rm or}$, as illustrated in Figure \ref{f:2l}. The radius of the silicate core is thus $R - \Delta R_{\rm or}$.
The mass of an OMG is given by $m = (4\pi/3)\{\rho_{\rm sil}(R-\Delta R_{\rm or})^{3}+\rho_{\rm or}(R^3-(R-\Delta R_{\rm or})^{3})\}$, where $\rho_{\rm or}$ and $\rho_{\rm sil}$ are the material densities of the organics and silicate, respectively.
For simplicity, we assume that all monomers constituting a single aggregate are identical.
\subsection{The Breakup Energy for OMGs}\label{s:modeling}
In principle, evaluation of $E_{\rm break}$ can be done by modeling the deformation and surface attraction of particles in contact.
An example of such a model is the Johnson--Kendall--Roberts
(JKR) model ~\citep{Johnson301}, which has been widely used in the astronomical literature
\citep{Chokshi1993Dust-coagulatio,Dominik1997The-Physics-of-,Wada2007Numerical-Simul,Wada2009Collisional-Gro}.
The JKR model describes the contact of two elastic particles with a positive surface energy (which represents attractive intermolecular forces) under the assumption that
the particles have uniform internal structure.
According to this model, the breakup energy
for two identical spheres is given by
\begin{eqnarray}\label{JKR}
E_{\rm break} \approx 23
\frac{\gamma^{5/3} R^{4/3} (1-\nu^2)^{2/3}}{Y^{2/3}},
\end{eqnarray}
where $\gamma$, $Y$, and $\nu$ are the surface energy per unit area, Young's modulus, and Poisson's ratio of the spheres, respectively.
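As a quick numerical illustration (a Python sketch with our own function names; SI units), plugging the silicate parameters adopted in Section~\ref{ss:cha} into this expression, together with $v_{\rm frag} \approx 20\sqrt{E_{\rm break}/m}$, reproduces the familiar result that aggregates of $0.1~\micron$ silicate grains fragment at a few $\rm m~s^{-1}$:

```python
import math

def e_break_jkr(gamma, R, nu, Y):
    """JKR breakup energy for two identical uniform spheres (SI units)."""
    return 23 * gamma**(5/3) * R**(4/3) * (1 - nu**2)**(2/3) / Y**(2/3)

# Silicate constants from Section 2.2 (converted to SI), R = 0.1 micron:
R = 1.0e-7                                   # grain radius, m
E_sil = e_break_jkr(gamma=2.5e-2, R=R, nu=0.17, Y=5.4e10)
m = (4 * math.pi / 3) * 2600.0 * R**3        # bare silicate grain mass, kg
v_frag = 20 * math.sqrt(E_sil / m)           # fragmentation threshold, m/s
assert 5.0 < v_frag < 10.0                   # a few m/s for bare silicates
```

The resulting $v_{\rm frag}\approx 8~\rm m~s^{-1}$ is consistent with the silicate fragmentation thresholds quoted in Section~\ref{s:intro}.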
Because the JKR model assumes uniform particles, caution is required when applying the model to OMGs, each consisting of a hard silicate core and a soft organic mantle. In general, the size of the contact area that two attached particles make in equilibrium is determined by the balance between the attractive surface forces and the repulsive elastic force.
Softer particles make a larger contact area, thus acquiring a higher breakup energy (note that $E_{\rm break}$ given by Equation~\eqref{JKR} increases with decreasing $Y$). However, a large contact area requires a large normal displacement (i.e., large deformation) of the particle surfaces.
Now imagine the contact of two OMGs.
One can expect that the JKR model is inapplicable to OMGs if the organic mantle is so soft or thin that the normal displacement of the mantle surfaces is comparable to the mantle thickness.
In that case, the hard silicate cores should contribute significantly to the elastic repulsion between the grains, which should act to prevent the formation of a large contact area.
Ignoring this effect would cause an overestimation of the breakup energy.
Unfortunately, there is no model that gives an exact and closed-form expression
for $E_{\rm break}$ for particles of core--mantle structure.
Therefore, we opt to evaluate $E_{\rm break}$ for OMGs approximately, by first considering two extreme cases and then interpolating into the intermediate regime.
\subsubsection{The Hard/thick Mantle Regime}
We start by considering an extreme case in which the organic mantle is thick or hard enough for the silicate cores to be negligible (see Figure \ref{f:2l}(a)). In this case, the JKR model serves as a good approximation, and hence $E_{\rm break}$ is approximately given by
\begin{eqnarray}\label{Ethinc}
E_{\rm break} = E_{\rm or},
\end{eqnarray}
where
\begin{eqnarray}\label{Eor}
E_{\rm or} \approx 23
\frac{\gamma_{\rm or}^{5/3} R^{4/3} (1-\nu_{\rm or}^2)^{2/3}}{Y_{\rm or}^{2/3}}
\end{eqnarray}
is the breakup energy from the JKR model for the contact of uniform organic grains with radius $R$, Poisson's ratio $\nu_{\rm or}$, Young's modulus $Y_{\rm or}$, and surface energy $\gamma_{\rm or}$ (see Equation~\eqref{JKR}).
\subsubsection{The Soft/thin Mantle Regime}
We now consider the opposite case, where the organic mantles are so soft or thin that the normal displacement of the mantles is comparable to the mantle thickness (see Figure \ref{f:2l}(b)). Because the attractive surface forces try to pull the grains together, one may approximate
the silicate cores, each covered with a soft/thin mantle,
as two attached spheres, as shown in Figure \ref{f:2l}(b). We also neglect the deformation of the mantle surface outside the contact area and approximate the mantle surfaces with two spheres of radius $R$.
Under these approximations, the radius of the contact area is
$\sqrt{R^2-(R-\Delta R_{\rm or})^2} = \sqrt{(2R-\Delta R_{\rm or})\Delta R_{\rm or}}$ (see Figure \ref{f:2l}(b)), and the area is
$\pi (2R-\Delta R_{\rm or})\Delta R_{\rm or}$.
The geometry defined above allows us to calculate
the surface energy loss arising from the contact of the mantle surfaces. Since the surface energy per unit area is $\gamma_{\rm or}$ and the total area of the two contacting surfaces is
$2\pi (2R-\Delta R_{\rm or})\Delta R_{\rm or}$,
the associated energy loss is
\begin{eqnarray} \label{Uor}
|\Delta U_{\rm or}| = 2\pi \gamma_{\rm or} (2R-\Delta R_{\rm or})\Delta R_{\rm or}.
\end{eqnarray}
If we neglect elastically stored energy within the mantles, $|\Delta U_{\rm or}|$ gives the binding energy of the mantle contact.
The elastic energy of the soft/thin mantles is indeed negligible because mantle deformation is limited by the hard silicate cores (Young's modulus of the silicate cores is more than 10 times larger than that of the organic mantles; see Section \ref{ss:cha}).
The total binding energy $E_{\rm break}$ of the OMGs is given by the sum of $|\Delta U_{\rm or}|$ and the binding energy of the silicate cores.
In the limit of small $\Delta R_{\rm or}$, the latter contribution should approach the breakup energy of the bare silicates from the JKR model,
\begin{eqnarray}\label{Esil}
E_{\rm sil} \approx 23
\frac{\gamma_{\rm sil}^{5/3}
(R-\Delta R_{\rm or})^{4/3}
(1- \nu_{\rm sil}^{2})^{2/3}
}{Y_{\rm sil}^{2/3}}.
\end{eqnarray}
We therefore evaluate $E_{\rm break}$ as
\begin{eqnarray} \label{eq:Estart}
E_{\rm break}= |\Delta U_{\rm or}| + E_{\rm sil}.
\end{eqnarray}
In fact, $E_{\rm sil}$ is negligible as long as $\Delta R_{\rm or}/R \ga 0.01$ since the silicate cores are hard and poorly sticky (see Section~\ref{ss:example}).
\subsubsection{$E_{\rm break}$ for General Cases}
Based on the above arguments, we derive a general formula for $E_{\rm break}$ for OMGs.
We obtain $E_{\rm break}$ for an arbitrary mantle thickness $\Delta R_{\rm or}$ by connecting Equations \refp{Ethinc} and \refp{eq:Estart} through their minimum,
\begin{eqnarray} \label{modelJKR}
E_{\rm break}= {\rm min} \qr{ E_{\rm or} , |\Delta U_{\rm or}| + E_{\rm sil} }.
\end{eqnarray}
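This formula, combined with Equation~\eqref{vfrag}, is straightforward to evaluate numerically. The Python sketch below (function names are ours; SI units; material constants anticipate Section~\ref{ss:cha}) treats Young's modulus of the mantle as a free parameter, since it depends on temperature; the sample moduli in the check are our own illustrative values for the glassy and rubbery states:

```python
import math

GAMMA = 2.5e-2                    # surface energy, N m^-1 (organics = silicate)
NU_OR, NU_SIL = 0.5, 0.17         # Poisson's ratios
Y_SIL = 5.4e10                    # Young's modulus of silicate, Pa
RHO_OR, RHO_SIL = 1500.0, 2600.0  # material densities, kg m^-3

def e_jkr(gamma, radius, nu, young):
    """JKR breakup energy of two identical uniform spheres."""
    return 23 * gamma**(5/3) * radius**(4/3) * (1 - nu**2)**(2/3) / young**(2/3)

def e_break_omg(R, dR, Y_or):
    """Breakup energy of two organic-mantled grains (the min formula above)."""
    e_or = e_jkr(GAMMA, R, NU_OR, Y_or)               # hard/thick-mantle limit
    du_or = 2 * math.pi * GAMMA * (2 * R - dR) * dR   # mantle surface term
    e_sil = e_jkr(GAMMA, R - dR, NU_SIL, Y_SIL)       # silicate-core term
    return min(e_or, du_or + e_sil)

def v_frag_omg(R, dR, Y_or):
    """Fragmentation threshold v_frag = 20 sqrt(E_break / m), in m s^-1."""
    core = (4 * math.pi / 3) * RHO_SIL * (R - dR)**3
    shell = (4 * math.pi / 3) * RHO_OR * (R**3 - (R - dR)**3)
    return 20 * math.sqrt(e_break_omg(R, dR, Y_or) / (core + shell))

# 0.1-micron grains with a 3% mantle; Y_or = 3 G_or with glassy (~10^8.8 Pa)
# and rubbery (~10^6.1 Pa) shear moduli:
R, dR = 1.0e-7, 3.0e-9
v_glassy = v_frag_omg(R, dR, 3 * 10**8.8)
v_rubbery = v_frag_omg(R, dR, 3 * 10**6.1)
assert 15 < v_glassy < 35     # hard-mantle regime, of order 20 m/s
assert 45 < v_rubbery < 75    # soft/thin-mantle limit, of order 60 m/s
```

These values come from our re-implementation, not from the figures below, but they show the qualitative behavior discussed in Section~\ref{ss:example}: softening of the mantle raises $v_{\rm frag}$ until the soft/thin-mantle limit takes over.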
\subsection{Material Properties}\label{ss:cha}
To evaluate $v_{\rm frag}$, one needs to assume the density, elastic constants, and surface energy of silicate and organics in addition to the grain size and mantle thickness. Here we describe our assumptions for the material properties.
\subsubsection{Silicate}
Following \citet{Chokshi1993Dust-coagulatio}, we adopt $\rho_{\rm sil} = 2.6 ~\rm g~cm^{-3}$, $Y_{\rm sil} = 5.4 \times 10^{10} ~\rm Pa $, $\nu_{\rm sil} = 0.17$, $ \gamma_{\rm sil} = 2.5 \times 10^{-2} ~\rm N~m^{-1}$.
\subsubsection{Organics}\label{ss:or}
We evaluate Young's modulus of organic matter using the data provided by \citet{Kudo2002The-role-of-sti}. They measured the dynamic shear modulus $G_{\rm or}$ as well as viscosity of an analog of the organic matter formed in dense molecular clouds.
The analog is a mixture of various organic materials, including glycolic acid (see in Table 1 of \citealt{Kudo2002The-role-of-sti} for the full composition), prepared based on the analytical data of UV irradiation experiments for astrophysical ice analogs \citep{Greenberg1993Interstellar-Du,Briggs1992Comet-Halley-as}.
We use these data because the organic mantles on protoplanetary dust particles can form in parent molecular clouds or in the disks through a similar UV irradiation process \citep[e.g.,][]{Ciesla2012Organic-Synthes}.
The shear modulus can be translated into Young modulus if we further assume Poisson's ratio (see below).
\citet{Kudo2002The-role-of-sti} measured the shear modulus for two vibrational frequencies of 1.08 and 232 ${\rm rad~s^{-1}}$ and in the temperature range of 130--300~K, and the results are provided in Figure 4(a) of \citet{Kudo2002The-role-of-sti}.
From the data of \citet{Kudo2002The-role-of-sti} for 1.08 ${\rm rad~s^{-1}}$, we obtain an empirical fit
\begin{eqnarray} \label{shear}
G_{\rm or} = &10&^{8.8}\frac{1 + \tanh\{(204 {~\rm K}-T)/12{~\rm K}\} }{2}\nonumber \\
&+&10^{9.2}\pr{\frac{1+\tanh\{(272{~\rm K}-T)/7 {~\rm K}\}}{2}}e^{-T/35{~\rm K}} \nonumber \\
&\quad &+10^{3.1}\frac{1+\tanh\{(310 {~\rm K}-T)/20 {~\rm K}\}}{2}\ ~ \rm Pa,
\end{eqnarray}
which is shown in Figure \ref{f:Gor}.
Fitting to the data for 232 ${\rm rad~s^{-1}}$ yields a similar result
except in the temperature range 200--240 K, in which the shear modulus curve
is offset toward higher temperatures compared to the data for 1.08 ${\rm rad~s^{-1}}$
(see Figure 4(a) of \citet{Kudo2002The-role-of-sti}).
Since the offset is only seen in the limited temperature range, we neglect this frequency dependence in this study.
More discussion on this point is given in Section~\ref{ss:validity}.
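The fit can be implemented directly; the following Python sketch (function name is ours) also checks the two plateaus of the measured shear modulus, at $\sim 10^9$ and $\sim 10^6~\rm Pa$:

```python
import math

def g_or(T):
    """Shear modulus of the organic-mantle analog in Pa, from the fit above
    (T in kelvin)."""
    s = lambda x: (1 + math.tanh(x)) / 2        # smoothed step function
    return (10 ** 8.8 * s((204 - T) / 12)
            + 10 ** 9.2 * s((272 - T) / 7) * math.exp(-T / 35)
            + 10 ** 3.1 * s((310 - T) / 20))

# Plateau below ~200 K and plateau around 240--270 K, over two orders
# of magnitude apart:
assert 1e8 < g_or(150) < 1e9
assert 5e5 < g_or(250) < 5e6
assert g_or(150) > 100 * g_or(250)
```

The steep drop between the two plateaus is what drives the temperature dependence of $v_{\rm frag}$ discussed in Section~\ref{ss:example}.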
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{Gog.pdf}}
\caption{Shear modulus $G_{\rm or}$ of the organic mantles, given by Equation~\eqref{shear}, as a function of temperature $T$. This is a fit to the experimental data for an analog of molecular cloud organic matter (Figure 4(a) of \citealt{Kudo2002The-role-of-sti}). \label{f:Gor}}
\end{figure}
The data show that $G_{\rm or}$ decreases with increasing temperature, with two plateaus $G_{\rm or} \sim 10^9~\rm Pa$ and $G_{\rm or} \sim 10^6~\rm Pa$ at $T \la 200~\rm K$ and $T \approx 240$--$270~\rm K$, respectively.
A similar temperature dependence is also commonly observed for polymers \citep[e.g.,][]{Ward2004An-Introduction}, for which the first and second plateaus in $G_{\rm or}$ are called the glassy and rubbery states, respectively \citep{Ward2004An-Introduction}.
As we show in Section~\ref{ss:example}, the transition from the glassy to rubbery states is essential for the growth of OMGs into planetesimals.
At $T \ga 270~\rm K$, the shear modulus further declines, attributable to the melting of the sample matter. At these temperatures, the organic matter likely behaves as a viscous fluid rather than as a solid. Viscoelastic modeling of the contact of OMGs in this temperature range will be presented in future work.
Once $G_{\rm or}$ is known, Young's modulus $Y_{\rm or}$ can be estimated using the relation $Y_{\rm or} = 2(1+\nu_{\rm or})G_{\rm or}$, where $\nu_{\rm or}$ is Poisson's ratio of the organic matter. We here assume $\nu_{\rm or} \approx 0.5$, which is the typical value for rubber.
Because Poisson's ratio generally falls within the range $0 < \nu < 0.5$ (for example, glass has $\nu \approx$ 0.2--0.3),
its uncertainty has little effect on the estimates of $Y_{\rm or}$ and $E_{\rm or}$.
The assumed value of $\nu_{\rm or}$ gives $Y_{\rm or} = 3G_{\rm or}$.
The surface energy $\gamma_{\rm or}$ of the organic matter of \citet{Kudo2002The-role-of-sti} is unknown, but is expected to be of the order of $10^{-2}~\rm N~m^{-1}$ as long as van der Waals interaction dominates the surface attraction force \citep[see, e.g.,][]{Chokshi1993Dust-coagulatio}.
For this reason, we simply adopt $\gamma_{\rm or} = \gamma_{\rm sil}
= 2.5 \times 10^{-2}~\rm N~m^{-1}$.
The material density of organics is taken to be $\rho_{\rm or} = 1.5 ~\rm g~cm^{-3}$. This is equal to the density of glycolic acid, one of the main constituents of the organic matter sample of \citet{Kudo2002The-role-of-sti}.
Some of the other constituents have lower densities of $\approx {1 ~\rm g~cm^{-3}}$, but
this has little effect on the estimate of the grain mass (and hence $v_{\rm frag} \propto m^{-1/2}$) because the mass fraction of the organic mantle is small.
\subsection{Parameter Dependence of $v_{\rm frag}$}\label{ss:example}
Equation~\eqref{vfrag} combined with Equation~\eqref{modelJKR} gives the fragmentation threshold $v_{\rm frag}$ for OMG aggregates as a function of temperature $T$, grain size $R$, and organic mantle thickness $\Delta R_{\rm or}$. We here illustrate how $v_{\rm frag}$ depends on these parameters.
\subsubsection{Dependence on $T$ and $\Delta R_{\rm or}/R$}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{vfragcolor4.pdf}}
\caption{Fragmentation threshold $v_{\rm frag}$ for aggregates made of $0.1~\micron$-sized OMGs as a function of temperature $T$.
The pink, red, and dark red lines are for $\Delta R_{\rm or}/R= 0.1, 0.03$ and $0.02$, respectively.
The gray line shows the fragmentation threshold for aggregates of 0.1 \micron-sized silicate grains.
The dotted line marks $50~\rm m~s^{-1}$, which is approximately equal to the maximum collision velocity in typical weakly turbulent protoplanetary disks.
}
\label{f:vfrag}
\end{figure}
\begin{figure*}[ht]
\centering
\resizebox{8cm}{!}{\includegraphics{vfragmonomer3ogkaikai.pdf}}
\hspace{5mm}
\resizebox{8cm}{!}{\includegraphics{vfragmonomersiallkaikai.pdf}}
\caption{Fragmentation threshold $v_{\rm frag}$ for OMG aggregates with $\Delta R_{\rm or}/R = 0.03$ (left panel) and for bare silicate grain aggregates (right panel) as a function of monomer radius $R$ and temperature $T$. The colored area shows the parameter region in which $v_{\rm frag}$ falls below $50 ~\rm m~s^{-1}$, which is approximately equal to the maximum collision velocity in typical weakly turbulent protoplanetary disks. The vertical dotted lines indicate the range of the typical monomer radii of CP IDPs \citep{Rietmeijer1993Size-distributi,Rietmeijer2009A-cometary-aggr}.}\label{f:monomer1}
\end{figure*}
To begin with, we focus on the dependence on $T$ by fixing $R = 0.1~\rm \micron$ and $\Delta R_{\rm or}/R = 0.03$.
The organic mantle thickness adopted here translates into an organic mass fraction of $\approx 5~\rm wt\%$, close to the organic content of typical IDPs \citep[$\sim 3$--$5~\rm wt\%$;][]{Flynn2004An-assessment-o}.
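The quoted correspondence between mantle thickness and organic mass fraction can be verified with a short sketch; the silicate-core density of $2.6~\rm g~cm^{-3}$ used here is our assumption (a typical silicate value), while $\rho_{\rm or} = 1.5~\rm g~cm^{-3}$ is the value adopted above.

```python
def organic_mass_fraction(dR_over_R, rho_sil=2.6, rho_or=1.5):
    """Organic mass fraction of a silicate sphere carrying a mantle of
    relative thickness dR_over_R. rho_sil ~ 2.6 g/cm^3 is an assumed
    silicate density; rho_or = 1.5 g/cm^3 is the adopted organic density."""
    core_vol = (1.0 - dR_over_R) ** 3   # core volume fraction
    mantle_vol = 1.0 - core_vol         # mantle volume fraction
    m_core = rho_sil * core_vol
    m_mantle = rho_or * mantle_vol
    return m_mantle / (m_core + m_mantle)

print(organic_mass_fraction(0.03))  # ~0.05, i.e. ~5 wt%
```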
The red line in Figure \ref{f:vfrag} shows $v_{\rm frag}$ for $R = 0.1~\rm \micron$ and $\Delta R_{\rm or}/R = 0.03$ as a function of temperature $T$.
The temperature dependence comes from that of $Y_{\rm or} \propto G_{\rm or}$ involved in the expression of $E_{\rm break}$ in the hard mantle regime (Equation~\eqref{Eor}).
As shown in Figure~\ref{f:Gor}, the shear modulus of the organic mantle is nearly constant at $T \la 200~\rm K$ and starts to decline above this temperature.
Accordingly, the fragmentation threshold for hard mantles, $v_{\rm frag} \propto E_{\rm or}^{1/2} \propto G_{\rm or}^{-1/3}$, increases at $T \ga 200~\rm K$. The increase in $v_{\rm frag}$ stops at $T\approx 220~\rm K$, at which the breakup energy reaches the soft-mantle limit set by the finite thickness of the organic mantle, Equation~\eqref{eq:Estart}.
In the particular case shown here, the mantle-interface term $|\Delta U_{\rm or}|$ dominates $E_{\rm break}$, with the core contribution $E_{\rm sil}$ being 60 times smaller than $|\Delta U_{\rm or}|$.
The mantle thickness $\Delta R_{\rm or}$ controls the maximum fragmentation threshold attained in the soft/thin limit. The pink and dark red lines in Figure~\ref{f:vfrag} show $v_{\rm frag}$ for $\Delta R_{\rm or}/R = 0.1$ and 0.02, respectively.
In the soft/thin limit, the fragmentation threshold scales as
$v_{\rm frag} \propto |\Delta U_{\rm or}|^{1/2} \propto \Delta R_{\rm or}^{1/2}$, where we have assumed $|\Delta U_{\rm or}| \gg E_{\rm sil}$
and $\Delta R_{\rm or} \ll R$.
For $\Delta R_{\rm or}/R =$ 0.1, 0.03, and 0.02, the maximum fragmentation threshold is $v_{\rm frag} \approx 110$, 60, and $49~\rm m~s^{-1}$, respectively.
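These three values are mutually consistent with the $\Delta R_{\rm or}^{1/2}$ scaling, as a quick check shows (taking the $\Delta R_{\rm or}/R = 0.03$ case as the reference point):

```python
import math

# Check of v_frag ∝ ΔR_or^(1/2) in the soft/thin-mantle limit:
# scale the 60 m/s value at ΔR_or/R = 0.03 to the other thicknesses.
def vfrag_thin(dR_over_R, v_ref=60.0, x_ref=0.03):
    """Fragmentation threshold (m/s) scaled from the reference case."""
    return v_ref * math.sqrt(dR_over_R / x_ref)

print(vfrag_thin(0.1))   # ~110 m/s
print(vfrag_thin(0.02))  # ~49 m/s
```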
The results shown here have two important implications.
Firstly, OMG aggregates are stickier than aggregates of bare silicates, even at low temperatures.
The fragmentation threshold for bare silicate aggregates is $v_{\rm frag} = 20\sqrt{E_{\rm sil}/m}$,
which is $\approx 8~\rm m~s^{-1}$ for $R = 0.1~\micron$
(see the gray line in Figure~\ref{f:vfrag}).
This is smaller than the threshold for OMG aggregates in the low temperature limit, $v_{\rm frag} \approx 22~\rm m~s^{-1}$.
This is because the organic mantles at low temperatures are still softer than the silicate cores: $Y_{\rm or} = 2.7\times 10^9 ~\rm Pa$ versus $Y_{\rm sil} = 5.4\times 10^{10}~\rm Pa$.
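The ratio of the two thresholds is consistent with the $v_{\rm frag} \propto Y^{-1/3}$ scaling implied by the JKR theory (since $E_{\rm break} \propto Y^{-2/3}$ at fixed surface energy and radius), assuming equal surface energies and similar grain masses for the two materials, both of which approximately hold here:

```python
# JKR scaling check: v_frag ∝ Y^(-1/3) at fixed surface energy, radius,
# and grain mass (equal surface energies are assumed in the text; the
# densities of the two grain types are similar).
Y_sil = 5.4e10   # Pa, silicate
Y_or = 2.7e9     # Pa, glassy (low-temperature) organic mantle
ratio_model = (Y_sil / Y_or) ** (1.0 / 3.0)   # ~2.71
ratio_quoted = 22.0 / 8.0                      # ~2.75 from the thresholds
print(ratio_model, ratio_quoted)
```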
Secondly, the fragmentation threshold of OMGs can exceed $50 ~\rm m~s^{-1}$, which is the maximum collision velocity of dust aggregates in weakly turbulent protoplanetary disks \citep[see, e.g., Figure 1 of][]{Birnstiel2016Dust-Evolution-}.
Therefore, the OMG aggregates may grow to be planetesimals through collisions.
For $R = 0.1~\micron$, $v_{\rm frag}$ exceeds $50~\rm m~s^{-1}$ if $\Delta R_{\rm or}/R \ga 0.03$.
\subsubsection{Dependence on $R$}
In general, the fragmentation threshold
$v_{\rm frag} \propto \sqrt{E_{\rm break}/m}$
decreases with increasing monomer size $R$ because $E_{\rm break}$ increases more slowly than the monomer mass $m$ $(\propto R^3)$.
To see this, we plot in the left panel of Figure~\ref{f:monomer1} the fragmentation threshold as a function of $R$
and $T$ for OMG aggregates with $\Delta R_{\rm or}/R= 0.03$.
For comparison, the fragmentation threshold for bare silicate aggregates is also shown in the right panel of Figure~\ref{f:monomer1}.
As long as the JKR theory is applicable to the organic mantle (the hard/thick mantle limit), $v_{\rm frag}$ scales with $R$ as $R^{-5/6}$. This scaling can be seen in the left panel of Figure~\ref{f:monomer1} at $T \la 220~\rm K$, and in the entire right panel of Figure~\ref{f:monomer1} because the JKR theory applies to bare silicate grains.
At higher temperatures, at which the soft/thin mantle limit applies, the scaling becomes
$v_{\rm frag} \propto R^{-1} \Delta R_{\rm or}^{1/2}$
because $E_{\rm break} \approx |\Delta U_{\rm or}| \propto R \Delta R_{\rm or}$.
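Both scalings follow directly from $v_{\rm frag} \propto \sqrt{E_{\rm break}/m}$ with $m \propto R^3$: the JKR theory gives $E_{\rm break} \propto \gamma^{5/3} R^{4/3} Y^{-2/3}$ in the hard/thick-mantle limit \citep[e.g.,][]{Chokshi1993Dust-coagulatio}, whereas $E_{\rm break} \approx |\Delta U_{\rm or}| \propto R\, \Delta R_{\rm or}$ in the soft/thin-mantle limit, so that
\begin{eqnarray}
v_{\rm frag} \propto \sqrt{\frac{E_{\rm break}}{m}} \propto \left\{
\begin{array}{ll}
\sqrt{R^{4/3}/R^{3}} = R^{-5/6} & {\rm (hard/thick~mantles)}, \\
\sqrt{R\,\Delta R_{\rm or}/R^{3}} = R^{-1}\Delta R_{\rm or}^{1/2} & {\rm (soft/thin~mantles)}.
\end{array}
\right.
\end{eqnarray}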
Figure \ref{f:monomer1} can be used to infer the parameter range in which OMG aggregates can grow beyond the fragmentation barrier. Taking $50~\rm m~s^{-1}$ as a representative value for the maximum collision velocity in weakly turbulent protoplanetary disks, the fragmentation threshold exceeds the maximum collision velocity in the color shaded regions in Figure \ref{f:monomer1}.
For comparison, the dotted lines in Figure \ref{f:monomer1} mark the sizes of grains that dominate the matrix of IDPs, $R \approx 0.05$--$0.25 ~\micron$ (diameters of 0.1--0.5 \micron; e.g., \citealt{Rietmeijer1993Size-distributi,Rietmeijer2009A-cometary-aggr}).
We find that aggregates made of OMGs of $R \approx 0.05$--$0.14 ~\rm \micron$ can grow beyond the fragmentation barrier as long as the temperature is above $200~\rm K$.
In contrast, bare silicate grain aggregates (the right panel of Figure~\ref{f:monomer1}) can only overcome the fragmentation barrier if the monomer radius is as small as $R \la 0.01~\micron$ \citep[see also][]{Arakawa2016Rocky-Planetesi}.
\section{Global Evolution of OMG Aggregates} \label{s:simulation}
In Section~\ref{ss:example}, we showed that OMG aggregates can break through the fragmentation barrier if the temperature is high and/or the grains are sufficiently small.
In this section, we present simulations of the size evolution of OMG aggregates in protoplanetary disks to demonstrate that the OMG aggregates indeed form planetesimals under favorable conditions.
\subsection{Model}\label{ss:simulation methods}
\subsubsection{Disk Model}
We adopt the minimum-mass solar nebula (MMSN) model~\citep{Hayashi1981Structure-of-th} around a solar-mass star as a model of protoplanetary gas disks.
In this model, the gas surface density is given by $\Sigma_{\rm g} = 1700(r/1~\rm au)^{-3/2}~\rm g~ cm^{-2}$, where $r$ is the distance from the central star.
Assuming hydrostatic equilibrium and a vertically uniform temperature, the gas density is given by $\rho_{\rm g} = \Sigma_{\rm g }/(\sqrt{2\pi}h_{\rm g})\exp (-z^{2}/{(2h_{\rm g}^2)})$, where $h_{\rm g} = c_{s}/\Omega$ is the gas scale height, with $c_{s}$ and $\Omega$ being the sound speed and Keplerian frequency, respectively.
The gas temperature is simply set to be $T = 280(r/1~\rm au)^{-1/2}~K$, which corresponds to the temperature profile in an optically thin disk around a solar-luminosity star \citep{Hayashi1981Structure-of-th}.
In reality, dusty protoplanetary disks are optically thick, and the gas temperature at the midplane may be higher or lower than assumed here depending on the distribution of turbulence inside the disks \citep{Hirose2011Heating-and-Coo,Mori2019Temperature-Str}.
Thus, the temperature model adopted here should be taken as one reference model.
The initial dust-to-gas mass ratio is taken to be 0.01.
In the simulations, the radial computational domain is taken to be $0.5 {~\rm au} \leq r \leq 3 {~\rm au}$.
The outer edge of the computational domain approximately coincides with the water snow line in our disk model.
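The mapping between temperature and orbital radius implied by this profile can be sketched as follows; the snow-line temperature of $\approx 170~\rm K$ assumed below is the value commonly adopted for the MMSN model:

```python
def radius_at_temperature(T, T0=280.0):
    """Orbital radius in au at which T = T0 (r / 1 au)^(-1/2) equals T."""
    return (T0 / T) ** 2

print(radius_at_temperature(170.0))  # ~2.7 au: near the outer domain edge
print(radius_at_temperature(400.0))  # 0.49 au: near the inner domain edge
```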
\subsubsection{Dust Model}
We follow the global evolution of OMG aggregates in the modeled disk using the bin scheme described in \citet{Okuzumi2012Rapid-Coagulati}. This scheme, originally developed by \citet{Brauer2008Coagulation-fra}, evolves the full size distribution of OMG aggregates at different radial locations by taking into account the coagulation/fragmentation and radial transport of the aggregates. We refer to Sections 2.2 and 2.4 of \citet{Okuzumi2012Rapid-Coagulati} for details on the algorithm and numerical implementation.
The dust in the disk is assumed to be initially in the form of detached OMGs of equal radius $R = 0.1~\micron$ and equal organic-mantle thickness $\Delta R_{\rm or}$.
The mantle thickness is taken to be either $\Delta R_{\rm or}/R = 0.02$ or $0.03$ to demonstrate that breaking through the fragmentation barrier requires a sufficiently thick organic mantle (see Section~\ref{ss:example}).
The OMG grains are assumed to grow into aggregates of constant internal density $\rho_{\rm int} = 0.1\rho_{\rm m}$, where $\rho_{\rm m}$ is the density of individual OMGs. For $\Delta R_{\rm or}/R = 0.03$, we have $\rho_{\rm m} = 2.5 ~\rm g~cm^{-3}$, which gives $\rho_{\rm int} = 0.25 ~\rm g~cm^{-3}$. This assumed internal density is comparable to those of CP IDPs \citep[e.g.,][]{Rietmeijer1993Size-distributi}.
\begin{figure*}[t]
\centering
\resizebox{8cm}{!}{\includegraphics{grow3st1BPCAlow.pdf}}
\hspace{5mm}
\resizebox{8cm}{!}{\includegraphics{grow2st1BPCAlow.pdf}}
\caption{Evolution of the size distribution $\Delta \Sigma_{\rm d} /\Delta \log m$ of OMG aggregates for $\Delta R_{\rm or}/R = 0.03$ and 0.02 (left and right panels, respectively). The dotted line shows the aggregate size at which the radial drift velocity takes its maximum. Near this size, the collision velocity of the aggregates is also maximized. }
\label{f:grow32}
\end{figure*}
Because of the sub-Keplerian motion of the gas disk, macroscopic aggregates undergo radial inward drift \citep{Adachi1976The-gas-drag-ef,Weidenschilling1977Aerodynamics-of,Whipple1972On-certain-aero},
which is included in our model.
The magnitude of the radial drift velocity, which depends on aggregate size, is evaluated using Equations (4)--(7) of \citet{Okuzumi2012Rapid-Coagulati}.
In our disk model, the drift speed is maximized when the aggregate mass is $\approx 4.7 \times 10^5~\rm g$ at $r = 0.5~\rm au$ and $\approx 8.9 \times 10^6~\rm g$ at $r = 3~\rm au$.
The outcome of aggregate collisions is determined using a simple model adopted by \citet{Okuzumi2012Planetesimal-Fo}.
This model assumes that when two aggregates of masses $M_{l}$ and $M_{s} (<M_l)$ collide at velocity $\Delta v$, a new aggregate of mass $M = M_{l} + s(\Delta v, v_{\rm frag})M_{s}$ forms, where
$ s(\Delta v,v_{\rm frag})$ is the dimensionless sticking efficiency depending on the ratio between $\Delta v$ and the fragmentation threshold $v_{\rm frag}$ (see below).
The rest of the mass of the two collided aggregates, $M_l+M_s-M = (1-s)M_s$, goes to small fragments.
For simplicity, we neglect the size distribution of the fragments by assuming that all fragments are as small as the initial OMGs (i.e., $R=0.1~\micron$).
This assumption overestimates the number of tiny aggregates but has little effect on the growth of the largest aggregates, because $\Delta v$ is generally controlled by the larger of the two colliding aggregates.
The fragmentation threshold for OMG aggregates is determined as a function of $\Delta R_{\rm or}/R$ and $T$ using Equations~\eqref{vfrag} and \eqref{modelJKR}.
The sticking efficiency $ s(\Delta v, v_{\rm frag})$ is evaluated as
\begin{eqnarray}\label{e:s}
s(\Delta v, v_{\rm frag}) = {\rm min} \left\{ 1, -\frac{ \ln (\Delta v/v_{\rm frag})}{\ln 5} \right\}.
\end{eqnarray}
This prescription, originally proposed by \citet{Okuzumi2012Planetesimal-Fo}, is a fit to the results of the aggregate collision simulations of \citet{Wada2009Collisional-Gro}.
If the two aggregates collide at $\Delta v \ll v_{\rm frag}$, they stick perfectly~($s(\Delta v, v_{\rm frag})= 1$).
If they collide at $\Delta v \approx v_{\rm frag}$, part of the smaller aggregate's mass is released as fragments ($0< s(\Delta v, v_{\rm frag})< 1$).
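A direct transcription of Equation~\eqref{e:s} reads:

```python
import math

def sticking_efficiency(dv, v_frag):
    """Sticking efficiency s = min{1, -ln(dv / v_frag) / ln 5}.
    s = 1 for dv <= v_frag / 5, s vanishes at dv = v_frag, and
    s < 0 (net erosion of the larger aggregate) for dv > v_frag."""
    return min(1.0, -math.log(dv / v_frag) / math.log(5.0))

print(sticking_efficiency(5.0, 50.0))    # 1.0: perfect sticking
print(sticking_efficiency(100.0, 50.0))  # negative: net mass loss
```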
The collision velocity $\Delta v$
includes the contributions from Brownian motion~\citep{Nakagawa1986Settling-and-gr}, radial and azimuthal drift, vertical settling~\citep{Adachi1976The-gas-drag-ef, Weidenschilling1977Aerodynamics-of} and gas turbulence
~\citep{Ormel2007Closed-form-exp}, which we compute using Equations (16)--(20) of \citet{Okuzumi2012Rapid-Coagulati}.
The turbulence-driven collision velocity depends on a dimensionless parameter $\alpha$ that quantifies the strength of turbulence, and we take $\alpha = 10^{-3}$ in this study.
The aggregate size at which the collision velocity is maximized is approximately equal to the size at which the radial drift velocity is maximized. In particular, for collisions of different-sized aggregates,
the maximum collision velocity is approximately equal to the maximum speed of radial drift, which is $\approx 54~\rm m~s^{-1}$ in our disk model.
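This maximum is close to the standard estimate $v_{\rm drift,max} \approx \eta v_{\rm K}$ attained at a Stokes number of unity; in the sketch below, the mean molecular weight $\mu = 2.34$ and the MMSN midplane pressure exponent $d\ln P/d\ln r = -13/4$ are our assumptions.

```python
import math

# Order-of-magnitude check that the maximum radial drift speed,
# ~ eta * v_K, is ~54 m/s in the MMSN around a solar-mass star.
k_B = 1.381e-23   # J/K
m_H = 1.673e-27   # kg
G_N = 6.674e-11   # m^3 kg^-1 s^-2
M_sun = 1.989e30  # kg
au = 1.496e11     # m

r = 1.0 * au
T = 280.0 * (r / au) ** -0.5           # K, adopted temperature profile
mu = 2.34                              # assumed mean molecular weight
c_s = math.sqrt(k_B * T / (mu * m_H))  # isothermal sound speed
v_K = math.sqrt(G_N * M_sun / r)       # Keplerian speed
eta = (13.0 / 8.0) * (c_s / v_K) ** 2  # = -(1/2)(c_s/v_K)^2 dlnP/dlnr
v_drift_max = eta * v_K                # attained at Stokes number ~ 1
print(v_drift_max)  # ~54 m/s; r-independent in the MMSN since c_s^2/v_K ∝ r^0
```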
\subsection{Results}\label{ss:simulation results}
Figure \ref{f:grow32} presents our simulation results for $\Delta R_{\rm or}/R = 0.03$ and 0.02, respectively.
The figure shows the radial distribution of aggregate surface density per decade in aggregate mass, $\Delta \Sigma_{\rm d}/ \Delta\log{M}$, at different times $t$ after the beginning of the simulations.
The dotted line marks the aggregate mass at which the radial drift is fastest. This line can be regarded as tracing the peaks of the drift and fragmentation barriers because the collision velocity takes its maximum near this mass.
The results shown in Figure~\ref{f:grow32} demonstrate that OMG aggregates in the inner warm region of protoplanetary disks can form km-sized planetesimals if the organic mantle is sufficiently thick.
For $\Delta R_{\rm or}/R = 0.03$, the left panel of Figure \ref{f:grow32} shows that growth beyond the fragmentation barrier occurs at $r \la 1.6 ~\rm au$.
In this region, the gas temperature exceeds $220~\rm K$, so that the fragmentation threshold exceeds the maximum collision velocity $\sim 50~\rm m~s^{-1}$ as already seen in Figure~\ref{f:vfrag}.
Within $10^3$ years, the mass of the largest aggregates reaches $10^{15}~\rm g$, which amounts to the mass of km-sized solid bodies.
At $r \ga 1.6~\rm au$, the growth of OMG aggregates stalls at the size at which the collision velocity reaches the fragmentation threshold in this region, $\approx 20~\rm m~s^{-1}$.
In this particular simulation, the maximum aggregate mass in this region is $\sim 10^5$--$10^6$ g, corresponding to aggregate radii of $\sim 1~\rm m$.
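These mass--size correspondences follow from the assumed internal density $\rho_{\rm int} = 0.25~\rm g~cm^{-3}$:

```python
import math

def radius_from_mass(m_grams, rho_int=0.25):
    """Radius in cm of a sphere of mass m_grams (g) at density rho_int (g/cm^3)."""
    return (3.0 * m_grams / (4.0 * math.pi * rho_int)) ** (1.0 / 3.0)

print(radius_from_mass(1.0e15))  # ~1e5 cm, i.e. a km-sized body
print(radius_from_mass(1.0e6))   # ~1e2 cm, i.e. an aggregate of ~1 m radius
```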
For $\Delta R_{\rm or}/R = 0.02$, the right panel of Figure \ref{f:grow32} shows that fragmentation limits the growth of OMG aggregates even in the inner warm region of the disk, again confirming our prediction.
It is interesting to note that the OMG aggregates that have overcome the fragmentation barrier also break through the radial drift barrier, i.e., they grow faster than they fall toward the central star, at $r \la 1.0 ~\rm au$. As we explain below, this is a purely aerodynamic effect, not directly related to the stickiness of OMGs. In this inner disk region, the gas drag acting on rapidly drifting aggregates follows the Stokes drag law, i.e., the aggregates are larger than the mean free path of gas molecules. In this case, the aggregate size at which the fastest radial drift occurs decreases with decreasing $r$ \citep[e.g.,][see also our Figure~\ref{f:grow32}]{Birnstiel2010Gas--and-dust-e,Okuzumi2012Rapid-Coagulati,Drc-azkowska2014Rapid-planetesi}. For this reason, aggregates at $r \la 1.0 ~\rm au$ are able to overcome the drift barrier unless they experience mass loss due to fragmentation (see, e.g., Figure 11 of \citealt{Birnstiel2010Gas--and-dust-e}). This aerodynamic effect depends on the assumed porosity of the aggregates \citep{Okuzumi2012Rapid-Coagulati}; for $\rho_{\rm int} = 0.01\rho_{\rm m}$, we find that the OMG aggregates overcome the radial drift barrier already at $r \approx 1.6~\rm au$.
\section{Discussion}\label{s:disussion}
\subsection{Validity and Limitations of the Adhesion Model}\label{ss:validity}
As illustrated in Section~\ref{ss:example}, one of the most important quantities that determine the stickiness of OMGs is Young's modulus of their organic mantles, in particular its dependence on temperature.
The transition to the rubbery ($Y_{\rm or} \sim 10^6~\rm Pa$) state in warm environments is essential for aggregates of the OMGs to grow even at the maximum collision velocity in protoplanetary disks (Figures~\ref{f:vfrag} and \ref{f:grow32}).
The question is then whether such temperature dependence is peculiar to the organic matter of \citet{Kudo2002The-role-of-sti} or can commonly be observed for organics formed by UV irradiation.
\citet{Piani2017Evolution-of-Mo} recently measured the viscoelasticity of their own molecular cloud organic matter analogs at room temperature, showing that Young's modulus measured at a frequency of 1 Hz is $\sim 10^6~\rm Pa$,
similar to the value for the organic matter of \citet{Kudo2002The-role-of-sti} at $T \approx 260~\rm K$.
Therefore, even though the mechanical properties of real organic matter in protoplanetary disks are highly uncertain, we expect that the glassy-to-rubbery transition leading to the breakthrough of the fragmentation barrier occurs at $T\approx 260$--$300~\rm K$.
One potentially important uncertainty in the elasticity of the organic mantles is its dependence on dynamical timescale.
Our model for the shear modulus of organics (Figure~\ref{f:Gor}) is based on viscoelastic measurements at frequencies of 1.08 and 232 ${\rm rad~s^{-1}}$ (0.17 and 37 Hz, respectively), and thus best represents the elastic properties on dynamical timescales of $10^{-2}$--$10~\rm s$.
These timescales are, however, far longer than the typical timescale of the deformation of aggregates that collide in protoplanetary disks.
For a collision velocity of $\approx 50 ~\rm m~s^{-1}$ and an organic-mantle thickness of $\Delta R_{\rm or}\approx 0.003 ~\micron$, the timescale on which the organic mantles inside the colliding aggregates deform is estimated to be as short as $\sim 10^{-10} ~\rm s$.
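The quoted estimate follows from the crossing time of the mantle, $t \sim \Delta R_{\rm or}/\Delta v$:

```python
# Crude deformation-timescale estimate t ~ ΔR_or / Δv for the organic
# mantle of a grain hit at the maximum collision velocity.
dR_or = 0.003e-6   # m, organic-mantle thickness (0.003 micron)
dv = 50.0          # m/s, collision velocity
t_deform = dR_or / dv
print(t_deform)    # ~6e-11 s, i.e. of order 1e-10 s
```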
The viscoelastic data of \citet[][their Figure 4]{Kudo2002The-role-of-sti} seem to imply that the shear modulus in the temperature range $200$--$240~\rm K$ increases slowly with increasing frequency.
A similar trend can also be seen in the viscoelastic data for the molecular cloud organic analogs by \citet[][their Figure 7(a)]{Piani2017Evolution-of-Mo} measured at room temperature and in the frequency range $1$--250 Hz.
Therefore, using elasticity data obtained at low frequencies might result in an overestimation of the stickiness of protoplanetary OMGs.
However, if such frequency dependence is only limited to a narrow temperature range as expected from the data of \citet{Kudo2002The-role-of-sti}, it would not affect our main conclusion that OMG aggregates can break through the fragmentation barrier in warm regions of disks.
Finally, we note that the viscosity of the organic matter, which is not included in our grain contact model, can further enhance the stickiness of OMGs as expected by \citet{Kudo2002The-role-of-sti}.
In principle, this effect can be evaluated by using a contact model that treats viscoelasticity \citep[e.g.,][]{Krijt2013Energy-dissipat} together with the viscosity data obtained by \citet{Kudo2002The-role-of-sti}. We leave this for future work.
\subsection{Rocky Planetesimal Formation in a Narrow Annulus? }\label{ss:imp}
We have shown that OMGs in warm regions of protoplanetary disks can grow into planetesimals.
However, dust growth facilitated by organics is unlikely to occur in hot regions with $T \ga$ 400 K, because organic mantles are no longer stable at such temperatures.
For example,
\citet{Kouchi2002Rapid-Growth-of} report that their molecular cloud organic matter analog evaporated at approximately 400 K.
\citet{Gail2017Spatial-distrib} also point out that pyrolysis of organic materials occurs at 300--400 K.
Thus, we can only expect planetesimal formation through the direct sticking of OMGs to occur in a certain temperature range (e.g., $T \sim 200$--400 K for the organic matter analog of \citealt{Kouchi2002Rapid-Growth-of}) where the organic mantles are stable and also soft enough to bind silicates together.
Because the disk temperature is a decreasing function of the orbital radius, the above implies that planetesimal formation aided by organics only occurs in a certain range of orbital radii, as first pointed out by \citet{Kouchi2002Rapid-Growth-of}.
Interestingly, such a narrow planetesimal-forming zone can provide favorable conditions for the formation of the terrestrial planets in the solar system.
\citet{Kokubo2006Formation-of-Te} and \citet{Hansen2009Formation-of-th} showed using $N$-body simulations of planetary accretion that protoplanets initially placed in an annulus around 1 au form a planetary system whose final configuration resembles that of the inner solar system.
However, the initial protoplanets in their simulations, which are $\sim 0.01$--$0.1M_\oplus$ in mass, are much larger than 1--100 km-sized planetesimals.
The simulations of planetesimal formation that we showed in the previous section do not include gravitational scattering and focusing between solid bodies, both of which are important for the growth and orbital evolution of planetesimal-sized bodies. Dynamical simulations bridging the gap between the planetesimals and protoplanets are needed to assess whether dust growth facilitated by organics can indeed result in the formation of planetary systems like the inner solar system.
\subsection{How Can Carbon-poor Rocky Planetesimals Form?}\label{ss:how}
Another issue in relating our planetesimal formation scenario to the formation of the solar-system terrestrial planets is the low carbon content of the Earth, and possibly of other terrestrial planets. The carbon content of the Earth's bulk mantle is estimated to be $\approx 0.01$--$0.08$~wt\%~\citep{McDonough1995The-composition,Marty2012The-origins-and,Palme20143.1---Cosmochem}.
Some Martian meteorites contain magmatic carbon of $\sim 10^{-4}$--$10^{-2}$ wt\% \citep{Grady2004Magmatic-carbon}, possibly pointing to a carbon abundance in the Martian mantle as low as that in the Earth's mantle.
The cores of terrestrial planets could have a higher carbon abundance because carbon is highly soluble in liquid iron \citep{Wood1993Carbon-in-the-c,Wood2013Carbon-in-the-C,Chi2014Partitioning-of,Tsuno2018Core-mantle-fra}, although the carbon abundance of the Earth's core is unlikely to be much in excess of 1 wt\% \citep{Nakajima2015Carbon-depleted}.
Even if all the terrestrial carbon had been delivered in the form of organic mantles, its abundance would not have been high enough for the organic-mantled dust grains to grow beyond the fragmentation barrier, which requires an organic content of $\ga 5~\rm wt\%$ (see Section~\ref{ss:example}).
Although it is possible that rocky protoplanets lose some amount of volatiles including carbon when they grow through giant impacts \citep[e.g.,][]{Genda2005Enhanced-atmosp}, it seems unlikely that the carbon content of the protoplanets that formed the Earth was many orders of magnitude higher than that of the present-day Earth.
However, it is possible that the carbon content of the planetesimals that formed the terrestrial planets decreased when they grew by accreting a large amount of carbon-poor solid particles.
A piece of evidence for this comes from ordinary and enstatite chondrites, which are meteorites whose parent bodies are thought to have formed in the inner solar system.
The carbon content of these types of chondrites is $\sim 0.1~\rm wt\%$ \citep{Jarosewich1990Chemical-analys}. This value is much lower than that of IDPs, and would be more consistent with that of the bulk Earth if we hypothesize that the Earth's core (which constitutes 1/3 of the Earth mass) contains $1~\rm wt\%$ carbon \citep{Wood2013Carbon-in-the-C}.
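The arithmetic behind this consistency argument is simple; in the sketch below, the 2/3--1/3 mantle/core mass split is approximate, and the mantle carbon range is the one quoted above.

```python
# Back-of-envelope bulk-Earth carbon budget under the hypothesis in the
# text: a mantle (~2/3 of Earth's mass) at 0.01--0.08 wt% C and a core
# (~1/3 of the mass) containing 1 wt% C. The 2/3--1/3 split is approximate.
def bulk_carbon_wt(mantle_C_wt, core_C_wt=1.0):
    """Bulk carbon content in wt% for given mantle and core contents."""
    return (2.0 / 3.0) * mantle_C_wt + (1.0 / 3.0) * core_C_wt

# ~0.34--0.39 wt%: above chondritic (~0.1 wt%), far below IDPs (~5 wt%)
print(bulk_carbon_wt(0.01), bulk_carbon_wt(0.08))
```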
Moreover, \citet{Gail2017Spatial-distrib} have recently suggested that the materials comprising these chondrites could have formed from initially carbon-rich dust. They show that the flash heating events that produced chondrules, mm-sized spherules that dominate the ordinary and enstatite chondrites, were able to destroy most carbonaceous materials contained in the precursor dust aggregates.
This does not mean that all OMG aggregates would have lost organic mantles before they formed planetesimals, because the duration of chondrule formation ($\approx 3~\rm Myr$; \citealt{Connelly2012The-Absolute-Ch}) is much longer than the timescale of dust growth into planetesimals ($\approx 10^3~\rm yr$; see the left panel of Figure~\ref{f:grow32}).
\begin{figure*}[t]
\centering
\resizebox{18cm}{!}{\includegraphics{scenario1.pdf}}
\caption{Possible pathway for rocky planet formation aided by organics. See Section \ref{ss:imp} for explanation of the three steps.}
\label{f:scenario}
\end{figure*}
Therefore, we can envisage a scenario in which the ``seeds'' of planetesimals form promptly with the aid of organics and then grow into carbon-poor planetesimals and protoplanets through carbon-poor chondrule accretion.
We postulate that the dust grains in the inner region of the solar nebula had organic mantles similar to those of CP IDPs before the grains formed large aggregates. We then envisage that the OMGs grew into carbon-poor solid bodies in the following three steps (Figure \ref{f:scenario}):
\begin{enumerate}
\item The OMGs in a disk annulus of $T\sim 200$--400 K quickly grew into planetesimals thanks to the soft, sticky organic mantles. Outside this annulus, dust grains did form macroscopic aggregates up to $\sim 1$ m in size, but collisional fragmentation inhibited their further growth into planetesimals because their organic mantles were either not soft enough (for $T \la 200~\rm K$) or absent (for $T \ga 400~\rm K$).
\item Repeated heating events converted the organic-rich aggregates into carbon-poor chondrules. A substantial fraction of the organic matter inside the aggregates was lost during these events. The chondrules that formed from the macroscopic aggregates outside the seed planetesimal annulus were then gradually implanted into the annulus through radial inward drift and perhaps turbulent diffusion.
\item The seed planetesimals accreted the implanted carbon-poor chondrules, forming rocky planetesimals that were larger but less carbon-rich than the seed planetesimals. Most of the carbon-poor planetesimals grew into the terrestrial planets and/or their embryos (protoplanets) through planetesimal--planetesimal collisions and/or further accretion of chondrules \citep{Johansen2015Growth-of-aster,Levison2015Growing-the-ter}.
The remnants might be the parent bodies of the ordinary and enstatite chondrites.
\end{enumerate}
In this scenario, the final carbon content of rocky planetesimals would depend on whether they capture chondrules alone or chondritic material, i.e., mixtures of chondrules and fine-grained matrix, because matrix grains are the dominant carbon carriers in chondrites \citep[e.g.,][]{Makjanic1993Carbon-in-the-m,Alexander2007The-origin-and-}. In the latter case, the final carbon abundance of large planetesimals could be as high as the bulk carbon content of chondrites, $\sim$ 0.1--1 wt${\%}$ \citep{Jarosewich1990Chemical-analys}.
The final carbon content may also have a spatial variation depending on how deep inside the planetesimal-forming zone chondrules can be implanted. However, the spatial variation would be small if the planetesimal belt that formed the solar-system terrestrial planets were narrow as suggested by \citet{Kokubo2006Formation-of-Te} and \citet{Hansen2009Formation-of-th}.
Our scenario requires heating events that can efficiently convert organic-rich aggregates outside the planetesimal-forming zone into chondrules. The total mass of the chondrules must have exceeded both the mass of the initially carbon-rich seed planetesimals and the mass of the carbon-rich aggregates that survived the heating events.
The latter condition is necessary because planetesimals can in principle accrete both chondrules and carbon-rich aggregates.
The question is what mechanism was responsible for such heating events. Possible mechanisms for chondrule-forming heating events proposed so far include bow shocks around eccentric planetesimals/protoplanets \citep{Morris2018Formation-of-Ch}, bow shocks associated with spiral density waves in the solar nebula \citep{Desch2002A-model-of-the-}, heating by lightning discharge \citep{Horanyi1995Chondrule-forma,Johansen2018Harvesting-the-}, radiative heating by hot planetesimals \citep{Herbst2016A-new-mechanism,Herbst2019A-Radiative-Hea}, and planetesimal collisions \citep{Johnson2015Impact-jetting-}.
Our scenario prefers mechanisms that can form millimeter-sized chondrules from
meter-sized aggregates, which dominate the total mass of dust outside the planetesimal-forming zone in our simulation.
Heating by shock waves is one mechanism that could produce millimeter-sized melt droplets from larger aggregates \citep{Susa2002On-the-Maximal-,Kadono2005Breakup-of-liqu,Kato2006Maximal-size-of}. The radiative heating model by \citet{Herbst2019A-Radiative-Hea} also invokes meter-sized aggregates as the source of chondrules, but the amount of the produced chondrules may not be high enough to meet our requirement. We cannot rule out that other mechanisms are also compatible with our scenario.
The scenario proposed above is speculative and needs to be tested in future work. First of all, there is no observational support for the assumption that thick organic mantles were present on the grains in the inner region of the solar nebula. The scenario also assumes that there was a continuous supply of chondrules, and hence their precursor dust aggregates, so that the final planetesimals were dominantly composed of the carbon-poor chondrules.
Assessment of these assumptions will require modeling of the global transport of chondrules and precursor dust aggregates combined with realistic models for organic synthesis and chondrule formation in the solar nebula.
\section{Summary} \label{s:summary}
We have explored the possibility that rocky dust grains with organic mantles grow into rocky planetesimals through mutual collisions inside the water snow line of protoplanetary disks.
We constructed a simple adhesion model for organic-mantled grains by modifying a contact model for uniform elastic spheres
(Section \ref{s:adhesion}).
In general, the stickiness of uniform elastic grains with surface adhesion forces is high when the grains are soft, simply because softer particles make a larger contact area \citep{Johnson301}.
Organic matter is sticky in this respect because it has low elasticity, in particular in warm environments \citep[][see also our Figure \ref{f:Gor}]{Kudo2002The-role-of-sti}.
In the case of OMGs, however, the hard silicate core limits the size of the contact area, and therefore the thickness of the organic mantle relative to the grain radius, $\Delta R_{\rm or}/R$, enters as an important parameter that determines the maximum stickiness of OMGs (Figure~\ref{f:2l}).
Our adhesion model shows that aggregates of $0.1 ~\micron$-sized OMGs can still overcome the fragmentation barrier in weakly turbulent protoplanetary disks if
$\Delta R_{\rm or}/R \ga 0.03$ and if the temperature is above $\approx 220~\rm K$ (Figures \ref{f:vfrag} and \ref{f:monomer1}).
Using the adhesion model, we also simulated the global collisional evolution of OMG aggregates inside the snow line of a protoplanetary disk (Section \ref{s:simulation}).
Our simulations demonstrate that OMG aggregates with $R = 0.1~\micron$ and $\Delta R_{\rm or}/R = 0.03$ can overcome the fragmentation barrier and form rocky planetesimals at $r \la 1 ~\rm au$ (the left panel of Figure \ref{f:grow32}). At these orbits, the radial drift barrier is also overcome thanks to the aerodynamic properties of the drifting aggregates in this high-density region \citep{Birnstiel2010Gas--and-dust-e,Okuzumi2012Rapid-Coagulati,Drc-azkowska2014Rapid-planetesi}. Because organic matter is likely unstable at $T \ga 400~\rm K$ \citep{Kouchi2002Rapid-Growth-of}, one can only expect planetesimal formation by the direct coagulation of OMGs in a temperature range $200~{\rm K} \la T \la 400~\rm K$. Such a narrow planetesimal-forming zone can provide favorable conditions for the formation of the terrestrial planets in the solar system (Section~\ref{ss:imp}).
It is not obvious whether our findings can explain the formation of the solar-system terrestrial planets, because their carbon content is low (Section \ref{ss:how}).
The organic-rich rocky planetesimals produced in our simulations have a high carbon content of $\sim$ 5 wt\%, which is too high to be consistent with the carbon content of the present-day Earth, even if the Earth's core contains 1 wt\% carbon.
Thus, we proposed a scenario in which sticky OMGs form the seeds of planetesimals and then grow into larger, less carbon-rich planetesimals through the accretion of carbon-poor (ordinary and enstatite) chondrules (Figure \ref{f:scenario}).
These carbon-poor chondrules could also form from OMG aggregates because most carbonaceous materials contained in the precursor aggregates can be destroyed during flash heating events \citep{Gail2017Spatial-distrib}.
The proposed scenario for terrestrial planet formation is still highly speculative and further assessments are needed in future work.
\acknowledgments
We thank Hidekazu Tanaka, Akira Kouchi, Hiroko Nagahara, Joanna Dr\c{a}\.{z}kowska, Shoji Mori, Kazumasa Ohno, Haruka Sakuraba, and Sota Arakawa for useful comments and discussions. This work was supported by JSPS KAKENHI Grant Numbers JP16H04081, JP16K17661, JP18H05438, and JP19K03926.
\section{Introduction}
W{ith} the development of remote sensing and hyperspectral sensors,
hyperspectral imaging has been widely adopted in many fields. However, due
to the low spatial resolution of hyperspectral sensors and the complexity of
material mixing process, an observed pixel may contain several different
materials. Hyperspectral unmixing analyzes the mixed pixel at a subpixel
level, by decomposing the mixed pixel into a set of endmembers and their
corresponding fractional
abundances~\cite{keshava2002spectral,bioucas2012hyperspectral}.
In the past few decades, many hyperspectral unmixing methods have been
proposed, including step-by-step unmixing methods (first extracting
endmembers and then estimating abundances) and simultaneous decomposing
methods (simultaneously determining endmembers and abundances). Nonnegative
matrix factorization (NMF) is a popular simultaneous decomposing method.
It represents a hyperspectral image by a product of two nonnegative matrices
under the linear mixing assumption, namely, one represents the endmember
matrix, and the other represents the abundance matrix~\cite{li2016robust}.
However, NMF is a nonconvex problem that does not admit a unique solution.
To overcome this drawback, regularizers have been added to the
objective function to mathematically constrain the space of solutions as
well as physically exploit the spatial and spectral properties of
hyperspectral images. So far, sparsity and spatial smoothness are the most
useful and common regularizers~\cite{lu2013double,salehani2017smooth}. As
mixed pixels often contain a subset of endmembers, sparsity-promoting
regularizers have been widely applied in NMF based unmixing
methods~\cite{wang2016hypergraph,lu2019subspace}. The
$\ell_1$-norm~\cite{he2017total}, and more generally, $\ell_p$-norm
($0<p<1$)~\cite{sigurdsson2016blind,salehani2017smooth} regularizers are
introduced to produce sparse unmixing results. Thus, many NMF works use the
$\ell_p$-norm to enhance the sparsity of results. Spatial smoothness is an
inherent characteristic of natural hyperspectral images. Many works
integrate the total-variation (TV) regularization to enhance the
similarities of neighbor
pixels~\cite{feng2018hyperspectral,sigurdsson2016blind}. Reweighted TV
regularization has also been proposed to adaptively capture the smooth structure
of abundance maps~\cite{he2017total,feng2019hyperspectral}. Graph based
regularization is also a sophisticated choice to cope with spatial
information, especially images with complex structures. Many works, such as
MLNMF~\cite{shu2015multilayer}, AGMLNMF~\cite{tong2020adaptive} and
SGSNMF~\cite{wang2017spatial}, use graph regularization to capture the
comprehensive spatial information. SSRNMF~\cite{huang2019spectral} uses the
$\ell_{2,1}$-norm and the $\ell_{1,2}$-norm to jointly capture spatial and
spectral priors. \textcolor{black}{However, all these methods use handcrafted
regularizers to capture the inherent information of hyperspectral data and
enhance the unmixing accuracy of NMF.} \textcolor{black}{Generally, designing a proper
regularization is a non-trivial task}, and complex regularizers may increase
the difficulty of solving the optimization problem. Several plug-and-play
methods have been proposed to solve various hyperspectral image inverse
problems~\cite{sreehari2016plug,teodoro2018convergent,wang2020learning}.
However\textcolor{black}{,} this strategy has not been considered in the hyperspectral
unmixing with NMF.
In this letter, we propose \textcolor{black}{a novel} NMF based unmixing framework which
jointly considers both handcrafted and learnt regularizers to fully
investigate the prior information embedded in hyperspectral images. To be
specific, \textcolor{black}{we plug the learnt spectral and spatial information by using
a denoising operator. Using such an operator avoids manually designing
regularizers and solving complex optimization problems. A variety of
denoisers can be plugged into the framework, which makes the method
flexible and extendable.} \textcolor{black}{Further, a handcrafted prior is also kept to encode physically explicit
priors.} \textcolor{black}{We add an $\ell_{2,1}$-norm to the objective
function to enhance the sparsity of abundances as an example, as the number of endmembers
is usually larger than the number of materials existing in a mixed pixel.}
\section{Problem Formulation}
We consider the linear mixture model (LMM), which assumes that an observed
pixel is a linear combination of a set of endmembers and their associated
fractional abundances. Let $\mathbf{R}\in\mathbb{R}^{L\times N}$ be an
observed hyperspectral image, where $L$ is the number of spectral bands, $N$
is the number of pixels. The LMM for a hyperspectral image can be expressed
as:
\begin{equation}\label{eq.LMM}
\mathbf{R} = \mathbf{EA}+\mathbf{N},
\end{equation}
where $\mathbf{E}$ is an $L\times P$ matrix, which denotes the endmember
spectra library with $P$ the number of endmembers, and $\mathbf{A}$ is a
$P\times N$ matrix representing the abundance matrix, and $\mathbf{N}$
denotes an i.i.d. zero-mean Gaussian noise matrix. For the physical
characteristics of hyperspectral data, the endmember matrix $\mathbf{E}$ is
required to satisfy the endmember nonnegative constraint (ENC), and the
abundance matrix $\mathbf{A}$ is required to satisfy the abundance
nonnegative constraint (ANC) and sum-to-one constraint (ASC), i.e.,
$\mathbf{E} \geq \mathbf{0}$ and $ \mathbf{A} \geq
\mathbf{0},~\mathbf{1}^{\top}\mathbf{A}=\mathbf{1}$.
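As a quick illustration of ours (the sizes and noise level below are arbitrary choices, not values from this letter), the LMM in~\eqref{eq.LMM} together with the ENC, ANC and ASC can be simulated in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
L, P, N = 224, 4, 1000             # bands, endmembers, pixels (illustrative sizes)

E = rng.random((L, P))             # nonnegative endmember spectra (ENC)
A = rng.random((P, N))
A /= A.sum(axis=0, keepdims=True)  # ANC + ASC: nonnegative columns summing to one

sigma = 0.01                       # i.i.d. zero-mean Gaussian noise level (arbitrary)
R = E @ A + sigma * rng.standard_normal((L, N))
```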
\section{Proposed Method}
In our work, we propose an NMF based unmixing framework which uses a
handcrafted regularizer and learnt priors to enhance the unmixing
performance. More specifically, we use an $\ell_{2,1}$-norm to enhance the
sparsity of abundances, and we plug the learnt spectral and spatial
information by using a denoising operator. We shall elaborate our framework
as follows.
\subsection{Objective Function}
We use the NMF model to solve the blind unmixing problem. The general
objective function is expressed as:
\begin{equation}\label{eq.loss0}
\mathcal{L}(\mathbf{E}, \mathbf{A})= \mathcal{L}_{\rm data}(\mathbf{E}, \mathbf{A})
+ \alpha \mathcal{L}_{\rm hand}(\mathbf{E},\mathbf{A}) + \mu \mathcal{L}_{\rm learnt}(\mathbf{E}, \mathbf{A}).
\end{equation}
The terms on the right-hand-side of~\eqref{eq.loss0} are interpreted as follows:
\begin{itemize}
\item $\mathcal{L}_{\rm data}(\mathbf{E}, \mathbf{A})$ is the term
associated with the data fitting quality. In this work, we set
$\mathcal{L}_{\rm data}(\mathbf{E}, \mathbf{A})=
\frac{1}{2}\|\mathbf{R}-\mathbf{E} \mathbf{A}\|_{\text{F}}^{2}$, with
$\|\cdot\|_{\text{F}}$ being the Frobenius norm.
\item $\mathcal{L}_{\rm learnt}(\mathbf{E}, \mathbf{A})$ is a
regularization term that \textcolor{black}{represents} priors learnt from data,
which does not have an explicit form. Here we only concentrate on
priors of $\mathbf{A}$ by setting $\mathcal{L}_{\rm
learnt}(\mathbf{E}, \mathbf{A}) = \Phi(\mathbf{A})$ for illustration
purpose.
\item $\mathcal{L}_{\rm hand}(\mathbf{E}, \mathbf{A})$ is a handcrafted
regularization term. Though the learnt regularizer $\mathcal{L}_{\rm
learnt}(\mathbf{E}, \mathbf{A})$ is powerful to represent
spatial-spectral information of the image, $\mathcal{L}_{\rm
hand}(\mathbf{E}, \mathbf{A})$ is still necessary to encode physically
meaningful properties \textcolor{black}{and the prior knowledge of endmembers} that
might not be easy to learn. Here, we use $\mathcal{L}_{\rm
hand}(\mathbf{E},\mathbf{A})=\|\mathbf{A}\|_{2,1}$, which is a
structured sparsity regularizer showing effectiveness in sparse
unmixing. \textcolor{black}{Other meaningful handcrafted regularization can also
be adopted.}
\item $\alpha$ is a positive parameter that controls the impact of the
sparse regularizer. The positive regularization parameter $\mu$
controls the strength of plugged priors.
\end{itemize}
Then, the unmixing problem writes:
\begin{equation}\label{eq.loss_2}
\begin{split}
\widehat{\mathbf{E}}, \widehat{\mathbf{A}}=&\arg\min _{\mathbf{E}, \mathbf{A}}
\frac{1}{2}\|\mathbf{R}-\mathbf{E} \mathbf{A}\|_{\text{F}}^{2}+\alpha\|\mathbf{A}\|_{2,1}+\mu\Phi(\mathbf{{A}}),\\
&\text{s.t.}~~\mathbf{E} \geq \mathbf{0},~\mathbf{A} \geq \mathbf{0},~\mathbf{1}^{\top}\mathbf{A}=\mathbf{1}, \\
\end{split}
\end{equation}
where $\|\mathbf{A}\|_{2,1}=\sum_{i=1}^{P}\|\mathbf{A}_{i}\|_{2}$, and
$\mathbf{A}_i$ is the $i$-th row of $\mathbf{A}$. As in some previous
works~\cite{he2017total,feng2018hyperspectral}, we introduce an auxiliary
variable $\mathbf{\tilde{A}}$ with the constraint
$\mathbf{\tilde{A}}=\mathbf{A}$. The objective function~\eqref{eq.loss_2} is
rewritten as:
\begin{equation}\label{eq.loss_3}
\begin{split}
\mathcal{L}(\mathbf{E}, \mathbf{A},\mathbf{\tilde{A}})=
&\min _{\mathbf{E}, \mathbf{A}, \mathbf{\tilde{A}}} \frac{1}{2}\|\mathbf{R}-\mathbf{E} \mathbf{A}\|_{\text{F}}^{2}
+\alpha\|\mathbf{A}\|_{2,1}+\mu\Phi(\mathbf{\tilde{A}}),\\
\text{s.t.}~~\mathbf{E} & \geq \mathbf{0},~\mathbf{A} \geq \mathbf{0},~\mathbf{1}^{\top}\mathbf{A}=\mathbf{1},~\mathbf{\tilde{A}}=\mathbf{A}. \\
\end{split}
\end{equation}
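For concreteness, the explicit part of the objective — the data-fitting term plus the $\ell_{2,1}$ sparsity term (the learnt term $\Phi$ has no closed form) — can be evaluated as follows; the function and variable names are our own:

```python
import numpy as np

def l21_norm(A):
    # sum of the Euclidean norms of the rows of A (structured sparsity)
    return np.linalg.norm(A, axis=1).sum()

def explicit_objective(R, E, A, alpha):
    data_fit = 0.5 * np.linalg.norm(R - E @ A, "fro") ** 2
    return data_fit + alpha * l21_norm(A)
```

Minimizing the $\ell_{2,1}$ term drives entire rows of $\mathbf{A}$ toward zero, i.e., it suppresses endmembers that are inactive across the whole image.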
The associated augmented Lagrangian function is given by
\begin{equation}\label{eq.loss_4}
\begin{split}
\mathcal{L}(\mathbf{E}, \mathbf{A}, \mathbf{\tilde{A}})=&\min _{\mathbf{E}, \mathbf{A}, \mathbf{\tilde{A}}}
\frac{1}{2}\|\mathbf{R}-\mathbf{E} \mathbf{A}\|_{\text{F}}^{2}+\frac{\lambda}{2}\|\mathbf{A}-\mathbf{\tilde{A}}\|_{\text{F}}^{2}\\
&+\alpha\|\mathbf{A}\|_{2,1}+\mu\Phi(\mathbf{\tilde{A}})\\
\text{s.t.}~~\mathbf{E} &\geq \mathbf{0},~\mathbf{A} \geq \mathbf{0},~\mathbf{1}^{\top}\mathbf{A}=\mathbf{1}, \\
\end{split}
\end{equation}
where $\lambda$ is the penalty parameter. The blind unmixing problem can be
solved by iteratively addressing the following three subproblems:
\begin{align}
\mathbf{E}=&\arg \min _{\mathbf{E}} \mathcal{L}(\mathbf{E}, \mathbf{A}, \mathbf{\tilde{A}})\\
\label{eq.loss_6}\mathbf{A}=&\arg \min _{\mathbf{A}} \mathcal{L}(\mathbf{E}, \mathbf{A}, \mathbf{\tilde{A}})\\
\mathbf{\tilde{A}}=&\arg \min _{\mathbf{\tilde{A}}} \mathcal{L}(\mathbf{E}, \mathbf{A}, \mathbf{\tilde{A}}).
\end{align}
The first two subproblems are forward models used to update the endmember
and abundance matrices. We use the NMF method to solve the first two
subproblems. As to be seen later, the third subproblem can be considered as
an image denoising problem which can be solved by a variety of denoisers.
This framework is flexible and can automatically plug spatial and spectral
priors with the choice of different denoisers.
\subsection{Optimization}
\subsubsection{Endmember estimation}
In order to estimate the endmember matrix, we solve the following
minimization problem:
\begin{equation}\label{eq.end}
\mathcal{L}(\mathbf{E})=\min_{\mathbf{E}} \frac{1}{2}\|\mathbf{R}-\mathbf{E} \mathbf{A}\|_{\text{F}}^{2}+\text{Tr}(\mathbf{\Lambda} \mathbf{E}),
\end{equation}
where $\mathbf{\Lambda}$ is the Lagrange multiplier to control the impact of
ENC. We calculate the gradient with respect to $\mathbf{E}$ and set it to
$\mathbf{0}$:
\begin{equation}\label{eq.end_1}
\frac{\partial \mathcal{L}(\mathbf{E})}{\partial\mathbf{E}}=\mathbf{EA}\mathbf{A}^{\top}-\mathbf{R}\mathbf{A}^{\top}+\mathbf{\Lambda}=\mathbf{0}.
\end{equation}
By element-wise multiplying both sides of~\eqref{eq.end_1} by
$\mathbf{E}$ and applying the Karush-Kuhn-Tucker (KKT)
condition $\mathbf{E}\odot\mathbf{\Lambda}=\mathbf{0}$, the endmember
matrix $\mathbf{E}$ is updated as:
\begin{equation}\label{eq.end_2}
\mathbf{E}\leftarrow \mathbf{E} \odot(\mathbf{RA}^{\top})\oslash(\mathbf{EAA}^{\top}),
\end{equation}
where $\odot$ is element-wise multiplication, and $\oslash$ is element-wise division.
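A minimal NumPy sketch of the multiplicative update~\eqref{eq.end_2} (ours; a small constant guards the division, and nonnegativity of $\mathbf{E}$ is preserved because the rule only rescales entries by nonnegative ratios):

```python
import numpy as np

def update_endmembers(R, E, A, eps=1e-12):
    """One multiplicative update of the endmember matrix E."""
    return E * (R @ A.T) / (E @ A @ A.T + eps)
```

For fixed $\mathbf{A}$ this is the classical Lee--Seung rule, which does not increase the data-fitting term.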
\subsubsection{Abundance estimation}
The second subproblem is to estimate the abundances. The ASC is difficult to
handle directly, so we use two augmented matrices $\mathbf{R}_{f}, \mathbf{E}_{f}$ to
address this issue:
\begin{equation}\label{eq.abu}
\mathbf{R}_{f}= \left[
\begin{array}{c}
\mathbf{R} \\
\delta\mathbf{1}_{N}^{\top} \\
\end{array}
\right],
~~ \mathbf{E}_{f}= \left[
\begin{array}{c}
\mathbf{E} \\
\delta\mathbf{1}_{P}^{\top} \\
\end{array}
\right],
\end{equation}
where $\delta$ is the penalty parameter controlling the strength of ASC. The
objective problem of abundance estimation is as follows:
\begin{equation}\label{eq.abu_1}
\begin{split}
\mathcal{L}(\mathbf{A})=& \min_{\mathbf{A}} \frac{1}{2}\|\mathbf{R}_{f}-\mathbf{E}_{f}
\mathbf{A}\|_{\text{F}}^{2}+\frac{\lambda}{2}\|\mathbf{A}-\mathbf{\tilde{A}}\|_{\text{F}}^{2}+\\
&\text{Tr}(\mathbf{\Gamma} \mathbf{A})+\alpha\|\mathbf{A}\|_{2,1},
\end{split}
\end{equation}
where $\mathbf{\Gamma}$ is the Lagrange multiplier to control the impact of ANC.
We calculate the derivative of~\eqref{eq.abu_1} with respect to $\mathbf{A}$ and set it to $\mathbf{0}$:
\begin{equation}\label{eq.abu_2}
\frac{\partial \mathcal{L}(\mathbf{A})}{\partial\mathbf{A}}=\mathbf{E}_{f}^{\top}\mathbf{E}_{f}\mathbf{A}-\mathbf{E}_{f}^{\top}\mathbf{R}_{f}
+\lambda(\mathbf{A}-\mathbf{\tilde{A}})+\mathbf{\Gamma}+\alpha \mathbf{DA}=\mathbf{0},
\end{equation}
where
\begin{equation}
\mathbf{D}=\operatorname{diag}\left\{\frac{1}{\left\|\mathbf{A}_{1}\right\|_{2}},
\frac{1}{\left\|\mathbf{A}_{2}\right\|_{2}}, \cdots, \frac{1}{\left\|\mathbf{A}_{P}\right\|_{2}}\right\}.
\end{equation}
According to the KKT condition, we have
$\mathbf{A}\odot\mathbf{\Gamma}=\mathbf{0}$. By element-wise multiplying
both sides of~\eqref{eq.abu_2} by $\mathbf{A}$, we update the abundance
matrix as follows:
\begin{equation}\label{eq.abu_3}
\mathbf{A}\leftarrow \mathbf{A}\odot (\mathbf{E}_{f}^{\top}\mathbf{R}_{f}
+\lambda\mathbf{\tilde{A}})\oslash[\mathbf{E}_{f}^{\top}\mathbf{E}_{f}\mathbf{A}+\lambda \mathbf{A}+\alpha \mathbf{DA}].
\end{equation}
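Correspondingly, the abundance step — the augmentation of~\eqref{eq.abu} followed by the multiplicative update — can be sketched as below; all names and default parameter values are our own placeholders:

```python
import numpy as np

def update_abundances(R, E, A, A_tilde, delta=10.0, lam=1.0, alpha=0.1, eps=1e-12):
    """One multiplicative update of A with the sum-to-one augmentation."""
    L, N = R.shape
    P = E.shape[1]
    # augment data and endmembers so the least-squares fit softly enforces the ASC
    Rf = np.vstack([R, delta * np.ones((1, N))])
    Ef = np.vstack([E, delta * np.ones((1, P))])
    # diagonal of D: inverse row norms of A, from the l2,1 term
    d = 1.0 / (np.linalg.norm(A, axis=1) + eps)
    numer = Ef.T @ Rf + lam * A_tilde
    denom = Ef.T @ Ef @ A + lam * A + alpha * d[:, None] * A + eps
    return A * numer / denom
```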
\subsubsection{Plugged priors}
In the third subproblem, we focus on solving the following optimization problem:
\begin{equation}\label{eq.denoise}
\mathcal{L}(\mathbf{\tilde{A}})= \frac{\lambda}{2}\|\mathbf{A}-\mathbf{\tilde{A}}\|_{\text{F}}^{2}+\mu\Phi(\mathbf{\tilde{A}}).
\end{equation}
This step can be seen as an abundance denoising problem, where
$\mathbf{\tilde{A}}$ is the clean version of abundance maps. This problem
can be rewritten as:
\begin{equation}\label{eq.denoise_1}
\mathcal{L}(\mathbf{\tilde{A}})= \frac{1}{2(\sqrt{\mu/\lambda})^2}
\|\mathbf{A}-\mathbf{\tilde{A}}\|_{\text{F}}^{2}+\Phi(\mathbf{\tilde{A}}).
\end{equation}
From a maximum a posteriori (MAP) perspective, the problem
in~\eqref{eq.denoise_1} can be regarded as denoising abundance maps corrupted by
additive Gaussian noise with standard deviation
$\sigma_{n}=\sqrt{\mu/\lambda}$, where the priors are encoded in
$\Phi(\mathbf{\tilde{A}})$. In our proposed unmixing framework, instead of
solving this problem using optimization methods, we use denoisers to solve
this regularized optimization problem. Plugged denoisers can automatically
carry prior information. Since the denoising operator is performed in the
3D domain to fully exploit the spectral and spatial information, we
rewrite~\eqref{eq.denoise_1} as follows:
\begin{equation}\label{eq:denosier_2}
\mathbf{\tilde{A}} \leftarrow Denoiser(\mathcal{T}(\mathbf{A}), \sigma_{n}),
\end{equation}
where $\mathcal{T}(\cdot)$ is an operator that transforms the 2D matrix into a
3D data cube. Plenty of denoisers can be applied in this step, each carrying
various kinds of priors. In our work, we plug a conventional linear denoiser,
non-local means (\texttt{NLM}), and a nonlinear denoiser,
block-matching and 3D filtering (\texttt{BM3D}), into our NMF based unmixing
framework. These two denoisers are 2D based and solve the abundance
denoising problem band by band. Further, two 3D cube based denoisers,
\texttt{BM4D} and a total variation regularized low-rank tensor
decomposition denoising model (\texttt{LRTDTV}) are also plugged into our
proposed framework. We denote our proposed methods as PNMF-NLM, PNMF-BM3D,
PNMF-BM4D and PNMF-LRTDTV, respectively. Our NMF based unmixing framework is
presented in Algorithm~\ref{alg_1}.
\begin{algorithm}[!t]
\label{alg_1}
\KwIn{Hyperspectral image $\bf R$, regularization parameters $\lambda$, $\alpha$, $\mu$, $\delta$, the iteration number $K$.}
\KwOut{Endmember matrix $\bf E$, abundance matrix $\bf A$.}
Initialize ${\bf E}$ with VCA, $\bf A$, $\bf \tilde{A}$ with FCLS. \\
\While{Stopping criteria are not met and $k\le K$}{
Update $\mathbf{E}_k$ with~\eqref{eq.end_2};\\
Augment $\bf R$ and $\bf E$ to obtain $\mathbf{R}_f$ and $\mathbf{E}_f$, respectively;\\
Update $\mathbf{A}_k$ with~\eqref{eq.abu_3};\\
Update $\mathbf{\tilde{A}}_k$ using the strategy of~\eqref{eq:denosier_2};\\
$k=k+1$;
}
\caption{Proposed NMF based framework for hyperspectral unmixing.}
\end{algorithm}
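To make the plug-and-play idea concrete, here is a toy end-to-end mock-up of Algorithm~\ref{alg_1} (ours). A simple four-neighbor averaging filter stands in for the NLM/BM3D/BM4D/LRTDTV denoisers, and random initialization replaces VCA/FCLS, so this is only a structural sketch, not a reproduction of the proposed methods:

```python
import numpy as np

def smooth2d(img):
    # toy spatial denoiser (periodic boundary): stand-in for NLM/BM3D/etc.
    return (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

def pnmf_unmix(R, P, shape, iters=60, delta=10.0, lam=1.0, alpha=0.1, eps=1e-12):
    """Alternate the E update, the A update, and the plugged denoising step."""
    L, N = R.shape
    rng = np.random.default_rng(0)
    E = rng.random((L, P))
    A = rng.random((P, N)); A /= A.sum(axis=0)
    A_tilde = A.copy()
    for _ in range(iters):
        # endmember update (multiplicative rule)
        E = E * (R @ A.T) / (E @ A @ A.T + eps)
        # abundance update with sum-to-one augmentation and l2,1 reweighting
        Rf = np.vstack([R, delta * np.ones((1, N))])
        Ef = np.vstack([E, delta * np.ones((1, P))])
        d = 1.0 / (np.linalg.norm(A, axis=1) + eps)
        A = A * (Ef.T @ Rf + lam * A_tilde) / (
            Ef.T @ Ef @ A + lam * A + alpha * d[:, None] * A + eps)
        # plugged prior: denoise each abundance map in the spatial domain
        A_tilde = np.stack([smooth2d(a.reshape(shape)).ravel() for a in A])
    return E, A
```

Swapping `smooth2d` for a stronger denoiser changes the prior without touching the optimization code, which is the practical appeal of the plug-and-play strategy.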
\section{Experiments}
\begin{table*}[]
\footnotesize \centering
\caption{\small RMSE, SAD and PSNR Comparison of Synthetic Dataset.}
\vspace{-0.2cm}
\renewcommand\arraystretch{1.5}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
\hline
& \multicolumn{3}{c|}{5dB} & \multicolumn{3}{c|}{10dB} & \multicolumn{3}{c|}{20dB} & \multicolumn{3}{c}{30dB} \\ \hline
\multicolumn{1}{l|}{} & RMSE & SAD & PSNR & RMSE & SAD & PSNR & RMSE & SAD & PSNR & RMSE & SAD & PSNR \\ \hline
VCA-SUnSAL-TV & 0.1392 & 6.0675 & 17.5317 & 0.0692 & 3.0419 & 23.1989 & 0.0240 & 0.7611 & 32.4014 & 0.0094 & 0.1880 & 40.5592 \\ \hline
CoNMF & 0.1074 & 6.8326 & 9.7881 & 0.0773 & 4.4600 & 22.2385 & 0.0248 & 0.7609 & 32.1133 & 0.0094 & 0.1879 & 40.5005 \\ \hline
TV-RSNMF & 0.1390 & 6.0034 & 9.6711 & 0.0740 & 4.3929 & 22.6155 & 0.0265 & 1.0429 & 31.5214 & 0.0092 & 0.2169 & 40.7600 \\ \hline
NMF-QMV & 0.1392 & 23.4638 & 17.1248 & 0.0854 & 8.3570 & 21.3688 & 0.0247 & 0.6208 & 32.1312 & 0.0092 & 0.1902 & 40.6972 \\ \hline
PNMF-NLM & 0.0817 & 5.1458 & 22.2549 & 0.0560 & 2.9961 & 24.8780 & 0.0212 & 0.5116 & 33.0728 & 0.0092 & 0.1877 & 40.1695 \\ \hline
PNMF-BM3D & 0.0779 & 5.1164 & \textbf{23.4658} & \textbf{0.0557} & \textbf{2.0428} & 25.2735 & \textbf{0.0176} & \textbf{0.3898} & \textbf{35.2867} & 0.0092 & 0.1878 & 40.5740 \\ \hline
PNMF-BM4D & 0.0838 & 5.2499 & 21.2835 & 0.0579 & 2.0840 & \textbf{25.4650} & 0.0186 & 0.4242 & 34.4498 & 0.0092 & \textbf{0.1876} & \textbf{41.0646} \\ \hline
PNMF-LRTDTV & \textbf{0.0773} & \textbf{5.0415} & {21.9143} & 0.0597 & 2.7060 & 25.1639 & 0.0211 & 0.4873 & {33.5669} & \textbf{0.0091} & 0.1878 & 40.5740 \\ \hline\hline
\end{tabular}
\label{tab.syn_results}
\vspace{-3mm}
\end{table*}
In this section, we use both synthetic data and real data experiments to
evaluate the unmixing performance of our proposed NMF based unmixing
framework. Our methods were compared with several state-of-the-art methods.
First, we considered a sequential unmixing method, where we extracted the
endmembers with VCA~\cite{Nascimento2003Vertex} and estimated the abundances
using SUnSAL-TV~\cite{iordache2012total}. We also compared with NMF based
methods. CoNMF~\cite{li2016robust} is a robust collaborative NMF method for
hyperspectral unmixing. TV-RSNMF~\cite{he2017total} is a TV regularized
reweighted sparse NMF method. NMF-QMV~\cite{Lina2019Regularization} is a
variational minimum volume regularized NMF method. We used the spectral
angle distance (SAD) to evaluate the endmember extraction results: $
\text{SAD} = \frac{1}{P}\sum_{k=1}^{P}\cos^{-1}\left(\frac{\mathbf{e}_{k}^{\top}\hat{\mathbf{e}}_{k}}{\|\mathbf{e}_{k}\|\|\hat{\mathbf{e}}_{k}\|}\right),
$ where $\mathbf{e}_{k}$ is the ground-truth and $\hat{\mathbf{e}}_{k}$ is
the estimated endmember. We used the root mean square error (RMSE) to
evaluate the abundance estimation results: $
\text{RMSE} = \sqrt{\frac{1}{NP}\sum_{i=1}^{N}\|\mathbf{a}_{i}-\hat{\mathbf{a}}_{i}\|^{2}},
$ where $\mathbf{a}_{i}$ is the $i$-th column of $\mathbf{A}$ and represents
the ground-truth, and $\hat{\mathbf{a}}_{i}$ is the estimated abundance.
Further, we used the peak signal-to-noise ratio (PSNR) to evaluate the
denoising quality between estimated abundances and ground-truth: $
\text{PSNR} = 10\times \log_{10}\left(\frac{\text{MAX}^{2}}{\text{MSE}}\right),
$ in which $\text{MAX}$ is the maximum abundance value, and
$\text{MSE}=\frac{1}{N}\sum_{i}\sum_{j}[A(i,j)-\hat{A}(i,j)]^{2}$, where
$\hat{A}$ is the estimated abundance and $A$ is the clean ground-truth.
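These three metrics translate directly into code. A short sketch of ours, with endmembers stored as columns and abundances as $P\times N$ matrices; note that the SAD below is in radians and assumes the estimated endmembers have already been matched to the ground-truth ones:

```python
import numpy as np

def sad(E_true, E_hat):
    # mean spectral angle between corresponding endmember columns
    cos = np.sum(E_true * E_hat, axis=0) / (
        np.linalg.norm(E_true, axis=0) * np.linalg.norm(E_hat, axis=0))
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def rmse(A_true, A_hat):
    P, N = A_true.shape
    return float(np.sqrt(np.sum((A_true - A_hat) ** 2) / (N * P)))

def psnr(A_true, A_hat):
    # MSE here averages over all entries of the abundance maps
    mse = np.mean((A_true - A_hat) ** 2)
    return float(10.0 * np.log10(A_true.max() ** 2 / mse))
```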
\subsection{Synthetic data}
In this experiment, we generated the synthetic data using Hyperspectral
Imagery Synthesis tools with Gaussian
Fields\footnote{\textcolor{black}{http://www.ehu.es/ccwintco/index.php/Hyperspectral\_Imagery\_Synthesis\_
tools\_for\_MATLAB}}, \textcolor{black}{and the LMM is adopted}. The spatial size of the
synthetic data is $256\times 256$.
A selection of four endmembers from the USGS spectral library was used as the
endmember library, \textcolor{black}{with $224$ bands covering wavelengths from 400~nm to
2500~nm}. \textcolor{black}{There are both mixed and pure pixels in the dataset.} To
evaluate the robustness of our methods and the efficacy of plugged
denoisers, we added Gaussian noise to the clean data with the
signal-to-noise ratio (SNR) setting to 5~dB, 10~dB, 20~dB and 30~dB.
In our work, we used VCA to initialize the endmembers and the fully
constrained least square method (FCLS)~\cite{heinz2001fully} to initialize
the abundances. After multiple experiments, we set the penalty parameter of
$\ell_{2,1}$ ($\alpha$) to 0.1, the penalty parameter of the denoising term
($\lambda$) to $3\times 10^{4}$, the penalty parameter of the ASC ($\delta$)
\textcolor{black}{to} 10, and $\mu$ to 500. Unmixing results of RMSE, SAD and PSNR of
this experiment are reported in Table~\ref{tab.syn_results}. From the
results, we observe that our proposed methods get \textcolor{black}{the best RMSE and
PSNR results} and achieve the lowest mean SAD values among all compared methods.
This highlights the effects of our sparse regularizer and the superiority of
priors learnt by denoisers. \textcolor{black}{Moreover, benefiting from the denoisers,
the unmixing results of our methods show a more significant enhancement when the
noise level is high. This indicates that our methods are robust to noise.}
\textcolor{black}{Figure~\ref{fig.map_syn} shows the abundance maps of four compared
methods and our proposed methods with SNR = 10~dB. We can see that the
abundance maps estimated by our methods contain less noise and are
closer to the ground-truth.} \textcolor{black}{The endmember extraction results of PNMF-NLM
(SNR = 20~dB) are shown in Figure~\ref{fig.end_syn}, where the red curves
are ground-truth, and the blue curves are estimated endmembers.}
\textcolor{black}{Moreover, Figure~\ref{fig.RMSE_curve} presents how parameters affect
the unmixing results with SNR = 10~dB.} The RMSE convergence curves of
synthetic data of our proposed methods are shown in
Figure~\ref{fig.loss_curve}, which indicate that our NMF based unmixing
framework has a stable convergence property.
\begin{figure*}
\centering
\includegraphics[width=17cm]{plot_map.pdf}\\
\caption{\textcolor{black}{Abundance maps of synthetic data (SNR = 10~dB). From top to bottom:
different endmembers. From left to right: Ground-truth, VCA-SUnSAL-TV, CoNMF, TV-RSNMF, NMF-QMV,
PNMF-NLM, PNMF-BM3D, PNMF-BM4D and PNMF-LRTDTV.}}\label{fig.map_syn}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=9cm]{plot_curve.pdf}\\
\caption{\textcolor{black}{Endmembers extracted by PNMF-NLM (SNR=20~dB).}}\label{fig.end_syn}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{loss_par.pdf}\\
\caption{\textcolor{black}{RMSE as a function of the regularization parameters for PNMF-NLM.}}\label{fig.RMSE_curve}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=7.8cm]{RMSE_loss_1.pdf}\\
\caption{The RMSE convergence curves of synthetic data of our proposed methods (5dB).}\label{fig.loss_curve}
\end{figure}
\subsection{Real data}
\subsubsection{Cuprite dataset}
In this experiment, we used a well-known real hyperspectral dataset (AVIRIS
Cuprite) to evaluate our unmixing methods. The dataset was captured from the
Cuprite mining district in west-central Nevada by AVIRIS in 1997. We used a
subimage of size $250\times 191$ in our experiment. This dataset has $224$
bands. Following other works~\cite{li2016robust,tong2020adaptive}, we
removed the water absorption and noisy bands (2, 105-115, 150-170, 223 and
224) with 188 exploitable bands remained. The number of endmembers was set
to 12.
In this experiment, we set $\alpha$ to 0.1, $\lambda$ to $3\times 10^{4}$,
$\delta$ to 10 \textcolor{black}{and} $\mu$ to 100. As there is no ground-truth for this
dataset, we cannot give a quantitative comparison. Like many previous
works, we evaluate the unmixing results in an intuitive manner. The
abundance maps of selected \textcolor{black}{materials} of the Cuprite data are shown in
Figure~\ref{fig.real_abundance}. \textcolor{black}{We observe that our proposed methods
provide clearer and sharper results with several locations emphasized and
more detailed information.
The endmember extraction results of PNMF-BM4D are shown in Figure~\ref{fig.spe_curve}.
We also used the reconstruction error (RE) to quantitatively assess these methods:}
$
\text{RE}=\sqrt{\frac{1}{NP}\sum_{i=1}^{N}\parallel \mathbf{r}_{i}-\mathbf{\hat{r}}_{i}\parallel^{2}},
$ where $\hat{\mathbf{r}}_{i}$ represents the reconstructed pixel, and
$\mathbf{r}_{i}$ is the observed pixel.
The RE comparison is shown in Table~\ref{tab.RE_results}.
We observe that PNMF-BM4D gets
the lowest RE results. Note that without the ground-truth information
on the abundances and endmembers for the real data, RE is not necessarily proportional
to the quality of unmixing performance, and therefore it can only be
considered as complementary information. Note that our methods use denoisers
to plug learnt priors, which can also denoise the abundance maps but may
increase the RE compared to the noisy input.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{spe_curve.pdf}\\
\vspace{-0.3cm}
\caption{Twelve endmembers extracted from PNMF-BM4D of Cuprite dataset.}\label{fig.spe_curve}
\vspace{-0.3cm}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=16.5cm]{cuprite_1.pdf}\\
\vspace{-0.4cm}
\caption{\textcolor{black}{Abundance maps of Cuprite data. From top to bottom: selected endmembers. From left to right:
VCA-SUnSAL-TV, CoNMF, TV-RSNMF, NMF-QMV,
PNMF-NLM, PNMF-BM3D, PNMF-BM4D and PNMF-LRTDTV.}}\label{fig.real_abundance}
\vspace{-0.3cm}
\end{figure*}
\begin{table*}
\footnotesize \centering
\caption{\small RE Comparison of Cuprite Dataset.}
\vspace{-0.2cm}
\renewcommand\arraystretch{1.2}
\begin{tabular}{ccccccccc}\hline\hline
Algorithm & VCA-SUnSAL-TV & CoNMF & TV-RSNMF & NMF-QMV & PNMF-NLM & PNMF-BM3D & PNMF-BM4D & PNML-LRTDTV \\
RE & 0.0087 & 0.0235 & 0.0085 & 0.0425 & 0.0083 & 0.0081 & \textbf{0.0070} & 0.0091\\\hline\hline
\end{tabular}
\label{tab.RE_results}
\end{table*}
\subsubsection{Jasper Ridge dataset}
The Jasper Ridge dataset with $100\times100$ pixels was also used to
evaluate our methods. The data consist of 224 spectral bands ranging from 380~nm to
2500~nm with spectral resolution up to 10~nm. After removing channels [1-3,
108-112, 154-166 and 220-224] affected by dense water vapor and the
atmosphere, 198 channels remained. Four prominent endmembers existing
in this data are considered in our experiments.
The experiment settings are
the same as for the Cuprite dataset: we set $\alpha$ to 0.1, $\lambda$ to
$3\times10^{4}$, $\delta$ to 10 and $\mu$ to 100. The abundance estimation
results are shown in Figure~\ref{fig.map_jas}, and the endmembers extracted
from PNMF-BM4D are shown in Figure~\ref{fig.curve_jas}. The unmixing results
indicate that our proposed methods produce abundance maps with less noise and more smoothness.
RE results of Jasper Ridge dataset are shown in Table~\ref{tab.RE_results_1},
which also demonstrate the effective performance of our methods.
\begin{figure*}[t]
\centering
\includegraphics[width=17cm]{map_jas.pdf}\\
\caption{Abundance maps of Jasper Ridge data. From top to bottom: four endmembers. From left to right: VCA-SUnSAL-TV, CoNMF, TV-RSNMF, NMF-QMV,
PNMF-NLM, PNMF-BM3D, PNMF-BM4D and PNMF-LRTDTV.}\label{fig.map_jas}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=7.5cm]{curve_jas.pdf}\\
\caption{Four endmembers extracted from PNMF-BM4D of Jasper Ridge dataset.}\label{fig.curve_jas}
\end{figure}
\begin{table*}
\footnotesize \centering
\caption{\small RE Comparison of Jasper Ridge Dataset.}
\vspace{0.2cm}
\renewcommand\arraystretch{1.2}
\begin{tabular}{ccccccccc}\hline\hline
Algorithm & VCA-SUnSAL-TV & CoNMF & TV-RSNMF & NMF-QMV & PNMF-NLM & PNMF-BM3D & PNMF-BM4D & PNML-LRTDTV \\
RE & 0.0257 & 0.0198 & 0.0122 & 0.0434 & \textbf{0.0111} & 0.0123 & {0.0135} & 0.0117\\
\hline\hline
\end{tabular}
\label{tab.RE_results_1}
\end{table*}
\section{Conclusion}
In this paper, we proposed an NMF based unmixing framework that jointly
\textcolor{black}{uses} handcrafted and learnt regularizers. \textcolor{black}{We used a denoiser of
abundances to plug in learnt priors. Our framework allows plugging in various
denoisers, which makes it flexible and extendable. To illustrate the integration of handcrafted
regularization, we added a structured sparse regularizer to the objective
function to enhance the sparsity of unmixing results.} Experimental results
showed the effectiveness of our proposed methods. \textcolor{black}{Future work will
focus on adaptive parameter selection.}
\bibliographystyle{IEEEtran}
\section{Introduction}
The orbit of the Gaia mission has been chosen to be a controlled Lissajous orbit (amplitudes of 360,000 and 100,000 km along and perpendicular to the ecliptic) around the Lagrangian point L2 of the Sun-Earth system (about 1.5 million km in
anti-sun direction) in order to have a quiet environment for the payload in terms of thermo-me\-cha\-nical stability. Another advantage of this position is
the possibility of uninterrupted observations, since the Earth, Moon and Sun all lie inside Gaia's orbit.
The aim of Gaia is to perform absolute astrometry quasi-simultaneously on the whole celestial sphere, rather than differential measurements in a small field of view.
Compared to the precursor mission HIPPARCOS (1989-1993), the much larger telescope of Gaia, the much higher sensitivity of the detectors, the more stable environment at
L2, and other technical progress allow it to measure 10,000 times more objects (a factor of 1000 improvement in limiting brightness) and to reach a 50 times higher accuracy for 100 times fainter objects.
The basic design of Gaia has been described by Perryman et al. (2001) and ESA (2000).
However, in 2002 a major reduction in size, complexity, and cost (as well as a degradation
of the astrometric performance by about a factor of two) was made.
More than 300 scientists and computer experts from 18 different countries are working on the Gaia project, not including the work done by employees at ESA or in industry. The main
contractor on the industrial side is EADS Astrium.
The main purpose of this paper is to briefly describe the technique of the Gaia project, its performance and status.
More details and up-to-date information on Gaia can at any time be retrieved from the ESA-RSSD homepage (the URL is provided at the end of the References).
The scientific applications of Gaia will be the topic of
the contribution by N. Walton in these proceedings.
\section{Gaia's schedule}
The first proposal for a HIPPARCOS successor was submitted to ESA in 1993. It was accepted as a ``Cornerstone Mission'' in
2000. In 2006 the industrial phase began, and in 2007 the Preliminary Design Review was successfully completed.
Gaia is currently scheduled to be launched from Kourou, French Guiana,
in December 2011 with a Soyuz-ST rocket (which includes a restartable Fregat upper stage). Initially
the Fregat-Gaia composite will be placed into a parking orbit, after which a single Fregat boost injects Gaia
on its transfer trajectory towards the L2 Lagrange point. In order to keep Gaia in an orbit around L2, the spacecraft must
perform small maneuvers every month.
After a commissioning phase Gaia will measure the sky for five years with a possible extension for
another year. Subsequently, the final catalog, which includes astrometric and photometric information, near-infrared spectroscopy,
radial-velocity determinations, and a classification of the objects, will be produced. The completion of the Gaia project
is intended to be around 2020.
It is planned to produce one or more intermediate catalogues in the course of operations.
The exact dates for such data releases will be carefully decided on to avoid disseminating
insufficiently validated data.
\begin{figure}
\includegraphics[width=0.48\textwidth]{gaia_scanning_principle.jpg}
\caption{Scanning principle of Gaia: The constant spin rate of $60^{\prime\prime}/$s corresponds to one revolution
(great-circle scans) in six hours. The angle between the slowly precessing spin axis and the Sun
is maintained at an aspect angle of 45$^\circ$. The basic angle between the two fields of view is constant at 106.5$^\circ$.
(Figure courtesy Karen O'Flaherty, ESA.)
}
\label{f:gaia_scanning_principle}
\end{figure}
\section{The measurement principle}
In order to perform high-precision absolute astrometry, Gaia -- like its predecessor Hipparcos -- (see Lindegren 2004)
\begin{itemize}
\item simultaneously observes in two fields of view (FoVs) separated by a large
``basic angle'' (106.5$^\circ$). This allows large-angle measurements to be as precise as small-scale
ones. In this way it is possible to use the many reference stars at large angular separations, which have a very different parallactic motion, so that
the distance determinations do not suffer from the uncertainties of differential (small-field) angular measurements.
\item roughly scans along great circles leading to strong ma\-the\-matical closure conditions. This one-dimensionality
also has the advantage that the measured angle between two stars, projected along-scan, is to first order independent of the
orientation of the instrument,
\item scans the same area of sky many times during the mission
under varying orientations.
\end{itemize}
The angular precision $\sigma$ of a single astrometric one-dimensional measurement is determined by the size of the aperture of each of Gaia's two telescopes
($D=1.4$\,m in along-scan direction), the wavelength $\lambda$ of the incident light from the source, and the number of photons $N$ reaching the focal plane (which depends on the brightness of the measured
source) during the integration time of the CCD detector:
$$\sigma\approx \frac{\lambda}{D}\frac{1}{\sqrt{N}}.$$
In reality, the accuracy actually reached must also take into account the spatial sampling of the signal, high-frequency attitude disturbances and other instrumental limitations.
Doing many such measurements for each object at different times and in different directions (position angles on the sky) allows all five astrometric parameters to be derived (two mean coordinates,
two proper motion components, and the parallax), plus additional parameters in the case of binaries.
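The scaling of this formula can be checked numerically; the wavelength and photon count below are assumed illustrative values, not official Gaia numbers:

```python
import math

def centroid_precision_uas(wavelength_m, aperture_m, n_photons):
    """One-dimensional diffraction-limited centroiding precision,
    sigma ~ (lambda / D) / sqrt(N), converted to microarcseconds."""
    rad_to_uas = 180.0 / math.pi * 3600.0 * 1e6  # radians -> microarcseconds
    return wavelength_m / aperture_m / math.sqrt(n_photons) * rad_to_uas

# Assumed values: 600 nm light, the 1.4 m along-scan aperture, 10^6 photons.
sigma = centroid_precision_uas(600e-9, 1.4, 1e6)
print(f"{sigma:.0f} microarcseconds")  # roughly 90 microarcseconds
```

Averaging the many single measurements accumulated over the mission then reduces this single-transit figure further.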
\section{The nominal scanning law}
\begin{figure}
\includegraphics[width=0.48\textwidth]{gaia_scanning_ecliptic.jpg}
\caption{During its operational lifetime, Gaia will continuously scan the sky, roughly along great circles, according to a carefully selected pre-defined scanning law. The characteristics of this law, combined with the across-scan dimension of the astrometric fields of view, result in the above pattern for the distribution of the predicted number of transits on the sky in ecliptic coordinates.
Figure courtesy J. de Bruijne, ESA.
}
\label{f:gaia_scanning_ecliptic}
\end{figure}
\begin{figure}
\includegraphics[width=0.48\textwidth]{gaia_focal_plane.jpg}
\caption{Gaia's focal plane. The images of the stars move from left to right. The two CCD strips ($2\times 7$) to the left correspond to the Sky Mapper (SM), adjacent to the right are
the 62 CCDs of the Astrometric Field (AF), followed by the CCDs of the Blue Photometer (BP), Red Photometer (RP), and the Radial-velocity Instrument (RVS). Additionally two CCDs of the
Wavefront Sensor and two for the Basic Angle Monitor are shown.
Figure courtesy A. Short \&\ J. de Bruijne, ESA.
}
\label{f:gaia_focal_plane}
\end{figure}
The conditions mentioned in the previous section are fulfilled by Gaia's so-called nominal scanning law (see Fig.\,\ref{f:gaia_scanning_principle}).
The satellite will spin around its axis with a constant rotational period of 6 hours. The spin axis will precess
around the solar direction with a fixed aspect angle of 45$^\circ$ every 63.12 days. A somewhat larger solar-aspect angle would
be optimum, but thermal stability and power requirements limit it to $45^{\circ}$.
On average, each object in the sky is transiting the
focal plane about 70 times during the 5 year nominal mission duration. Most of the times, an object transiting through
one FoV is measured again after 106.5 or 253.5 minutes (according to the basic angle of 106.5$^\circ$) in the
other FoV. At each transit of the image of an object, ten single measurements with ten CCDs are performed.
The essential measurements are performed along-scan, the across-scan positions only slightly increase the final precision. Since, due to the nominal scanning law, all
objects are measured under different scan directions in the course of the mission, the along-scan ones alone can build up a rigid sphere of coordinate measurements.
After about six months every point of the sky is scanned at three or more distinct epochs so that after about 18 months the five astrometric parameters can be separated from each other.
\section{The payload}
The Gaia payload consists of two telescopes, a heavy on-board data processing and storage facility, and three scientific instruments mounted on a single optical bench: The astrometric instrument,
the photometers, and a spectrograph to measure radial velocities.
Gaia has two telescopes, one for each FoV, which consist of one primary mirror of size 1.45\,m$\times$0.5\,m, a secon\-dary and a tertiary mirror.
The light from both telescopes, which are made of silicon carbide, is combined onto a common focal plane with 106 CCDs by a beam combiner at their exit pupil. An intermediate image is used for field discrimination.
The total focal length is 35\,m. The torus, on which the telescopes and instruments are mounted, requires a thermal stability of some $\mu$K, which is reached by a passive thermal design. A sunshield of 11\,m diameter, which is partially covered with solar cells to produce electricity, protects against the incident sun light.
The size of the focal plane is 420\,mm$\times$850\,mm (see Fig.\,\ref{f:gaia_focal_plane}). It consists of 14 Sky Mapper (SM) CCDs, 62 Astrometric Field (AF) CCDs, 7 CCDs for the Blue Photometer (BP),
7 for the Red Photometer (RP), and 12 for the Radial Velocity Spectrometer (RVS). Additionally, two CCDs are used for the Wave Front Sensor and two for the Basic Angle Monitor.
All CCDs operate in the so-called Time Delay Integration (TDI) mode, i.e. the charges
accumulated in the CCD are transported across it in synchrony with the moving images.
During a transit through either field of view the image of a star (or an asteroid or quasar) is first registered by one of the SM CCD strips (SM1, if a star passes through FoV 1, SM2 for FoV 2). The SM shall automatically detect all
objects down to 20$^{\rm th}$ magnitude (including variable stars, supernovae, microlensing events, and solar system objects). It also has to reject prompt-particle events (``cosmics''). The on-board
processing of the SM images also
performs the first astrometric measurement in along- and across-scan direction.
In order to additionally filter out false detections, the image of an object must be confirmed on the first strip of the AF in order to be processed any further. If this check is passed, the other AF
CCDs can perform their main goal, the high-precision measurement of the along-scan position. This measurement actually consists of the time when the centroid of the image is entering the CCD's read-out
register. Since Gaia has 9 AF strips, every FoV transit delivers 9 such measurements from the AF and 1 from the respective SM. Additionally, the centroid's across-scan position is measured in AF1 and,
-- for calibrational purposes -- in all other AF strips as well for a small sample of images.
In order to reduce the data rate and the read-out noise, only small ``windows''
around each target star, additionally binned in across-scan direction depending on the object's magnitude,
are read out and transmitted to the ground (faint-star windows consist of 6 along-scan TDI pixels, bright stars have an extension of 18). 24 hours of observation data are downlinked during the 7-8 hours of ground contact per day.
The data rate to the ground station Cebreros in Spain is about five Mbit/s. When Gaia scans along the plane of the Milky Way, the seven hours of contact per day is not enough to download
all data even taking into account the large on-board memory buffer. Therefore, it is planned to increase the downlink capacity with a second ground station (New Norcia in Australia) during these times of
galactic plane scans.
The maximum stellar density that Gaia can handle -- important for scans over globular clusters or Baade's window -- is 3 million stars per square degree down to $20^{\rm th}$ magnitude, but only 600,000 stars per square degree (or one star per six square arcseconds) can actually be measured. The technical reason for this limitation is that Gaia can read out only five active windows at each moment (TDI beat) on each CCD.
In principle it is possible to temporally activate a mode of reduced precession speed so that more frequent scans over e.g. Baade's windows would be possible. Since the data-loss of stars is random,
more stars have a good chance to be measured that way. However, it is not decided yet whether such a mode will be actually switched on.
Multi-colour photometry is provided by
two low-reso\-lution fused-silica prisms dispersing all the
light entering the field of view in the along-scan direction prior to detection.
The Blue Photometer (BP) operates in the wavelength
range 3300--6800\,\AA; the Red Photometer (RP) covers the
wavelength range 6400--10500\,\AA.
The RVS is a near-infrared (8470--8740\,\AA), medium-resolution spectrograph:
$\mathrm{R} = \lambda / \Delta \lambda = 11\,500$. It is illuminated by the same
two telescopes as the astrometric and photometric instruments.
\section{The astrometric solution}
The astrometric core solution will be based on about $10^8$ primary stars, which means solving for
some $5\times 10^8$ astrometric parameters (positions, proper motions, and parallaxes). However, the attitude of the
satellite (parametrised into $\sim 10^8$ attitude parameters over five years) must also be determined with high
accuracy from the measurements themselves. Additionally, a few million calibration parameters must be solved for to describe the geometry of the
instruments, and finally, deviations from general relativity are accounted for by solving for some
post-Newtonian parameters.
The number of observations for the $10^8$ primary stars is about $6\cdot 10^{10}$.
The condition equations
connecting the unknowns to the observed data are
non-linear but well linearised at the
sub-arcsec level. Direct solution of the corresponding least-squares
problem is infeasible, because
the large number of unknowns and their strong inter-connec\-tivity
prevents any useful decomposition of the problem into
manageable parts. The proposed method is a block-iterative scheme.
Intensive tests are currently under way and have already
demonstrated its feasibility with $10^6$ stars assuming realistic random and systematic
errors in the initial conditions.
The $l^{\rm th}$ time measurement on an SM or AF CCD $t_l$ (the read-out time of the along-scan image centroid) depends on the parameters of the source on the sky $\vec{s}_i$ (position $\alpha, \delta$, proper motion
$\mu_\alpha, \mu_\delta$, and the parallax $\varpi$), the attitude parameters (spline coefficients) $\vec{a}_j$, geometric calibration parameters (e.g. basic angle; position, distortion, and orientation
of the CCDs) $\vec{c}_k$, global parameters $\vec{g}$ (e.g. the post-Newtonian parameter $\gamma$), and some auxiliary data (ephemerides of solar-system objects, Gaia's orbit, etc.) $\vec{A}$ (Lindegren et al., in prep.):
$$t_l=f_{\rm AL}(\vec{s}_i,\vec{a}_j, \vec{c}_k, \vec{g}, \vec{A}) + \mathrm{noise}.$$
In an analogous way the across-scan position $p_l$ (the across-scan pixel position of the image centroid) is given by
$$p_l=f_{\rm AC}(\vec{s}_i,\vec{a}_j, \vec{c}_k, \vec{g}, \vec{A}) + \mathrm{noise}.$$
For Gaia, the light deflection from the Sun is very large all over the sky. In the direction perpendicular to the Sun, the angular change of position is
still 4 milliarcseconds, about 200 times Gaia's accuracy. In the Gaia project even the bending of light due to the major planets cannot be neglected. Moreover, the accuracy
of the Einstein's theory of general relativity to predict the light-deflection can be tested with an accuracy of some $10^{-7}$ (in the PPN parameter $\gamma$).
With provisional values of the source, attitude, calibration, and global parameters, the models for the along-scan behaviour $f_{\rm AL}$ and
the across-scan behaviour
$f_{\rm AC}$ of Gaia allow values for $t_l$ and $p_l$ to be predicted. The difference between the observed and predicted values allows a correction of the
provisional parameter values with a least-squares method.
A linearisation leads to the observation equations
$$t_l^{\rm obs}-t_l^{\rm pred}=
\frac{\partial f_{\rm AL}}{\partial \vec{s}_i}\Delta\vec{s}_i
+\frac{\partial f_{\rm AL}}{\partial \vec{a}_j}\Delta\vec{a}_j
+\frac{\partial f_{\rm AL}}{\partial \vec{c}_k}\Delta\vec{c}_k
+\frac{\partial f_{\rm AL}}{\partial \vec{g}}\Delta\vec{g}+\mathrm{noise}$$
and
$$p_l^{\rm obs}-p_l^{\rm pred}=
\frac{\partial f_{\rm AC}}{\partial \vec{s}_i}\Delta\vec{s}_i
+\frac{\partial f_{\rm AC}}{\partial \vec{a}_j}\Delta\vec{a}_j
+\frac{\partial f_{\rm AC}}{\partial \vec{c}_k}\Delta\vec{c}_k
+\frac{\partial f_{\rm AC}}{\partial \vec{g}}\Delta\vec{g}+\mathrm{noise}.$$
All the parameters are strongly connected to each other, so that the problem cannot easily be broken into a number of independent solutions.
There is also no simple way to reduce the equations to a smaller structure and size using sparse matrix algebra. Therefore, a direct mathematical
solution seems unfeasible. Note, however, that the strong connectivity is a desired feature that makes the astrometric solution ``rigid'' and accurate.
Since no direct solution is possible, the practical approach is to calculate the
corrections (updates) separately for $\Delta \vec{s}$,
$\Delta \vec{a}$, $\Delta \vec{c}$, and $\Delta \vec{g}$, keeping the other values constant (block iteration).
Each partial problem is relatively easy to solve, because the connectivity is given up.
Cyclically the four partial problems are solved until convergence is reached. Due to the strong correlations
between the four blocks (particularly between the attitude and source parameters), a large number of iterations is needed.
However, it has been demonstrated that it is possible to reduce the computing time to a manageable level by convergence acceleration techniques.
This block-iterative scheme is called the Astrometric Global Iterative Solution (AGIS).
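The principle of such a block iteration can be illustrated on a toy linear least-squares problem with two parameter blocks standing in for the source and attitude unknowns. This is a deliberately simplified sketch of the idea, not the actual AGIS implementation; all matrices and dimensions are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy analogue of the block iteration: observations depend linearly on
# two parameter blocks, "source" parameters x and "attitude" parameters y.
S = rng.normal(size=(200, 5))   # design matrix for the source block
A = rng.normal(size=(200, 4))   # design matrix for the attitude block
x_true = rng.normal(size=5)
y_true = rng.normal(size=4)
obs = S @ x_true + A @ y_true + 0.01 * rng.normal(size=200)

x = np.zeros(5)
y = np.zeros(4)
for _ in range(200):            # cyclic block iteration
    # Update each block with the other one held fixed.
    x, *_ = np.linalg.lstsq(S, obs - A @ y, rcond=None)  # source update
    y, *_ = np.linalg.lstsq(A, obs - S @ x, rcond=None)  # attitude update

print(np.allclose(x, x_true, atol=0.05), np.allclose(y, y_true, atol=0.05))
```

Each block update is cheap because it temporarily ignores the coupling between the blocks; the coupling is recovered by iterating until convergence, which is the essence of the scheme described above.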
This mathematical approach leads to an astrometric ``self-calibration''. For this calibration task to succeed, the instrument must be very stable over longer time scales.
Later on, with a good solution for the attitude and the geometric parameters based on the
measurements of the $10^8$ primary stars, the remaining $9\cdot 10^8$ stars can
be linked into the system.
Currently, the AGIS is tested with simulated data (bearing systematic and random errors) for $10^6$ primary sources (which means 1\%\ of the
full problem). A parallel computer cluster with 22 nodes is used on which 45 AGIS iterations take about 70 hours. The system will be successively upgraded to cope with the full problem in about 2012.
In the end, AGIS will determine about 5 billion astrometric parameters, about 150 million unknowns for the attitude, and 10-50 million other calibration parameters from $10^{12}$ individual
astrometric measurements.
The precision of the astrometric parameters of individual stars
depends on their magnitude and color, and to a lesser
extent on their location in the sky. Sky-averaged values
for the expected parallax precision are displayed in
Table\,\ref{tab:astrom-performance}. The corresponding figures for the
coordinates and for
the annual proper motions are similar but slightly smaller
(by about 15 and 25\%).
Note that a parallax accuracy of 20 microarcseconds (the thickness of a human hair seen from a distance of 200 km) for a $15^{\rm th}$ magnitude star
translates into an accuracy of the distance determination of
0.1\%\ for 50\,pc, and 1\%\ for 500\,pc.
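This conversion from parallax precision to relative distance precision follows from $\sigma_d/d \approx \sigma_\varpi/\varpi$ and can be reproduced with a few lines of code (illustrative only):

```python
def distance_relative_error(distance_pc, sigma_parallax_uas):
    """Relative distance error from sigma_d/d ~ sigma_parallax/parallax;
    the parallax of a star at distance d [pc] is 1e6/d microarcseconds."""
    parallax_uas = 1e6 / distance_pc
    return sigma_parallax_uas / parallax_uas

for d_pc in (50, 500, 5000):
    err = 100 * distance_relative_error(d_pc, 20)
    print(f"{d_pc:5d} pc -> {err:4.1f}% distance error")
# 50 pc -> 0.1%, 500 pc -> 1.0%, 5000 pc -> 10.0%
```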
\begin{table}[htb]
\caption{End-of-mission parallax precision in microarcseconds.
Representative values are shown for unreddened stars of the indicated
spectral types and V~magnitudes.
The values are computed
using the actual Gaia design as input.
The performance calculation used does not include the effects of radiation
damage to the CCDs.
\label{tab:astrom-performance}}
\begin{center}
\begin{tabular}{|c|c|r|}
\hline
Star type & V magnitude & nominal \\
& & performance \\
\hline
& $<$ 10 & 5.2 \\
B1V & 15 & 20.6 \\
& 20 & 262.9 \\
\hline
& $<$ 10 & 5.1 \\
G2V & 15 & 19.4 \\
& 20 & 243.4 \\
\hline
& $<$ 10 & 5.2 \\
M6V & 15 & 8.1 \\
& 20 & 83.9 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{What does this accuracy mean?}
In order to illustrate what an accuracy of 20\,$\mu$as means, consider the following examples:
\begin{itemize}
\item Proper motion:
\begin{itemize}
\item $M=+10$: A star with an absolute magnitude of $+10$ appears as a $15^{\rm th}$-magnitude star at 100\,pc. At this distance, a proper-motion accuracy of
20\,$\mu$as/year corresponds to 10\,m/s, i.e. planets can be found around about half a million stars (Jupiter moves the Sun by about 12.5\,m/s).
\item $M=0$: Stars with an absolute magnitude of zero could be measured down to 1\,km/s at 10\,kpc (i.e. that even the lowest-velocity stellar populations can be kinematically studied throughout the
entire galaxy).
\item $M=-3.5$: The motion of bright stars with $M=-3.5$ can be studied down to 5\,km/sec, i.e. that the internal kinematics of the Magellanic Clouds at 50\,kpc can be studied in as much detail as
the solar neighbourhood can be now (5\,km/s=2.5\,mas/a at 400\,pc).
\item $M=-10$: For the brightest stars velocities of 100 km/s can be still detected at 1\,Mpc, i.e. a handful of very luminous stars in M\,31 will show that galaxy's rotation.
\end{itemize}
\item Parallaxes:
\begin{itemize}
\item 20\,$\mu$as=1\%\ of 0.5\,kpc, i.e. the six-dimensional structure of the Orion complex can be studied with 5\,pc depth resolution.
\item 20\,$\mu$as=10\%\ of 5\,kpc, i.e. a direct high-precision distance determination is possible even for very small stellar groups throughout most of our Galaxy.
\item 20\,$\mu$as=100\%\ of 50\,kpc, i.e. a direct distance determination of the Magellanic Clouds is at the edge of what is possible.
\end{itemize}
\item Linear sizes:
\begin{itemize}
\item 20\,$\mu$as=1 solar diameter at 0.5\,kpc, i.e. normal sun\-spots just do not disturb the measurements, but Jupi\-ters do, which makes them
detectable to some extent.
\end{itemize}
\end{itemize}
\section{The spectro-photometric and RV observations}
The most precise photometry for each source will be based on
the unfiltered AF observations with a broad passband between 3500-10,000\,\AA.
Colour information will be available
from the red (RP) and blue (BP) spectrophotometric fields for all sources -- these data
will be in the form of mission-averaged low-resolution spectra.
Although the astrometric instrument contains no refractive optics, the images are
slightly chromatic because of the wavelength dependence of diffraction. Therefore, the
photometric colour determinations are not only nice to have astrophysically, but are needed to correct for the
relative displacement between early-type and very red stars (chromaticity) which may be as large as 1 milliarcsecond.
The photometry provided by Gaia will be unprecedented in homogeneity and depth. The precision of the
brightness determination with the AF will be between 1 and 10 millimag at $19^{\rm th}$ magnitude.
Moreover, radial velocities are measured with a precision between 1\,km/sec (V=11.5) and
30\,km/sec (V=17.5).
The measurements of radial velocities are important to correct for perspective acceleration which is induced by the
motion along the line of sight.
The spectroscopic RVS measurements complement the astrometric ones, so that all three spatial and all three velocity components are available, at least for the brighter stars.
By analysing the RVS spectra and the BP/RP photometry, also the atmospheric parameters $T_{\rm eff}$, $\log g$, and the chemical composition can be determined for a large fraction of the
observed stars.
\section{Radiation-induced CCD damages}
At L2 Gaia is outside the Earth's magnetosphere and fully exposed to high-energy particles (mostly protons) from cosmic rays and the Sun. With its limited
weight budget for a Soyuz-Fregat launch and the large size of the CCD array, only minimal shielding for the detectors is possible.
The major effect on the CCDs comes from protons from the Sun during solar flares, which may be quite frequent during the Gaia mission, because Gaia will be launched close to the next solar maximum.
These protons can collide with the silicon atoms of the CCD and may generate a point defect in the
crystal lattice. As a result, charge traps are produced which can capture the electrons during the read-out in TDI mode.
These electrons are released again at a slightly later time.
By this re-distribution of charges within a recorded image, the point spread function, or one-dimensional line spread function (LSF) of a source is distorted and the position of the centroids (which are the main quantities in the astrometric measurements) is strongly shifted with respect to an undamaged CCD.
Since only a small window around each star is downlinked to the ground, the tail of trapped charges released outside the windows reduces the (photometric) flux measurements, additionally to the
astrometric effect.
Ground-based investigations of irradiated CCDs are currently undertaken in order to find out the exact behaviour of the CCDs under various conditions. Of particular importance is that
the effect on the LSF depends on the history: if another image has passed the same region of the CCD seconds before, the traps are still partially filled. This interdependence
can be strongly reduced if large extra charges are periodically injected into the CCDs to fill the traps (e.g. every few seconds).
Then fewer empty traps are encountered and the centroid shifts as well as the distortions of the LSFs are minimised. Additionally, only the CCD charge history between two charge injections
needs to be taken into account for the correction of the effects.
The hope is that, from the ground-based measurement, the radiation damage effects can be so well understood that a full correction for the centroid shifts and the flux reduction is possible.
However, the data processing of Gaia
is considerably complicated, since a large number of extra parameters has to be taken into account: the density of the traps at each instant, the magnitude of the
source, the time since the last charge injection, and the time since previous sources moved through the same CCD column.
It is very probable that the values for the astrometric end-of-mission performance will be degraded somewhat, but a realistic estimation is not yet possible. Additionally, the limiting
magnitude for the RVS may be brighter than originally planned.
\section{Computational effort}
The total raw data volume is estimated to be of the order of 100\,TB; the total amount of processed and archived data is of the order of 1\,PB. The current estimate for the
computational volume is $1.5\cdot 10^{21}$ floating-point operations, but this may actually be a lower limit and will probably increase due to the correction tasks for the
radiation-induced CCD damage. If the processing of one star (with typically 1000 measurements per star)
would need 1 second, 30 years of the data analysis would be needed. However, with massively distributed computation and the faster computers of the time of Gaia's data analysis, this
task will be feasible.
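The serial-processing figure quoted above is simple arithmetic:

```python
n_stars = 1e9           # roughly 10^9 observed objects
seconds_per_star = 1.0  # the hypothetical serial cost assumed in the text
years = n_stars * seconds_per_star / (3600 * 24 * 365.25)
print(f"about {years:.0f} years of serial processing")
```

This yields on the order of the 30 years mentioned above, which is why massively distributed computation is essential.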
\section{Conclusion}
Since distance determinations are of such fundamental importance in astronomy, it is clear that Gaia's high-accuracy parallax measurements will influence basically
all fields in astronomy. Together with the radial-velocity data, the proper motion determinations will be of particular importance for our understanding of the
stellar dynamics of the Milky Way. For details on the expected progress, see the following paper by N. Walton.
\acknowledgements
Work on Gaia data processing in Heidelberg is supported by the
DLR grant 50 QG 0501. The figures are courtesy of EADS Astrium and ESA. The author thanks U. Bastian for
the careful reading and valuable discussions and for a draft of Sect. 7.
\section{Introduction}
Assume that $\{y_t\dvtx t=0,\pm1,\pm2,\ldots\}$ is generated by the
ARMA--GARCH model
\begin{eqnarray}
\label{11}
y_t&=&\mu+\sum_{i=1}^p\phi_iy_{t-i}+\sum_{i=1}^q\psi_i\e_{t-i}+\e_t,
\\
\label{12}
\e_t&=&\eta_t\sqrt{h_t} \quad\mbox{and}\quad
h_t=\alpha_0+\sum_{i=1}^r\alpha_i\e_{t-i}^2+\sum_{i=1}^s\beta_ih_{t-i},
\end{eqnarray}
where $\alpha_0>0,\alpha_i\geq0$ $(i=1,\ldots,r),\beta_j\geq0$
$(j=1,\ldots,s)$, and $\eta_t$ is a sequence of i.i.d. random variables
with $E\eta_t=0$. Since \citet{r12} and \citet
{r5}, model (\ref{11})--(\ref{12}) has been widely used in economics and
finance; see \citet{r6}, \citet{r2},
\citet{r7} and \citet{r14}. The
asymptotic theory of the quasi-maximum likelihood estimator (QMLE)
was established by \citet{r26} and by \citet{r13} when $E\e
_t^4<\infty$. Under the strict stationarity
condition, the consistency and the asymptotic normality of the QMLE
were obtained by \citet{r21} and \citet{r29} for the
GARCH$(1,1)$ model, and by \citet{r4} and
\citet{r13} for the GARCH$(r, s)$ model. \citet{r15}
established the asymptotic theory of the QMLE for the GARCH
model when $E\e_t^{2}<\infty$, including both cases in which
$E\eta_t^{4}=\infty$ and $E\eta_t^{4}<\infty$. Under the geometric
ergodicity condition, \citet{r20} gave the asymptotic
properties of the modified QMLE for the first order AR--ARCH model.
Moreover, when $E|\e_t|^\iota<\infty$ for some $\iota>0$, the
asymptotic theory of the global self-weighted QMLE and the local
QMLE was established by \citet{r25} for model~\mbox{(\ref{11})--(\ref{12})}.
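To make the model concrete, the following sketch simulates a simple AR(1)--GARCH(1,1) special case of model~(\ref{11})--(\ref{12}); the parameter values are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative AR(1)-GARCH(1,1) special case of model (1.1)-(1.2):
# y_t = mu + phi*y_{t-1} + e_t,  e_t = eta_t*sqrt(h_t),
# h_t = a0 + a1*e_{t-1}^2 + b1*h_{t-1}.
mu, phi = 0.0, 0.5
a0, a1, b1 = 0.1, 0.1, 0.8
n = 5000
y = np.zeros(n)
e = np.zeros(n)
h = np.full(n, a0 / (1 - a1 - b1))   # start at the unconditional variance
eta = rng.standard_normal(n)         # i.i.d. innovations with E eta = 0
for t in range(1, n):
    h[t] = a0 + a1 * e[t - 1] ** 2 + b1 * h[t - 1]
    e[t] = eta[t] * np.sqrt(h[t])
    y[t] = mu + phi * y[t - 1] + e[t]

# Sample variance of e_t should be near a0/(1-a1-b1) = 1.0
print(round(e.var(), 1))
```

With $\alpha_1+\beta_1<1$ the simulated $\e_t$ has finite variance $\alpha_0/(1-\alpha_1-\beta_1)$; the IGARCH boundary case $\alpha_1+\beta_1=1$ can be obtained by changing the coefficients.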
It is well known that the asymptotic normality of the QMLE requires
$E\eta_{t}^{4}<\infty$ and this property is lost when
$E\eta_{t}^{4}=\infty$; see \citet{r15}. Usually, the least
absolute deviation (LAD) approach can be used to reduce the moment
condition of $\eta_{t}$ and provide a robust estimator. The local
LAD estimator was studied by \citet{r31} and \citet{r22} for
the pure GARCH model, \citet{r9} for the double
AR(1) model, and \citet{r23} for the ARFIMA--GARCH model. The
global LAD estimator was studied by \citet{r16} for
the pure ARCH model and by \citet{r3} for the pure
GARCH model, and by \citet{r35} for the double AR($p$) model.
Except for the AR models studied by Davis, Knight and Liu (\citeyear{r11})
and \citet{r24} [see also
Knight (\citeyear{r18}, \citeyear{r19})], the nondifferentiable and
nonconvex objective function appears when one studies the LAD
estimator for the ARMA model with i.i.d. errors. By assuming the
existence of a $\sqrt{n}$-consistent estimator,
the asymptotic normality of the LAD estimator is established for the
ARMA model with i.i.d. errors
by \citet{r10} for the finite variance case and by \citet
{r30} for the infinite variance case;
see also Wu and Davis (\citeyear{r33}) for the noncausal or noninvertible
ARMA model. Recently, \citet{r36} proved the asymptotic
normality of the global LAD estimator for the finite/infinite variance
ARMA model with i.i.d. errors.
In this paper, we investigate the self-weighted quasi-maximum
exponential likelihood estimator (QMELE) for model (\ref{11})--(\ref{12}).
Under only a fractional moment condition of $\e_{t}$ with
$E\eta_{t}^{2}<\infty$, the strong consistency and the asymptotic
normality of the global self-weighted QMELE are obtained by using
the bracketing method in \citet{r32}. Based on this global
self-weighted QMELE, the local QMELE is shown to be asymptotically
normal for the ARMA--GARCH (finite variance) and --IGARCH models. A
formal comparison of the two estimators is given for some cases.
\begin{figure}
\includegraphics{895f01.eps}
\caption{The Hill estimators $\{\hat{\alpha}_{\eta}(k)\}$ for
$\{\hat{\eta}_{t}^{2}\}$.} \label{figure1}
\end{figure}
To motivate our estimation procedure, we revisit the GNP deflator
example of \citet{r5}, in which the GARCH model was proposed
for the first time. The model he specified is an AR(4)--GARCH$(1,1)$
model for the quarterly data from 1948.2 to 1983.4 with a total of
143 observations. We use this data set and his fitted model to
obtain the residuals $\{\hat{\eta}_{t}\}$. The tail index of
$\{\eta_{t}^{2}\}$ is estimated by Hill's estimator
$\hat{\alpha}_{\eta}(k)$ with the largest $k$ data of
$\{\hat{\eta}_{t}^{2}\}$, that is,
\[
\hat{\alpha}_{\eta}(k)=\frac{k}{\sum_{j=1}^{k} (\log\tilde{\eta
}_{143-j}-\log\tilde{\eta}_{143-k})},
\]
where $\tilde{\eta}_{j}$ is the $j$th order statistic of
$\{\hat{\eta}_{t}^{2}\}$. The plot of
$\{\hat{\alpha}_{\eta}(k)\}_{k=1}^{70}$ is given in Figure~\ref
{figure1}. From
this figure, we can see that $\hat{\alpha}_{\eta}(k)>2$ when $k\leq
20$, and $\hat{\alpha}_{\eta}(k)<2$ when $k>20$. Note that Hill's
estimator is not very reliable when $k$ is too small. Thus, the tail index
of $\{\eta_{t}^{2}\}$ is most likely less than 2, that is,
$E\eta_{t}^{4}=\infty$. Hence, the assumption that $\eta_{t}$ has a finite
fourth moment may not be suitable, and the standard QMLE
procedure may not be reliable in this case. The estimation procedure
in this paper only requires $E\eta_{t}^{2}<\infty$. It may provide a
more reliable alternative to practitioners. To further illustrate
this advantage, a simulation study is carried out to compare the
performance of our estimators and the self-weighted/local QMLE in
\citet{r25}, and a~new real example on the world crude oil price is
given in this paper.
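As an illustration, the Hill estimator above is straightforward to compute. The following Python sketch (the function name and the array of squared residuals are our illustrative choices, not part of the paper) mirrors the formula, sorting the data into ascending order statistics and comparing the largest $k$ values against the $(n-k)$th order statistic.

```python
import numpy as np

def hill_estimator(eta_sq, k):
    """Hill's tail-index estimator based on the largest k values of eta_sq,
    following the displayed formula with ascending order statistics."""
    s = np.sort(np.asarray(eta_sq, dtype=float))  # s[0] <= ... <= s[n-1]
    n = len(s)
    j = np.arange(1, k + 1)
    # sum_{j=1}^{k} (log s_{(n-j)} - log s_{(n-k)})
    denom = np.sum(np.log(s[n - 1 - j]) - np.log(s[n - 1 - k]))
    return k / denom
```

On exact Pareto-tailed data with tail index $\alpha$, the estimate should be close to $\alpha$ for moderate $k$, which is the behavior exploited in the figure.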
This paper is organized as follows. Section \ref{sec2} gives our results on
the global self-weighted QMELE. Section \ref{sec3} proposes a local QMELE
estimator and gives its limiting distribution. The simulation
results are reported in Section~\ref{sec4}. A real example is given in
Section \ref{sec5}. The proofs of two technical lemmas are provided in
Section \ref{sec6}. Concluding remarks are offered in Section~\ref{sec7}. The
remaining proofs are given in the \hyperref[app]{Appendix}.
\section{Global self-weighted QMELE}\label{sec2}
Let $\theta=(\gamma',\delta')'$ be the unknown parameter of model
(\ref{11})--(\ref{12}) and its true value be $\theta_{0}$, where
$\gamma=(\mu,\phi_1,\ldots,\phi_p,\allowbreak\psi_1,\ldots,\psi_q)'$ and
$\delta=(\alpha_0,\ldots,\alpha_r,\beta_1,\ldots,\beta_s)'$. Given the
observations $\{y_n,\ldots,\allowbreak y_1\}$ and the initial values
$Y_{0}\equiv\{y_0,y_{-1},\ldots\}$, we can rewrite the
parametric
model \mbox{(\ref{11})--(\ref{12})} as
\begin{eqnarray}\quad
\label{21}
\e_t(\gamma)&=&y_t-\mu-\sum_{i=1}^p\phi_iy_{t-i}-\sum_{i=1}^q\psi_i\e
_{t-i}(\gamma),\\
\label{22}
\eta_t(\theta)&=&\e_t(\gamma)/\sqrt{h_t(\theta)}
\quad\mbox{and}\nonumber\\[-8pt]\\[-8pt]
h_t(\theta)&=&\alpha_0+\sum_{i=1}^r\alpha_i\e_{t-i}^2(\gamma)+\sum
_{i=1}^s\beta_ih_{t-i}(\theta).\nonumber
\end{eqnarray}
Here, $\eta_t(\theta_0)=\eta_t$, $\e_t(\gamma_0)=\e_t$ and
$h_t(\theta_0)=h_t$.
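Given the observations and initial values taken as zero, the recursions (\ref{21})--(\ref{22}) can be evaluated directly. The following Python sketch (the function name and argument layout are our choices) computes $\e_t(\gamma)$ and $h_t(\theta)$ for an ARMA$(p,q)$--GARCH$(r,s)$ specification.

```python
import numpy as np

def arma_garch_filter(y, mu, phi, psi, alpha, beta):
    """Compute eps_t(gamma) and h_t(theta) via recursions (21)-(22),
    with the initial values Y_0 taken as zero.
    phi: AR coefs, psi: MA coefs, alpha: (alpha_0,...,alpha_r), beta: GARCH coefs."""
    n = len(y)
    p, q = len(phi), len(psi)
    r, s = len(alpha) - 1, len(beta)
    eps = np.zeros(n)
    h = np.zeros(n)
    for t in range(n):
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(psi[i] * eps[t - 1 - i] for i in range(q) if t - 1 - i >= 0)
        eps[t] = y[t] - mu - ar - ma
        arch = sum(alpha[1 + i] * eps[t - 1 - i] ** 2
                   for i in range(r) if t - 1 - i >= 0)
        garch = sum(beta[i] * h[t - 1 - i] for i in range(s) if t - 1 - i >= 0)
        h[t] = alpha[0] + arch + garch
    return eps, h
```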
The parameter space is $\Theta=\Theta_{\gamma} \times\Theta_{\delta
}$, where
$\Theta_{\gamma}\subset R^{p+q+1}$, $\Theta_{\delta}\subset
R^{r+s+1}_{0}$, $R=(-\infty, \infty)$ and $R_{0}=[0, \infty)$.
Assume that $\Theta_{\gamma}$ and $\Theta_{\delta}$ are compact and
$\theta_{0}$ is an interior point in $ \Theta$. Denote $\alpha(z)=
\sum_{i=1}^{r}\alpha_{i}z^{i}$, $\beta(z)= 1- \sum_{i=1}^{s}
\beta_{i}z^{i}$, $\phi(z)= 1-\sum_{i=1}^{p}\phi_{i}z^{i}$ and
$\psi(z)=1+ \sum_{i=1}^{q}\psi_{i}z^{i}$. We introduce the
following assumptions:
\begin{asm}\label{asm21}
For each
$\theta\in\Theta$, $\phi(z)\neq0$ and $\psi(z)\neq0$ when
$|z|\leq1$, and $\phi(z)$ and $\psi(z)$ have no common root with
$\phi_p\neq0$ or $\psi_q\neq0$.
\end{asm}
\begin{asm}\label{asm22}
For each $\theta\in\Theta$,
$\alpha(z)$ and $\beta(z)$ have no common root,
$\alpha(1)\neq1,\alpha_r+\beta_s\neq0$ and $\sum_{i=1}^s\beta_i<1$.
\end{asm}
\begin{asm}\label{asm23}
$\eta_t^{2}$ has a nondegenerate distribution with
$E\eta_t^{2}<\infty$.
\end{asm}
Assumption \ref{asm21} implies the stationarity, invertibility and
identifiability of mod\-el~(\ref{11}), and Assumption \ref{asm22} is the
identifiability condition for mo\-del~(\ref{12}).~Assumption \ref{asm23} is
necessary to ensure that $\eta_t^{2}$ is not almost surely (a.s.)
a~constant. When $\eta_{t}$ follows the standard double exponential
distribution, the weighted log-likelihood function (ignoring a
constant) can be written as follows:
\begin{equation}\label{23}
L_{sn}(\theta)=\f{1}{n}\sum_{t=1}^n w_tl_t(\theta) \quad\mbox{and}\quad
l_t(\theta)=\log\sqrt{h_t(\theta)}+\f{|\e_t(\gamma)|}{\sqrt{h_t(\theta)}},
\end{equation}
where $w_t=w(y_{t-1},y_{t-2},\ldots)$ and $w$ is a measurable, positive
and bounded function on $R^{Z_0}$ with $Z_0=\{0,1,2,\ldots\}$. We look\vadjust{\goodbreak}
for the minimizer,
$\hat{\theta}_{sn}=(\hat{\gamma}_{sn}',
\hat{\delta}_{sn}')'$, of $L_{sn}(\theta)$ on $\Theta$, that is,
\[
\hat{\theta}_{sn}=\mathop{\arg\min}_{\Theta}L_{sn}(\theta).
\]
Since the weight $w_{t}$ only depends on $\{y_{t}\}$ itself and we
do not assume that~$\eta_{t}$ follows the standard double exponential distribution,
$\hat{\theta}_{sn}$ is called the self-weighted quasi-maximum
exponential likelihood estimator (QMELE) of $\theta_{0}$. When
$h_{t}$ is a constant, the self-weighted QMELE reduces to the
weighted LAD estimator of the ARMA model in \citet{r30} and
\citet{r36}.
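Once $\e_t(\gamma)$ and $h_t(\theta)$ are available, the weighted objective (\ref{23}) is a simple average; a minimal Python sketch (the function name is ours):

```python
import numpy as np

def qmele_objective(eps, h, w):
    """Weighted exponential quasi-log-likelihood (23), up to a constant:
    mean of w_t * (log sqrt(h_t) + |eps_t| / sqrt(h_t))."""
    lt = np.log(np.sqrt(h)) + np.abs(eps) / np.sqrt(h)
    return np.mean(w * lt)
```

Minimizing this function over $\theta$, for instance with a derivative-free optimizer since the objective is nondifferentiable in $\gamma$, yields $\hat{\theta}_{sn}$.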
The purpose of the weight $w_{t}$ is to reduce the moment condition on $\e_t$ [see
more discussion in \citet{r25}], and it satisfies the following
assumption:
\begin{asm} \label{asm24}
$E[(w_t+w_t^2)\xi_{\rho t-1}^3]<\infty$ for any
$\rho\in(0,1)$, where $\xi_{\rho
t}=1+\sum_{i=0}^\infty\rho^i|y_{t-i}|$.
\end{asm}
When $w_{t}\equiv1$, $\hat{\theta}_{sn}$ is the
global QMELE, and it needs the moment condition
$E|\varepsilon_{t}|^{3}<\infty$ for its asymptotic normality, which
is weaker than the moment condition $E\varepsilon_{t}^4<\infty$
required for the QMLE of $\theta_0$ in \citet{r13}. It is
well known that the higher the moment condition on $\e_t$, the
smaller the parameter space. Figure \ref{figure2} gives the strict
\begin{figure}
\includegraphics{895f02.eps}
\caption{The regions bounded by the indicated curves are for the
strict stationarity and for $E|\e_t|^{2\iota}<\infty$ with
$\iota=0.05, 0.5, 1, 1.5$ and $2$, respectively.} \label{figure2}
\end{figure}
stationarity region and regions for $E|\e_t|^{2\iota}<\infty$ of the
GARCH$(1,1)$ model: $\e_t=\eta_t\sqrt{h_t}$ and
$h_t=\alpha_0+\alpha_1\e_{t-1}^{2}+\beta_1 h_{t-1}$, where
$\eta_t\sim \operatorname{Laplace}(0,1)$. From Figure \ref{figure2}, we can see that
the region
for $E|\e_t|^{0.1}<\infty$ is very close to the region for strict
stationarity of $\e_t$, and is much bigger than the region for
$E\e_t^{4}<\infty$.
Under Assumption \ref{asm24}, we only need a
fractional moment condition for the asymptotic property of
$\hat{\theta}_{sn}$ as follows:
\begin{asm} \label{asm25}
$E|\e_t|^{2\iota}<\infty$ for some $\iota>0$.
\end{asm}
A necessary and sufficient condition for Assumption \ref{asm25} is
given in
Theorem~2.1 of \citet{r25}. In practice, we can use Hill's estimator
to estimate the tail index of $\{y_{t}\}$, and this estimate may
provide some useful guidance for the choice of $\iota$. For
instance, the quantity $2\iota$ can be any value less than the tail
index of $\{y_{t}\}$. However, so far we do not know how to choose the
optimal $\iota$. As in \citet{r25} and \citet{r30}, we choose
the weight function $w_{t}$ according to $\iota$. When $\iota=1/2$
(i.e., $E|\e_{t}|<\infty$), we can choose the weight function as
\begin{equation}\label{24}
w_{t}=\Biggl(\max\Biggl\{1,C^{-1}\sum_{k=1}^{\infty}
\frac{1}{k^{9}}|y_{t-k}|I\{|y_{t-k}|>C\}\Biggr\}\Biggr)^{-4},
\end{equation}
where $C>0$ is a constant. In practice, this choice works well when we select
$C$ as the 90\% quantile of the data $\{y_{1},\ldots,y_{n}\}$. When\vadjust{\goodbreak}
$q=s=0$ (AR--ARCH model), for any $\iota>0$, the weight can be
selected as
\[
w_{t}=\Biggl(\max\Biggl\{1,C^{-1}\sum_{k=1}^{p+r}
\frac{1}{k^{9}}|y_{t-k}|I\{|y_{t-k}|>C\}\Biggr\}\Biggr)^{-4}.
\]
When $\iota\in(0,1/2)$ and $q>0$ or $s>0$, the weight function needs
to be modified as follows:
\[
w_{t}=\Biggl(\max\Biggl\{1,C^{-1}\sum_{k=1}^{\infty}
\frac{1}{k^{1+8/\iota}}|y_{t-k}|I\{|y_{t-k}|>C\}\Biggr\}\Biggr)^{-4}.
\]
Obviously, these weight functions satisfy Assumptions \ref{asm24} and
\ref{asm27}.
For more choices of $w_t$, we refer to \citet{r24} and \citet{r30}.
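For a finite stretch of past observations, the weight (\ref{24}) can be evaluated directly. The following Python sketch (the function name is ours) truncates the infinite sum at the available lags; because of the rapid $k^{-9}$ decay, the truncation point has a negligible effect.

```python
import numpy as np

def self_weight(y_past, C, a=9.0):
    """Weight (24): y_past = (y_{t-1}, y_{t-2}, ...); only observations
    whose absolute value exceeds the threshold C contribute to the sum."""
    y_past = np.abs(np.asarray(y_past, dtype=float))
    k = np.arange(1, len(y_past) + 1, dtype=float)
    s = np.sum(y_past * (y_past > C) / k ** a) / C
    return max(1.0, s) ** (-4)
```

Note that $w_t=1$ whenever all past observations are below the threshold $C$, so the weight only downweights observations preceded by extreme values.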
We first state the strong convergence of $\hat{\theta}_{sn}$
in the following theorem; its proof is given in the \hyperref[app]{Appendix}.
\begin{theorem}\label{theorem21}
Suppose $\eta_t$ has a median zero with $E|\eta_t|=1$. If
Assumptions \ref{asm21}--\ref{asm25} hold, then
\[
\hat{\theta}_{sn}\rightarrow\theta_0 \qquad\mbox{a.s., as } n\to
\infty.
\]
\end{theorem}
To study the rate of convergence of $\hat{\theta}_{sn}$, we
reparameterize the weighted log-likelihood function (\ref{23}) as
follows:
\[
L_n(u)\equiv nL_{sn}(\theta_0+u)-nL_{sn}(\theta_0),
\]
where $u\in\Lambda\equiv\{u=(u_1',u_2')':u+\theta_0\in\Theta\}$. Let
$\hat{u}_{n}=\hat{\theta}_{sn}-\theta_0$. Then, $\hat{u}_{n}$ is the
minimizer of $L_n(u)$ on $\Lambda$. Furthermore, we have
\begin{equation}\label{25}
L_n(u)=\sum_{t=1}^nw_tA_t(u)+\sum_{t=1}^nw_tB_t(u)+\sum_{t=1}^nw_tC_t(u),
\end{equation}
where
\begin{eqnarray*}
A_t(u)&=&\f{1}{\sqrt{h_t(\theta_0)}}[|\e_t(\gamma_0+u_1)|-
|\e_t(\gamma_0)|],\\
B_t(u)&=&\log\sqrt{h_t(\theta_0+u)}-\log\sqrt{h_t(\theta_0)}
+\f{|\e_t(\gamma_0)|}{\sqrt{h_t(\theta_0+u)}}
-\f{|\e_t(\gamma_0)|}{\sqrt{h_t(\theta_0)}},\\
C_t(u)&=&\biggl[\f{1}{\sqrt{h_t(\theta_0+u)}}
-\f{1}{\sqrt{h_t(\theta_0)}}\biggr][|\e_t(\gamma_0+u_1)|-|\e_t(\gamma_0)|].
\end{eqnarray*}
Let $I(\cdot)$ be the indicator function. Using the identity
\begin{eqnarray} \label{26}
|x-y|-|x|&=&-y[I(x>0)-I(x<0)]\nonumber\\[-8pt]\\[-8pt]
&&{}+2\int_{0}^y [I(x\leq s)-I(x\leq0)]\,ds\nonumber
\end{eqnarray}
for $x\neq0$, we can show that
\begin{equation} \label{27}
A_t(u)=q_t(u)[I(\eta_t>0)-I(\eta_t<0)]+2\int_{0}^{-q_t(u)}
X_t(s)\,ds,
\end{equation}
where $X_t(s)=I(\eta_t\leq s)-I(\eta_t\leq0)$,
$q_t(u)=q_{1t}(u)+q_{2t}(u)$ with
\[
q_{1t}(u)=\f{u'}{\sqrt{h_t(\theta_0)}}\,\f{\p\e_t(\gamma_0)}{\p\theta}
\quad\mbox{and}\quad
q_{2t}(u)=\f{u'}{2\sqrt{h_t(\theta_0)}}\,\f{\p^2\e_t(\xi^*)}{\p\theta\,\p
\theta'}u,
\]
and $\xi^*$ lies between $\gamma_0$ and $\gamma_0+u_1$. Moreover,
let $\mathcal{F}_{t}=\sigma\{\eta_k: k\leq t\}$ and
\[
\xi_t(u)=2w_t\int_{0}^{-q_{1t}(u)} X_t(s) \,ds.
\]
Then, from (\ref{27}), we have
\begin{equation}\label{28}
\sum_{t=1}^n w_tA_t(u) = u' T_{1n} +
\Pi_{1n}(u)+\Pi_{2n}(u)+\Pi_{3n}(u),
\end{equation}
where
\begin{eqnarray*}
T_{1n}&=&\sum_{t=1}^{n}
\f{w_t}{\sqrt{h_t(\theta_0)}}\,\f{\p\e_t(\gamma_0)}{\p\theta}
[I(\eta_t>0)-I(\eta_t<0)],\\
\Pi_{1n}(u)&=&\sum_{t=1}^n
\{\xi_t(u)-E[\xi_t(u)|\mathcal{F}_{t-1}]\},\\
\Pi_{2n}(u)&=&\sum_{t=1}^nE[\xi_t(u)|\mathcal{F}_{t-1}],\\
\Pi_{3n}(u)&=&\sum_{t=1}^n w_tq_{2t}(u)[I(\eta_t>0)-I(\eta_t<0)]\\
&&{} +2 \sum_{t=1}^n w_t\int_{-q_{1t}(u)}^{-q_t(u)} X_t(s) \,ds.
\end{eqnarray*}
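Identity (\ref{26}), which drives the expansion above, is easy to check numerically for $x\neq0$. In the following Python sketch (function names are ours) the integral is approximated by a Riemann sum:

```python
import numpy as np

def identity_lhs(x, y):
    return abs(x - y) - abs(x)

def identity_rhs(x, y, m=200000):
    """Right-hand side of (26) for x != 0, with the integral
    int_0^y [I(x<=s) - I(x<=0)] ds approximated by a Riemann sum."""
    sgn = 1.0 if x > 0 else -1.0          # I(x>0) - I(x<0)
    s = np.linspace(0.0, y, m)            # works for negative y as well
    vals = (x <= s).astype(float) - float(x <= 0.0)
    integral = np.sum(vals) * (y / m)
    return -y * sgn + 2.0 * integral
```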
By Taylor's expansion, we can see that
\begin{equation}\label{29}
\sum_{t=1}^n w_tB_t(u)=u'T_{2n}+\Pi_{4n}(u)+\Pi_{5n}(u),
\end{equation}
where
\begin{eqnarray*}
T_{2n}&=&\sum_{t=1}^n \f{w_t}{2h_t(\theta_0)}\,\f{\p
h_t(\theta_0)}{\p\theta}(1-|\eta_t|),\\
\Pi_{4n}(u)&=&u'\sum_{t=1}^n w_t
\biggl(\f{3}{8}\biggl|\f{\e_t(\gamma_0)}{\sqrt{h_t(\zeta^*)}}\biggr|
-\f{1}{4}\biggr)\f{1}{h_t^2(\zeta^*)} \,\f{\p h_t(\zeta^*)}{\p\theta}\,
\f{\p
h_t(\zeta^*)}{\p\theta'}u,\\
\Pi_{5n}(u)&=&u'\sum_{t=1}^n
w_t\biggl(\f{1}{4}-\f{1}{4}\biggl|\f{\e_t(\gamma_0)}{\sqrt{h_t(\zeta^*)}}\biggr|\biggr)
\f{1}{h_t(\zeta^*)}\,\f{\p^2 h_t(\zeta^*)}{\p\theta\,\p\theta'}u,
\end{eqnarray*}
and $\zeta^*$ lies between $\theta_0$ and $\theta_0+u$.
We further need one assumption and three lemmas. The first lemma
follows directly from the central limit theorem for a martingale difference
sequence. The second and third lemmas give the expansions of
$\Pi_{in}(u)$ for $i=1,\ldots,5$ and $\sum_{t=1}^n C_t(u)$. The key
technical argument is for the second lemma, for which we use the
bracketing method in \citet{r32}.
\begin{asm}\label{asm26}
$\eta_t$ has zero median with $E|\eta_t|=1$ and a
continuous density
function $g(x)$ satisfying $g(0)>0$ and $\sup_{x\in R}g(x)<\infty$.
\end{asm}
\begin{lem}\label{lemma21}
Let $T_n=T_{1n}+T_{2n}$. If Assumptions \ref{asm21}--\ref{asm26} hold, then
\[
\f{1}{\sqrt{n}}T_n\rightarrow_d N(0,\Omega_0) \qquad\mbox{as }
n\to\infty,
\]
where $\to_d$ denotes the convergence in distribution and
\[
\Omega_0=E\biggl(\f{w_t^{2}}{h_t(\theta_0)}\,\f{\p\e_t(\gamma_0)}{\p\theta}\,\f{\p
\e_t(\gamma_0)}{\p\theta'}\biggr)
+\f{E\eta_t^{2}-1}{4}E\biggl(\f{w_t^{2}}{h_t^{2}(\theta_0)}\,\f{\p
h_t(\theta_0)}{\p\theta}\,\f{\p h_t(\theta_0)}{\p\theta'}\biggr).
\]
\end{lem}
\begin{lem}\label{lemma22}
If Assumptions \ref{asm21}--\ref{asm26} hold, then for any sequence of random
variables $u_n$ such that $u_n=o_p(1)$, it follows that
\[
\Pi_{1n}(u_n)=o_p\bigl(\sqrt{n}\|u_n\|+n\|u_n\|^2\bigr),
\]
where $o_p(\cdot)\to0$ in probability as $n\to\infty$.
\end{lem}
\begin{lem}\label{lemma23}
If Assumptions \ref{asm21}--\ref{asm26} hold, then for any sequence of random
variables $u_n$ such that $u_n=o_p(1)$, it follows that:
\begin{eqnarray*}
\mbox{\textup{(i)}\quad\hspace*{10pt}} \Pi_{2n}(u_n)&=&\bigl(\sqrt{n}u_n\bigr)'\Sigma_1\bigl(\sqrt{n}u_n\bigr)+o_p(n\|u_n\|
^2),\\
\mbox{\textup{(ii)}\quad\hspace*{10pt}} \Pi_{3n}(u_n)&=&o_p(n\|u_n\|^2),\\
\mbox{\textup{(iii)}\quad\hspace*{10pt}} \Pi_{4n}(u_n)&=&\bigl(\sqrt{n}u_n\bigr)'\Sigma_2\bigl(\sqrt{n}u_n\bigr)+o_p(n\|u_n\|
^2),\\
\mbox{\textup{(iv)}\quad\hspace*{10pt}} \Pi_{5n}(u_n)&=&o_p(n\|u_n\|^2),\\
\mbox{\textup{(v)}\quad} \sum_{t=1}^n C_t(u_n)&=&o_p(n\|u_n\|^2),
\end{eqnarray*}
where
\[
\Sigma_1=g(0)E\biggl(\f{w_t}{h_t(\theta_0)}\,\f{\p\e_t(\gamma_0)}{\p\theta}\,\f{\p
\e_t(\gamma_0)}{\p\theta'}\biggr)
\]
and
\[
\Sigma_2=\frac{1}{8}E\biggl(\f{w_t}{h_t^2(\theta_0)}\,\f{\p
h_t(\theta_0)}{\p\theta}\,\f{\p h_t(\theta_0)}{\p\theta'}\biggr).
\]
\end{lem}
The proofs of Lemmas \ref{lemma22} and \ref{lemma23} are given in Section
\ref{sec6}. We now can state
one main result as follows:
\begin{theorem}\label{theorem22}
If Assumptions \ref{asm21}--\ref{asm26} hold, then:
\begin{eqnarray*}
\mbox{\textup{(i)}\quad} \sqrt{n}(\hat{\theta}_{sn}-\theta_0)&=&O_p(1),\\
\mbox{\textup{(ii)}\quad} \sqrt{n}(\hat{\theta}_{sn}-\theta_0)&\to_d& N\bigl(0,
\tfrac{1}{4}\Sigma_0^{-1}\Omega_0\Sigma_0^{-1}\bigr) \qquad\mbox{as }
n\to\infty,
\end{eqnarray*}
where $\Sigma_0=\Sigma_1+\Sigma_2$.
\end{theorem}
\begin{pf}
(i) First, we have $\hat{u}_n=o_p(1)$ by Theorem \ref{theorem21}. Furthermore,
by~(\ref{25}), (\ref{28}) and (\ref{29}) and Lemmas \ref{lemma22} and \ref
{lemma23}, we have
\begin{equation} \label{210}
\qquad L_n(\hat{u}_n)=\hat{u}_n' T_n + \bigl(\sqrt{n}\hat{u}_n\bigr)'\Sigma_0
\bigl(\sqrt{n}\hat{u}_n\bigr)+ o_p\bigl(\sqrt{n}\|\hat{u}_n\|+n\|\hat{u}_n\|^2\bigr).
\end{equation}
Let $\lambda_{\min}>0$ be the minimum eigenvalue of $\Sigma_0$. Then
\[
L_n(\hat{u}_n)\geq-\bigl\|\sqrt{n}\hat{u}_n\bigr\|
\biggl[\biggl\|\f{1}{\sqrt{n}}T_n\biggr\|+o_p(1)\biggr]+n\|\hat{u}_n\|^2[\lambda_{\min}+o_p(1)].
\]
Note that $L_n(\hat{u}_{n})\leq0$. By the previous inequality, it
follows that
\begin{equation}\label{211}
\sqrt{n}\| \hat{u}_n\|\le[\lambda_{\min}+o_p(1)]^{-1}
\biggl[\biggl\|\f{1}{\sqrt{n}}T_n\biggr\|+o_p(1)\biggr]=O_p(1),
\end{equation}
where the last step holds by Lemma \ref{lemma21}. Thus, (i) holds.
(ii) Let $u^*_n=-\Sigma_0^{-1}T_n/2n$. Then, by Lemma \ref{lemma21},
we have
\[
\sqrt{n}u^*_n\rightarrow_d
N\bigl(0,\tfrac{1}{4}\Sigma_0^{-1}\Omega_0\Sigma_0^{-1}\bigr)
\qquad\mbox{as } n\to\infty.
\]
Hence, it is sufficient to show that
$\sqrt{n}\hat{u}_n-\sqrt{n}u_n^*=o_p(1)$. By (\ref{210}) and~(\ref{211}), we have
\begin{eqnarray*}
L_n(\hat{u}_n)&=&\bigl(\sqrt{n}\hat{u}_n\bigr)'
\frac{1}{\sqrt{n}}T_n+\bigl(\sqrt{n}\hat
{u}_n\bigr)'\Sigma_0
\bigl(\sqrt{n}\hat{u}_n\bigr)+o_p(1)\\
&=&\bigl(\sqrt{n}\hat{u}_n\bigr)'\Sigma_0
\bigl(\sqrt{n}\hat{u}_n\bigr)-2\bigl(\sqrt{n}\hat{u}_n\bigr)'\Sigma_0
\bigl(\sqrt{n}u^*_n\bigr)+o_p(1).
\end{eqnarray*}
Note that (\ref{210}) still holds when $\hat{u}_n$ is replaced by
$u^*_n$. Thus,
\begin{eqnarray*}
L_n(u_n^*)&=&\bigl(\sqrt{n}u_n^*\bigr)'\frac{1}{\sqrt{n}}T_n+\bigl(\sqrt{n}u_n^*\bigr)'
\Sigma_0
\bigl(\sqrt{n}u_n^*\bigr)+o_p(1) \\
&=&-\bigl(\sqrt{n}u_n^*\bigr)'\Sigma_0 \bigl(\sqrt{n}u_n^*\bigr)+o_p(1).
\end{eqnarray*}
By the previous two equations, it follows that
\begin{eqnarray}\label{212}
\qquad L_n(\hat{u}_n)-L_n(u^*_n)&=&\bigl(\sqrt{n}\hat{u}_n-\sqrt{n}u_n^*\bigr)'\Sigma_0
\bigl(\sqrt{n}\hat{u}_n-\sqrt{n}u_n^*\bigr)+o_p(1)
\nonumber\\[-8pt]\\[-8pt]
\qquad&\ge&\lambda_{\min}\bigl\|\sqrt{n}\hat{u}_n-\sqrt{n}u_n^*\bigr\|^{2}+o_{p}(1).\nonumber
\end{eqnarray}
Since
$L_n(\hat{u}_n)-L_n(u^*_n)=n[L_{sn}(\theta_{0}+\hat{u}_n)-L_{sn}(\theta
_{0}+u^*_n)]
\le0$ a.s., by (\ref{212}), we have
$\|\sqrt{n}\hat{u}_n-\sqrt{n}u_n^*\|=o_{p}(1)$. This completes the
proof.
\end{pf}
\begin{rem}
When $w_{t}\equiv1$, the limiting distribution in Theorem \ref
{theorem22} is the
same as that in \citet{r23}. When $r=s=0$ (ARMA model), it
reduces to the case in \citet{r30} and \citet{r36}.
In general, it is not easy to compare the asymptotic efficiency of
the self-weighted QMELE and the self-weighted QMLE in \citet{r25}.
However, for the pure ARCH model, a~formal comparison of these two
estimators is given in Section \ref{sec3}. For the general ARMA--GARCH model,
a comparison based on simulation is given in Section \ref{sec4}.
\end{rem}
In practice, the initial values $Y_{0}$ are unknown and have to be
replaced by some constants. Let $\tilde{\e}_{t}(\theta)$,
$\tilde{h}_{t}(\theta)$ and $\tilde{w}_{t}$ be $\e_{t}(\theta)$,
$h_{t}(\theta)$ and $w_{t}$, respectively, when $Y_{0}$ are
constants not depending on parameters. Usually, $Y_{0}$ are taken to
be zeros. The objective function (\ref{23}) is modified as
\[
\tilde{L}_{sn}(\theta)=\frac{1}{n}\sum_{t=1}^{n}
\tilde{w}_{t}\tilde{l}_{t}(\theta) \quad\mbox{and}\quad
\tilde{l}_{t}(\theta)=\log\sqrt{\tilde{h}_t(\theta)}+\f{|\tilde{\e
}_t(\gamma)|}{\sqrt{\tilde{h}_t(\theta)}}.
\]
To make the initial values $Y_{0}$ ignorable, we need the following
assumption.
\begin{asm} \label{asm27}
$E|w_{t}-\tilde{w}_{t}|^{\iota_{0}/4}=O(t^{-2})$, where
$\iota_{0}=\min\{\iota,1\}$.
\end{asm}
Let $\tilde{\theta}_{sn}$ be the minimizer of
$\tilde{L}_{sn}(\theta)$, that is,
\[
\tilde{\theta}_{sn}=\mathop{\arg\min}_{\Theta}\tilde{L}_{sn}(\theta).
\]
Theorem \ref{theorem23} below shows that $\tilde{\theta}_{sn}$ and
$\hat{\theta}_{sn}$ have the same limiting property. Its proof is
straightforward and can be found in \citet{r34}.
\begin{theorem}\label{theorem23}
Suppose that Assumption \ref{asm27} holds. Then, as $n\to\infty$,
\begin{eqnarray*}
&&\mbox{\hphantom{i}\textup{(i)}\quad}\mbox{if the assumptions of Theorem \ref{theorem21} hold}\\
&&\qquad\qquad\tilde
{\theta}_{sn}\to\theta_{0} \qquad\mbox{a.s.},\\
&&\mbox{\textup{(ii)}\quad}\mbox{if the assumptions of Theorem \ref{theorem22} hold}\\
&&\qquad\qquad\sqrt{n}(\tilde{\theta}_{sn}-\theta_0)\rightarrow_d
N\bigl(0,\tfrac{1}{4}\Sigma_0^{-1}\Omega_0\Sigma_0^{-1}\bigr).
\end{eqnarray*}
\end{theorem}
\section{Local QMELE}\label{sec3}
The self-weighted QMELE in Section \ref{sec2} reduces the moment condition of
$\e_t$, but it may not be efficient. In this section, we propose a
local QMELE based on the self-weighted QMELE and derive its
asymptotic property. For some special cases, a formal comparison of
the local QMELE and the self-weighted QMELE is given.\vspace*{1pt}
Using $\hat{\theta}_{sn}$ in Theorem \ref{theorem22} as an initial
estimator of
$\theta_0$, we obtain the local QMELE $\hat{\theta}_n$ through the
following one-step iteration:
\begin{equation}\label{31}
\hat{\theta}_n=\hat{\theta}_{sn}-[2\Sigma_n^{*}(\hat{\theta
}_{sn})]^{-1}T_n^{*}(\hat{\theta}_{sn}),
\end{equation}
where
\begin{eqnarray*}
\Sigma_n^{*}(\theta)&=&\sum_{t=1}^n\biggl\{\f{g(0)}{h_t(\theta)}\,\f{\p\e
_t(\gamma)}{\p\theta}\,\f{\p\e_t(\gamma)}{\p\theta'}
+ \f{1}{8h_t^2(\theta)}\,\f{\p
h_t(\theta)}{\p\theta}\,\f{\p h_t(\theta)}{\p\theta'}\biggr\},\\
T_n^{*}(\theta)&=&\sum_{t=1}^n \biggl\{\f{1}{\sqrt{h_t(\theta)}}\,\f{\p\e
_t(\gamma)}{\p\theta} \bigl[I\bigl(\eta_t(\theta)>0\bigr)-I\bigl(\eta_t(\theta)<0\bigr)\bigr] \\
&&\hspace*{87.3pt}{} +\f{1}{2h_t(\theta)}\,\f{\p
h_t(\theta)}{\p\theta}\bigl(1-|\eta_t(\theta)|\bigr)\biggr\}.
\end{eqnarray*}
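The one-step iteration (\ref{31}) is a single Newton-type correction and requires no further optimization. A minimal Python sketch (the function name is ours), taking the matrix $\Sigma_n^{*}(\hat{\theta}_{sn})$ and the vector $T_n^{*}(\hat{\theta}_{sn})$ as precomputed inputs:

```python
import numpy as np

def one_step_qmele(theta_s, Sigma_star, T_star):
    """One-step local QMELE (31): theta_hat = theta_s - (2 Sigma_n*)^{-1} T_n*."""
    return theta_s - np.linalg.solve(2.0 * Sigma_star, T_star)
```

Solving the linear system rather than inverting $\Sigma_n^{*}$ explicitly is the standard numerically stable choice.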
In order to get the asymptotic normality of $\hat{\theta}_n$, we
need one more assumption as follows:
\begin{asm} \label{asm31}
$E\eta_t^2\sum_{i=1}^{r} \alpha_{0i}+\sum_{i=1}^{s} \beta_{0i}<1$ or
\[
E\eta_t^2\sum_{i=1}^{r} \alpha_{0i}+\sum_{i=1}^{s} \beta_{0i}=1
\]
with $\eta_t$ having a positive density on $R$ such that
$E|\eta_t|^{\tau}<\infty$ for all $\tau<\tau_0$ and
$E|\eta_t|^{\tau_0}=\infty$ for some $\tau_0\in(0,\infty]$.
\end{asm}
Under Assumption \ref{asm31}, there exists a unique strictly stationary
causal solution to GARCH model (\ref{12}); see \citet{r8}
and \citet{r1}. The condition $E\eta_t^2\sum_{i=1}^{r}
\alpha_{0i}+\sum_{i=1}^{s} \beta_{0i}<1$ is necessary and sufficient
for $E\e_t^2<\infty$ under which model (\ref{12}) has a finite variance.
When $E\eta_t^2\sum_{i=1}^{r} \alpha_{0i}+\sum_{i=1}^{s}
\beta_{0i}=1$, model (\ref{12}) is called an IGARCH model. The IGARCH model
has an infinite variance, but $E|\e_t|^{2\iota}<\infty$ for all
$\iota\in(0,1)$ under Assumption \ref{asm31}; see \citet{r25}. Assumption
\ref{asm31} is crucial for the ARMA--IGARCH model. From Figure \ref
{figure2} in Section
\ref{sec2}, we can see that the parameter\vspace*{1pt} region specified in Assumption \ref{asm31}
is much bigger than that for $E|\e_t|^{3}<\infty$ which is required
for the asymptotic normality of the global QMELE. We now give the
following lemma; its proof is straightforward and can be found
in \citet{r34}.
\begin{lem}\label{lemma31}
If Assumptions \ref{asm21}--\ref{asm23}, \ref{asm26} and \ref{asm31}
hold, then for any sequence of
random variables $\theta_n$ such that
$\sqrt{n}(\theta_n-\theta_0)=O_p(1)$, it follows that:
\begin{eqnarray*}
\mbox{\textup{(i)}\quad} \f{1}{n}[T_n^{*}(\theta_{n})-T_n^{*}(\theta_0)]&=&[2\Sigma
+o_p(1)](\theta_n-\theta_0)+o_p\biggl(\f{1}{\sqrt{n}}\biggr),\\
\mbox{\textup{(ii)}\hspace*{50pt}\quad} \f{1}{n}\Sigma_n^{*}(\theta_{n})&=&\Sigma+o_p(1),\\
\mbox{\textup{(iii)}\hspace*{42pt}\quad} \f{1}{\sqrt{n}}T_n^{*}(\theta_0)&\to_d&
N(0,\Omega) \qquad\mbox{as } n\to\infty,
\end{eqnarray*}
where
\begin{eqnarray*}
\Omega&=&E\biggl(\f{1}{h_t(\theta_0)}\,\f{\p\e_t(\gamma_0)}{\p\theta}\,\f{\p\e
_t(\gamma_0)}{\p\theta'}\biggr)
+\f{E\eta_t^{2}-1}{4}E\biggl(\f{1}{h_t^{2}(\theta_0)}\,\f{\p
h_t(\theta_0)}{\p\theta}\,\f{\p h_t(\theta_0)}{\p\theta'}\biggr),\\
\Sigma&=&g(0)E\biggl(\f{1}{h_t(\theta_0)}\,\f{\p\e_t(\gamma_0)}{\p\theta}\,\f{\p\e
_t(\gamma_0)}{\p\theta'}\biggr)
+\f{1}{8}E\biggl(\f{1}{h_t^2(\theta_0)}\,\f{\p
h_t(\theta_0)}{\p\theta}\,\f{\p h_t(\theta_0)}{\p\theta'}\biggr).
\end{eqnarray*}
\end{lem}
\begin{theorem}\label{theorem31}
If the conditions in Lemma \ref{lemma31} are satisfied, then
\[
\sqrt{n}(\hat{\theta}_n-\theta_0)\rightarrow_d
N\bigl(0,\tfrac{1}{4}\Sigma^{-1}\Omega\Sigma^{-1}\bigr) \qquad\mbox{as }
n\to\infty.
\]
\end{theorem}
\begin{pf}
Note that $\sqrt{n}(\hat{\theta}_{sn}-\theta_{0})=O_p(1)$. By (\ref{31})
and Lemma \ref{lemma31}, we have that
\begin{eqnarray*}
\hat{\theta}_{n}&=&\hat{\theta}_{sn}-\biggl[\f{2}{n}\Sigma_{n}^{*}(\hat{\theta
}_{sn})\biggr]^{-1}
\biggl[\f{1}{n}T_n^{*}(\hat{\theta}_{sn})\biggr]\\
&=&\hat{\theta}_{sn}-[2\Sigma+o_p(1)]^{-1} \biggl\{\f{1}{n}T_n^{*}(\theta
_0)+[2\Sigma+o_p(1)](\hat{\theta}_{sn}-\theta_0)+o_p\biggl(\f{1}{\sqrt{n}}\biggr)\biggr\}
\\
&=&\theta_0+\f{\Sigma^{-1}T_n^{*}(\theta_0)}{2n}+o_p\biggl(\f{1}{\sqrt{n}}\biggr).
\end{eqnarray*}
It follows that
\[
\sqrt{n}(\hat{\theta}_n-\theta_0)=\f{\Sigma^{-1}T_n^{*}(\theta
_0)}{2\sqrt{n}}+o_p(1).
\]
By Lemma \ref{lemma31}(iii), we can see that the conclusion holds. This
completes the proof.
\end{pf}
\begin{rem}
In practice, by using $\tilde{\theta}_{sn}$ in Theorem \ref{theorem23}
as an
initial estimator of $\theta_{0}$, the local QMELE has to be
modified as follows:
\[
\hat{\theta}_n=\tilde{\theta}_{sn}-[2\tilde{\Sigma}_n^{*}(\tilde{\theta
}_{sn})]^{-1}\tilde{T}_n^{*}(\tilde{\theta}_{sn}),
\]
where $\tilde{\Sigma}_{n}^{*}(\theta)$ and
$\tilde{T}_{n}^{*}(\theta)$ are defined in the same way as
$\Sigma_{n}^{*}(\theta)$ and $T_{n}^{*}(\theta)$, respectively, with
$\e_{t}(\theta)$ and
$h_{t}(\theta)$ being replaced by $\tilde{\e}_{t}(\theta)$ and
$\tilde{h}_{t}(\theta)$. However, this does not affect the
asymptotic property of $\hat{\theta}_{n}$; see Theorem 4.3.2 in
\citet{r34}.
\end{rem}
We now compare the asymptotic efficiency of the local QMELE and the
self-weighted QMELE. First, we consider the pure ARMA model, that is,
model (\ref{11})--(\ref{12}) with $h_t$ being a constant. In this case,
\begin{eqnarray*}
\Omega_0&=&E(w_t^{2}X_{1t}X_{1t}'), \qquad
\Sigma_0=g(0)E(w_t X_{1t}X_{1t}'),\\
\Omega&=&E(X_{1t}X_{1t}')
\quad\mbox{and}\quad \Sigma=g(0)\Omega,
\end{eqnarray*}
where $X_{1t}=h_t^{-1/2}\p\e_t(\gamma_0)/\p\theta$. Let $b$ and $c$ be
any two $m$-dimensional constant vectors. Then,
\begin{eqnarray*}
c'\Sigma_0bb'\Sigma_0c&=&\bigl\{E\bigl[\bigl(c'\sqrt{g(0)}w_tX_{1t}\bigr)\bigl(\sqrt
{g(0)}X_{1t}'b\bigr)\bigr]\bigr\}^2\\
&\leq& E\bigl(c'\sqrt{g(0)}w_tX_{1t}\bigr)^2E\bigl(\sqrt{g(0)}X_{1t}'b\bigr)^2\\
&=&[c'g(0)\Omega_0c][b'\Sigma b]=c'[g(0)\Omega_0b'\Sigma b]c.
\end{eqnarray*}
Thus, $g(0)\Omega_0b'\Sigma b-\Sigma_0bb'\Sigma_0\geq0$ (a
positive semi-definite matrix) and hence $b'\Sigma_0\Omega_0^{-1}\Sigma
_0b=\operatorname{tr}(\Omega_0^{-1/2}\Sigma_0bb'\Sigma_0\Omega_0^{-1/2})\leq
\operatorname{tr}(g(0)b'\Sigma b)=g(0)b'\Sigma b$. It follows that
$\Sigma_0^{-1}\Omega_0\Sigma_0^{-1}\geq
[g(0)\Sigma]^{-1}=\Sigma^{-1}\Omega\Sigma^{-1}$. Thus, the local
QMELE is more efficient than the self-weighted QMELE. Similarly, we
can show that the local QMELE is more efficient than the
self-weighted QMELE for the pure GARCH model.
For the general model (\ref{11})--(\ref{12}), it is not easy to compare the
asymptotic efficiency of the self-weighted QMELE and the local
QMELE. However, when $\eta_t\sim \operatorname{Laplace}(0,1)$, we have
\begin{eqnarray*}
\Sigma_0&=&E\biggl(\f{w_t}{2}X_{1t}X_{1t}'+\f{w_t}{8}X_{2t}X_{2t}'\biggr),\\
\Omega_0&=&E\biggl(w_t^2X_{1t}X_{1t}'+\f{w_t^2}{4}X_{2t}X_{2t}'\biggr),\\
\Sigma&=&E\bigl(\tfrac{1}{2}X_{1t}X_{1t}'+\tfrac{1}{8}X_{2t}X_{2t}'\bigr)
\quad\mbox{and}\quad \Omega=2\Sigma,
\end{eqnarray*}
where $X_{2t}=h_t^{-1}\p h_t(\theta_0)/\p\theta$. Then, it is easy
to see that
\begin{eqnarray*}
&&c'\Sigma_0bb'\Sigma_0c\\
&&\qquad=\{
E[(c'2^{-1/4}w_tX_{1t})(2^{-3/4}X_{1t}'b)+(c'2^{-5/4}w_tX_{2t})(2^{-7/4}X_{2t}'b)]\}
^2\\
&&\qquad\leq\bigl\{\sqrt{E(c'2^{-1/4}w_tX_{1t})^2E(2^{-3/4}X_{1t}'b)^2}\\
&&\qquad\quad\hspace*{4pt}{}+\sqrt
{E(c'2^{-5/4}w_tX_{2t})^2E(2^{-7/4}X_{2t}'b)^2}\bigr\}^2\\
&&\qquad\leq
[E(c'2^{-1/4}w_tX_{1t})^2+E(c'2^{-5/4}w_tX_{2t})^2]\\
&&\qquad\quad{}\times[E(2^{-3/4}X_{1t}'b)^2+E(2^{-7/4}X_{2t}'b)^2]\\
&&\qquad=[c'2^{-1/2}\Omega_0c][b'2^{-1/2}\Sigma
b]=c'[2^{-1}\Omega_0b'\Sigma b]c.
\end{eqnarray*}
Thus, $2^{-1}\Omega_0b'\Sigma b-\Sigma_0bb'\Sigma_0\geq0$ and hence
$b'\Sigma_0\Omega_0^{-1}\Sigma_0b=\operatorname{tr}(\Omega_0^{-1/2}
\Sigma_0bb'\times\Sigma
_0\Omega_0^{-1/2})
\leq \operatorname{tr}(2^{-1}b'\Sigma b)=2^{-1}b'\Sigma b$.
It follows that $\Sigma_0^{-1}\Omega_0\Sigma_0^{-1}\geq
2\Sigma^{-1}=\break\Sigma^{-1}\Omega\Sigma^{-1}$. Thus, the local QMELE is
more efficient than the global self-weighted QMELE.
Finally, we compare the asymptotic efficiency of the
self-weighted QMELE and the self-weighted QMLE in \citet{r25} for
the pure ARCH model, when $E\eta_t^4<\infty$. We reparametrize model
(\ref{12}) when $s=0$ as follows:
\begin{equation}\label{32}
y_t=\eta_t^{*}\sqrt{h_t^{*}} \quad\mbox{and}\quad
h_t^{*}=\alpha_0^{*}+\sum_{i=1}^{r} \alpha_i^{*}y_{t-i}^2,
\end{equation}
where $\eta_t^{*}=\eta_t/\sqrt{E\eta_t^2}$, $h_t^{*}=(E\eta_t^2)h_t$
and
$\theta^{*}=(\alpha_0^{*},\alpha_1^{*},\ldots,\alpha_r^{*})'=(E\eta
_t^2)\theta$.\break Let~$\tilde{\theta}_{sn}^{*}$ be the self-weighted QMLE of the true
parameter, $\theta_0^{*}$, in model~(\ref{32}). Then,
$\tilde{\theta}_{sn}=\tilde{\theta}_{sn}^{*}/E\eta_t^2$ is the
self-weighted QMLE of $\theta_0$, and its asymptotic covariance is
\[
\Gamma_1=\kappa_1
[E(w_tX_{2t}X_{2t}')]^{-1}E(w_t^2X_{2t}X_{2t}')[E(w_tX_{2t}X_{2t}')]^{-1},
\]
where $\kappa_1=E\eta_t^4/(E\eta_t^2)^2-1$. By Theorem \ref{theorem22}, the
asymptotic variance of the self-weighted QMELE is
\[
\Gamma_2=\kappa_2
[E(w_tX_{2t}X_{2t}')]^{-1}E(w_t^2X_{2t}X_{2t}')[E(w_tX_{2t}X_{2t}')]^{-1},
\]
where $\kappa_2=4(E\eta_t^2-1)$. When $\eta_t\sim \operatorname{Laplace}(0,1)$,
$\kappa_1=5$ and $\kappa_2=4$. Thus, $\Gamma_1>\Gamma_2$, meaning
that the self-weighted QMELE is more efficient than the
self-weighted QMLE. When
$\eta_{t}=\tilde{\eta}_t/E|\tilde{\eta}_t|$, with $\tilde{\eta}_t$
having the following mixing normal density:
\[
f(x)=(1-\varepsilon)\phi(x)+\f{\varepsilon}{\tau}\phi\biggl(\f{x}{\tau}\biggr),
\]
we have $E|\eta_{t}|=1$,
\[
E\eta^2_t=\f{\pi(1-\e+\e\tau^2)}{2(1-\e+\e\tau)^2}
\]
and
\[
E\eta_t^4=\f{3\pi(1-\e+\e\tau^4)}{2(1-\e+\e\tau)^{2}(1-\e+\e\tau^2)},
\]
where $\phi(x)$ is the pdf of the standard normal distribution, $0\leq\e\leq1$ and
$\tau>0$. The asymptotic efficiencies of the self-weighted QMELE and
the self-weighted QMLE depend on $\e$ and $\tau$. For example, when
$\e=1$ and $\tau=\sqrt{\pi/2}$, we have $\kappa_1=(6-\pi)/\pi$ and
$\kappa_2=2\pi-4$, and hence the self-weighted QMLE is more
efficient than the self-weighted QMELE since $\Gamma_1<\Gamma_2$.
When $\e=0.99$ and $\tau=0.1$, we have $\kappa_1=28.1$ and
$\kappa_2=6.5$, and hence the self-weighted QMELE is more efficient
than the self-weighted QMLE since $\Gamma_1>\Gamma_2$.
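The efficiency comparison in this paragraph reduces to comparing the scalars $\kappa_1=E\eta_t^4/(E\eta_t^2)^2-1$ and $\kappa_2=4(E\eta_t^2-1)$. The following Python sketch (function names are ours) evaluates both constants from the moment formulas above and reproduces the values quoted in the text:

```python
import math

def kappas(E2, E4):
    """kappa1 (self-weighted QMLE constant) and kappa2 (self-weighted QMELE
    constant); Gamma_1 and Gamma_2 are proportional to these."""
    return E4 / E2 ** 2 - 1.0, 4.0 * (E2 - 1.0)

def mixture_moments(eps, tau):
    """E eta^2 and E eta^4 for eta_t = tilde_eta_t / E|tilde_eta_t| under the
    mixing-normal density f, using the formulas displayed above."""
    d = 1.0 - eps + eps * tau
    E2 = math.pi * (1.0 - eps + eps * tau ** 2) / (2.0 * d ** 2)
    E4 = (3.0 * math.pi * (1.0 - eps + eps * tau ** 4)
          / (2.0 * d ** 2 * (1.0 - eps + eps * tau ** 2)))
    return E2, E4
```

For the standardized $\operatorname{Laplace}(0,1)$ case, $E\eta_t^2=2$ and $E\eta_t^4=24$ give $\kappa_1=5$ and $\kappa_2=4$; the mixture cases $(\e,\tau)=(1,\sqrt{\pi/2})$ and $(0.99,0.1)$ give the constants reported in the text.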
\section{Simulation}\label{sec4}
In this section, we compare the performance of the global
self-weighted QMELE ($\hat{\theta}_{sn}$), the global self-weighted
QMLE ($\bar{\theta}_{sn}$), the local QMELE $(\hat{\theta}_n)$ and
the local QMLE $(\bar{\theta}_n)$. The following AR(1)--GARCH$(1,1)$
model is used to generate data samples:
\begin{eqnarray}\label{41}
y_t&=&\mu+\phi_1 y_{t-1}+\e_t, \nonumber\\[-8pt]\\[-8pt]
\e_t&=&\eta_t\sqrt{h_t} \quad\mbox{and}\quad
h_t=\alpha_0+\alpha_1\e_{t-1}^2+\beta_1h_{t-1}.\nonumber
\end{eqnarray}
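Data from model (\ref{41}) can be generated recursively. The following Python sketch uses $\operatorname{Laplace}(0,1)$ innovations, for which $E|\eta_t|=1$ already holds, and discards a burn-in period; the function name and burn-in length are our choices, not part of the paper's design.

```python
import numpy as np

def simulate_ar_garch(n, mu, phi1, alpha0, alpha1, beta1, seed=0):
    """Generate n observations from the AR(1)-GARCH(1,1) model (41)
    with Laplace(0,1) innovations (scale 1, so E|eta_t| = 1)."""
    rng = np.random.default_rng(seed)
    burn = 500
    eta = rng.laplace(0.0, 1.0, n + burn)
    y = np.zeros(n + burn)
    h = np.zeros(n + burn)
    eps = np.zeros(n + burn)
    h[0] = alpha0
    for t in range(1, n + burn):
        h[t] = alpha0 + alpha1 * eps[t - 1] ** 2 + beta1 * h[t - 1]
        eps[t] = eta[t] * np.sqrt(h[t])
        y[t] = mu + phi1 * y[t - 1] + eps[t]
    return y[burn:]
```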
We set the sample size $n=1\mbox{,}000$ and use $1\mbox{,}000$ replications, and
study the cases when $\eta_t$ has $\operatorname{Laplace}(0,1)$, $N(0,1)$ and $t_3$
distribution. For the case with $E\e_{t}^{2}<\infty$ (i.e.,
$E\eta_{t}^{2} \alpha_{01}+\beta_{01}<1$), we take\vspace*{1pt}
$\theta_0=(0.0,0.5,0.1,0.18,0.4)$. For the IGARCH case (i.e.,
$E\eta_{t}^{2} \alpha_{01}+\beta_{01}=1$), we take
$\theta_0=(0.0,0.5,0.1,0.3,0.4)$ when $\eta_t\sim \operatorname{Laplace}(0,1)$,
$\theta_0=(0.0,0.5,0.1,0.6,0.4)$ when $\eta_t\sim N(0,1)$ and
$\theta_0=(0.0,0.5,0.1,0.2,0.4)$ when $\eta_t\sim t_3$. We
standardize the distribution of~$\eta_t$ to ensure that
$E|\eta_t|=1$ for the QMELE. Tables \ref{table1}--\ref{table3} list the
sample biases, the
sample standard deviations (SD)\vspace*{1pt} and the asymptotic standard
deviations (AD) of $\hat{\theta}_{sn}, \bar{\theta}_{sn}$,
$\hat{\theta}_n$ and $\bar{\theta}_{n}$. We choose $w_{t}$ as in
(\ref{24}) with $C$ being the 90\% quantile of $\{y_{1},\ldots,y_{n}\}$ and
$y_{i}\equiv0$ for $i\leq0$. The ADs\vspace*{1pt} in Theorems~\ref{theorem22}
and \ref{theorem31} are estimated by
$\hat{\chi}_{sn}=1/4\hat{\Sigma}_{sn}^{-1}\hat{\Omega}_{sn}\hat{\Sigma
}_{sn}^{-1}$
and
$\hat{\chi}_{n}=1/4\hat{\Sigma}_{n}^{-1}\hat{\Omega}_{n}\hat{\Sigma}_{n}^{-1}$,
respectively, where
{\small\begin{eqnarray*}
\hat{\Sigma}_{sn}&=&\f{1}{n}\sum_{t=1}^{n}
\biggl\{\f{g(0)w_t}{h_t(\hat{\theta}_{sn})}\,\f{\p\e_t(\hat{\gamma}_{sn})}{\p
\theta}\,\f{\p\e_t(\hat{\gamma}_{sn})}{\p\theta'}
+\f{w_t}{8h_t^{2}(\hat{\theta}_{sn})}\,\f{\p
h_t(\hat{\theta}_{sn})}{\p\theta}\,\f{\p
h_t(\hat{\theta}_{sn})}{\p\theta'}\biggr\},\\
\hat{\Omega}_{sn}&=&\f{1}{n}\sum_{t=1}^{n}
\biggl\{\f{w_t^{2}}{h_t(\hat{\theta}_{sn})}\,\f{\p\e_t(\hat{\gamma}_{sn})}{\p
\theta}\,\f{\p\e_t(\hat{\gamma}_{sn})}{\p\theta'}
+\f{E\eta_t^{2}-1}{4}\f{w_t^{2}}{h_t^{2}(\hat{\theta}_{sn})}\,\f{\p
h_t(\hat{\theta}_{sn})}{\p\theta}\,\f{\p
h_t(\hat{\theta}_{sn})}{\p\theta'}\biggr\},\\
\hat{\Sigma}_{n}&=&\f{1}{n}\sum_{t=1}^{n}
\biggl\{\f{g(0)}{h_t(\hat{\theta}_{n})}\,\f{\p\e_t(\hat{\gamma}_{n})}{\p\theta
}\,\f{\p\e_t(\hat{\gamma}_{n})}{\p\theta'}
+\f{1}{8h_t^{2}(\hat{\theta}_{n})}\,\f{\p
h_t(\hat{\theta}_{n})}{\p\theta}\,\f{\p
h_t(\hat{\theta}_{n})}{\p\theta'}\biggr\},\\
\hat{\Omega}_{n}&=&\f{1}{n}\sum_{t=1}^{n}
\biggl\{\f{1}{h_t(\hat{\theta}_{n})}\,\f{\p\e_t(\hat{\gamma}_{n})}{\p\theta}\,\f
{\p\e_t(\hat{\gamma}_{n})}{\p\theta'}
+\f{E\eta_t^{2}-1}{4}\f{1}{h_t^{2}(\hat{\theta}_{n})}\,\f{\p
h_t(\hat{\theta}_{n})}{\p\theta}\,\f{\p
h_t(\hat{\theta}_{n})}{\p\theta'}\biggr\}.
\end{eqnarray*}}
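The simulation design above is straightforward to reproduce. The Python sketch below is illustrative only: it assumes model (\ref{41}) is the AR(1)--GARCH$(1,1)$ specification implied by the parameter labels $(\mu,\phi_1,\alpha_0,\alpha_1,\beta_1)$, draws $\eta_t$ from the $\operatorname{Laplace}(0,1)$ distribution (which already satisfies $E|\eta_t|=1$), and shows only the data generation and the sandwich combination $\hat{\chi}=\frac{1}{4}\hat{\Sigma}^{-1}\hat{\Omega}\hat{\Sigma}^{-1}$ for given matrices; the QMELE/QMLE optimizations and the self-weights in (\ref{24}) are not implemented here.

```python
import numpy as np

def simulate_ar1_garch11(n, theta, seed=0, burn=500):
    """Draw y_1,...,y_n from y_t = mu + phi1*y_{t-1} + e_t,
    e_t = eta_t*sqrt(h_t), h_t = a0 + a1*e_{t-1}^2 + b1*h_{t-1},
    with eta_t ~ Laplace(0,1), which already satisfies E|eta_t| = 1."""
    mu, phi1, a0, a1, b1 = theta
    rng = np.random.default_rng(seed)
    eta = rng.laplace(0.0, 1.0, n + burn)
    y = np.zeros(n + burn)
    e = np.zeros(n + burn)
    h = np.full(n + burn, a0)              # crude start value for h_0
    for t in range(1, n + burn):
        h[t] = a0 + a1 * e[t - 1] ** 2 + b1 * h[t - 1]
        e[t] = eta[t] * np.sqrt(h[t])
        y[t] = mu + phi1 * y[t - 1] + e[t]
    return y[burn:]                        # discard the burn-in draws

def sandwich_ad(Sigma_hat, Omega_hat, n):
    """Asymptotic SDs from chi_hat = (1/4) Sigma^{-1} Omega Sigma^{-1}."""
    Si = np.linalg.inv(Sigma_hat)
    chi = 0.25 * Si @ Omega_hat @ Si
    return np.sqrt(np.diag(chi) / n)

# one replication of the GARCH case with theta_0 = (0.0, 0.5, 0.1, 0.18, 0.4)
y = simulate_ar1_garch11(1000, (0.0, 0.5, 0.1, 0.18, 0.4))
```

In a full replication one would estimate $\hat{\theta}_{sn}$ on each simulated series, form $\hat{\Sigma}_{sn}$ and $\hat{\Omega}_{sn}$ as displayed above, and feed them to \texttt{sandwich\_ad} to obtain the AD columns of the tables.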
\begin{table}
\tabcolsep=0pt
\caption{Estimators for model (\protect\ref{41}) when $\eta_t\sim
\operatorname{Laplace}(0,1)$} \label{table1}
{\fontsize{8.5pt}{11pt}\selectfont{\begin{tabular*}{\tablewidth}
{@{\extracolsep{\fill}}ld{2.4}d{2.4}d{1.4}d{1.4}d{2.4}d{1.4}d{2.4}d{1.4}d{2.4}d{2.4}@{}}
\hline
&\multicolumn{5}{c}{$\bolds{\theta_0=(0.0, 0.5, 0.1, 0.18, 0.4)}$}
&\multicolumn{5}{c@{}}{$\bolds{\theta_0=(0.0, 0.5, 0.1, 0.3, 0.4)}$}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{5}{c}{\textbf{Self-weighted QMELE (}$\bolds{\hat{\theta
}_{sn}}$\textbf{)}} & \multicolumn{5}{c@{}}{\textbf{Self-weighted QMELE (}$\bolds{\hat{\theta
}_{sn}}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0sn}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\beta}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{sn}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat
{\alpha}_{0sn}}$} & \multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1sn}}$}&\multicolumn{1}{c@{}}{$\bolds{\hat{\beta}_{1sn}}$}\\
\hline
Bias&0.0004&-0.0023&0.0034&0.0078&-0.0154&0.0003&-0.0049&0.0031&0.0054&-0.0068\\
SD&0.0172&0.0317&0.0274&0.0548&0.1125&0.0195&0.0318&0.0219&0.0640&0.0673\\
AD&0.0166&0.0304&0.0255&0.0540&0.1061&0.0192&0.0311&0.0218&0.0624&0.0664\\
\hline
&\multicolumn{5}{c}{\textbf{Local QMELE (}$\bolds{\hat{\theta}_n}$\textbf{)}}
&\multicolumn{5}{c}{\textbf{Local QMELE (}$\bolds{\hat{\theta}_n}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\hat{\mu}_n}$}&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0n}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\beta}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{n}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1n}}$}
&\multicolumn{1}{c@{}}{$\bolds{\hat{\beta}_{1n}}$}\\
\hline
Bias&0.0008&-0.0019&0.0027&0.0002&-0.0094&0.0010&-0.0044&0.0024&-0.0008&-0.0025\\
SD&0.0170&0.0253&0.0249&0.0400&0.0989&0.0192&0.0261&0.0203&0.0502&0.0591\\
AD&0.0162&0.0245&0.0234&0.0407&0.0920&0.0190&0.0258&0.0206&0.0499&0.0591\\
\hline
&\multicolumn{5}{c}{\textbf{Self-weighted QMLE (}$\bolds{\bar{\theta
}_{sn}}$\textbf{)}}&\multicolumn{5}{c}{\textbf{Self-weighted QMLE
(}$\bolds{\bar{\theta}_{sn}}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0sn}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\beta}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{sn}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1sn}}$}
&\multicolumn{1}{c@{}}{$\bolds{\bar{\beta}_{1sn}}$}\\
\hline
Bias&-0.0003&-0.0016&0.0041&0.0114&-0.0227&0.0005&-0.0039&0.0031&0.0104&-0.0127\\
SD&0.0243&0.0451&0.0301&0.0624&0.1237&0.0283&0.0458&0.0242&0.0750&0.0755\\
AD&0.0240&0.0443&0.0285&0.0607&0.1184&0.0283&0.0461&0.0243&0.0704&0.0741\\
\hline
&\multicolumn{5}{c}{\textbf{Local QMLE (}$\bolds{\bar{\theta}_n}$\textbf{)}}
&\multicolumn{5}{c}{\textbf{Local QMLE (}$\bolds{\bar{\theta}_n}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\bar{\mu}_n}$}&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0n}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\beta}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{n}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1n}}$}
&\multicolumn{1}{c@{}}{$\bolds{\bar{\beta}_{1n}}$}\\
\hline
Bias&0.0007&-0.0034&0.0026&0.0037&-0.0144&0.0022&-0.0045&0.0020&0.0044&-0.0081\\
SD&0.0243&0.0368&0.0279&0.0461&0.1115&0.0282&0.0377&0.0227&0.0579&0.0674\\
AD&0.0236&0.0361&0.0261&0.0459&0.1026&0.0281&0.0384&0.0230&0.0564&0.0659\\
\hline
\end{tabular*}}}
\vspace*{-3pt}
\end{table}
\begin{table}
\tabcolsep=0pt
\caption{Estimators for model (\protect\ref{41}) when $\eta_t\sim N(0,1)$}\label{table2}
\vspace*{-3pt}
{\fontsize{8.5pt}{11pt}\selectfont{\begin{tabular*}{\tablewidth}
{@{\extracolsep{\fill}}ld{2.4}d{2.4}d{1.4}d{1.4}d{2.4}d{2.4}d{2.4}d{1.4}d{2.4}d{2.4}@{}}
\hline
&\multicolumn{5}{c}{$\bolds{\theta_0=(0.0, 0.5, 0.1, 0.18, 0.4)}$}
&\multicolumn{5}{c@{}}{$\bolds{\theta_0=(0.0, 0.5, 0.1, 0.6, 0.4)}$}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{5}{c}{\textbf{Self-weighted QMELE (}$\bolds{\hat{\theta
}_{sn}}$\textbf{)}} & \multicolumn{5}{c@{}}{\textbf{Self-weighted QMELE (}$\bolds{\hat{\theta
}_{sn}}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0sn}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\beta}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{sn}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat
{\alpha}_{0sn}}$} & \multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1sn}}$}&\multicolumn{1}{c@{}}{$\bolds{\hat{\beta}_{1sn}}$}\\
\hline
Bias&0.0003&-0.0042&0.0075&0.0065&-0.0372&-0.0008&-0.0034&0.0029&-0.0019&-0.0028\\
SD&0.0192&0.0457&0.0366&0.0600&0.1738&0.0255&0.0437&0.0204&0.0815&0.0512\\
AD&0.0189&0.0443&0.0379&0.0604&0.1812&0.0257&0.0424&0.0202&0.0809&0.0491\\
\hline
&\multicolumn{5}{c}{\textbf{Local QMELE (}$\bolds{\hat{\theta}_n}$\textbf{)}}
&\multicolumn{5}{c}{\textbf{Local QMELE (}$\bolds{\hat{\theta}_n}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\hat{\mu}_n}$}&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0n}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\beta}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{n}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1n}}$}
&\multicolumn{1}{c@{}}{$\bolds{\hat{\beta}_{1n}}$}\\
\hline
Bias&0.0006&-0.0051&0.0061&0.0019&-0.0268&0.0000&-0.0040&0.0029&-0.0048&-0.0015\\
SD&0.0184&0.0372&0.0357&0.0487&0.1674&0.0252&0.0364&0.0197&0.0671&0.0472\\
AD&0.0183&0.0370&0.0350&0.0488&0.1652&0.0252&0.0359&0.0194&0.0685&0.0453\\
\hline
&\multicolumn{5}{c}{\textbf{Self-weighted QMLE (}$\bolds{\bar{\theta
}_{sn}}$\textbf{)}}&\multicolumn{5}{c}{\textbf{Self-weighted QMLE
(}$\bolds{\bar{\theta}_{sn}}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0sn}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\beta}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{sn}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1sn}}$}
&\multicolumn{1}{c@{}}{$\bolds{\bar{\beta}_{1sn}}$}\\
\hline
Bias&-0.0001&-0.0039&0.0069&0.0089&-0.0361&-0.0006&-0.0016&0.0024&0.0027&-0.0045\\
SD&0.0151&0.0366&0.0333&0.0566&0.1599&0.0196&0.0337&0.0189&0.0770&0.0481\\
AD&0.0150&0.0352&0.0345&0.0568&0.1658&0.0200&0.0329&0.0188&0.0757&0.0459\\
\hline
&\multicolumn{5}{c}{\textbf{Local QMLE (}$\bolds{\bar{\theta}_n}$\textbf{)}}
&\multicolumn{5}{c}{\textbf{Local QMLE (}$\bolds{\bar{\theta}_n}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\bar{\mu}_n}$}&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0n}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\beta}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{n}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1n}}$}
&\multicolumn{1}{c@{}}{$\bolds{\bar{\beta}_{1n}}$}\\
\hline
Bias&0.0009&-0.0048&0.0055&0.0038&-0.0252&0.0004&-0.0031&0.0024&-0.0019&-0.0027\\
SD&0.0145&0.0300&0.0322&0.0454&0.1535&0.0195&0.0287&0.0183&0.0633&0.0442\\
AD&0.0145&0.0294&0.0320&0.0460&0.1517&0.0197&0.0279&0.0181&0.0644&0.0424\\
\hline
\end{tabular*}}}
\vspace*{-6pt}
\end{table}
\begin{table}
\tabcolsep=0pt
\caption{Estimators for model (\protect\ref{41}) when $\eta_t\sim t_3$}
\label{table3}
{\fontsize{8.5pt}{11pt}\selectfont{\begin{tabular*}{\tablewidth}
{@{\extracolsep{\fill}}ld{2.4}d{2.4}d{1.4}d{1.4}d{2.4}d{2.4}d{2.4}d{1.4}d{2.4}d{2.4}@{}}
\hline
&\multicolumn{5}{c}{$\bolds{\theta_0=(0.0, 0.5, 0.1, 0.18, 0.4)}$}
&\multicolumn{5}{c@{}}{$\bolds{\theta_0=(0.0, 0.5, 0.1, 0.2, 0.4)}$}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{5}{c}{\textbf{Self-weighted QMELE (}$\bolds{\hat{\theta
}_{sn}}$\textbf{)}} & \multicolumn{5}{c@{}}{\textbf{Self-weighted QMELE (}$\bolds{\hat{\theta
}_{sn}}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0sn}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\beta}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{sn}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\hat
{\alpha}_{0sn}}$} & \multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1sn}}$}&\multicolumn{1}{c@{}}{$\bolds{\hat{\beta}_{1sn}}$}\\
\hline
Bias&0.0004&-0.0037&0.0059&0.0081&-0.0202&-0.0005&-0.0026&0.0032&0.0088&-0.0158\\
SD&0.0231&0.0416&0.0289&0.0600&0.1084&0.0221&0.0404&0.0252&0.0619&0.0968\\
AD&0.0233&0.0393&0.0282&0.0620&0.1101&0.0238&0.0393&0.0266&0.0637&0.1001\\
\hline
&\multicolumn{5}{c}{\textbf{Local QMELE (}$\bolds{\hat{\theta}_n}$\textbf{)}}
&\multicolumn{5}{c}{\textbf{Local QMELE (}$\bolds{\hat{\theta}_n}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\hat{\mu}_n}$}&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0n}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\beta}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\mu}_{n}}$}
&\multicolumn{1}{c}{$\bolds{\hat{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{0n}}$}&\multicolumn{1}{c}{$\bolds{\hat{\alpha}_{1n}}$}
&\multicolumn{1}{c@{}}{$\bolds{\hat{\beta}_{1n}}$}\\
\hline
Bias&0.0011&-0.0039&0.0041&0.0011&-0.0115&0.0001&-0.0028&0.0019&0.0029&-0.0092\\
SD&0.0229&0.0328&0.0256&0.0429&0.0955&0.0218&0.0325&0.0226&0.0450&0.0842\\
AD&0.0228&0.0314&0.0252&0.0461&0.0918&0.0233&0.0317&0.0243&0.0483&0.0851\\
\hline
&\multicolumn{5}{c}{\textbf{Self-weighted QMLE (}$\bolds{\bar{\theta
}_{sn}}$\textbf{)}}&\multicolumn{5}{c}{\textbf{Self-weighted QMLE
(}$\bolds{\bar{\theta}_{sn}}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0sn}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\beta}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{sn}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0sn}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1sn}}$}
&\multicolumn{1}{c@{}}{$\bolds{\bar{\beta}_{1sn}}$}\\
\hline
Bias&-0.0056&-0.0151&0.0029&0.0503&-0.0594&0.0036&-0.0141&0.0115&0.0442&-0.0543\\
SD&0.9657&0.1045&0.0868&0.2521&0.1740&0.1827&0.1065&0.3871&0.2164&0.1605\\
AD&0.0536&0.0907&33.031&0.1795&34.498&0.0517&0.0876&138.38&0.1875&11.302\\
\hline
&\multicolumn{5}{c}{\textbf{Local QMLE (}$\bolds{\bar{\theta}_n}$\textbf{)}}
&\multicolumn{5}{c}{\textbf{Local QMLE (}$\bolds{\bar{\theta}_n}$\textbf{)}}\\[-4pt]
& \multicolumn{5}{c}{\hrulefill} & \multicolumn{5}{c}{\hrulefill}\\
&\multicolumn{1}{c}{$\bolds{\bar{\mu}_n}$}&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0n}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\beta}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\mu}_{n}}$}
&\multicolumn{1}{c}{$\bolds{\bar{\phi}_{1n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{0n}}$}&\multicolumn{1}{c}{$\bolds{\bar{\alpha}_{1n}}$}
&\multicolumn{1}{c@{}}{$\bolds{\bar{\beta}_{1n}}$}\\
\hline
Bias&-0.0048&-0.0216&-2.1342&0.0185&3.7712&-0.0010&-0.0203&1.3241&0.0253&-0.1333\\
SD&0.0517&0.1080&38.535&0.3596&83.704&0.0521&0.1318&42.250&0.2524&3.4539\\
AD&0.0508&0.0661&55.717&0.1447&45.055&0.0520&0.0707&13.761&0.1535&1.1343\\
\hline
\end{tabular*}}}
\end{table}
From Table \ref{table1}, when $\eta_t\sim \operatorname{Laplace}(0,1)$, we can see
that the
self-weighted QMELE has smaller AD and SD than those of both the
self-weighted QMLE and the local QMLE. When $\eta_t\sim N(0,1)$, in
Table \ref{table2}, we can see that the self-weighted QMLE has smaller
AD and
SD than those of both the self-weighted QMELE and the local QMELE.
From Table \ref{table3}, we note that the SD and AD of both the self-weighted
QMLE and the local QMLE are not close to each other since their
asymptotic variances are infinite, while the SD and AD of the
self-weighted QMELE and the local QMELE are very close to each
other. Except for $\bar{\theta}_n$ in Table \ref{table3}, all four
estimators in Tables \ref{table1}--\ref{table3} have very small biases,
and the local QMELE and local QMLE always have smaller SD and AD than
the self-weighted QMELE and self-weighted QMLE,
respectively. This conclusion holds for both GARCH errors
(finite variance) and IGARCH errors, which coincides with our
expectation. Thus, if the tail index of the data is greater than 2 but
$E\eta_t^{4}=\infty$, we suggest using the local QMELE in practice;
see also \citet{r25} for a~discussion.
Overall, the simulation results show that the self-weighted QMELE
and the local QMELE perform well in finite samples,
especially for heavy-tailed innovations.
\section{A real example}\label{sec5}
In this section, we study the weekly world crude oil price (dollars
per barrel) from January 3, 1997 to August 6, 2010, which has in
total 710 observations; see Figure \ref{figure3}(a). Its 100 times log-return,
denoted by $\{y_{t}\}_{t=1}^{709}$, is plotted in Figure \ref{figure3}(b). The
classic method based on Akaike's information criterion (AIC)
leads to the following model:
\begin{eqnarray} \label{51}
y_{t}&=& 0.2876\e_{t-1}+0.1524\e_{t-3}+\e_{t},\nonumber\\[-8pt]\\[-8pt]
&&(0.0357) \hspace*{24pt}(0.0357)\nonumber
\end{eqnarray}
where the standard errors are in parentheses, and the estimated
value of $\sigma^{2}_{\e}$ is 16.83. Model (\ref{51}) is stationary, and
none of the first ten autocorrelations or partial autocorrelations
of the residuals $\{\hat{\e}_{t}\}$ are significant at the 5\%
level. However, looking at the autocorrelations of
$\{\hat{\e}_{t}^{2}\}$, it turns out that the 1st, 2nd and 8th all
exceed two asymptotic standard errors; see Figure \ref{figure4}(a). Similar
results hold for the partial autocorrelations of
$\{\hat{\e}_{t}^{2}\}$ in Figure \ref{figure4}(b). This shows that
$\{\e_{t}^{2}\}$ may be highly correlated, and hence there may exist
ARCH effects.
\begin{figure}
\begin{tabular}{@{}cc@{}}
\includegraphics{895f03a.eps}
& \includegraphics{895f03b.eps}\\
(a) & (b)
\end{tabular}
\caption{\textup{(a)} The weekly world crude oil prices (dollars per barrel)
from January 3, 1997 to August~6, 2010 and \textup{(b)} its 100 times log
return.}\label{figure3}
\end{figure}
\begin{figure}
\begin{tabular}{@{}cc@{}}
\includegraphics{895f04a.eps}
& \includegraphics{895f04b.eps}\\
(a) & (b)
\end{tabular}
\caption{\textup{(a)} The autocorrelations for $\{\hat{\e}_{t}^{2}\}$ and
\textup{(b)} the partial autocorrelations for $\{\hat{\e}_{t}^{2}\}$.}
\label{figure4}
\end{figure}
Thus, we try to use an MA(3)--GARCH$(1,1)$ model to fit the data set~$\{y_{t}\}$.
To begin, we first estimate the
tail index of $\{y_{t}^{2}\}$ by using Hill's estimator
$\{\hat{\alpha}_{y}(k)\}$ with $k=1,\ldots,180$, based on
$\{y_{t}^{2}\}_{t=1}^{709}$. The plot of $\{\hat{\alpha}_{y}(k)\}_{k=1}^{180}$
is given in Figure \ref{figure5}, from which we can see that the tail
index of
$\{y_{t}^{2}\}$ is between 1 and 2, that is, $Ey_{t}^{4}=\infty$. So,
the standard QMLE procedure is not suitable. Therefore, we
first use the self-weighted QMELE to estimate the MA(3)--GARCH$(1,1)$
model, and then use the one-step iteration as in Section \ref{sec3} to obtain
its local QMELE. The fitted model is as follows:
\begin{eqnarray}\label{52}
y_{t}&=&0.3276\e_{t-1}+0.1217\e_{t-3}+\e_{t},\nonumber\\[-2pt]
&&(0.0454)\hspace*{23.4pt} (0.0449)\nonumber\\[-10pt]\\[-10pt]
h_{t}&=&0.5147+0.0435\e_{t-1}^{2}+0.8756h_{t-1},\nonumber\\[-2pt]
&&(0.3248)\hspace*{4.2pt} (0.0159)\hspace*{24.5pt} (0.0530)\nonumber
\end{eqnarray}
where the standard errors are in parentheses. Again model (\ref{52}) is
stationary, and none of the first ten autocorrelations or partial\vadjust{\eject}
autocorrelations of the residuals $\hat{\eta}_{t}\triangleq
\hat{\e}_{t}\hat{h}_{t}^{-1/2}$ are significant at the 5\% level.
Moreover, the first ten
autocorrelations and partial autocorrelations of $\{\hat{\eta
}_{t}^{2}\}$
are also within two asymptotic standard errors; see Figure \ref
{figure6}(a) and
(b). All these results suggest that model (\ref{52}) is adequate for the
data set $\{y_{t}\}$.
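For reference, the tail-index diagnostic used above can be sketched with the textbook form of Hill's estimator; the function below is a generic illustration (not code from this paper), computing $\hat{\alpha}(k)=\{k^{-1}\sum_{j=1}^{k}\log(x_{(n-j+1)}/x_{(n-k)})\}^{-1}$ from the $k$ largest order statistics.

```python
import math

def hill_alpha(x, k):
    """Hill's tail-index estimator from the k largest observations:
    alpha_hat(k) = 1 / ((1/k) * sum_{j=1}^{k} log(x_(n-j+1) / x_(n-k))),
    where x_(1) <= ... <= x_(n) are the order statistics of x."""
    xs = sorted(x)
    n = len(xs)
    threshold = xs[n - k - 1]                      # x_(n-k)
    mean_log = sum(math.log(xs[n - j] / threshold)
                   for j in range(1, k + 1)) / k
    return 1.0 / mean_log
```

Applied to $\{y_{t}^{2}\}_{t=1}^{709}$ over $k=1,\ldots,180$, plotting $\hat{\alpha}(k)$ against $k$ produces a Hill plot such as Figure \ref{figure5}.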
\begin{figure}
\includegraphics{895f05.eps}
\caption{Hill estimators $\{\hat{\alpha}_{y}(k)\}$ for
$\{y_{t}^{2}\}$.}
\label{figure5}
\end{figure}
\begin{figure}
\begin{tabular}{@{}cc@{}}
\includegraphics{895f06a.eps}
& \includegraphics{895f06b.eps}\\
(a) & (b)
\end{tabular}
\caption{\textup{(a)} The autocorrelations for $\{\hat{\eta}_{t}^{2}\}$ and
\textup{(b)} the partial autocorrelations for $\{\hat{\eta}_{t}^{2}\}$.}
\label{figure6}
\end{figure}
Finally, we estimate the tail index of $\eta_{t}^{2}$ in model (\ref{52})
by using Hill's estimator $\hat{\alpha}_{\eta}(k)$ with
$k=1,\ldots,180$, based on $\{\hat{\eta}_{t}^{2}\}$. The plot of
$\{\hat{\alpha}_{\eta}(k)\}_{k=1}^{180}$ is given in Figure \ref
{figure7}, from
\begin{figure}
\includegraphics{895f07.eps}
\caption{The Hill estimators $\{\hat{\alpha}_{\eta}(k)\}$ for
$\{\hat{\eta}_{t}^{2}\}$ of model (\protect\ref{52}).}
\label{figure7}
\end{figure}
which we can see that $E\eta_{t}^{2}$ is most likely finite, but~$E\eta_{t}^{4}$ is infinite.
Furthermore, the estimator of
$E\eta_{t}^{2}$ is $\sum_{t=1}^{n} \hat{\eta}_{t}^{2}/n=1.6994$, and
it turns out that $\hat{\alpha}_{1n}(\sum_{t=1}^{n}
\hat{\eta}_{t}^{2}/n)+\hat{\beta}_{1n}=0.9495$. This means
that \mbox{$E\e_{t}^{2}<\infty$}. Therefore, all the assumptions of Theorem
\ref{theorem31} are most likely satisfied. In particular, the estimated tail
indices of $\{y_{t}^{2}\}$ and $\{\hat{\eta}_{t}^{2}\}$ provide
evidence that the self-weighted/local QMELE is needed for
modeling the crude oil price.
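The second-moment condition invoked here can be checked by quick arithmetic using only the estimates reported above: $E\e_{t}^{2}<\infty$ holds when $\alpha_{1}E\eta_{t}^{2}+\beta_{1}<1$, and the plug-in values from model (\ref{52}) satisfy this.

```python
# Plug-in check of the second-moment condition for the fitted model:
# E(eps_t^2) < infinity iff alpha_1 * E(eta_t^2) + beta_1 < 1.
alpha1_hat = 0.0435   # ARCH coefficient from the fitted GARCH part
beta1_hat = 0.8756    # GARCH persistence coefficient
E_eta2_hat = 1.6994   # sample mean of the squared residuals

cond = alpha1_hat * E_eta2_hat + beta1_hat
print(round(cond, 4))  # 0.9495, which is < 1
```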
\section{\texorpdfstring{Proofs of Lemmas \protect\ref{lemma22} and \protect\ref{lemma23}}{Proofs of Lemmas 2.2 and 2.3}}
\label{sec6}
In this section, we give the proofs of Lemmas \ref{lemma22} and \ref
{lemma23}. In the rest
of this paper, $C$ denotes a universal constant and $G(x)$ denotes the
distribution function of $\eta_t$.
\begin{pf*}{Proof of Lemma \ref{lemma22}}
A direct calculation gives
\[
\xi_t(u)=-u'\f{2w_t}{\sqrt{h_t}}\,\f{\p\e_t(\gamma_0)}{\p\theta}
M_t(u),
\]
where $M_t(u)=\int_{0}^1 X_t( -q_{1t}(u)s)\,ds$. Thus, we
have
\[
|\Pi_{1n}(u)|\leq2\|u\|\sum_{j=1}^m
\Biggl|\sum_{t=1}^n\f{w_t}{\sqrt{h_t}}\,\f{\p\e_t(\gamma_0)}{\p\theta_j}
\{M_t(u)-E[M_t(u)|\mathcal{F}_{t-1}]\}\Biggr|.
\]
It is sufficient to show that
\begin{equation}\label{61}
\Biggl|\sum_{t=1}^n\f{w_t}{\sqrt{h_t}}\,\f{\p\e_t(\gamma_0)}{\p\theta_j}
\{M_t(u_n)-E[M_t(u_n)|\mathcal{F}_{t-1}]\}\Biggr|=o_p\bigl(\sqrt{n}+n\|u_n\|\bigr),
\end{equation}
for each $1\leq j\leq m$.
Let $m_t=w_t h_t^{-1/2}\p\e_t(\gamma_0)/\p\theta_j$, $
f_t(u)=m_tM_t(u)$ and
\[
D_n(u)=\f{1}{\sqrt{n}}\sum_{t=1}^n
\{f_t(u)-E[f_t(u)|\mathcal{F}_{t-1}]\}.
\]
Then, in order to prove (\ref{61}), we only need to show that for any
$\eta>0$,
\begin{equation} \label{62}
\sup_{\|u\|\leq\eta}\f{|D_n(u)|}{1+\sqrt{n}\|u\|}=o_p(1).
\end{equation}
Note that $m_t=\max\{m_t, 0\}-\max\{-m_t, 0\}$. For simplicity,
we only prove the case with $m_t\ge0$.
We adopt the method in Lemma 4 of \citet{r32}. Let
$\mathfrak{F}=\{f_t(u): \|u\|\leq\eta\}$ be a collection of
functions indexed by $u$. We first verify that $\mathfrak{F}$
satisfies the bracketing condition in Pollard (\citeyear{r32}), page
304. Denote by
$B_r(\zeta)$ an open neighborhood of $\zeta$ with radius $r>0$.
For any fixed $\e>0$ and $0<\delta\leq\eta$, there is a sequence of
small cubes $\{B_{\e\delta/C_1}(u_i)\}_{i=1}^{K_\e}$ to cover
$B_\delta(0)$, where $K_{\varepsilon}$ is an integer less than
$c_{0}\varepsilon^{-m}$ and $c_{0}$ is a constant not depending on
$\varepsilon$ and $\delta$; see Huber (\citeyear{r17}), page 227. Here,
$C_1$ is
a constant to be selected later. Moreover, we can choose
$U_i(\delta)\subseteq B_{\e\delta/C_1}(u_i)$ such that
$\{U_i(\delta)\}_{i=1}^{K_\e}$ forms a partition of $B_\delta(0)$. For
each $u\in U_i(\delta)$, we define the bracketing functions as
follows:
\[
f^{\pm}_{t}(u)= m_t\int_{0}^1 X_t\biggl(-q_{1t}(u)s\pm\f{\e
\delta}{C_1\sqrt{h_t}} \biggl\|\f{\p\e_t(\gamma_0)}{\p\theta}\biggr\|
\biggr)\,ds.
\]
Since the indicator function is nondecreasing and $m_t\geq0$, we
can see that, for any $u\in U_i(\delta)$,
\[
f^{-}_{t}(u_i)\leq f_t(u)\leq f^{+}_{t}(u_i).
\]
Note that $\sup_{x\in R}g(x)<\infty$. It is straightforward to see
that
\begin{equation}\label{63}
\qquad E[f^{+}_{t}(u_i)-f^{-}_{t}(u_i)|\mathcal{F}_{t-1}]\leq
\f{2\e\delta}{C_1}\sup_{x\in R}g(x)
\f{w_t}{h_t}\biggl\|\f{\p\e_t(\gamma_0)}{\p\theta}\biggr\|^2\equiv
\f{\e\delta\Delta_t}{C_1}.
\end{equation}
Setting $C_1=E(\Delta_t)$, we have
\[
E[f^{+}_{t}(u_i)-f^{-}_{t}(u_i)]=E\{
E[f^{+}_{t}(u_i)-f^{-}_{t}(u_i)|\mathcal{F}_{t-1}]\}\leq
\e\delta.
\]
Thus, the family $\mathfrak{F}$ satisfies
the bracketing condition.
Put $\delta_k=2^{-k}\eta$. Define $B(k)\equiv B_{\delta_k}(0)$, and
$A(k)$ to be the annulus $B(k)\setminus\allowbreak B(k+1)$. Fix $\e>0$. For each $1\leq
i\leq K_\e$, by the bracketing condition, there exists a partition
$\{U_i(\delta_k)\}_{i=1}^{K_\e}$ of $B(k)$.
We first consider the upper tail. For $u\in U_i(\delta_k)$, by (\ref{63})
with $\delta=\delta_k$, we have
\begin{eqnarray*}
D_n(u)&\leq&\f{1}{\sqrt{n}}\sum_{t=1}^n \{
f^{+}_t(u_i)-E[f^{-}_t(u_i)|\mathcal{F}_{t-1}]\} \\
&=& D^{+}_n(u_i)+\f{1}{\sqrt{n}} \sum_{t=1}^n
E[f^{+}_t(u_i)-f^{-}_t(u_i)|\mathcal{F}_{t-1}]\\
&\leq& D^{+}_n(u_i)+\sqrt{n}\e\delta_k \Biggl[\f{1}{nC_1}\sum_{t=1}^n
\Delta_t\Biggr],
\end{eqnarray*}
where
\[
D^{+}_n(u_i)=\f{1}{\sqrt{n}}\sum_{t=1}^n \{
f^{+}_t(u_i)-E[f^{+}_t(u_i)|\mathcal{F}_{t-1}]\}.
\]
Denote the event
\[
E_n=\Biggl\{\omega: \f{1}{nC_1}\sum_{t=1}^n \Delta_t(\omega)<2 \Biggr\}.
\]
On $E_n$ with $u\in U_i(\delta_k)$, it follows that
\begin{equation}\label{64}
D_n(u)\leq D^{+}_n(u_i)+2\sqrt{n}\e\delta_k.
\end{equation}
On $A(k)$, the divisor $1+\sqrt{n}\|u\|>
\sqrt{n}\delta_{k+1}=\sqrt{n}\delta_k/2$. Thus, by (\ref{64}) and
Chebyshev's inequality, it follows that
\begin{eqnarray} \label{65}
&&P\biggl(\sup_{u\in A(k)}\f{D_n(u)}{1+\sqrt{n}\|u\|}>6\e, E_n\biggr) \nonumber\\
&&\qquad \leq P\Bigl(\sup_{u\in A(k)}D_n(u)>3\sqrt{n}\e\delta_k, E_n\Bigr) \nonumber\\
&&\qquad \leq P\Bigl(\max_{1\leq i\leq K_\e} \sup_{u\in
U_i(\delta_k)\cap A(k)} D_n(u)>3\sqrt{n}\e\delta_k, E_n\Bigr)
\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq P\Bigl(\max_{1\leq i\leq K_\e} D^{+}_n(u_i)>\sqrt{n}\e
\delta_k, E_n\Bigr) \nonumber\\
&&\qquad \leq K_\e\max_{1\leq i\leq K_\e}
P\bigl(D^{+}_n(u_i)>\sqrt{n}\e\delta_k\bigr) \nonumber\\
&&\qquad \leq K_\e\max_{1\leq i\leq K_\e}
\f{E[(D^{+}_n(u_i))^2]}{n\e^2\delta_k^2}.\nonumber
\end{eqnarray}
Note that $|q_{1t}(u_i)|\leq C\delta_k\xi_{\rho t-1}$ and $m^2_t\leq
Cw_t^2\xi_{\rho t-1}^2$ for some $\rho\in(0,1)$ by Lem\-ma~\ref{lemmaA1}(i), and $\sup_{x\in
R}g(x)<\infty$ by Assumption \ref{asm26}. By Taylor's expansion, we have
\begin{eqnarray*}
E[(f^{+}_t(u_i))^2]&=&E\{E[(f^{+}_t(u_i))^2|\mathcal{F}_{t-1}]\}\\
&\leq& E\biggl[m^2_t\int_{0}^1 E\biggl[\biggl|X_t\biggl(-q_{1t}(u_i)s+\f{\e\delta_k}{C_1\sqrt
{h_t}} \biggl\|\f{\p\e_t(\gamma_0)}{\p\theta}\biggr\| \biggr) \biggr|\Big|\mathcal{F}_{t-1}\biggr]\,ds \biggr] \\
&\leq& CE\Bigl[\sup_{|x|\leq\delta_k C\xi_{\rho t-1}}|G(x)-G(0)|w_t^2\xi
^2_{\rho t-1}\Bigr] \\
&\leq& \delta_kCE(w_t^2\xi_{\rho t-1}^{3}).
\end{eqnarray*}
Since $f^{+}_t(u_i)-E[f^{+}_t(u_i)|\mathcal{F}_{t-1}]$ is a
martingale difference sequence, by the previous inequality, it
follows that
\begin{eqnarray}\label{66}
E[(D^{+}_n(u_i))^2]&=&\f{1}{n}\sum_{t=1}^n E\{
f^{+}_t(u_i)-E[f^{+}_t(u_i)|\mathcal{F}_{t-1}] \}^2 \nonumber\\
&\leq&\f{1}{n} \sum_{t=1}^n E[(f^{+}_t(u_i))^2] \nonumber\\[-8pt]\\[-8pt]
&\leq&\f{\delta_k}{n} \sum_{t=1}^n CE(w_t^2\xi_{\rho t-1}^{3}) \nonumber
\\
&\equiv&\pi_n(\delta_k).\nonumber
\end{eqnarray}
Thus, by (\ref{65}) and (\ref{66}), we have
\[
P\biggl(\sup_{u\in A(k)}\f{D_n(u)}{1+\sqrt{n}\|u\|}>6\e, E_n\biggr)\leq K_\e\f{\pi
_n(\delta_k)}{n\e^2\delta_k^2}.
\]
By a similar argument, we can get the same bound for the lower tail.
Thus, we can show that
\begin{equation} \label{67}
P\biggl(\sup_{u\in A(k)}\f{|D_n(u)|}{1+\sqrt{n}\|u\|}>6\e,
E_n\biggr)\leq2K_\e\f{\pi_n(\delta_k)}{n\e^2\delta_k^2}.
\end{equation}
Since $\pi_n(\delta_k)\rightarrow0$ as $k\to\infty$, we can choose
$k_\e$ so that
\[
2\pi_n(\delta_k)K_\e/(\e\eta)^2<\e
\]
for $k\geq k_\e$. Let $k_n$ be an integer so that $n^{-1/2}\leq
2^{-k_n}< 2n^{-1/2}$. Split $\{u:\|u\|\leq\eta\}$ into two sets
$B(k_n+1)$ and $B(k_n+1)^c=\bigcup_{k=0}^{k_n} A(k)$.
By (\ref{67}), since $\pi_n(\delta_k)$ is bounded, we have
\begin{eqnarray} \label{68}
&& P\biggl(\sup_{u\in
B(k_n+1)^c}\f{|D_n(u)|}{1+\sqrt{n}\|u\|}>6\e\biggr) \nonumber\\
&&\qquad \leq\sum_{k=0}^{k_n} P\biggl(\sup_{u\in A(k)}\f{|D_n(u)|}{1+\sqrt{n}\|u\|
}>6\e, E_n\biggr)+P(E_n^c) \\
&&\qquad \leq\f{1}{n}\sum_{k=0}^{k_\e-1} \f{C K_\e}{\e^2\eta^2}2^{2k}+\f{\e
}{n}\sum_{k=k_\e}^{k_n} 2^{2k}+P(E_n^c)\nonumber\\
&&\qquad \leq O\biggl(\f{1}{n}\biggr)+4\e\f{2^{2k_n}}{n}+P(E_n^c) \nonumber\\
&&\qquad \leq O\biggl(\f{1}{n}\biggr)+ 4\e+P(E_n^c).\nonumber
\end{eqnarray}
Since $1+\sqrt{n}\|u\|>1$ and $\sqrt{n}\delta_{k_n+1}<1$, using a
similar argument as for~(\ref{65}) together with (\ref{66}), we have
\begin{eqnarray*}
P\biggl(\sup_{u\in B(k_n+1)} \f{D_n(u)}{1+\sqrt{n}\|u\|}>3\e, E_n\biggr) &\leq&
P\Bigl(\max_{1\leq i\leq K_\e} D^{+}_n(u_i)>\e, E_n\Bigr) \\
&\leq&\f{K_\e\pi_n(\delta_{k_n+1})}{\e^2}.
\end{eqnarray*}
We can get the same bound for the lower tail. Thus, we have
\begin{eqnarray}\label{69}
&& P\biggl(\sup_{u\in B(k_n+1)} \f{|D_n(u)|}{1+\sqrt{n}\|u\|}>3\e\biggr) \nonumber\\
&&\qquad =P\biggl(\sup_{u\in B(k_n+1)} \f{|D_n(u)|}{1+\sqrt{n}\|u\|}>3\e,
E_n\biggr)+P(E_n^c)\\
&&\qquad \le\f{2K_\e\pi_n(\delta_{k_n+1})}{\e^2}+P(E_n^c).\nonumber
\end{eqnarray}
Note that $\pi_n(\delta_{k_n+1})\rightarrow0$ as $n\to\infty$.
Furthermore, $P(E_n)\rightarrow1$ by the ergodic theorem. Hence,
\[
P(E_n^c)\to0 \qquad\mbox{as } n\to\infty.
\]
Finally, (\ref{62}) follows by (\ref{68}) and (\ref{69}). This completes
the proof.
\end{pf*}
\begin{pf*}{Proof of Lemma \ref{lemma23}}
(i) By a direct calculation, we
have
\begin{eqnarray}\label{610}
\Pi_{2n}(u)&=&2\sum_{t=1}^n w_t\int_0^{-q_{1t}(u)} [G(s)-G(0)] \,ds \nonumber\\
&=& 2\sum_{t=1}^n w_t\int_0^{-q_{1t}(u)} sg(\varsigma^*) \,ds \\
&=& \bigl(\sqrt{n}u\bigr)' [K_{1n}+K_{2n}(u)]\bigl(\sqrt{n}u\bigr),\nonumber
\end{eqnarray}
where $\varsigma^*$ lies between 0 and $s$, and
\begin{eqnarray*}
K_{1n}&=&\f{g(0)}{n} \sum_{t=1}^n \f{w_t}{h_t(\theta_0)} \,\f{\p\e_t(\gamma
_0)}{\p\theta} \,\f{\p\e_t(\gamma_0)}{\p\theta'},\\
K_{2n}(u)&=&\f{2}{n\|u\|^2} \sum_{t=1}^n w_t\int_0^{-q_{1t}(u)}
s[g(\varsigma^*)-g(0)]\,ds.
\end{eqnarray*}
By the ergodic theorem, it is easy to see that
\begin{equation}\label{611}
K_{1n}=\Sigma_1+o_p(1).
\end{equation}
Furthermore, since $|q_{1t}(u)|\leq C\|u\|\xi_{\rho t-1}$ for some
$\rho\in(0,1)$ by Lemma \ref{lemmaA1}(i), it is straightforward to see that
for any $\eta>0$,
\begin{eqnarray*}
\sup_{\|u\|\leq\eta}|K_{2n}(u)|&\leq&\sup_{\|u\|\leq
\eta}\f{2}{n\|u\|^2} \sum_{t=1}^n
w_t\int_{-|q_{1t}(u)|}^{|q_{1t}(u)|}
s|g(\varsigma^*)-g(0)|\,ds\\
&\leq&\f{1}{n} \sum_{t=1}^n \Bigl[\sup_{|s|\leq C\eta\xi_{\rho t-1}}
|g(s)-g(0)| w_t\xi_{\rho t-1}^2\Bigr].
\end{eqnarray*}
By Assumptions \ref{asm24} and \ref{asm26}, $E(w_t\xi_{\rho
t-1}^2)<\infty$ and $\sup_{x\in R} g(x)<\infty$. Then, by the dominated
convergence
theorem, we have
\[
\lim_{\eta\to0}E\Bigl[\sup_{|s|\leq C\eta\xi_{\rho t-1}}
|g(s)-g(0)|w_t\xi_{\rho t-1}^2 \Bigr]=0.
\]
Thus, by the stationarity of $\{y_{t}\}$ and Markov's theorem, for
any $\e,\delta>0$, there exists $\eta_{0}(\e)>0$ such that
\begin{equation}\label{612}
P\Bigl(\sup_{\|u\|\leq
\eta_{0}}|K_{2n}(u)|>\delta\Bigr)<\frac{\e}{2}
\end{equation}
for all $n\geq1$. On the other hand, since $u_{n}=o_{p}(1)$, it
follows that
\begin{equation}\label{613}
P(\|u_{n}\|>\eta_{0})<\frac{\e}{2}
\end{equation}
as $n$ is large enough. By (\ref{612}) and (\ref{613}), for any
$\e,\delta>0$, we have
\begin{eqnarray*}
P\bigl(|K_{2n}(u_{n})|>\delta\bigr)&\leq&
P\bigl(|K_{2n}(u_{n})|>\delta, \|u_{n}\|\leq
\eta_{0}\bigr)+
P(\|u_{n}\|>\eta_{0}) \\
&<& P\Bigl(\sup_{\|u\|\leq\eta_{0}}|K_{2n}(u)|>\delta\Bigr)+\frac{\e}{2}\\
&<&\e
\end{eqnarray*}
as $n$ is large enough, that is, $K_{2n}(u_{n})=o_{p}(1)$. Furthermore,
combining (\ref{610}) and (\ref{611}), we can see that (i) holds.
(ii) Let $\Pi_{3n}(u)=(\sqrt{n}u)'K_{3n}(\xi^{*})(\sqrt
{n}u)+K_{4n}(u)$, where
\begin{eqnarray*}
K_{3n}(\xi^{*})&=&\f{1}{n}\sum_{t=1}^n \f{w_t}{\sqrt{h_t}}\,\f{\p^2\e_t(\xi
^*)}{\p\theta\,\p\theta'}[I(\eta_t>0)-I(\eta_t<0)],\\
K_{4n}(u)&=&2\sum_{t=1}^n w_t\int_{-q_{1t}(u)}^{-q_t(u)} X_t(s) \,ds.
\end{eqnarray*}
By Assumption \ref{asm24} and Lemma \ref{lemmaA1}(i), there exists a constant
$\rho\in(0,1)$ such that
\[
E\biggl(\sup_{\xi^{*}\in\Lambda} \f{w_t}{\sqrt{h_t}}\biggl|\f{\p^2\e_t(\xi^*)}{\p
\theta\,\p\theta'}
[I(\eta_t>0)-I(\eta_t<0)]\biggr|\biggr)\leq CE(w_t\xi_{\rho t-1})<\infty.
\]
Since $\eta_{t}$ has median 0, the conditional expectation property
gives
\[
E\biggl(\f{w_t}{\sqrt{h_t}}\,\f{\p^2\e_t(\xi^*)}{\p\theta\,\p\theta'}[I(\eta
_t>0)-I(\eta_t<0)]\biggr)=0.
\]
Then, by Theorem 3.1 in \citet{r28}, we have
\[
\sup_{\xi^{*}\in\Lambda} |K_{3n}(\xi^{*})|=o_p(1).
\]
On the other hand,
\begin{eqnarray*}
\f{K_{4n}(u)}{n\|u\|^2}&=&\f{2}{n}\sum_{t=1}^n w_t
\int_0^{-q_{2t}(u)/\|u\|^2} X_t\bigl(\|u\|^2s-q_{1t}(u)\bigr)\,ds\\
&\equiv&\f{2}{n}\sum_{t=1}^{n} J_{1t}(u).
\end{eqnarray*}
By Lemma \ref{lemmaA1}, we have $|\|u\|^{-2}q_{2t}(u)|\leq C\xi_{\rho t-1}$
and $|q_{1t}(u)|\leq C\|u\|\xi_{\rho t-1}$ for some $\rho\in(0,1)$.
Then, for any $\eta>0$, we have
\begin{eqnarray*}
\sup_{\|u\|\leq\eta}|J_{1t}(u)| &\leq& w_t \int_{-C\xi_{\rho
t-1}}^{C\xi_{\rho t-1}} \{X_t(C\eta^2\xi_{\rho
t-1}+C\eta\xi_{\rho t-1})
\\
&&\hspace*{52.4pt}{} -X_t(-C\eta^2\xi_{\rho t-1}-C\eta\xi_{\rho
t-1})\}\,ds
\\
&\leq& 2Cw_t\xi_{\rho t-1} \{X_t(C\eta^2\xi_{\rho
t-1}+C\eta\xi_{\rho t-1})
\\
&&\hspace*{52.4pt}{} -X_t(-C\eta^2\xi_{\rho t-1}-C\eta\xi_{\rho
t-1})\}.
\end{eqnarray*}
By Assumptions \ref{asm24} and \ref{asm26} and the double expectation
property, it
follows that
\begin{eqnarray*}
E\Bigl[\sup_{\|u\|\leq\eta}|J_{1t}(u)|\Bigr]&\leq&
2CE[w_t\xi_{\rho t-1} \{G(C\eta^2\xi_{\rho
t-1}+C\eta\xi_{\rho t-1})
\\
&&\hspace*{65pt}{} -G(-C\eta^2\xi_{\rho
t-1}-C\eta\xi_{\rho t-1})\}] \\
&\leq& C(\eta^2+\eta)\sup_{x}g(x)E(w_t\xi_{\rho t-1}^{2})\to0
\end{eqnarray*}
as $\eta\to0$. Thus, as for (\ref{612}) and (\ref{613}), we can show that
$K_{4n}(u_{n})=o_{p}(n\|u_{n}\|^{2})$. This completes the proof
of\vspace*{2pt}
(ii).
(iii) Let $\Pi_{4n}(u)=(\sqrt{n}u)'[n^{-1}\sum_{t=1}^n
J_{2t}(\zeta^{*})](\sqrt{n}u)$, where
\[
J_{2t}(\zeta^*)=w_t
\biggl(\f{3}{8}\biggl|\f{\e_t(\gamma_0)}{\sqrt{h_t(\zeta^*)}}\biggr|
-\f{1}{4}\biggr)\f{1}{h_t^2(\zeta^*)} \,\f{\p h_t(\zeta^*)}{\p\theta}\,
\f{\p h_t(\zeta^*)}{\p\theta'}.
\]
By Assumption \ref{asm24} and Lemma \ref{lemmaA1}(ii)--(iv), there
exists a constant
$\rho\in(0,1)$ and a neighborhood $\Theta_{0}$ of $\theta_{0}$ such
that
\[
E\Bigl[\sup_{\zeta^{*}\in\Theta_{0}} |J_{2t}(\zeta^*)|
\Bigr]\leq CE[w_t\xi_{\rho t-1}^2(|\eta_t|\xi_{\rho
t-1}+1)]<\infty.
\]
Then, by Theorem 3.1 of \citet{r28}, we have
\[
\sup_{\zeta^{*}\in\Theta_{0}}\Biggl|\f{1}{n}\sum_{t=1}^n
J_{2t}(\zeta^{*})-E[J_{2t}(\zeta^{*})]\Biggr|=o_p(1).
\]
Moreover, since $\zeta^{*}_{n}\to\theta_{0}$ a.s., by the dominated
convergence theorem, we have
\[
\lim_{n\to\infty} E[J_{2t}(\zeta^{*}_{n})]=E[J_{2t}(\theta_0)]=\Sigma_{2}.
\]
Thus, (iii) follows from the previous two equations. This completes
the proof of~(iii).
(iv) Since $E|\eta_t|=1$, a similar argument as for part (iii)
shows that (iv) holds.
\mbox{}\phantom{i}(v) By Taylor's expansion, we have
\[
\f{1}{\sqrt{h_t(\theta_0+u)}}-\f{1}{\sqrt{h_t(\theta_0)}}=\f
{-u'}{2(h_t(\zeta^{*}))^{3/2}}\,\f{\p
h_t(\zeta^*)}{\p\theta},
\]
where $\zeta^*$ lies between $\theta_0$ and $\theta_0+u$. By
identity (\ref{26}), it is easy to see that
\begin{eqnarray*}
|\e_t(\gamma_0+u_1)|-|\e_t(\gamma_0)|&=&u'\,\f{\p\e_t(\xi^*)}{\p\theta
}[I(\eta_t>0)-I(\eta_t<0)]\\
&&{} +2u'\,\f{\p\e_t(\xi^*)}{\p\theta}\int_{0}^{1}
X_t\biggl(-\f{u'}{\sqrt{h_t}}\,\f{\p\e_t(\xi^*)}{\p\theta}s\biggr)\,ds,
\end{eqnarray*}
where $\xi^*$ lies between $\gamma_0$ and $\gamma_0+u_1$. By the
previous two equations, it follows that
\[
\sum_{t=1}^n
w_tC_t(u)=\bigl(\sqrt{n}u\bigr)'[K_{5n}(u)+K_{6n}(u)]\bigl(\sqrt{n}u\bigr),
\]
where
\begin{eqnarray*}
K_{5n}(u)&=&\f{1}{n}\sum_{t=1}^n \f{w_t}{2h_t^{3/2}(\zeta^{*})}\,\f{\p
h_t(\zeta^*)}{\p\theta} \,\f{\p\e_t(\xi^*)}{\p\theta'}
[I(\eta_t<0)-I(\eta_t>0)],\\
K_{6n}(u)&=&-\f{1}{n}\sum_{t=1}^n \f{w_t}{h_t^{3/2}(\zeta^{*})}\,\f{\p
h_t(\zeta^*)}{\p\theta} \,\f{\p\e_t(\xi^*)}{\p\theta'}\int_{0}^{1}
X_t\biggl(-\f{u'}{\sqrt{h_t}}\,\f{\p\e_t(\xi^*)}{\p\theta}s\biggr)\,ds.
\end{eqnarray*}
By Lemma \ref{lemmaA1}(i), (iii), (iv) and a similar argument as for part
(ii), it is easy to see that $K_{5n}(u_n)=o_p(1)$ and
$K_{6n}(u_n)=o_p(1)$. Thus, it follows that (v) holds. This
completes all of the proofs.
\end{pf*}
\section{Concluding remarks}\label{sec7}
In this paper, we first propose a self-weighted QMELE for the
ARMA--GARCH model. The strong consistency and asymptotic normality of
the global self-weighted QMELE\vspace*{1pt} are established under a~fractional
moment condition of $\varepsilon_{t}$ with $E\eta_{t}^{2}<\infty$.
Based on this estimator, the local QMELE is shown to be asymptotically
normal for the ARMA--GARCH (finite variance) and --IGARCH models. The
empirical study shows that the self-weighted/local QMELE has a better
performance than the self-weighted/local QMLE when $\eta_{t}$ has a
heavy-tailed distribution, while the local QMELE is more efficient than
the self-weighted QMELE for the cases with finite variance and
--IGARCH errors. We also give a real example to illustrate that our new
estimation procedure is necessary. In our limited experience, the
estimated tail index of most data sets in economics and finance lies in
$[2,4)$. Thus, the local QMELE may be the most suitable
in practice if there is further evidence that
$E\eta_{t}^{4}=\infty$.
\begin{appendix}\label{app}
\section*{Appendix}
Lemma \ref{lemmaA1} below is from \citet{r25}.
\begin{lem}\label{lemmaA1}
Let $\xi_{\rho t}$ be defined as in Assumption \ref{asm24}.
If Assumptions~\ref{asm21} and~\ref{asm22} hold, then there exists a constant $\rho\in (0,1)$ and a
neighborhood~$\Theta_0$ of $\theta_0$ such that:
\begin{eqnarray*}
\sup_{\Theta}|\e_{t-1}(\gamma)|&\leq& C\xi_{\rho t-1},\\
\mbox{\textup{(i)}\hspace*{37.5pt}\quad}\sup_{\Theta}
\biggl\|\f{\p\e_t(\gamma)}{\p\gamma}\biggr\|&\leq& C\xi_{\rho t-1}\quad \mbox{and}\\
\sup_{\Theta}\biggl\|\f{\p^2\e_t(\gamma)}{\p\gamma\,\p\gamma'}\biggr\|&\leq&
C\xi_{\rho t-1},\\
\mbox{\textup{(ii)}\hspace*{55pt}\quad} \sup_{\Theta} h_t(\theta)&\leq& C\xi^2_{\rho t-1}, \\
\mbox{\textup{(iii)}\quad\hspace*{11pt}} \sup_{\Theta_0}\biggl\|\f{1}{h_t(\theta)}\,\f{\p
h_t(\theta)}{\p\delta}\biggr\|&\leq& C\xi^{\iota_1}_{\rho
t-1}\qquad\mbox{for any } \iota_1\in(0,1),\\
\mbox{\textup{(iv)}\quad} \sup_{\Theta}\biggl\|\f{1}{\sqrt{h_t(\theta)}}\,\f{\p
h_t(\theta)}{\p\gamma}\biggr\|&\leq& C\xi_{\rho t-1}.
\end{eqnarray*}
\end{lem}
\begin{lem}\label{lemmaA2}
For any $\theta^*\in\Theta$, let
$B_\eta(\theta^*)=\{\theta\in\Theta:\|\theta-\theta^*\|<\eta\}$ be
an open neighborhood of $\theta^*$ with radius $\eta>0$. If
Assumptions \ref{asm21}--\ref{asm25} hold, then:
\begin{eqnarray*}
&&\mbox{\hphantom{ii}\textup{(i)}\quad} E\Bigl[\sup_{\theta\in\Theta} w_tl_t(\theta)\Bigr]<\infty,\\
&&\mbox{\hphantom{i}\textup{(ii)}\quad} E[w_tl_t(\theta)] \qquad\mbox{has a unique minimum at } \theta_0,\\
&&\mbox{\textup{(iii)}\quad} E\Bigl[\sup_{\theta\in
B_\eta(\theta^*)}w_t|l_t(\theta)-l_t(\theta^*)|\Bigr]\to0 \qquad\mbox{as } \eta\to0.
\end{eqnarray*}
\end{lem}
\begin{pf}
First, by (A.13) and (A.14) in \citet{r25} and Assumptions~\ref{asm24} and~\ref{asm25}, it follows that
\[
E\biggl[\sup_{\theta\in\Theta}
\f{w_t|\e_t(\gamma)|}{\sqrt{h_t(\theta)}}\biggr]\leq
CE[w_t\xi_{\rho t-1}(1+|\eta_t|)]<\infty
\]
for some
$\rho\in(0,1)$, and
\[
E\Bigl[\sup_{\theta\in\Theta}w_t\log
\sqrt{h_t(\theta)}\Bigr]<\infty;
\]
see Ling (\citeyear{r25}), page 864. Thus, (i)
holds.
Next, by a direct calculation, we have
\begin{eqnarray*}
E[w_tl_t(\theta)]&=&E\biggl[w_t\log\sqrt{h_t(\theta)}+\frac{w_t|\e_t(\gamma
_0)+(\gamma-\gamma_0)'({\p\e_t(\xi^*)}/{\p\theta})|}{\sqrt{h_t(\theta
)}}\biggr]\\
&=&E\biggl[w_t\log\sqrt{h_t(\theta)}+\frac{w_t}{\sqrt{h_t(\theta)}}E\biggl\{\!\biggl|\e
_t(\gamma_0)+(\gamma-\gamma_0)'\,\f{\p\e_t(\xi^*)}{\p\theta}\biggr|\Big|\mathcal
{F}_{t-1}\!\biggr\}\!\biggr]\\
&\geq& E\biggl[w_t\log\!\sqrt{h_t(\theta)}+\frac{w_t}{\sqrt{h_t(\theta)}}E(|\e
_t||\mathcal{F}_{t-1})\biggr]\\
&=&E\Biggl[w_t\Biggl(\log\sqrt{\f{h_t(\theta)}{h_t(\theta_0)}}+\sqrt{\f{h_t(\theta
_0)}{h_t(\theta)}}\Biggr)\Biggr]+E
\bigl[w_t\log\sqrt{h_t(\theta_0)}\bigr],
\end{eqnarray*}
where the last inequality holds since $\eta_t$ has a unique median
0, and the minimum is attained if and only if $\gamma=\gamma_0$ a.s.;
see \citet{r25}. Here, $\xi^*$ lies between~$\gamma$ and $\gamma_0$.
For $a\geq0$, the function $f(x)=\log x+a/x$
reaches its minimum at $x=a$. Thus, $E[w_tl_t(\theta)]$ reaches the
minimum if and only if $\sqrt{h_t(\theta)}=\sqrt{h_t(\theta_0)}$
a.s., and hence $\theta=\theta_0$; see \citet{r25}. Therefore, we can
claim that $E[w_tl_t(\theta)]$ is uniformly minimized at $\theta_0$,
that is, (ii) holds.
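For completeness, the minimization property of $f(x)=\log x+a/x$ used
above follows from
\[
f'(x)=\frac{1}{x}-\frac{a}{x^{2}}=\frac{x-a}{x^{2}},
\]
so that, for $a>0$, $f$ is strictly decreasing on $(0,a)$ and strictly
increasing on $(a,\infty)$, and hence attains its unique minimum
$f(a)=\log a+1$ at $x=a$.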
Third, let $\theta^*=({\gamma^{*}}', {\delta^{*}}')'\in\Theta$. For
any $\theta\in B_\eta(\theta^*)$, using Taylor's expansion, we can
see that
\[
\log\sqrt{h_t(\theta)}-\log\sqrt{h_t(\theta^*)}=\f{(\theta-\theta
^*)'}{2h_t(\theta^{**})}\,\f{\p
h_t(\theta^{**})}{\p\theta},
\]
where $\theta^{**}$ lies between $\theta$ and $\theta^{*}$. By Lemma
\ref{lemmaA1}(iii)--(iv) and Assumption~\ref{asm24}, for some $\rho\in
(0,1)$, we have
\[
E\Bigl[\sup_{\theta\in B_{\eta}(\theta^*)}
w_t\bigl|\log\sqrt{h_t(\theta)}-\log\sqrt{h_t(\theta^*)}\bigr|\Bigr]\leq
C\eta E(w_t\xi_{\rho t-1})\rightarrow0
\]
as $\eta\rightarrow0$. Similarly,
\begin{eqnarray*}
E\biggl[\sup_{\theta\in B_{\eta}(\theta^*)}\f{w_t}{\sqrt{h_t(\theta)}}\bigl||\e
_t(\gamma)|-|\e_t(\gamma^*)|\bigr|\biggr]&\rightarrow&0 \qquad\mbox{as } \eta\rightarrow
0,\\
E\biggl[\sup_{\theta\in B_{\eta}(\theta^*)}
w_t|\e_t(\gamma^*)|\biggl|\f{1}{\sqrt{h_t(\theta)}}-\f{1}{\sqrt{h_t(\theta
^*)}}\biggr|\biggr]&\rightarrow&
0\qquad \mbox{as } \eta\rightarrow0.
\end{eqnarray*}
Then, it follows that (iii) holds. This completes all of the proofs
of Lemma~\ref{lemmaA2}.
\end{pf}
\begin{pf*}{Proof of Theorem \ref{theorem21}}
We use the method in \citet{r17}.
Let $V$ be any open neighborhood of $\theta_0\in\Theta$. By Lemma
\ref{lemmaA2}(iii), for any $\theta^*\in V^c=\Theta/V$ and $\e>0$, there
exists an $\eta_0>0$ such that
\begin{equation}\label{A1}
E\Bigl[\inf_{\theta\in B_{\eta_0}(\theta^*)} w_tl_t(\theta)\Bigr]
\geq E[w_tl_t(\theta^*)]-\e.
\end{equation}
From Lemma \ref{lemmaA2}(i), by the ergodic theorem, it follows that
\begin{equation}\label{A2}
\f{1}{n}\sum_{t=1}^n \inf_{\theta\in B_{\eta_0}(\theta^*)}
w_tl_t(\theta) \geq E\Bigl[\inf_{\theta\in B_{\eta_0}(\theta^*)}
w_t l_t(\theta)\Bigr]-\e
\end{equation}
as $n$ is large enough. Since $V^c$ is compact, we can choose
$\{B_{\eta_{0}}(\theta_i)\dvtx\theta_i\in V^c, i=1,2,\ldots, k\}$ to be a
finite covering of $V^c$. Thus, from (\ref{A1}) and (\ref{A2}), we have
\begin{eqnarray}\label{A3}
\inf_{\theta\in V^c} L_{sn}(\theta)&=&\min_{1\leq i\leq k}\inf_{\theta
\in B_{\eta_0}(\theta_i)} L_{sn}(\theta) \nonumber\\
&\geq&\min_{1\leq i\leq k} \f{1}{n} \sum_{t=1}^n \inf_{\theta\in B_{\eta
_0}(\theta_i)} w_t l_t(\theta) \\
&\geq&\min_{1\leq i\leq k} E\Bigl[\inf_{\theta\in
B_{\eta_0}(\theta_i)} w_t l_t(\theta)\Bigr]-\e\nonumber
\end{eqnarray}
as $n$ is large enough. Note that the infimum on the compact set
$V^c$ is attained. For each $\theta_i\in V^c$, from Lemma \ref{lemmaA2}(ii),
there exists an $\e_0>0$ such that
\begin{equation}\label{A4}
E\Bigl[\inf_{\theta\in B_{\eta_0}(\theta_i)} w_t
l_t(\theta)\Bigr]\geq E[w_t l_t(\theta_0)]+3\e_0.
\end{equation}
Thus, from (\ref{A3}) and (\ref{A4}), taking $\e=\e_0$, it follows that
\begin{equation}\label{A5}
\inf_{\theta\in V^c} L_{sn}(\theta)\geq E[w_t l_t(\theta_0)]+2\e_0.
\end{equation}
On the other hand, by the ergodic theorem, it follows that
\begin{equation} \label{A6}
\inf_{\theta\in V} L_{sn}(\theta)\leq
L_{sn}(\theta_0)=\f{1}{n}\sum_{t=1}^n w_t l_t(\theta_0)\leq E[w_t
l_t(\theta_0)]+\e_0.
\end{equation}
Hence, combining (\ref{A5}) and (\ref{A6}), it gives us
\[
\inf_{\theta\in V^c} L_{sn}(\theta)\geq E[w_t
l_t(\theta_0)]+2\e_0>E[w_t l_t(\theta_0)]+\e_0\geq\inf_{\theta\in
V} L_{sn}(\theta),
\]
which implies that
\[
\hat{\theta}_{sn}\in V\qquad\mbox{ a.s. for }\forall V\mbox{, as } n \mbox
{ is large enough.}
\]
By the arbitrariness of $V$, it yields $\hat{\theta}_{sn}\rightarrow
\theta_0$ a.s. This completes the proof.
\end{pf*}
\end{appendix}
\section*{Acknowledgments}
The authors greatly appreciate the very helpful comments of two
anonymous referees, the Associate Editor and the Editor, T.~Tony Cai.
\section*{Acknowledgements}
The ALICE collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex.
The ALICE collaboration acknowledges the following funding agencies for their support in building and
running the ALICE detector:
\begin{itemize}
\item{}
Calouste Gulbenkian Foundation from Lisbon and Swiss Fonds Kidagan, Armenia;
\item{}
Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq), Financiadora de Estudos e Projetos (FI\-N\-EP),
Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP);
\item{}
National Natural Science Foundation of China (NSFC), the Chinese Ministry of Education (CMOE)
and the Ministry of Science and Technology of China (MSTC);
\item{}
Ministry of Education and Youth of the Czech Republic;
\item{}
Danish Natural Science Research Council, the Carlsberg Foundation and the Danish National Research Foundation;
\item{}
The European Research Council under the European Community's Seventh Framework Programme;
\item{}
Helsinki Institute of Physics and the Academy of Finland;
\item{}
French CNRS-IN2P3, the `Region Pays de Loire', `Region Alsace', `Region Auvergne' and CEA, France;
\item{}
German BMBF and the Helmholtz Association;
\item{}
Hungarian OTKA and National Office for Research and Technology (NKTH);
\item{}
Department of Atomic Energy and Department of Science and Technology of the Government of India;
\item{}
Istituto Nazionale di Fisica Nucleare (INFN) of Italy;
\item{}
MEXT Grant-in-Aid for Specially Promoted Research, Ja\-pan;
\item{}
Joint Institute for Nuclear Research, Dubna;
\item{}
Korea Foundation for International Cooperation of Science and Technology (KICOS);
\item{}
CONACYT, DGAPA, M\'{e}xico, ALFA-EC and the HELEN Program (High-Energy physics Latin-American--European Network);
\item{}
Stichting voor Fundamenteel Onderzoek der Materie (FOM) and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), Netherlands;
\item{}
Research Council of Norway (NFR);
\item{}
Polish Ministry of Science and Higher Education;
\item{}
National Authority for Scientific Research - NASR (Autoritatea Nationala pentru Cercetare Stiintifica - ANCS);
\item{}
Federal Agency of Science of the Ministry of Education and Science of Russian Federation, International Science and
Technology Center, Russian Academy of Sciences, Russian Federal Agency of Atomic Energy, Russian Federal Agency for Science and Innovations and CERN-INTAS;
\item{}
Ministry of Education of Slovakia;
\item{}
CIEMAT, EELA, Ministerio de Educaci\'{o}n y Ciencia of Spain, Xunta de Galicia (Conseller\'{\i}a de Educaci\'{o}n),
CEA\-DEN, Cubaenerg\'{\i}a, Cuba, and IAEA (International Atomic Energy Agency);
\item{}
Swedish Research Council (VR) and Knut $\&$ Alice Wallenberg Foundation (KAW);
\item{}
Ukraine Ministry of Education and Science;
\item{}
United Kingdom Science and Technology Facilities Council (STFC);
\item{}
The United States Department of Energy, the United States National
Science Foundation, the State of Texas, and the State of Ohio.
\end{itemize}
\end{acknowledgement}
\section*{Collaboration institutes}
}
\section*{Introduction}
We present the pseudorapidity density and the multiplicity distribution for primary charged particles\footnote{Primary particles are defined as prompt particles produced in the collision and all decay products, except products from weak decays of strange particles.} from a sample of $3 \times 10^5$ proton--proton events at a centre-of-mass energy $\sqrt{s} = 7$~TeV collected with the ALICE detector~\cite{ALICEdet} at the LHC~\cite{LHC}, and compare them with our previous results at $\sqrt{s} = 0.9$~TeV and $\sqrt{s} = 2.36$~TeV~\cite{ALICEfirst,ALICEsecond}.
The present study is for the central pseudorapidity region \etain{1}.
In the previous measurements, the main contribution to systematic uncertainties came from the limited knowledge of cross sections and kinematics of diffractive processes. At 7~TeV, there is no experimental information available about these processes; therefore, we do not attempt to normalize our results to the classes of events used in our previous publications (inelastic events and non-single-diffractive events). Instead, we chose an event class requiring at least one charged particle in the pseudorapidity interval \etain{1} (INEL$>$0$_{|\eta|<1}$), minimizing the model dependence of the corrections. We re-analyzed the data already published at 0.9~TeV and 2.36~TeV in order to normalize the results to this event class. These measurements have been compared to calculations with several commonly used models~\cite{Pythia,Pythia1,D6Ttune,CSCtune,Perugiatune,PhoJet} which will allow a better tuning to accurately simulate minimum-bias and underlying-event effects. Currently, the expectations for 7~TeV differ significantly from one another, both for the average multiplicity and for the multiplicity distribution (see e.g.~\cite{Grosse}).
\section*{ALICE detector and data collection}
The ALICE detector is described in~\cite{ALICEdet}. This analysis uses data from the Silicon Pixel Detector (SPD) and the VZERO counters, as described in~\cite{ALICEfirst,ALICEsecond}. The SPD detector comprises two cylindrical layers (radii 3.9~cm and 7.6~cm) surrounding the central beam pipe, and covers the pseudorapidity ranges $|\eta| < 2$ and $|\eta| < 1.4$, for the inner and outer layers, respectively.
The two VZERO scintillator hodoscopes are placed on either side of the interaction region at $z = 3.3$~m and $z = -0.9$~m, covering the pseudorapidity regions $2.8 < \eta< 5.1$ and $-3.7 < \eta < -1.7$, respectively.
Data were collected at a magnetic field of 0.5~T. The typical bunch intensity for collisions at 7~TeV was\linebreak $1.5 \times 10^{10}$ protons resulting in a luminosity around\linebreak $10^{27}$~cm$^{-2}$s$^{-1}$. There was only one bunch per beam colliding at the ALICE interaction point.
The probability that a recorded event contains more than one collision was estimated to be around $2 \times 10^{-3}$. A consistent value was measured by counting the events where more than one distinct vertex could be reconstructed.
We checked that pileup events did not introduce a significant bias using a simulation.
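As a rough cross-check of the quoted pileup probability, the number of collisions per bunch crossing can be modelled as Poisson distributed. In the sketch below the mean number of interactions per crossing, `mu`, is an assumed illustrative value (not stated in the text), chosen to be of the order implied by the quoted bunch intensity and luminosity.

```python
import math

# Assumed mean number of interactions per bunch crossing (illustrative
# value, not taken from the text).
mu = 0.004

p_any = 1.0 - math.exp(-mu)                 # P(at least one collision)
p_multi = 1.0 - math.exp(-mu) * (1.0 + mu)  # P(more than one collision)

# Fraction of recorded (triggered) crossings containing pileup.
p_pileup = p_multi / p_any
```

For small `mu` this ratio is approximately `mu / 2`, reproducing the order of magnitude $2\times10^{-3}$ quoted above.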
The data at 0.9~TeV and 7~TeV were collected with a trigger requiring a hit in the SPD or in either one of the VZERO counters; i.e. essentially at least one charged particle anywhere in the 8 units of pseudorapidity. At 2.36~TeV, the VZERO detector was turned off; the trigger required at least one hit in the SPD (\etain{2}). The events were in coincidence with signals from two beam pick-up counters, one on each side of the interaction region, indicating the passage of proton bunches. Control triggers taken (with the exception of the 2.36~TeV data) for various combinations of beam and empty-beam buckets
were used to measure beam-induced and accidental backgrounds. Most backgrounds were removed as described in~\cite{ALICEsecond}.
The remaining background in the sample is of the order of $10^{-4}$ to $10^{-5}$ and can be neglected.
\section*{Event selection and analysis}
The position of the interaction vertex was reconstructed by correlating hits in the two silicon-pixel layers. The vertex resolution achieved depends on the track multiplicity, and is typically 0.1--0.3~mm in the longitudinal ($z$) and 0.2--0.5~mm in the transverse direction.
The analysis is based on using hits in the two SPD layers to form short track segments, called tracklets. A tracklet is defined by a hit combination, one hit in the inner and one in the outer SPD layer, pointing to the reconstructed vertex. The tracklet algorithm is described in~\cite{ALICEfirst,ALICEsecond}.
Events used in the analysis were required to have a reconstructed vertex and at least one SPD tracklet with \etain{1}.
We restrict the $z$-vertex range to $|z|<5.5$~cm to ensure that the $\eta$-interval is entirely within the SPD acceptance.
After this selection, 47\,000, 35\,000, and 240\,000 events remain for analysis, at 0.9~TeV, 2.36~TeV, and 7~TeV, respectively.
The selection efficiency was studied using two different Monte Carlo event generators, PYTHIA 6.4.21~\cite{Pythia,Pythia1} tune Perugia-0~\cite{Perugiatune} and PHOJET~\cite{PhoJet}, with detector simulation and reconstruction.
The number of primary charged particles is estimated by counting the number of SPD tracklets, corrected for:
\begin{itemize}
\item{geometrical acceptance and detector and reconstruction efficiencies;}
\item{contamination from weak-decay products of strange particles, gamma conversions, and secondary interactions;}
\item{undetected particles below the 50~MeV/$c$ transverse-momentum cut-off, imposed by absorption in the material;}
\item{combinatorial background in tracklet reconstruction.}
\end{itemize}
The total number of collisions corresponding to our data is obtained from the number of events selected for the analysis, applying corrections for trigger and selection efficiencies. This leads to overall corrections of 7.8\,\%, 7.2\,\%, and 5.7\,\% at 0.9~TeV, 2.36~TeV, and 7~TeV, respectively.
The multiplicity distributions were measured for \etain{1} at each energy.
For the 0.9~TeV and 2.36~TeV data we did not repeat the multiplicity-distribution analysis; we use the results from~\cite{ALICEsecond} while removing the zero-multiplicity bin.
At 7~TeV, we used the same method as described in~\cite{ALICEsecond,JanFiete} to correct the raw measured distributions for efficiency, acceptance, and other detector effects, which is based on unfolding using a detector response matrix from Monte Carlo simulations. The unfolding procedure applies $\chi^2$ minimization with regularization~\cite{blobel_unfolding}.
Consistent results were found when changing the regularization term and the convergence criteria within reasonable limits, and when using a different unfolding method based on Bayes's theorem~\cite{agostini_bayes,agostini_yellowreport}.
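The regularized $\chi^2$ unfolding described here can be illustrated with a minimal linear-algebra sketch. The binning, the Gaussian response matrix, and the regularization strength below are toy choices for illustration only, not those of the actual analysis.

```python
import numpy as np

# Toy "true" multiplicity spectrum and a Gaussian-smearing response matrix:
# R[i, j] = probability that true bin j is reconstructed in measured bin i.
n = 20
truth = np.exp(-np.arange(n) / 5.0)
R = np.exp(-0.5 * (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2 / 0.8**2)
R /= R.sum(axis=0)  # normalize each column to unit probability

measured = R @ truth

# Minimize ||R u - m||^2 + tau * ||L u||^2, with L a second-difference
# (curvature) operator penalizing unsmooth solutions; the minimum has a
# closed form, u = (R'R + tau L'L)^(-1) R' m.
L = np.diff(np.eye(n), n=2, axis=0)
tau = 1e-3
unfolded = np.linalg.solve(R.T @ R + tau * L.T @ L, R.T @ measured)
# "unfolded" approximately recovers "truth", with a small smoothing bias.
```

Varying `tau` trades statistical fluctuations against this smoothing bias, which is the role of the regularization term in the analysis.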
\section*{Systematic uncertainties}
Only events with at least one tracklet in $|\eta|<1$ have been selected for analysis in order to reduce sensitivity to model-dependent corrections.
However, a fraction of diffractive reactions also falls into this event category and influences the correction factors at low multiplicities.
In order to evaluate this effect, we varied the fractions of single-diffractive and double-diffractive events produced by the event generators by $\pm 50$\,\% of their nominal values at 7~TeV, and for the other energies we used the variations described in~\cite{ALICEsecond}. The resulting contributions to the systematic uncertainties are estimated to be 0.5\,\%, 0.3\,\%, and 1\,\% for the data at 0.9~TeV, 2.36~TeV, and 7~TeV, respectively. For the same reason, the event selection efficiency is sensitive to the differences between models used to calculate this correction. Therefore, we used the two models which have the largest difference in their multiplicity distributions at very low multiplicities (see below): PYTHIA tune Perugia-0 and PHOJET. The first one was used to calculate the central values for all our results, and the second for asymmetric systematic uncertainties. The values obtained for this contribution are $+0.8$\,\%, $+1.5$\,\%, and $+2.8$\,\% for the three energies considered.
Other sources of systematic uncertainties, e.g. the particle composition, the $p_T$ spectrum and the detector efficiency, are described in~\cite{ALICEsecond}, and their contributions were estimated in the same way.
As a consequence of the smaller uncertainties on the event selection corrections, the total systematic uncertainties are significantly smaller than in our previous analyses, which use as normalization inelastic and non-single-diffractive collisions.
Many of the systematic uncertainties cancel when the ratios between the different energies are calculated, in particular the dominating ones, such as the detector efficiency and the event generator dependence. The systematic uncertainty related to diffractive cross sections was assumed to be uncorrelated between energies.
\begin{table*}[htb]
\centering
\caption{Charged-particle pseudorapidity densities at central pseudorapidity ($|\eta|<1$), for inelastic collisions having at least one charged particle in the same region (INEL$>$0$_{|\eta|<1}$), at three centre-of-mass energies. For ALICE, the first uncertainty is statistical and the second is systematic. The relative increases between the 0.9~TeV and 2.36~TeV data, and between the 0.9~TeV and 7~TeV data, are given in percentages. The experimental measurements are compared to the predictions from models. For PYTHIA the tune versions are given in parentheses. The correspondence is as follows: D6T tune (109), ATLAS-CSC tune (306), and Perugia-0 tune (320).}
\label{multab}
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Energy & ALICE & \multicolumn{3}{c}{PYTHIA~\cite{Pythia,Pythia1}} & PHOJET~\cite{PhoJet} \\
\noalign{\smallskip}\cline{3-5}\noalign{\smallskip}
(TeV)& & (109)~\cite{D6Ttune} & (306)~\cite{CSCtune} & (320)~\cite{Perugiatune} & \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \multicolumn{5}{c}{Charged-particle pseudorapidity density}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
0.9 & $3.81 \pm 0.01 ^{+0.07}_{-0.07}$ & 3.05 & 3.92 & 3.18 & 3.73 \\
\noalign{\smallskip}
2.36 & $4.70 \pm 0.01 ^{+0.11}_{-0.08}$ & 3.58 & 4.61 & 3.72 & 4.31 \\
\noalign{\smallskip}
7 & $6.01 \pm 0.01 ^{+0.20}_{-0.12}$ & 4.37 & 5.78 & 4.55 & 4.98 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \multicolumn{5}{c}{Relative increase (\%)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
0.9--2.36 & $23.3 \pm 0.4 ^{+1.1}_{-0.7}$ & 17.3 & 17.6 & 17.3 & 15.4 \\
\noalign{\smallskip}
0.9--7 & $57.6 \pm 0.4 ^{+3.6}_{-1.8}$ & 43.0 & 47.6 & 43.3 & 33.4 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
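The relative increases quoted in Table~\ref{multab} follow directly from the tabulated pseudorapidity densities; the small differences from the quoted 23.3\,\% and 57.6\,\% come from rounding of the unrounded values.

```python
# Charged-particle pseudorapidity densities from Table 1 (INEL>0 class),
# keyed by centre-of-mass energy in TeV.
d = {0.9: 3.81, 2.36: 4.70, 7.0: 6.01}

inc_236 = 100.0 * (d[2.36] / d[0.9] - 1.0)  # ~23.4 %, quoted as 23.3 %
inc_7 = 100.0 * (d[7.0] / d[0.9] - 1.0)     # ~57.7 %, quoted as 57.6 %
```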
\begin{figure}[hbt]
\centering
\includegraphics[width=\linewidth]{figure8/growth.eps}
\caption{Relative increase of the charged-particle pseudorapidity density, for inelastic collisions having at least one charged particle in \etain{1}, between $\sqrt{s} =0.9$~TeV and 2.36~TeV (open squares) and between $\sqrt{s} = 0.9$~TeV and 7~TeV (full squares), for various models. Corresponding ALICE measurements are shown with vertical dashed and solid lines; the widths of the shaded bands correspond to the statistical and systematic uncertainties added in quadrature.}
\label{increasefig}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=\linewidth]{figure8/dndeta_vs_sqrt.eps}
\caption{Charged-particle pseudorapidity density in the central pseudorapidity region \etain{0.5}
for inelastic and non-single-diffractive collisions~\cite{ALICEsecond, CMS_first, UA5, ITS_dNdEta, R210,R210p, RHICRef, RHICRef1,UA5Rep, UA1, CDF_dNdEta}, and in \etain{1} for inelastic collisions with at least one charged particle in that region (INEL$>$0$_{|\eta|<1}$), as a function of the centre-of-mass energy. The lines indicate the
fit using a power-law dependence on energy. Note that data points at the same energy have been slightly shifted horizontally for visibility.}
\label{energyfig}
\end{figure}
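A power-law dependence of the form $a(\sqrt{s})^{b}$, as used for the fit lines in the figure, can be sketched from the three INEL$>$0$_{|\eta|<1}$ points alone by a least-squares fit in log--log space. The exponent obtained here is illustrative and will differ from a fit including the lower-energy data sets.

```python
import numpy as np

sqrt_s = np.array([0.9, 2.36, 7.0])    # centre-of-mass energy in TeV
dndeta = np.array([3.81, 4.70, 6.01])  # dN_ch/deta, INEL>0_{|eta|<1}

# Fit dN/deta = a * (sqrt(s))**b, i.e. a straight line in log-log space.
b, log_a = np.polyfit(np.log(sqrt_s), np.log(dndeta), 1)
a = np.exp(log_a)
```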
\section*{Results}
The pseudorapidity density of primary charged particles in the central pseudorapidity region \etain{1} is presented in Table~\ref{multab} and compared to models. The measured values are higher than those from the models considered, except for PYTHIA tune ATLAS-CSC for the 0.9~TeV and 2.36~TeV data, and PHOJET for the 0.9~TeV data, which are consistent with the data. At 7~TeV, the data are significantly higher than the values from the models considered, with the exception of PYTHIA tune ATLAS-CSC, for which the data are only two standard deviations higher. We have also studied the relative increase of pseudorapidity densities of charged particles (Table~\ref{multab}) between the measurement at 0.9~TeV and the measurements at 2.36~TeV and 7~TeV. We observe an increase of $57.6\,\% \pm 0.4\,\%(\emph{stat.}) ^{+3.6}_{-1.8}\,\%(\emph{syst.})$ between the 0.9~TeV and 7~TeV data, compared with an increase of 47.6\,\% obtained from the closest model, PYTHIA tune ATLAS-CSC (Fig.~\ref{increasefig}). The 7~TeV data confirm the observation made in~\cite{ALICEsecond,CMS_first} that the measured multiplicity density increases with increasing energy significantly faster than in any of the models considered.
\begin{figure*}[tb]
\centering
\includegraphics[width=\columnwidth]{figure8/mult_energies.eps}
\includegraphics[width=\columnwidth]{figure8/data_vs_mc_7000_INEL_1.0.eps}
\caption{Measured multiplicity distributions in \etain{1} for the INEL$>$0$_{|\eta|<1}$ event class. The error bars for data points represent statistical uncertainties, the shaded areas represent systematic uncertainties. Left: The data at the three energies are shown with the NBD fits (lines). Note that for the 2.36~TeV and 7~TeV data the distributions have been scaled for clarity by the factors indicated. Right: The data at 7~TeV are compared to models: PHOJET (solid line), PYTHIA tunes D6T (dashed line), ATLAS-CSC (dotted line) and Perugia-0 (dash-dotted line). In the lower part, the ratios between the measured values and model calculations are shown with the same convention. The shaded area represents the combined statistical and systematic uncertainties.}
\label{distfig}
\end{figure*}
In Fig.~\ref{energyfig}, we compare the centre-of-mass energy dependence of the pseudorapidity density of charged particles for the INEL$>$0$_{|\eta|<1}$ class to the evolution for other event classes (inelastic and non-single-diffractive events), which have been measured at lower energies. Note that INEL$>$0$_{|\eta|<1}$ values are higher than inelastic and non-single-diffractive values, as expected, because events with no charged particles in \etain{1} are removed.
The increase in multiplicity from 0.9~TeV to 2.36~TeV and 7~TeV was studied by measuring the multiplicity distributions for the event class, INEL$>$0$_{|\eta|<1}$ (Fig.~\ref{distfig} left).
Small wavy fluctuations are seen at multiplicities above 25. While visually they may appear to be significant, one should note that the errors in the deconvoluted distribution are correlated over a range comparable to the multiplicity resolution and the uncertainty bands should be seen as one-standard-deviation envelopes of the deconvoluted distributions (see also~\cite{ALICEsecond}).
The unfolded distributions at 0.9~TeV and 2.36~TeV are described well by the Negative Binomial Distribution (NBD). At 7~TeV, the NBD fit slightly underestimates the data at low multiplicities ($N_\mathrm{ch} < 5$) and slightly overestimates the data at high multiplicities ($N_\mathrm{ch} > 55$).
A comparison of the 7~TeV data with models (Fig.~\ref{distfig} right) shows that only the PYTHIA tune ATLAS-CSC is close to the data at high multiplicities ($N_\mathrm{ch} > 25$). However, it does not reproduce the data in the intermediate multiplicity region ($8 < N_\mathrm{ch} < 25$). At low multiplicities, ($N_\mathrm{ch} < 5$), there is a large spread of values between different models: PHOJET is the lowest and PYTHIA tune Perugia-0 the highest.
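The NBD used for the fits in Fig.~\ref{distfig} (left) can be written in terms of its mean $\langle n\rangle$ and shape parameter $k$. The sketch below evaluates it through log-gamma functions for numerical stability; the parameter values are illustrative choices of the order expected at 7~TeV, not the published fit results.

```python
import math

def nbd(n, mean, k):
    """Negative binomial P(n) parametrized by the mean <n> and shape k."""
    return math.exp(
        math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
        + n * math.log(mean / (mean + k))
        + k * math.log(k / (mean + k))
    )

# Illustrative parameter values (assumed, not the published fit results).
probs = [nbd(n, mean=11.0, k=1.5) for n in range(200)]
```

Summing `probs` over a sufficiently long range recovers unit normalization and the input mean, which provides a quick sanity check of the parametrization.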
\section*{Conclusion}
We have presented measurements of the pseudorapidity density and multiplicity distributions of primary charged particles produced in proton--proton collisions at the LHC, at a centre-of-mass energy $\sqrt{s} = 7$~TeV. The measured value of the pseudorapidity density at this energy is significantly higher than that obtained from current models, except for PYTHIA tune ATLAS-CSC. The increase of the pseudorapidity density with increasing centre-of-mass energies is significantly higher than that obtained with any of the used models and tunes.
The shape of our measured multiplicity distribution is not reproduced by any of the event generators considered. The discrepancy does not appear to be concentrated in a single region of the distribution, and varies with the model.
\section{Introduction}
A consideration of both relativistic and many-body theories is necessary to sufficiently describe heavy atomic and molecular systems~\cite{DasRev, HandbookRQC}. Although there are many theoretical approaches to the evaluation of heavy atomic and molecular wave functions, the relativistic coupled-cluster (RCC) method is to date the method of choice for high-accuracy calculations~\cite{Sato}. The RCC method takes the Dirac-Fock (DF) wave function as its starting point, then considers particle-hole excitations from it, to take electron correlation effects into consideration, and is able to simultaneously account for relativistic and many-body effects. It is a powerful many-body method which has been applied in a variety of fields, including nuclear physics~\cite{Dean, Hagen} and condensed-matter physics~\cite{Bishop}. The RCC method adapted to lattices~\cite{Bishop} could also be applied to optical physics and quantum information. Its advantage over other post-DF methods comes from its ability to include correlation effects to all orders perturbatively for all levels of particle-hole excitation. It also has the property that it is size-extensive~\cite{Bishop}. However, the computational cost scales rapidly with system size, especially compared with other, more approximate post-DF methods. Nevertheless, the power that the coupled-cluster method in general provides in determining many-body wave functions to high accuracy has earned it the reputation as the ``gold standard of many-body methods''~\cite{Thom}. There have been many works~\cite{Sahoo, Sahoo2, Sahoo3, Sakurai, Prasannaa, Abe, Cheng, Shee} which have applied various implementations of the coupled-cluster method to the calculations of different atomic and molecular properties, but all have had to introduce approximations to reduce the unfeasible computational cost associated with a full calculation~\cite{HandbookRQC}. 
In this work, we apply an improved RCC method in the evaluation of a physical quantity of significance in fundamental physics: the enhancement factor of the electric dipole moment of atomic francium, towards the search for an electron electric dipole moment. In addition, calculations of selected measured properties, such as the magnetic dipole hyperfine constants and the electric dipole transition amplitudes, have been performed. There have been high-precision spectroscopic measurements made on various properties of francium~\cite{Grossman, Simsarian, Sansonetti}, and the application of our new RCC method to the calculation of these measured quantities will allow us to test the strength of the method in giving reliable results.
For all its successes in describing fundamental particle interactions, the Standard Model (SM) of particle physics is unable to account for some outstanding observations in the universe. One of these is the so-called baryon asymmetry in the universe, where the matter-antimatter ratio observed in the current universe is off by several orders of magnitude, compared to the value predicted using SM~\cite{Canetti}. One of the proposed reasons for this discrepancy is the need for additional sources of CP-violation, which is the combined violation of charge conjugation (C) and parity (P) symmetries~\cite{Canetti}. A signature of CP-violation not yet observed is the intrinsic electric dipole moment (EDM) of the electron. For a particle to possess an intrinsic EDM, both P and time-reversal (T) symmetries must be independently violated~\cite{Landau, Ballentine, Sandars1, Kriplovich}. From the CPT theorem, T-violation implies CP-violation.
The SM predicts an electron EDM value, denoted by $d_e$, of $\vert d_e \vert \sim 10^{-38}$ e-cm~\cite{Bernreuther, Chupp}, which falls far outside the range of values current experiments can probe. However, several beyond the SM (BSM) paradigms predict values of the electron EDM that are many orders of magnitude larger, such as variants of the Supersymmetric (SUSY) model and the left-right symmetric model, which, depending on the parameters, can predict a value of the electron EDM as large as $\vert d_e \vert \sim10^{-27}$ e-cm~\cite{Bernreuther, Cirigliano}, which is within reach of current experiments. Thus, a successful measurement of a nonzero electron EDM would provide direct evidence for BSM physics~\cite{Cesarotti}, as well as shed insight into the observed baryon asymmetry in the universe~\cite{Fuyuto}.
Even if the electron EDM is not observed, imposing upper limits on its magnitude can constrain BSM models, which predict different ranges of possible electron EDM values~\cite{Fukuyama}.
The observation of the electron EDM has eluded experimentalists for over half a century, but not without significant improvements to the upper limits. Currently, heavy open-shell atoms and polar molecules are the most promising systems with which to determine the upper bounds on the magnitude of the electron EDM, with the best limit to date set by experiments on thorium oxide (ThO), at $\vert d_e \vert < 1.1\times10^{-29}$ e-cm with 90\% confidence~\cite{ACME}. The best experimental limit using atoms comes from $^{205}$Tl, at $\vert d_e \vert \leq 1.6 \times 10^{-27}$ e-cm with 90\% confidence~\cite{Regan}. Because the electron EDM is known to be extremely small, high precision is required for these experiments. In the case of atomic systems, the presence of a permanent electron EDM can induce an atomic EDM. This atomic EDM can be many times larger than the magnitude of the electron EDM for some systems~\cite{Sandars2}. It is this enhanced EDM that is exploited in electron EDM experiments using atoms. The energy shift due to the atomic EDM is measured in experiments. To obtain an upper limit for the magnitude of the electron EDM from this quantity, the enhancement factor $R$, defined as the ratio of the atomic EDM to the electron EDM, must be theoretically evaluated.
In this work, we calculate $R$ of the atomic EDM of $^{210}$Fr in the ground state. In many respects, Fr is a suitable candidate for an EDM experiment. Fr is the heaviest alkali atom, which means that it is a highly relativistic system and that it has a single valence electron, making it the atom with the highest predicted EDM enhancement factor out of all atomic candidates on which electron EDM search experiments are currently being performed. Its projected sensitivity is about two orders of magnitude better than the limit given by $^{205}$Tl~\cite{HaradaFPUA}. The greatest advantage of Fr over the other electron EDM search candidates explored in the past is that many isotopes can be prepared~\cite{Kawamura1}, on which EDM experiments can be done separately. This allows for the detection of signatures of CP-violating sources other than the electron EDM, in particular, the scalar-pseudoscalar (S-PS) interaction. In order to comprehensively study BSM physics, the S-PS interaction term must also be considered when performing EDM experiments.
So far, there have not been many comprehensive studies on the contribution of the S-PS interaction to atomic EDMs, even though it must be considered if the electron EDM is measured. As a more general point, the advantage of atomic systems over molecular candidates is that theoretical calculations can be performed with higher accuracy owing to their simpler electronic structures, and thus tighter limits can be obtained. Furthermore, using different isotopes of the same atom to evaluate the coupling constants for the S-PS interaction allows systematic errors from experiments to be reduced, compared to performing the same measurements on different molecular species. For these reasons, a re-investigation of the EDM enhancement of Fr will be of value to the ongoing search for the electron EDM. Electron EDM search experiments using Fr are currently in progress at the University of Tokyo, further motivating this study~\cite{CNS,Sakemi}. Another electron EDM search experiment using $^{211}$Fr has also been proposed by Wundt et al.~\cite{Wundt} and Munger et al.~\cite{Munger}, at TRIUMF in Canada. In this work we focus on the properties of $^{210}$Fr, because the experiments at the University of Tokyo will use this isotope for their electron EDM search.
The enhancement factor is calculated by numerically evaluating the wave function of the many-body electronic state of the atom using an improved RCC method. The wave function is then used to evaluate the appropriate expectation value. The contribution of individual RCC terms are analyzed and discussed, particularly in relation to the specific many-body effects they contain. The application of the RCC method to the calculation of atomic EDM was first proposed in 1994 by Shukla et al.~\cite{Shukla}, and was implemented for open-shell atoms for the first time in 2008 by Nataraj et al.~\cite{Nataraj}. Calculations on Fr have been performed in the past, first by Sandars in 1966~\cite{Fr3}, using a one electron central force potential approximation, and later by Byrnes et al. in 1999~\cite{Fr2}, using a sum-over-states approach, and by Mukherjee et al. in 2009~\cite{Fr1}, using an approximate RCC method. Our work aims to advance these past results, by using an improved RCC method that addresses the weaknesses of the previous RCC calculation by Mukherjee et al.~\cite{Fr1}. The implemented upgrades include an improved basis set and the inclusion of terms that were omitted in previous calculations~\cite{Fr1,Fr2} due to computational cost limitations. In view of the recent progress in ongoing Fr EDM experiments~\cite{HaradaFPUA}, evaluating an improved theoretical result is of importance in yielding a limit for the electron EDM value and related quantities. High-performance computing is utilized in these calculations to include as many terms as possible for improving accuracy, as well as to include correction terms due to physical effects, such as the Breit interaction~\cite{Grant} and quantum electrodynamic (QED) effects~\cite{Shabaev}, which have not been considered in previous Fr EDM calculations. A subset of triple excitation terms were also evaluated perturbatively and its contribution added to the result. 
Finally, the accuracies of the results are assessed by comparing various physical quantities evaluated using the calculated wave function with their corresponding experimental values, to ensure that the quality of the RCC state is sufficiently good. In particular, we have calculated the hyperfine structure constants, electric dipole (E1) transition matrix elements, and excitation energies of selected states of $^{210}$Fr. For the hyperfine structure constants and the E1 transition matrix elements, we have included various correction terms to enhance the accuracies of the results and compared these against available experimental and other theoretical values. These results serve to highlight the strengths of our improved RCC method, implemented on open-shell atoms for the first time in this work.
\section{Theory}
We start with the Dirac-Coulomb (DC) Hamiltonian~\cite{Dirac} of an atomic system, given in atomic units as
\begin{equation}
\hat{H}_0 = \sum_i \big(c \bm{\alpha} \cdot \bm{p}_i + (\beta-1) c^2 + V_\mathrm{nuc}(r_i)\big) + \sum_{i<j} \frac{1}{r_{ij}},
\end{equation}
where summations are taken over electrons $i$ and pairs of electrons $i,j$ in the atom, respectively, $c$ is the speed of light, $\bm{\alpha}$ and $\beta$ are Dirac matrices, $\bm{p}_i$ is the momentum operator for electron $i$, and $V_\mathrm{nuc}$ is the potential due to the atomic nucleus. $\frac{1}{r_{ij}}$ is the Coulomb operator, where $r_{ij}$ refers to the distance between electrons $i$ and $j$.
If we assume that a nonzero electron EDM exists, a term corresponding to the interaction of the electron EDM with the internal electric field of the atom must be added to the DC Hamiltonian, and the resulting atomic Hamiltonian $\hat{H}$ becomes
\begin{equation}
\hat{H} = \hat{H}_0 + d_e\hat{H}',
\end{equation}
where $\hat{H}'$ is a P- and T-violating perturbation to the Hamiltonian, and has the expression
\begin{equation}
\hat{H}' = -\beta\bm{\Sigma}\cdot \bm{\mathcal{E}}^\mathrm{int}, \label{eq:Hpert}
\end{equation}
with $\bm{\mathcal{E}}^\mathrm{int}$ denoting the internal electric field of the atom and $\bm{\Sigma}$ defined by
\begin{equation}
{\bm \Sigma} = \begin{pmatrix}
{\bm \sigma} & 0 \\
0 & {\bm \sigma}
\end{pmatrix}\label{eq:SigmaDef}
\end{equation}
where $\bm{\sigma}$ are the Pauli spin matrices.
$d_e$ is small, so the electron EDM interaction term can be treated as a perturbation to the DC Hamiltonian, with $d_e$ taken as the perturbation parameter.
The atomic wave function $\vert \Psi_\alpha \rangle$ is then expressed as a first order perturbed wave function whose unperturbed component satisfies the many-electron Dirac equation using the DC Hamiltonian $\hat{H}_0$, and the first order perturbation term is evaluated through standard perturbation theory by treating the electron EDM interaction operator as the perturbation to the DC Hamiltonian. That is,
\begin{equation}
\vert \Psi_\alpha \rangle \approx \vert \Psi_\alpha^{(0)} \rangle + d_e \vert \Psi_\alpha^{(1)} \rangle, \label{eq:perturbedWF}
\end{equation}
where the equation
\begin{equation}
\hat{H}_0\vert\Psi_\alpha^\mathrm{(0)}\rangle = E_0^{(0)}\vert \Psi_\alpha^\mathrm{(0)}\rangle
\end{equation}
is satisfied.
Note that, in Eq.~(\ref{eq:perturbedWF}), the first order perturbed wave function is $d_e\vert \Psi_\alpha^{(1)} \rangle$, not $\vert \Psi_\alpha^{(1)} \rangle$.
Our aim is to obtain an expression for the enhancement factor $R$ of the atomic EDM due to the existence of a nonzero electron EDM. $R$ is defined as
\begin{equation}
R = \frac{\langle D_a \rangle}{d_e}, \label{eq:R}
\end{equation}
where $\langle D_a \rangle$ is the magnitude of the atomic EDM, and $d_e$ is the magnitude of the electron EDM. By definition, an EDM induces an energy shift $\Delta E$ that is linear in the magnitude of the applied electric field $\mathcal{E}$:
\begin{equation}
\Delta E = -\langle D_a \rangle \mathcal{E}. \label{eq:shiftE}
\end{equation}
We derive an expression for the atomic EDM induced by an electron EDM and an external electric field. This is given as the normalized expectation value of the atomic dipole operator $\bm{D}_a$ with respect to the atomic state of interest $\vert \Psi \rangle$, as
\begin{equation}
\langle \bm{D}_a \rangle = \frac{\langle \Psi \vert \bm{D}_a \vert \Psi \rangle}{\langle \Psi \vert \Psi \rangle}. \label{eq:aEDMexpr}
\end{equation}
In this case, we are interested in the atomic ground state which we denote as $\vert \Psi_\alpha \rangle$.
In the presence of an external electric field and a nonzero electron EDM, the dipole operator takes the form
\begin{eqnarray}
\bm{D}_a &=& e\bm{r} + d_e \beta \bm{\Sigma} \\
&=& \bm{D} + d_e \beta \bm{\Sigma},
\end{eqnarray}
where the first term is due to the EDM induced by the external field, and the second term due to the electron EDM.
$\beta$ is the Dirac matrix, and $\bm{\Sigma}$ is defined in Eq.~(\ref{eq:SigmaDef}). The summation over each electron is suppressed.
Substituting Eq.~(\ref{eq:perturbedWF}) into the numerator of Eq.~(\ref{eq:aEDMexpr}),
\begin{eqnarray}
\langle \Psi_\alpha \vert \bm{D}_a \vert \Psi_\alpha \rangle &=& (\langle \Psi_\alpha^{(0)} \vert + d_e \langle \Psi_\alpha^{(1)} \vert)\bm{D}_a(\vert \Psi_\alpha^{(0)} \rangle + d_e \vert \Psi_\alpha^{(1)} \rangle) \nonumber \\
&=& \langle \Psi_\alpha^{(0)} \vert \bm{D}_a \vert \Psi_\alpha^{(0)} \rangle +2 d_e \langle \Psi_\alpha^{(0)} \vert \bm{D}_a \vert \Psi_\alpha^{(1)} \rangle \label{eq:aEDMexpand}
\end{eqnarray}
keeping only terms linear in $d_e$.
Now, we note that atomic wave functions have well-defined parities. The DC Hamiltonian is parity conserving, but the perturbation introduced, which is the electron EDM interaction term, is parity violating. Thus, $\vert \Psi_\alpha^{(1)} \rangle$ and $\vert \Psi_\alpha^{(0)} \rangle$ have opposite parities. Noting the fact that $\bm{D}$ is an odd parity operator and $d_e \beta \bm{\Sigma}$ is an even parity operator, Eq.~(\ref{eq:aEDMexpand}) can be simplified into
\begin{equation}
\langle \Psi_\alpha \vert \bm{D}_a \vert \Psi_\alpha \rangle = \langle \Psi_\alpha^\mathrm{(0)} \vert d_e \beta \bm{\Sigma} \vert \Psi_\alpha^\mathrm{(0)} \rangle +
2 d_e \langle \Psi_\alpha^\mathrm{(1)} \vert \bm{D} \vert \Psi_\alpha^\mathrm{(0)} \rangle \label{eq:aEDMsimplified}
\end{equation}
using parity selection rules. This is the non-normalized expression for the atomic EDM. Dividing this through by $d_e$ gives us the expression for $R$:
\begin{equation}
R = \frac {\langle \Psi_\alpha^\mathrm{(0)} \vert \beta \bm{\Sigma} \vert \Psi_\alpha^\mathrm{(0)} \rangle +
2 \langle \Psi_\alpha^\mathrm{(1)} \vert \bm{D} \vert \Psi_\alpha^\mathrm{(0)} \rangle}{\langle \Psi_\alpha \vert \Psi_\alpha \rangle},
\end{equation}
where $\vert \Psi_\alpha^{(1)} \rangle$ can be expressed as
\begin{equation}
\vert \Psi_\alpha^\mathrm{(1)} \rangle = \sum_{\nu \neq \alpha} \frac{\vert\Psi_\nu^\mathrm{(0)}\rangle \langle\Psi_\nu^\mathrm{(0)}\vert \left(-\beta\bm{\Sigma}\cdot \bm{\mathcal{E}}^\mathrm{int}\right) \vert\Psi_\alpha^\mathrm{(0)}\rangle }{E_\alpha^\mathrm{(0)} - E_\nu^\mathrm{(0)}}
\end{equation}
from perturbation theory, where $\nu$ runs over all intermediate states.
This expression can be simplified using Dirac algebra as~\cite{Das1}
\begin{eqnarray}
R &=& \frac{2 i c}{\langle \Psi_\alpha \vert \Psi_\alpha \rangle} \times \notag \\
&& \sum_{\nu \neq \alpha} \left( \! \frac{\langle \Psi_\alpha^\mathrm{(0)}\vert \beta\gamma_5{p}^2 \vert \Psi_\nu^\mathrm{(0)}\rangle \langle \Psi_\nu^\mathrm{(0)}\vert D \vert \Psi_\alpha^\mathrm{(0)}\rangle}{{E}_\alpha^\mathrm{(0)} - {E}_\nu^\mathrm{(0)}} \! + \! \mathrm{h.c.} \! \right) \! \label{eq:R-expr} \\
&=& \frac{2 \langle \Psi_\alpha^{\mathrm{(1)}'} \vert D \vert \Psi_\alpha^\mathrm{(0)} \rangle}{\langle \Psi_\alpha \vert \Psi_\alpha \rangle} \label{eq:R-expr-short}
\end{eqnarray}
where h.c. denotes the Hermitian conjugate of the preceding term, and
\begin{equation}
\vert \Psi_\alpha^{\mathrm{(1)}'} \rangle = \sum_{\nu \neq \alpha} \frac{\vert\Psi_\nu^\mathrm{(0)}\rangle \langle\Psi_\nu^\mathrm{(0)}\vert 2ic\beta\gamma_5 p^2 \vert\Psi_\alpha^\mathrm{(0)}\rangle }{E_\alpha^\mathrm{(0)} - E_\nu^\mathrm{(0)}}.
\end{equation}
We see from Eq.~(\ref{eq:R}) and Eq.~(\ref{eq:shiftE}) that
\begin{equation}
\Delta E = -d_e R \mathcal{E},
\end{equation}
which shows that we can obtain an upper limit for $d_e$ through the combination of experimental $\Delta E$ value and theoretical calculation of $R$.
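The arithmetic of this conversion is straightforward, and can be sketched as follows. The shift limit and field strength used below are hypothetical placeholder numbers, not values from any Fr experiment; only the enhancement factor $R = 812$ obtained later in this work is taken from the text.

```python
# Illustrative only: converts a measured EDM energy-shift limit into an
# electron-EDM limit via |Delta E| = |d_e| * R * E_field.
# The experimental numbers used below are placeholders, not real values.

def electron_edm_limit(delta_E_limit_hz, R, E_field_v_per_cm):
    """Upper limit on |d_e| in e-cm, given an energy-shift limit in Hz,
    the enhancement factor R, and the applied field in V/cm."""
    h = 4.135667696e-15                # Planck constant in eV*s
    delta_E_eV = delta_E_limit_hz * h  # shift limit converted to eV
    # |d_e| [e-cm] = Delta E [eV] / (R * E [V/cm]), since (e-cm)(V/cm) = eV
    return delta_E_eV / (R * E_field_v_per_cm)

# Hypothetical: a 1 uHz shift limit, a 100 kV/cm field, and R = 812.
limit = electron_edm_limit(1e-6, 812, 1e5)
print(f"|d_e| < {limit:.2e} e-cm")
```

This makes explicit that, for fixed experimental sensitivity, a larger theoretical $R$ translates directly into a proportionally tighter bound on $d_e$.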
\section{Method of calculation}
The RCC method takes as its starting point the Dirac-Fock (DF) state $\vert \Phi_v \rangle$, constructed as a Slater determinant of single-electron wave functions~\cite{Grant}.
The relativistic single-electron wave functions $\vert \phi \rangle$ have the form
\begin{equation}
\vert \phi \rangle = \begin{pmatrix} P(r)\chi_{j m_j l_L} \\ i Q(r) \chi_{j m_j l_S} \end{pmatrix}
\end{equation}
where the upper and lower components indicate the large and small components of the relativistic wave function, respectively, $P(r)$ and $Q(r)$ denote the radial parts of each component, and the $\chi$'s denote the spin angular parts of each component which depend on the quantum numbers $j$, $m_j$, and $l$. $l_L$ denotes $l$ for the large component, while $l_S$ denotes $l$ for the small component. The spin quantum number $s$ is fixed at $s=1/2$, because we are only considering electrons here.
The radial parts of the large and small components of the relativistic single-electron wave functions are constructed as a sum of Gaussian functions, called Gaussian type orbitals (GTO's). For a given $l$ and $j$, the large and small components of the radial wave functions have the form
\begin{eqnarray}
P(r) &=& \sum^N_{i=1} c_i^L g_i^L(r) \\
\text{and} \quad Q(r) &=& \sum^N_{i=1} c_i^S g_i^S(r),
\end{eqnarray}
respectively, where $c_i$ is the coefficient for orbital $i$, and the superscript $L$($S$) refers to the large (small) component of the relativistic wave function. The small component is evaluated from the large component using the kinetic balance condition~\cite{Dyall1}, as
\begin{equation}
(\bm{\sigma}\cdot\bm{p})g_i^L = g_i^S.
\end{equation}
The GTO of the large component for a given $l$ and $j$ is given as
\begin{equation}
g_i^{L}(r) = r^{l} e^{-\alpha_i r^2},
\end{equation}
where the exponent $\alpha_i$ is given by
\begin{equation}
\alpha_i = \alpha_0 \beta^{i-1}.
\end{equation}
This condition on the exponent is called the even-tempered condition~\cite{Quiney}. The parameters $\alpha_0$ and $\beta$ are optimized for each angular symmetry, and the values used in this calculation are as shown in Table~\ref{tb:params}. The optimization was performed so that the bound orbital energies and the expectation values of $r$, $1/r$, and $1/r^2$ matched those obtained through a direct differential equation method using the \texttt{GRASP} code~\cite{Dyall2}. The differential equation DF calculation employed in \texttt{GRASP} does not rely on any external parameters, but is unable to provide continuum orbitals, while the matrix DF calculation that we employ is able to do this. Therefore, by ensuring that the bound orbitals we obtain for some choice of parameters give similar expectation values to those obtained using \texttt{GRASP}, we ensure that our choice of parameters is reasonable. The value of $R$ in the DF atomic ground state and the values of the magnetic dipole hyperfine interaction constants in selected DF atomic states were also ensured to match the values evaluated using DF states constructed from \texttt{GRASP} orbitals. The range of orbitals calculated at the DF level, and the range of active orbitals used in the RCC calculation, are also shown in the same table, for each symmetry. \\
\begin{table}[h!]
\caption{The optimized values of $\alpha_0$ and $\beta$, the range of the principal quantum numbers of the orbitals calculated using the DF method ($n_\mathrm{DF}$), and the range of the principal quantum numbers of the DF orbitals used in the RCC calculation ($n_\mathrm{RCC}$) for each angular symmetry used in this work.}
\begin{tabularx}{8.6cm}{X | X X X X X X}
\hline
\hline
& $s$ & $p$ & $d$ & $f$ & $g$ \\ [0.5ex]
\hline
$\alpha_0$ & 0.0009 & 0.0008 & 0.001 & 0.004 & 0.005 \\
$\beta$ & 2.25 & 2.20 & 2.15 & 2.25 & 2.35 \\
$n_\mathrm{DF}$ & 1-40 & 2-40 & 3-40 & 4-40 & 5-40 \\
$n_\mathrm{RCC}$ & 1-20 & 2-21 & 3-22 & 4-20 & 5-20 \\
\hline
\hline
\end{tabularx}
\label{tb:params}
\end{table}
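As a concrete illustration of the even-tempered prescription, the exponent sets of Table~\ref{tb:params} can be generated in a few lines. This is only a sketch of the parameterization, not the basis-set machinery actually used in our calculation; the choice of 40 Gaussians per symmetry is an assumption made here purely for illustration.

```python
# Sketch: even-tempered GTO exponents alpha_i = alpha_0 * beta**(i-1),
# using the optimized (alpha_0, beta) values of Table I.
# The count of 40 Gaussians per symmetry is illustrative only.

PARAMS = {  # symmetry: (alpha_0, beta)
    "s": (0.0009, 2.25),
    "p": (0.0008, 2.20),
    "d": (0.0010, 2.15),
    "f": (0.0040, 2.25),
    "g": (0.0050, 2.35),
}

def even_tempered_exponents(alpha0, beta, n=40):
    """Geometric sequence of exponents alpha_0 * beta**(i-1), i = 1..n."""
    return [alpha0 * beta ** (i - 1) for i in range(1, n + 1)]

for sym, (a0, b) in PARAMS.items():
    exps = even_tempered_exponents(a0, b)
    print(sym, exps[0], exps[-1])  # most diffuse and tightest exponents
```

The geometric spacing covers both the diffuse valence region (small $\alpha_i$) and the region near the nucleus (large $\alpha_i$) with only two free parameters per symmetry, which is what makes the per-symmetry optimization against \texttt{GRASP} tractable.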
The RCC wave function $\vert \Psi_\alpha^{(0)} \rangle$ of a particular atomic state is constructed as a linear combination of the DF state and $n$ particle-$n$ hole excitations of it, and is expressed as
\begin{equation}
\vert \Psi_\alpha^{(0)} \rangle = e^{T^{(0)}}(1+S^{(0)})\vert \Phi_v \rangle, \label{eq:RCCwfn}
\end{equation}
where $T^{(0)}$ is the sum of all possible $n$ particle-$n$ core orbital excitation operators ($T^{(0)} = \sum_n T_n^{(0)}$), $S^{(0)}$ is the sum of all possible $n$ particle-$n$ valence orbital excitation operators ($S^{(0)} = \sum_n S_n^{(0)}$), and $\vert \Phi_v \rangle$ is the DF state, where the valence orbital is treated like a particle orbital; that is, $\vert \Phi_v \rangle = a_v^\dagger \vert \Phi_0 \rangle$, where $\vert \Phi_0 \rangle$ consists of occupied core orbitals. This expression for $\vert \Psi_\alpha^{(0)} \rangle$ means that in general, it is not normalized. It should be noted that in the DF calculation, the orbital wave functions are evaluated assuming a $V_{N-1}$ potential, outlined in Ref.~\cite{Kelly} and Ref.~\cite{Das2} and references therein, where $N$ is the atomic number, which is 87 for Fr. It should also be noted that it is these particle-hole excited states that account for the electron-electron correlation effects within the atom, which were neglected in the DF state.
In the presence of the P- and T-violating perturbation to the Hamiltonian, the RCC wave function should take the form given in Eq.~(\ref{eq:perturbedWF}), which should match Eq.~(\ref{eq:RCCwfn}) when the following substitutions are made:
\begin{eqnarray}
T^{(0)} &\rightarrow& T^{(0)} + d_e T^{(1)} \\
\text{and} \quad S^{(0)} &\rightarrow& S^{(0)} + d_e S^{(1)}.
\end{eqnarray}
Equating terms that are of the same order in $d_e$, we retrieve the expressions for the unperturbed and the first order perturbed RCC wave functions
\begin{eqnarray}
\vert \Psi_\alpha^{(0)} \rangle &=& e^{T^{(0)}}(1+S^{(0)})\vert \Phi_v \rangle \\
\text{and} \quad \vert \Psi_\alpha^{(1)'} \rangle &=& e^{T^{(0)}}(S^{(1)} + T^{(1)} + T^{(1)} S^{(0)})\vert \Phi_v \rangle.
\end{eqnarray}
These satisfy the unperturbed and the first order perturbed many-electron Dirac equations, given respectively as
\begin{eqnarray}
\hat{H}_0\vert\Psi_\alpha^\mathrm{(0)}\rangle &=& E_0\vert \Psi_\alpha^\mathrm{(0)}\rangle \\
\text{and} \quad (\hat{H}_0-E_0)\vert\Psi_\alpha^\mathrm{(1)'}\rangle &=& (E^\mathrm{(1)'} - \hat{H}')\vert \Psi_\alpha^\mathrm{(0)}\rangle.
\end{eqnarray}
The amplitudes for $T^{(0)}$ and $T^{(1)}$ are evaluated by solving the unperturbed and the first order perturbed many-electron Dirac equations that hold for the core electrons, and $S^{(0)}$ and $S^{(1)}$ are evaluated by solving the above two equations in the form given in Ref.~\cite{Sahoo} and references therein. We note here that the perturbed wave function $\vert \Psi_\alpha^{(1)} \rangle$ is evaluated directly by solving the first order perturbed Dirac equation, as opposed to evaluating it as a sum over explicitly constructed intermediate states, as was done by Byrnes et al. in Ref.~\cite{Fr2}. This gives a more accurate expression for the perturbed state, since the approximation introduced by truncating the infinite sum of intermediate states when evaluating this computationally is not necessary in this approach.
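The distinction between the two approaches can be made concrete with a toy model. The sketch below is our own illustration, not part of the RCC machinery: it builds an arbitrary Hermitian model ``Hamiltonian'', computes the first order perturbed ground state both by an explicit sum over intermediate states and by solving the first order equation directly as a linear system, and confirms that the two agree. The direct solve requires no explicit construction of excited states, which is the point made above.

```python
# Toy comparison: sum-over-states vs direct solve of the first-order
# perturbation equation (H0 - E0)|psi1> = -(V - <V>)|psi0>.
# The 4x4 matrices are arbitrary, chosen only for demonstration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H0 = (A + A.T) / 2                      # symmetric model "Hamiltonian"
B = rng.standard_normal((4, 4))
V = (B + B.T) / 2                       # symmetric "perturbation"

E, U = np.linalg.eigh(H0)
psi0, E0 = U[:, 0], E[0]                # ground state of H0

# Sum over states: |psi1> = sum_{n>0} |n><n|V|psi0> / (E0 - En)
psi1_sos = sum(U[:, n] * (U[:, n] @ V @ psi0) / (E0 - E[n])
               for n in range(1, 4))

# Direct solve in the complement of |psi0>: no excited states needed.
P = np.eye(4) - np.outer(psi0, psi0)    # projector off the ground state
psi1_direct = np.linalg.lstsq(H0 - E0 * np.eye(4),
                              -P @ V @ psi0, rcond=None)[0]
psi1_direct = P @ psi1_direct           # enforce orthogonality to |psi0>

assert np.allclose(psi1_sos, psi1_direct)
```

In the real calculation the role of the linear system is played by the coupled equations for the $T^{(1)}$ and $S^{(1)}$ amplitudes, so no truncation of an infinite sum of intermediate states is ever introduced.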
Substituting the RCC wave functions in the numerator of Eq.~(\ref{eq:R-expr-short}) and expanding yields
\begin{widetext}
\begin{equation}
\langle \Psi_\alpha^{(1)'} \vert D \vert \Psi_\alpha^{(0)} \rangle = d_e \langle \Phi_v \vert \left( \bar{D} S^{(1)} + \bar{D} T^{(1)} + \bar{D} T^{(1)} S^{(0)} + {S^{(0)}}^\dagger \bar{D} T^{(1)} + {S^{(0)}}^\dagger \bar{D} S^{(1)} + {S^{(0)}}^\dagger \bar{D} T^{(1)} S^{(0)} \right) \vert \Phi_v \rangle + \text{h.c.}, \label{eq:longD}
\end{equation}
\end{widetext}
where $\bar{D} = e^{{T^{(0)}}^\dagger} D e^{T^{(0)}}$, and parity selection rules have been used to drop the vanishing terms.
Of the resulting six terms of the numerator and their h.c., the most important terms are $\bar{D}S_1^{(1)} +$ h.c., $\bar{D}S_2^{(1)} +$ h.c., and $\bar{D}T_1^{(1)} +$ h.c. The dominant Goldstone diagrams~\cite{Lindgren} in each of these three terms are shown in Figure~\ref{fig:diagrams}.
\begin{figure}[h]
\centering
\caption{Goldstone diagrams of the (a) ${D}S_1^{(1)}$, (b) ${D}S_2^{(1)}$, and (c) ${D}T_1^{(1)}$ contributions to the RCCSD expression of the atomic EDM. The h.c. diagrams and the exchange term diagrams are not shown.}
\includegraphics[width=8.6cm]{EDM_diagrams}
\label{fig:diagrams}
\end{figure}
The advantage of this RCC method, sometimes called the expectation value RCC (XRCC) method, is its ability to include perturbations to all orders in the residual interaction, which is the difference between the exact two-body interaction and the DF approximation of the two-body interaction. As an example, we take diagram (a) of Fig.~\ref{fig:diagrams} and expand it in terms of perturbations of the residual interaction in Fig.~\ref{fig:aExpand}. The first diagram on the right hand side of the equality shows the first order perturbed term in the residual Coulomb interaction, and the second diagram a second order perturbed term. Other second order terms and higher order terms are not shown, and are indicated by the ellipsis. Fig.~\ref{fig:aExpand} shows that the RCC term $DS_1^{(1)}$ contains terms of all orders of perturbation in the residual Coulomb interaction. Generally speaking, all RCC terms contain terms of all orders of perturbation in the residual interaction similarly.
It can thus be seen that the XRCC approach makes the physical effects transparent through the use of RCC diagrams, which are a compact way of representing larger classes of many-body perturbation theory (MBPT) diagrams.
These MBPT diagrams show the particular many-body interactions occurring in the excitation process. For example, the pair of Coulomb interactions in the second order perturbed term in Fig.~\ref{fig:aExpand} is known as the Brueckner pair correlation (BPC), which is one type of many-body interaction contained within the post-DF residual interaction~\cite{Brueckner, Nesbet}. Other many-body interaction processes can be similarly identified through the MBPT diagrams.
\begin{figure}[h]
\centering
\caption{Goldstone diagram representation of the ${D}S_1^{(1)}$ term, expanded in terms of perturbations in the residual interaction. The first term on the right hand side of the equality shows the first order perturbed term in the residual Coulomb interaction, and the second term shows a second order perturbed term in the residual interaction.}
\includegraphics[width=8.6cm]{DS1pert_diagrams}
\label{fig:aExpand}
\end{figure}
In this calculation, we employ the RCC singles and doubles (RCCSD) approximation, where we consider only one and two-particle excitations, so that the excitation operators are defined as
\begin{eqnarray}
T^{(0/1)} &=& T^{(0/1)}_1 + T^{(0/1)}_2,\\
\text{and} \quad S^{(0/1)} &=& S^{(0/1)}_1 + S^{(0/1)}_2.
\end{eqnarray}
We have considered excitations of all electrons from the core orbitals in this calculation.
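To give a sense of why even the RCCSD truncation is computationally demanding, the number of independent cluster amplitudes can be estimated by simple counting. The sketch below is our own rough spin-orbital bookkeeping for a single-valence system, not taken from the actual RCC code: it assumes antisymmetrized pair amplitudes and ignores the angular-momentum blocking that a real implementation exploits, so it overestimates the stored amplitudes.

```python
# Rough amplitude count for the RCCSD truncation of a single-valence atom:
# n_core occupied core spin-orbitals, one valence orbital, n_virt virtuals.
# Illustrative bookkeeping only.
from math import comb

def rccsd_amplitude_counts(n_core, n_virt):
    """Return (T1, T2, S1, S2) counts of independent amplitudes,
    using antisymmetry for the pair operators."""
    t1 = n_core * n_virt                     # one core hole, one particle
    t2 = comb(n_core, 2) * comb(n_virt, 2)   # two core holes, two particles
    s1 = n_virt                              # valence -> one particle
    s2 = n_core * comb(n_virt, 2)            # valence + one core hole -> pair
    return t1, t2, s1, s2

# e.g. 86 core electrons and, schematically, 500 virtual spin-orbitals
t1, t2, s1, s2 = rccsd_amplitude_counts(86, 500)
print(f"T2 amplitudes: {t2:.1e}")            # T2 dominates the storage cost
```

Even at this crude level of counting, the doubles amplitudes number in the hundreds of millions, which is why including triples perturbatively, rather than iteratively, is the practical choice made in this work.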
\section{Results}
The results of our RCC calculation of $R$ of $^{210}$Fr using the DC Hamiltonian, and the DF and leading RCC terms contributing to it, are shown in Table~\ref{tb:DC-210}. Note here that the DF terms are not included in the total sum of the contributions listed, because the DF terms are actually already embedded within the RCC terms listed. Table~\ref{tb:DC-210} also compares our results with the results of a previous RCC calculation of the same atom, and with previous calculations using other many-body methods. We obtain $R = 812$, which is about 10\% less than the value calculated by Mukherjee et al.~\cite{Fr1}.
\begin{table}[h!]
\caption{DF and the leading RCC contributions to $R$ of the atomic EDM of $^{210}$Fr using the DC Hamiltonian, calculated using the RCCSD method. These values are compared against previous calculations by Mukherjee et al.~\cite{Fr1} (also calculated using the RCCSD method), by Byrnes et al.~\cite{Fr2}, and by Sandars~\cite{Fr3}. ``Norm.'' refers to the correction due to normalization of the RCC wave function, and ``Extra'' refers to contributions due to terms not listed in the table, which have been calculated.}
\begin{tabularx}{8.6cm}{X X X X}
\hline
\hline
\multicolumn{2}{l}{Terms from RCC theory} & This work & Other~\cite{Fr1}\\ [0.5ex]
\hline
\multicolumn{2}{l}{DF (core)} & 24.85 & 25.77 \\
\multicolumn{2}{l}{DF (valence)} & 702.39 & 695.44 \\
\multicolumn{2}{l}{$\bar{D}T_1^{(1)} +$ h.c.} & 44.05 & 43.39 \\
\multicolumn{2}{l}{$\bar{D}S_1^{(1)} +$ h.c.} & 889.18 & 1000.19 \\
\multicolumn{2}{l}{$\bar{D}S_2^{(1)} +$ h.c.} & -49.46 & -64.94 \\
\multicolumn{2}{l}{${S_1^{(0)}}^\dagger \bar{D} S_1^{(1)} +$ h.c.} & -14.15 & -18.07 \\
\multicolumn{2}{l}{${S_2^{(0)}}^\dagger \bar{D} S_1^{(1)} +$ h.c.} & -48.45 & -59.18 \\
\multicolumn{2}{l}{${S_1^{(0)}}^\dagger \bar{D} S_2^{(1)} +$ h.c.} & -3.84 & -2.80 \\
\multicolumn{2}{l}{${S_2^{(0)}}^\dagger \bar{D} S_2^{(1)} +$ h.c.} & 11.99 & 19.26 \\
Extra & & 4.98 & 1.51 \\
Norm. & & -22.11 & -24.42 \\
\hline
Total & & 812.19 & 894.93 \\
\hline
\hline
\multicolumn{3}{l}{Total from other many-body methods} & \\
\hline
Ref.~\cite{Fr2}& & & 910(46) \\
Ref.~\cite{Fr3}& & & 1150 \\
\hline
\hline
\end{tabularx}
\label{tb:DC-210}
\end{table}
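As a consistency check on Table~\ref{tb:DC-210}, the listed RCC contributions from this work can be summed to reproduce the quoted total. As noted in the text, the DF rows are excluded from the sum because they are already embedded in the RCC terms.

```python
# Cross-check of Table II (this work): sum the listed RCC contributions
# to R, excluding the DF rows, which are embedded in the RCC terms.
contributions = {
    "D T1(1) + h.c.":            44.05,
    "D S1(1) + h.c.":           889.18,
    "D S2(1) + h.c.":           -49.46,
    "S1(0)+ D S1(1) + h.c.":    -14.15,
    "S2(0)+ D S1(1) + h.c.":    -48.45,
    "S1(0)+ D S2(1) + h.c.":     -3.84,
    "S2(0)+ D S2(1) + h.c.":     11.99,
    "Extra":                      4.98,
    "Norm.":                    -22.11,
}
total = sum(contributions.values())
print(f"R = {total:.2f}")   # reproduces the quoted total of 812.19
```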
We note that while the values of each contribution differ between our result and the previous RCC result, the trend of the relative magnitudes of each contribution remain similar.
Of the contributing terms listed, the largest term is given by $\bar{D}S_1^{(1)} +$ h.c., whose diagram is indicated in Fig.~\ref{fig:diagrams} (a). This term predominantly contains contributions due to the BPC.
The next largest term is given by $\bar{D}S_2^{(1)} +$ h.c. This term predominantly contains contributions due to electron core polarization (ECP) interactions which are mediated through the Coulomb interaction. This interaction is less important than the BPC, and thus gives a smaller contribution.
Another comparably large term is ${S_2^{(0)}}^\dagger \bar{D} S_1^{(1)} +$ h.c. Its contribution is large because it contains both BPC and ECP contributions.
Finally, $\bar{D}T_1^{(1)} +$ h.c. gives the next leading contribution. This term mainly contains ECP interactions that are mediated through the EDM interaction, unlike the terms in $\bar{D}S_2^{(1)} +$ h.c., which are mediated through the Coulomb interaction. The $\bar{D}T_1^{(1)} +$ h.c. term also contains the core DF contribution. From these results we see the relative importance of BPC and ECP effects on the $R$ of Fr.
The result given by Sandars in Ref.~\cite{Fr3} is evaluated by considering a central force potential experienced by the valence electron due to the nucleus and the core electrons. Thus, his result is very approximate and the differences between his result and later results are not surprising. The discrepancy between our result and that of Byrnes et al.~\cite{Fr2} comes from two differences in the methodology. The first is that Byrnes et al. have used a combination of {\it ab initio} and semi-empirical methods to obtain their result~\cite{Fr2}, while our result is evaluated completely {\it ab initio}. The second is that Byrnes et al. have used an explicit sum over states approach, as mentioned earlier. What is more, they have only calculated singly excited valence intermediate states from $7P_\frac{1}{2}$ to $10P_\frac{1}{2}$
and included contributions due to higher lying $P_\frac{1}{2}$ states in an approximate manner~\cite{Fr2}.
In our calculation, we have implicitly considered all intermediate states in the configuration space spanned by our basis, which includes singly excited valence states up to $21P_{1/2}$, and also core excited states.
Therefore, their results are closer to our $\bar{D}S_1^{(1)} +$ h.c. contribution than to the total result.
The reason for the discrepancy between our results and those of Mukherjee et al.~\cite{Fr1} can be attributed to differences in the calculation methodology and computational details. There are three main differences to note.
The first is the number of basis orbitals used in the RCC calculation. While we used at most 20 orbitals per symmetry, as shown in Table~\ref{tb:params}, the calculation in Ref.~\cite{Fr1} used only (at most) 14 active orbitals per symmetry. Furthermore, we have employed an even-tempered set of GTO basis functions, whose parameters $\alpha_0$ and $\beta$ differ between different orbital symmetries ($s$, $p$, $d$, $\cdots$) as given in Table~\ref{tb:params}, while Ref.~\cite{Fr1} has used a universal basis set, where the parameters are common to all orbitals. That is, while our current work has specified ten parameters for the basis functions, two for each orbital symmetry, Ref.~\cite{Fr1} has only specified two. This allows us to optimize $\alpha_0$ and $\beta$ for each orbital symmetry, which should behave differently from each other.
The large difference in the $\bar{D}S_1^{(1)} +$ h.c. term, of about 111.01, is mainly due to this difference in the calculation.
By introducing high-lying virtual states in the $s_\frac{1}{2}$ and the $p_\frac{1}{2}$ symmetries, which have large densities near the nucleus, significant contributions from the EDM matrix element have been added to the single valence excitation term. Note that this increase in the contribution is counterbalanced by the energy denominator term in the expression for $R$ given in Eq.~(\ref{eq:R-expr}), so that, at a certain point, the contributions due to high-lying states become negligible.
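The counterbalancing role of the energy denominator can be illustrated schematically. In the sketch below, the numerators stand in for matrix-element products and the denominators for excitation energies; all numbers are invented for illustration and bear no relation to the actual Fr matrix elements:

```python
# Schematic sum-over-states behaviour: each contribution is
# (matrix-element product) / (energy denominator), cf. Eq. (R-expr).
# High-lying states have growing denominators, so their terms shrink.
numerators = [100.0, 40.0, 20.0, 10.0, 5.0]   # invented matrix-element products
denominators = [0.05, 0.15, 0.40, 1.0, 2.5]   # invented excitation energies

terms = [num / den for num, den in zip(numerators, denominators)]
for n, term in enumerate(terms, start=1):
    print(f"state {n}: contribution {term:8.2f}")
# The tail of the series falls off rapidly, which is why contributions
# from very high-lying states become negligible.
```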
The second difference is that our calculation includes amplitudes of all multipoles, as opposed to just amplitudes of multipoles satisfying the even parity channel condition, as done in Ref.~\cite{Fr1}. This is explained below.
For general orbitals $p,q,r,s$, the matrix element of the Coulomb operator $\frac{1}{r_{12}}$, which is a two-body operator, can be expressed as
\begin{eqnarray}
\langle j_p m_p \, j_q m_q &\vert& \frac{1}{r_{12}} \vert j_r m_r \, j_s m_s \rangle \\
= \langle j_p m_p \, j_q m_q &\vert& \sum_{kq} \frac{4\pi}{2k+1} Y_{kq}^*(\theta_1, \, \phi_1) Y_{kq}(\theta_2, \, \phi_2) \nonumber \\
&& \times \frac{r_<^k}{r_>^{k+1}} \vert j_r m_r \, j_s m_s \rangle
\end{eqnarray}
in the $jm$ basis, where $Y_{kq}$ denotes the spherical harmonic functions, $r_<$ ($r_>$) denotes the smaller (larger) of $r_1$ and $r_2$, and $k$ denotes the rank.
From this expression it can be seen that $k$ must satisfy the triangular conditions for $p$ and $r$, and for $q$ and $s$.
For the Coulomb operator, additional constraints
\begin{eqnarray}
(-1)^{l_p + l_r + k} &=& 1 \label{eq:EPC1}\\
\text{and} \quad (-1)^{l_q + l_s + k} &=& 1 \label{eq:EPC2}
\end{eqnarray}
are derived, where $l$ refers to the orbital angular momentum quantum number. This is because the Coulomb interaction is a parity-conserving operator. In conjunction with the overall parity selection rule
\begin{equation}
(-1)^{l_p + l_q + l_r + l_s} = 1, \label{eq:paritySelection}
\end{equation}
it can be seen that only a subset of the values of $p$, $q$, $r$, $s$, and $k$ satisfies all three of Eqs.~(\ref{eq:EPC1}), (\ref{eq:EPC2}), and (\ref{eq:paritySelection}), and so for each individual Coulomb interaction the nonzero contributions come only from this subset.
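These selection rules are simple to encode. The sketch below (an illustration, not production code from this work) tests whether a given set of orbital angular momenta and a multipole rank $k$ satisfies the even parity channel conditions of Eqs.~(\ref{eq:EPC1}) and (\ref{eq:EPC2}) together with the overall parity rule of Eq.~(\ref{eq:paritySelection}):

```python
def even_parity_channel(lp, lq, lr, ls, k):
    """Check the even-parity-channel conditions for a Coulomb multipole.

    Returns True only when (-1)^(lp+lr+k) = 1, (-1)^(lq+ls+k) = 1,
    and the overall parity rule (-1)^(lp+lq+lr+ls) = 1 all hold.
    """
    epc1 = (lp + lr + k) % 2 == 0
    epc2 = (lq + ls + k) % 2 == 0
    overall = (lp + lq + lr + ls) % 2 == 0
    return epc1 and epc2 and overall

# s-s-s-s with k = 0 satisfies all three rules ...
print(even_parity_channel(0, 0, 0, 0, 0))   # True
# ... while an s-p pair with k = 0 violates the first condition:
print(even_parity_channel(0, 0, 1, 0, 0))   # False
```

Note that whenever the first two conditions both hold, the overall parity rule follows automatically, since the product of the two phase factors gives $(-1)^{l_p+l_q+l_r+l_s+2k} = 1$.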
Now consider the excitation operators $T^{(0)}$ and $S^{(0)}$. The one particle-one hole excitation operators $T^{(0)}_1$ and $S^{(0)}_1$ are rank 0 operators, as seen from the fact that their matrix elements are uniquely identified scalar values~\cite{Geetha}. On the other hand, the two particle-two hole excitation operators $T^{(0)}_2$ and $S^{(0)}_2$ can take on nonzero rank values, since the combination of the angular momenta of two particles allows the state to have multiple $k$ values. For the RCC excitation operators, which each contain Coulomb interactions to all orders, Eq.~(\ref{eq:EPC1}) and Eq.~(\ref{eq:EPC2}) do not apply, and so the values of $p$, $q$, $r$, $s$, and $k$ for which the contribution is nonzero are not restricted to those that are nonzero for the Coulomb interaction. Furthermore, for the perturbed two-body excitation operators $T_2^{(1)}$ and $S_2^{(1)}$, Eq.~(\ref{eq:paritySelection}) also does not hold, because these are parity-violating terms, introduced by the P- (and T-) violating perturbation to the Hamiltonian. Therefore, in general, all combinations of $p$, $q$, $r$, $s$, and $k$ can give a nonzero contribution to the overall result, including the combinations for which the Coulomb interaction contributions would be zero.
In Ref.~\cite{Fr1}, the even parity channel approximation was employed for the $T_2^{(0)}$ and $S_2^{(0)}$ operators, which considers only the set of orbitals and $k$ values for which Eq.~(\ref{eq:EPC1}) and Eq.~(\ref{eq:EPC2}) are satisfied, i.e. those for which the Coulomb interaction contributions are nonzero. This is on the grounds that these combinations of orbitals and $k$ give the dominant contributions, as discussed in Ref.~\cite{Liu}. However, the approximation was introduced because of limitations in computational resources; in the absence of such restrictions, there is no compelling reason to omit terms that do not satisfy the even parity channel condition. Therefore, in this work, we considered contributions due to all combinations of orbitals and $k$.
Finally, our calculation included nonlinear terms in $\bar{D}$, which the calculation in Ref.~\cite{Fr1} had not included. Recall that
\begin{eqnarray}
\bar{D} &=& {e^{T^{(0)}}}^\dagger D e^{T^{(0)}} \\
&=& D + {T^{(0)}}^\dagger D + D T^{(0)} + {T^{(0)}}^\dagger D T^{(0)} + \cdots, \label{eq:Dbarexpand}
\end{eqnarray}
where each $T^{(0)} = T_1^{(0)} + T_2^{(0)}$ and ${T^{(0)}}^\dagger = {T_1^{(0)}}^\dagger + {T_2^{(0)}}^\dagger$. It can be seen that the terms in $\bar{D}$ can be grouped into powers of ${T^{(0)}}$ and ${T^{(0)}}^\dagger$. In our work, terms up to the $n$th power of ${T^{(0)}}$ and ${T^{(0)}}^\dagger$ were calculated iteratively for increasing $n$, until the difference between the partial sums up to the $n$th and the $(n+1)$th powers was less than a threshold value. This self-consistent method of evaluating $\bar{D}$ ensures that the series given in Eq.~(\ref{eq:Dbarexpand}) converges numerically, and therefore effectively terminates.
In Ref.~\cite{Fr1}, terms nonlinear in ${T^{(0)}}$ and ${T^{(0)}}^\dagger$ in Eq.~(\ref{eq:Dbarexpand}) were not calculated, unlike in our work.
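The iterative procedure described above can be mimicked with small matrices. The sketch below is only a toy model of the self-consistent evaluation of $\bar{D}$: the operators are random real matrices (so the Hermitian conjugate reduces to the transpose), not actual RCC amplitudes, and the tracked scalar is simply the trace:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
dim = 6
D = rng.normal(size=(dim, dim))          # stand-in for the dipole operator
T = 0.1 * rng.normal(size=(dim, dim))    # small "amplitudes" -> convergent series

def dbar(order):
    """Truncation of exp(T†) D exp(T) keeping powers of T and T† up to `order`."""
    total = np.zeros_like(D)
    for m in range(order + 1):
        for n in range(order + 1):
            left = np.linalg.matrix_power(T.T, m) / math.factorial(m)
            right = np.linalg.matrix_power(T, n) / math.factorial(n)
            total += left @ D @ right
    return total

threshold = 1e-10
order, prev = 0, np.trace(dbar(0))
while True:                               # iterate until the series terminates
    order += 1
    curr = np.trace(dbar(order))
    if abs(curr - prev) < threshold:
        break
    prev = curr
print(f"series effectively terminated after order {order}")
```

Because each additional power of the amplitudes is suppressed both by the smallness of $T$ and by the factorials of the exponential expansion, the loop terminates after only a few iterations in this toy setting.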
Contributions due to the dominant linear term of three selected RCC terms are given in Table~\ref{tb:lin-DC}. These are the contributions in $\bar{D}T_1^{(1)}$, $\bar{D}S_1^{(1)}$, and $\bar{D}S_2^{(1)}$ for which $\bar{D} = D$.
\begin{table}[h!]
\caption{RCC calculations of linear contributions to $R$ of $^{210}$Fr of three selected RCC terms, compared against other RCC results given by Ref.~\cite{Fr1}.}
\begin{tabularx}{8.6cm}{X X X}
\hline
\hline
Terms & This work & Other~\cite{Fr1}\\ [0.5ex]
\hline
${D}T_1^{(1)}$ & 45.35 & 44.82 \\
${D}S_1^{(1)}$ & 889.27 & 1000.69 \\
${D}S_2^{(1)}$ & -57.29 & -61.28 \\
\hline
\hline
\end{tabularx}
\label{tb:lin-DC}
\end{table}
It can be seen that the linear term accounts for the majority of the contribution to each term. ${D}S_2^{(1)}$ shows a slightly larger deviation from $\bar{D}S_2^{(1)}$ than the other two terms. This shows that the self-consistent method of evaluation discussed above makes a difference for some leading contributions to $R$. We note that this work is the first to apply this technique to an open-shell atomic system.
To analyze the accuracy of our results, we use our calculated RCC wave functions to evaluate other physical quantities which can be compared against experimental results, and which resemble the terms that comprise the expression for $R$. This analysis gives a quantitative insight into the errors contributing to the result.
The first quantity to compare is the magnetic dipole hyperfine constant $A$, which can be used to estimate the error in the EDM matrix element in Eq.~(\ref{eq:R-expr}). The error in the EDM matrix element can be approximated as the error in the quantity $\sqrt{A_{7S_\frac{1}{2}}A_{7P_\frac{1}{2}}}$, which can be seen from the following reasoning. The hyperfine constant $A$ is expressed as
\begin{equation}
A = \frac{\langle \Psi \vert \hat{H}_\mathrm{hf} \vert \Psi \rangle}{IJ}
\end{equation}
where $\hat{H}_\mathrm{hf}$ is the hyperfine interaction Hamiltonian:
\begin{equation}
\langle \hat{H}_\mathrm{hf} \rangle = \langle \sum_e \bm{j}_e \cdot \bm{A}_N \rangle = A \langle \bm{I} \cdot \bm{J}\rangle .
\end{equation}
Thus,
\begin{eqnarray}
\sqrt{A_{7S_\frac{1}{2}}A_{7P_\frac{1}{2}}} &\propto& \sqrt{\langle {7S_\frac{1}{2}} \vert \hat{H}_\mathrm{hf} \vert {7S_\frac{1}{2}} \rangle \langle {7P_\frac{1}{2}} \vert \hat{H}_\mathrm{hf} \vert {7P_\frac{1}{2}} \rangle}. \nonumber
\end{eqnarray}
The dominant contribution to the EDM matrix element is due to the transition between the $7S_\frac{1}{2}$ and $7P_\frac{1}{2}$ states, given as
\begin{equation}
\langle 7S_\frac{1}{2} \vert \beta\gamma_5{p}^2 \vert 7P_\frac{1}{2} \rangle.
\end{equation}
If we note that both $\hat{H}_\mathrm{hf}$ and $\beta\gamma_5{p}^2$ are one-body operators, and that both are sensitive to contributions from orbitals with a dominant component in the nuclear region, we can see that the accuracy of $\sqrt{A_{7S_\frac{1}{2}}A_{7P_\frac{1}{2}}}$ can give some indication of the accuracy of $\langle 7S_\frac{1}{2} \vert \beta\gamma_5{p}^2 \vert 7P_\frac{1}{2} \rangle$, and therefore of $\langle \Psi_\alpha^\mathrm{(0)}\vert \beta\gamma_5{p}^2 \vert \Psi_\nu^\mathrm{(0)}\rangle$.
Other terms in Eq.~(\ref{eq:R-expr}) can be compared directly against experimental results, so this term is the greatest source of uncertainty in the error estimate.
Table~\ref{tb:A-all} shows the RCC values of the hyperfine constants of the 7$S_\frac{1}{2}$, 7$P_\frac{1}{2}$, 8$P_\frac{1}{2}$, and 9$P_\frac{1}{2}$ states, with various correction terms listed and the results compared against the available experimental~\cite{Grossman} and theoretical results~\cite{Sahoo2}. The included correction terms are the Breit interaction terms, the approximate QED effect terms, the perturbative triple excitation (pT) terms, and the Bohr-Weisskopf (BW) effect terms. The Breit interaction is the lowest order relativistic correction to the Coulomb interaction~\cite{Grant, Breit}, and the QED effect terms include corrections due to vacuum polarization effects and electron self-energy effects, calculated approximately~\cite{Yu, Ginges}. The correction due to the inclusion of partial effective three particle-three hole excited states is evaluated by treating the excitation as a perturbation on the evaluated RCCSD state, as outlined by Sahoo et al. in Ref.~\cite{Sahoo}. The BW effect is the correction due to the magnetization of the nucleus~\cite{Bohr, Ginges2}, to which the hyperfine constant values are sensitive. For the 7$S_\frac{1}{2}$ state, our final calculation gives a value with an approximate 0.76\% deviation from the experimental result, which is excellent agreement. The value for the 7$P_\frac{1}{2}$ state also shows good agreement with experiment, with a deviation of about 0.64\%. Our results give marginally better agreement with the available experimental results than those of Ref.~\cite{Sahoo2}. The main differences between our method and the RCC method used in Ref.~\cite{Sahoo2} are that Ref.~\cite{Sahoo2} used a quadratic basis set instead of the GTOs that we have used, and that Ref.~\cite{Sahoo2} did not include the BW contributions. Our results demonstrate the power of the RCC method used in this work to provide reliable calculations of atomic properties.
\begin{table}[h!]
\caption{RCC calculations of selected hyperfine structure constant quantities ($A$) for $^{210}$Fr using the DC Hamiltonian and the correction terms due to the Breit interaction, approximate QED effects, perturbative triple excitation (pT) terms, and the BW effect. The final results are compared against the available experimental measurements and theoretical values. Values are given in units of MHz.}
\begin{tabularx}{8.6cm}{X X X X X X}
\hline
\hline
\multicolumn{2}{l}{Term} & 7$S_\frac{1}{2}$ & 7$P_\frac{1}{2}$ & 8$P_\frac{1}{2}$ & 9$P_\frac{1}{2}$\\ [0.5ex]
\hline
\multicolumn{2}{l}{DC} & 7488.42 & 944.56 & 296.22 & 132.80 \\
\multicolumn{2}{l}{Breit} & 16.217 & -1.584 & -0.363 & -0.108 \\
\multicolumn{2}{l}{QED} & -41.026 & 3.466 & 0.970 & 0.421 \\
\multicolumn{2}{l}{pT} & -14.389 & 1.959 & 0.507 & 0.205 \\
\multicolumn{2}{l}{BW} & -199.228 & -8.163 & -2.563 & -1.149\\
\hline
\multicolumn{2}{l}{Total} & 7249.99 & 940.236 & 294.772 & 132.173\\
\hline
\hline
\multicolumn{2}{l}{Experiment~\cite{Grossman}} & 7195.1(4) & 946.3(2) & - & - \\
\multicolumn{2}{l}{Ref.~\cite{Sahoo2} (theory)} & 7254(75) & 939(7) & 295(4) & -\\
\hline
\hline
\end{tabularx}
\label{tb:A-all}
\end{table}
Table~\ref{tb:DC-A} shows the values of selected hyperfine structure constant quantities of the form $\sqrt{A_{7S_\frac{1}{2}}A_{nP_\frac{1}{2}}}$ (for $n=7,8,9$) calculated using the RCC wave function, compared against the available experimental results. Here, the results for $A$ using the DC Hamiltonian without the various correction terms are used, so that we obtain a conservative estimate. For $\sqrt{A_{7S_\frac{1}{2}}A_{7P_\frac{1}{2}}}$, the calculated value is about 1.9\% larger than the experimental value. From the argument above, this can be thought of as the largest error coming from the EDM matrix element.
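The quoted deviation can be reproduced directly from the tabulated numbers; the check below uses the experimental hyperfine constants of Ref.~\cite{Grossman} and our DC value of the geometric mean from Table~\ref{tb:DC-A}:

```python
import math

A_7S_exp, A_7P_exp = 7195.1, 946.3   # experimental hyperfine constants (MHz)
gm_calc = 2657.79                    # our DC value of sqrt(A_7S * A_7P) (MHz)

gm_exp = math.sqrt(A_7S_exp * A_7P_exp)
deviation = (gm_calc - gm_exp) / gm_exp
print(f"experiment: {gm_exp:.2f} MHz, deviation: {deviation:.1%}")  # -> 1.9%
```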
\begin{table}[h!]
\caption{RCC calculations of selected hyperfine structure constant quantities for $^{210}$Fr using the DC Hamiltonian, compared against the available experimental measurements.}
\begin{tabularx}{8.6cm}{X X X}
\hline
\hline
Terms & This work & Experiment\\ [0.5ex]
\hline
$\sqrt{A_{7S_\frac{1}{2}}A_{7P_\frac{1}{2}}}$ & 2657.79 & 2609.37~\cite{Grossman} \\
$\sqrt{A_{7S_\frac{1}{2}}A_{8P_\frac{1}{2}}}$ & 1488.43 & - \\
$\sqrt{A_{7S_\frac{1}{2}}A_{9P_\frac{1}{2}}}$ & 996.76 & - \\
\hline
\hline
\end{tabularx}
\label{tb:DC-A}
\end{table}
The second quantity to compare is the E1 transition amplitude, found in the numerator of Eq.~(\ref{eq:R-expr}). Table~\ref{tb:DC-E1} shows the values of the E1 transition amplitudes of selected valence transitions calculated using the RCC wave function used in the enhancement factor calculation, compared against the available experimental values~\cite{Simsarian} and theoretical calculations~\cite{Safronova, Dzuba}. It can be seen that the correction terms added to the DC result (Breit and QED) have a limited effect on the final values of the transition amplitudes. For the 7S$_\frac{1}{2} \rightarrow$ 7P$_\frac{1}{2}$ transition, the magnitude of the calculated quantity is about 1.59\% larger than the experimental value, which is again good agreement, especially considering that the many-body wave function was optimized not just for the transition amplitudes but simultaneously for the hyperfine constants and the EDM enhancement factor. Comparisons against two other theoretical results are also made in Table~\ref{tb:DC-E1}. Ref.~\cite{Safronova} uses a linearized coupled-cluster method, which only considers terms that are linear in the cluster operators $T$ and $S$, and Ref.~\cite{Dzuba} uses many-body perturbation theory with screened Coulomb interactions. Ref.~\cite{Safronova} also uses linear combinations of B-splines as the basis orbitals, instead of the GTOs that we use here. Our calculation includes many more terms than those included in either Ref.~\cite{Safronova} or Ref.~\cite{Dzuba}.
\begin{table}[h!]
\caption{RCC calculations of magnitudes of selected E1 transition amplitudes for $^{210}$Fr using the DC Hamiltonian and the correction terms due to the Breit interaction and approximate QED effects. The final results are compared against the available experimental measurements and theoretical values. Values are given in units of Bohr radius.}
\begin{tabularx}{8.6cm}{X X X X X}
\hline
\hline
\multicolumn{2}{l}{Term} & 7S$_\frac{1}{2} \rightarrow$ 7P$_\frac{1}{2}$ & 7S$_\frac{1}{2} \rightarrow$ 8P$_\frac{1}{2}$ & 7S$_\frac{1}{2} \rightarrow$ 9P$_\frac{1}{2}$ \\ [0.5ex]
\hline
\multicolumn{2}{l}{DC} & 4.345 & 0.333 & 0.111 \\
\multicolumn{2}{l}{Breit} & 0.0004 & 0.0028 & 0.0016 \\
\multicolumn{2}{l}{QED} & -0.0005 & -0.0019 & -0.0011 \\
\hline
\multicolumn{2}{l}{Total} & 4.345 & 0.334 & 0.114 \\
\hline
\hline
\multicolumn{2}{l}{Experiment~\cite{Simsarian}} & 4.277(8) & - & - \\
\multicolumn{2}{l}{Ref.~\cite{Safronova} (theory)} & 4.256 & 0.327 & 0.110 \\
\multicolumn{2}{l}{Ref.~\cite{Dzuba} (theory)} & 4.304 & 0.301 & - \\
\hline
\hline
\end{tabularx}
\label{tb:DC-E1}
\end{table}
The third quantity is the excitation energy between the ground and selected excited states, which appears in the denominator of Eq.~(\ref{eq:R-expr}); the calculated values are compared against experiment in Table~\ref{tb:DC-energy}. All three calculated excitation energies show a very small deviation from the experimental results, at 0.47\%, 0.17\%, and 0.11\% for the 7P$_\frac{1}{2} \rightarrow$ 7S$_\frac{1}{2}$, 8P$_\frac{1}{2} \rightarrow$ 7S$_\frac{1}{2}$, and 9P$_\frac{1}{2} \rightarrow$ 7S$_\frac{1}{2}$ transitions, respectively. This is not surprising, as the DF method optimizes the orbital wave functions by minimizing the DF energy value. Thus, the errors introduced by this term can be assumed to be smaller than the errors due to the other two terms.
\begin{table}[h!]
\caption{RCC calculations of selected excitation energies for $^{210}$Fr using the DC Hamiltonian, compared against the experimental measurements given in Ref.~\cite{Sansonetti}. The values are given in units of cm$^{-1}$.}
\begin{tabularx}{8.6cm}{X X X}
\hline
\hline
Transition & This work & Experiment~\cite{Sansonetti}\\ [0.5ex]
\hline
7P$_\frac{1}{2} \rightarrow$ 7S$_\frac{1}{2}$ & 12295.04 & 12237.41 \\
8P$_\frac{1}{2} \rightarrow$ 7S$_\frac{1}{2}$ & 23151.49 & 23112.96 \\
9P$_\frac{1}{2} \rightarrow$ 7S$_\frac{1}{2}$ & 27149.03 & 27118.21 \\
\hline
\hline
\end{tabularx}
\label{tb:DC-energy}
\end{table}
Taking these three sources of errors to be independent, we add the fractional uncertainties in quadrature, and obtain a total conservative estimated error of about 3\%. Notably, previous results do not fall within the error bar of this result, indicating that the improvements we have made in this calculation, especially the additional terms in $\bar{D}$ we have included, have a significant effect on the final result, and may indicate that continued efforts for an improved calculation are necessary.
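Numerically, combining the largest deviation found for each of the three ingredients (about 1.9\% for the EDM matrix element proxy, 1.59\% for the E1 amplitude, and 0.47\% for the excitation energies) in quadrature gives:

```python
import math

# Fractional errors (%) of the three ingredients of Eq. (R-expr).
errors = [1.9, 1.59, 0.47]

total = math.sqrt(sum(e ** 2 for e in errors))
print(f"combined error: {total:.2f}%")   # -> 2.52%
```

Rounding the resulting $\sim$2.5\% quadrature sum up to 3\% is the conservative step mentioned in the text.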
We emphasize here that evaluations of the errors in the individual terms in Eq.~(\ref{eq:R-expr}), by this method or otherwise, had not been performed by Mukherjee et al.~\cite{Fr1}. Byrnes et al.~\cite{Fr2} have reported estimates of the errors in excitation energies, E1 transition amplitudes, and the EDM matrix elements, and have reported a total error of 5\%. Our calculations provide a comprehensive analysis of the error in our calculation of $R$.
We now discuss the corrections introduced to $R$ due to the consideration of the Breit interaction and approximate QED effects. Table~\ref{tb:Breit-210} shows the RCC contributions to $R$ for $^{210}$Fr with Breit interaction effects accounted for.
\begin{table}[h!]
\caption{DF and the leading RCC contributions to $R$ of the atomic EDM of $^{210}$Fr using the DC Hamiltonian with Breit interaction correction terms, calculated using the RCCSD method. ``Norm.'' refers to the correction due to normalization of the RCC wave function, and ``Extra'' refers to contributions due to terms not listed in the table, which have been calculated.}
\begin{tabularx}{8.6cm}{X X X}
\hline
\hline
Terms & DC results & DC + Breit results \\ [0.5ex]
\hline
DF (core) & 24.85 & 24.76 \\
DF (valence) & 702.39 & 694.87 \\
$\bar{D}T_1^{(1)} +$ h.c. & 44.05 & 43.99 \\
$\bar{D}S_1^{(1)} +$ h.c. & 889.18 & 880.11 \\
$\bar{D}S_2^{(1)} +$ h.c. & -49.46 & -49.07 \\
${S_1^{(0)}}^\dagger \bar{D} S_1^{(1)} +$ h.c. & -14.15 & -14.05 \\
${S_2^{(0)}}^\dagger \bar{D} S_1^{(1)} +$ h.c. & -48.45 & -48.02 \\
${S_1^{(0)}}^\dagger \bar{D} S_2^{(1)} +$ h.c. & -3.84 & -3.81 \\
${S_2^{(0)}}^\dagger \bar{D} S_2^{(1)} +$ h.c. & 11.99 & 11.90 \\
Extra & 4.98 & 4.96 \\
Norm. & -22.11 & -21.92 \\
\hline
Total & 812.19 & 804.08 \\
\hline
\hline
\end{tabularx}
\label{tb:Breit-210}
\end{table}
The inclusion of the Breit interaction terms reduces the value of $R$ by about 8.11, a decrease of about 1\%. This is mainly due to the difference in the $\bar{D}S_1^{(1)} +$ h.c. term. This is not surprising, as the valence $7s$ electron of Fr behaves relativistically due to the large nuclear charge, and so it is natural that the Breit interaction, which is a relativistic effect, impacts terms involving the valence electron more significantly.
Table~\ref{tb:QED-210} shows the RCC contributions to $R$ for $^{210}$Fr with approximate QED effects accounted for.
\begin{table}[h!]
\caption{DF and the leading RCC contributions to $R$ of the atomic EDM of $^{210}$Fr using the DC Hamiltonian with approximate QED corrections, calculated using the RCCSD method. ``Norm.'' refers to the correction due to normalization of the RCC wave function, and ``Extra'' refers to contributions due to terms not listed in the table, which have been calculated.}
\begin{tabularx}{8.6cm}{X X X}
\hline
\hline
Terms & DC results & DC + QED results \\ [0.5ex]
\hline
DF (core) & 24.85 & 24.74 \\
DF (valence) & 702.39 & 701.96 \\
$\bar{D}T_1^{(1)} +$ h.c. & 44.05 & 43.84 \\
$\bar{D}S_1^{(1)} +$ h.c. & 889.18 & 888.70 \\
$\bar{D}S_2^{(1)} +$ h.c. & -49.46 & -49.42 \\
${S_1^{(0)}}^\dagger \bar{D} S_1^{(1)} +$ h.c. & -14.15 & -14.12 \\
${S_2^{(0)}}^\dagger \bar{D} S_1^{(1)} +$ h.c. & -48.45 & -48.44 \\
${S_1^{(0)}}^\dagger \bar{D} S_2^{(1)} +$ h.c. & -3.84 & -3.83 \\
${S_2^{(0)}}^\dagger \bar{D} S_2^{(1)} +$ h.c. & 11.99 & 11.99 \\
Extra & 4.98 & 4.95 \\
Norm. & -22.11 & -22.09 \\
\hline
Total & 812.19 & 811.57 \\
\hline
\hline
\end{tabularx}
\label{tb:QED-210}
\end{table}
It can be seen that the QED corrections reduce the value of $R$ slightly, mainly due to the reduction in the $\bar{D}T_1^{(1)} +$ h.c. and $\bar{D}S_1^{(1)} +$ h.c. terms, but overall the difference is very small, at about -0.62, or -0.076\%.
Finally, effective three particle-three hole excitation contributions were calculated using the DC Hamiltonian through a perturbative method~\cite{Sahoo}. In total, this resulted in a correction of about -4.64 from the DC result, or a -0.58\% correction, as shown in Table~\ref{tb:total-210}.
\begin{table}[h!]
\caption{$R$ of the atomic EDM of $^{210}$Fr calculated using the DF and the RCCSD methods, with various correction terms included. ``DC'' refers to the result obtained using the Dirac-Coulomb Hamiltonian, ``QED'' to approximate QED correction terms, ``Breit'' to correction terms due to the Breit interaction, and ``pT'' to the effective three particle-three hole excitation contribution terms.}
\begin{tabularx}{8.6cm}{X X X X}
\hline
\hline
\multicolumn{2}{l}{Method} & Correction & $R$ \\ [0.5ex]
\hline
\multicolumn{2}{l}{DF (DC)} & - & 727.24 \\
\multicolumn{2}{l}{RCCSD (DC)} & 0 & 812.19 \\
\multicolumn{2}{l}{RCCSD (DC+Breit)} & -8.105 & 804.08 \\
\multicolumn{2}{l}{RCCSD (DC+QED)} & -0.621 & 811.57 \\
\multicolumn{2}{l}{RCCSD (DC+pT)} & -4.644 & 807.55 \\
\multicolumn{2}{l}{RCCSD (DC+Breit+QED+pT)} & -13.369 & 798.82 \\
\hline
\hline
\end{tabularx}
\label{tb:total-210}
\end{table}
If we combine the Breit interaction correction, the approximate QED correction, and the perturbative triples correction, we obtain a final value of $R = 799$ for $^{210}$Fr. We see from Table~\ref{tb:total-210} that the three correction terms each reduce the value of $R$ from the RCCSD DC value, leading to a smaller final value than the pure DC result. We note that our work is the first to apply all of these correction terms to the calculation of $R$ for $^{210}$Fr.
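The bookkeeping of the final value follows directly from Table~\ref{tb:total-210}:

```python
# Final enhancement factor: RCCSD DC value plus the three corrections.
r_dc = 812.19
corrections = {"Breit": -8.105, "QED": -0.621, "pT": -4.644}

r_final = r_dc + sum(corrections.values())
print(f"R = {r_final:.2f}")   # -> R = 798.82, quoted as R = 799
```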
\section{Conclusion}
Results of improved RCC calculations of the EDM enhancement factor for $^{210}$Fr are presented in this work, evaluated to be $R = 799$, with an estimated error of about 3\%. This is about 11\% smaller than the result of a previous calculation using an approximate RCCSD method~\cite{Fr1}. This difference can be attributed to the fact that various approximations and shortcomings of the previous calculation were addressed in this work, such as the improvement of both the size and quality of the basis functions used, and the inclusion of amplitudes of all multipoles and of nonlinear RCC terms using a self-consistent approach, applied here to an open-shell system for the first time.
We emphasize that we have outlined the method of error evaluation more comprehensively than previous calculations of the same atom, where such an evaluation was given at all.
A detailed analysis of the many-body effects contributing to the EDM enhancement of Fr was given as well, and it was found that BPC and ECP effects contribute most heavily to $R$. This has shed light on the many-body physics involved in this complex phenomenon.
This work has also included corrections due to the Breit interaction and QED effects, as well as contributions due to perturbative triple excitations, which previous Fr EDM calculations have not included.
The improved RCC method has also been used to evaluate the magnetic dipole hyperfine constants and the E1 transition amplitudes of selected states of $^{210}$Fr and compared against available experimental and theoretical results, with relevant correction terms included. Our results showed excellent agreement with both experiment and other detailed theoretical calculations, and have demonstrated the versatility of our RCC method in providing accurate results for atomic properties.
The notable difference between our results and previous results indicates that these calculations must continue to be performed to higher levels of accuracy, to allow a reliable appraisal of the upper limit of the magnitude of the electron EDM in conjunction with future experimental results. For a comprehensive study of BSM physics, it is necessary to conduct EDM searches on multiple candidates. In this respect, continued efforts towards EDM measurements in atomic systems remain important. The merit of Fr in particular as an electron EDM search candidate lies in its large enhancement factor and in the fact that its $R$ can be calculated to a higher accuracy than is possible for molecules. In addition, it is an ideal system in which to also probe the S-PS coupling constants, which also contribute to the atomic EDM, because of the wealth of isotopes available for production. We hope that our theoretical work on this promising candidate will not only complement experimental results, but also contribute to the understanding of relativistic many-body theory in atoms and to the development of RCC methods in general.
\section{Acknowledgements}
The calculations in this work were performed on the TSUBAME 3.0 supercomputer at the Tokyo Institute of Technology, through the TSUBAME Grand Challenge Program.
The authors would like to thank Professor Y. Sakemi and Dr. K. Harada for providing useful information on experimental aspects of Fr EDM, and Dr. V. A. Dzuba for providing useful information related to theoretical calculations of Fr EDM.
\section{Introduction}
Visual data exploration plays a central role in the scientific discovery process; it is invaluable for the understanding and interpretation of data and results. From analysis to physical interpretation, most research tasks rely on or even require some kind of visual representation of data and concepts, either interactive or static, to be created, explored, and discussed. This is certainly the case of the ESA Gaia space mission \citep{2016A&A...595A...1G}, with its current and planned data releases \citep[e.g.][]{2016A&A...595A...2G}. The particularity of Gaia is the volume -- the number of sources and of attributes per source -- of its data products, which makes interactive visualisation a non-trivial endeavour.
The Gaia Data Releases comprise more than $10^9$ individual astronomical objects, each with tens of associated parameters in the earlier data releases, up to thousands in the final data release, considering the spectrophotometric and spectroscopic data which are produced per object and per epoch. The extraction of knowledge from such large and complex data volumes is highly challenging. This tendency shows no sign of slowing down with the advent of sky surveys such as the LSST \citep{2008arXiv0805.2366I} and the ESA Euclid mission \citep{2011arXiv1110.3193L}. As several authors have pointed out \citep[e.g.][]{2001Sci...293.2037S, unwin:graphics:2006, hey:fourthparadigm:2009, 2017PASP..129b8001B}, new science enabling tools and strategies are necessary to tackle these data sets; to allow the best science to be extracted from this data deluge, interactive visual exploration must be performed.
One essential issue is the inherent visual clutter that emerges while visualising these data sets. Although there can be millions or billions of individual entities that can be simultaneously represented in a large-scale visualisation, a naive brute-force system that simply displays all such data would not lead to increased knowledge. In fact, such a system would just hinder human understanding, due to the clutter of information that hides structures that may be present in the data \cite[e.g.][]{Peng04clutterreduction}. Thus, strategies have to be put in place to address the issues of data clutter and the clutter of the graphical user interface of the visualisation system \citep[e.g.][]{Rosenholtz05featurecongestion}.
Interactivity is also key for data exploration \citep[e.g.][]{Keim:2002:IVV:614285.614508}.
The ability to quickly move through the data set (e.g. by zooming, panning, rotating) and to change the representations (e.g. by re-mapping parameter dimensions to colours, glyphs, or by changing the visualised parameter spaces) are indispensable for productive exploration and discovery of structures in the data.
However, interactivity for large data sets is challenging \citep{2012AN....333..505G}, and current approaches require high-end hardware and a local copy of the data set on the computer used for the visualisation \citep[e.g.][]{2013MNRAS.429.2442H}. In the best cases, these systems are bounded by I/O speed \citep[e.g.][for GPU-based visualisation of large-scale n-body simulations]{2008arXiv0811.2055S}.
Another essential functionality for visual data exploration is the linking of views from multiple interactive panels with different visualisations produced from different dimensions of the same data set, or even of different data sets \citep[e.g.][]{tukey1977,Jern:2007, Tanaka:2014}. The simultaneous identification of objects or groups of objects in different parameter projections is a powerful tool for multi-dimensional data exploration \citep[][discusses this in a nice historical perspective]{2012AN....333..505G}.
This surely applies to Gaia with its astrometric, photometric, and spectroscopic measurements \citep{2016A&A...595A...4L, 2017A&A...599A..32V, 2011EAS....45..189K} combined with derived astrophysical information such as the orbits of minor planets \citep{2016P&SS..123...87T}, the parameters of double and multiple stars \citep{2011AIPC.1346..122P}, the morphology of unresolved galaxies \citep{2013A&A...556A.102K}, the variable parameters of stars \citep{2014EAS....67...75E}, and the classifications and parameters of objects \citep{2013A&A...559A..74B}.
Visualisation is not only for exploring the data, but also for communicating results and ideas.
One of the most remarkable visualisations of our galaxy was created in the middle of the past century at the Lund Observatory\footnote{\url{http://www.astro.lu.se/Resources/Vintergatan/}}. It is a one-by-two-meter representation of the galactic coordinates of 7000 stars, overlaid on a painting of the Milky Way, represented in an Aitoff projection. This visualisation was produced by Knut Lundmark, Martin Kesk\"ula and Tatjana Kesk\"ula, and for decades was the reference panorama of our Galaxy.
Another emblematic and scientifically correct visualisation of the Milky Way was produced from the data gathered by the ESA Hipparcos space mission, and published by ESA in 2013\footnote{\url{http://sci.esa.int/hipparcos/52887-the-hipparcos-all-sky-map/}}. This image represents, in galactic coordinates, the fluxes of $\sim2.5$ million sources from the Tycho-2 catalogue. The Milky Way diffuse light, mostly created by unresolved stars and reflection or emission in the interstellar medium, is also represented. It was determined from additional data provided by background measurements from the Tycho star mapper on board the Hipparcos satellite. Minor additions of known structures not observed by Hipparcos (which, just like Gaia, was optimised to observe point sources) were made by hand.
Now, the remarkable Gaia Data Release 1 (DR1) will deliver the next generation of Galactic panoramas, and will probably set our vision of the Milky Way galaxy for decades to come.
This paper introduces the Gaia Archive Visualisation Service, which was designed and developed to allow interactive visual exploration of very large data sets by many simultaneous users. In particular, the version presented here is tailored to the contents of DR1.
The paper is organised as follows. First, in Sect.~\ref{sec:sysconc} the system concept is presented and the services described. Then, in Sect.~\ref{sec:deploy} a brief overview of the deployment of the platform is given. Later, Sect.~\ref{sec:contents} presents a thorough description of the visual contents offered by the service and how they were created. Then, Sect.~\ref{sec:other} addresses other visualisation tools with some degree of tailoring to Gaia data. Finally, some concluding remarks and planned developments for the near future are given in Sect.~\ref{sec:conclusions}.
\section{System concept}
\label{sec:sysconc}
In addition to the central functionalities discussed above (e.g. interactivity, large data sets, linked views), many other features are required from a modern interactive visual data exploration facility. The Gaia Data Processing and Analysis Consortium (DPAC) issued an open call to the astronomical community requesting generic use cases for the mission archive \citep{BrownTN026}. Some of these use cases are related to visualisation, and are listed in Appendix A of this paper. These cases formed the basis for driving the requirements of the Gaia Archive Visualisation Service (hereafter GAVS).
\paragraph{Visual queries}
In addition, GAVS introduces a new concept of how to deal with database queries: {\it visual queries}. Since the introduction of the Sloan Digital Sky Survey SkyServer and CasJobs infrastructures \citep[e.g.][]{2000AJ....120.1579Y, 2016AJ....151...44D}, astronomers wanting to extract data from most modern astronomical surveys have faced the need to learn at least the basics of the Structured Query Language (SQL) or, more commonly, of the Astronomical Data Query Language, ADQL \citep{2008IVOAADQL}, which is the astronomical dialect of SQL.
These are declarative languages used to query the relational databases that underlie most modern astronomical data sources. Nevertheless, there is a multitude of querying tasks performed in Astronomy that should not require mastering these languages, for example spatial queries of data lying within polygonal regions of an n-dimensional visual representation of a table. Accordingly, GAVS introduces to Astronomy a {\it visual query} paradigm: for Gaia it is possible to create ADQL queries from visual representations of the data, without writing any ADQL by hand.
The visual interface creates a query from a visual abstraction that can be used to extract additional information from the database. The visually created query string can be edited and modified, or coupled to more complex queries. It can be shared with other users or added to scientific papers, thus increasing scientific reproducibility.
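As an illustration, the Python sketch below (hypothetical code, not the actual GAVS implementation; the table and column names are only examples) builds such an ADQL string from the vertices of a polygon drawn on a sky plot, using the standard ADQL geometry functions CONTAINS, POINT, and POLYGON:

```python
def polygon_to_adql(table, ra_col, dec_col, vertices, columns="*"):
    """Build an ADQL query selecting sources inside a polygon drawn on a
    sky plot. `vertices` is a list of (ra, dec) tuples in degrees."""
    coords = ", ".join(f"{ra:.6f}, {dec:.6f}" for ra, dec in vertices)
    return (
        f"SELECT {columns} FROM {table} "
        f"WHERE 1 = CONTAINS("
        f"POINT('ICRS', {ra_col}, {dec_col}), "
        f"POLYGON('ICRS', {coords}))"
    )

# Example: a triangular selection around the Pleiades region.
query = polygon_to_adql("gaiadr1.gaia_source", "ra", "dec",
                        [(56.0, 23.5), (57.0, 23.5), (56.5, 24.5)])
```

The resulting string can then be edited, coupled to a more complex query, or shared, exactly as described above.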
\subsection{Architecture}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.80\textwidth]{server_gdr1.png}
\caption{Static architecture diagram for the GAVS Server. It presents the components of the Server (Services, Plots Backend, Spatial Indexing, and Database Manager), how they are connected, and the context within the GAVS.}\label{fig:server_diagram}
\end{figure*}
From the architectural perspective, one of the fundamental requirements is that the interactive visual explorations of the whole archive should be possible with the common laptops, desktops, and (if possible) mobile devices available to most users.
The GAVS described here addresses this architectural issue by adopting a web service pattern. A server residing near the data is responsible for hiding, as far as possible, the complexity and volume of the Gaia archive data from the user web interface. This avoids huge brute-force transfers of the archive data to the remote visualisation display, which would congest the servers, the network, and the user machine, and which in the end would not convey any additional scientific information. In a way, this reaffirms the concept of `bring the computation to the data' \citep{hey:fourthparadigm:2009}.
However, the server can create additional pressure on the archive, especially when several users access the service in parallel. To alleviate this pressure, the service includes caching mechanisms to prevent performance penalties from repeated requests and/or re-computations of the same data. This caching mechanism is active regardless of whether the repeated requests come from the same user.
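The essential property of such a cache is that it is keyed by the request parameters alone, not by the requesting user. A minimal sketch in Python (our illustration; `render_tile` is a hypothetical stand-in for the expensive index lookup and rasterisation):

```python
from functools import lru_cache

@lru_cache(maxsize=65536)
def render_tile(plot_id, zoom, tx, ty):
    """Stand-in for an expensive server-side tile computation.
    Because the cache key is only (plot_id, zoom, tx, ty), a repeated
    request -- from the same user or another -- is served from cache."""
    return f"tile:{plot_id}/{zoom}/{tx}/{ty}"

a = render_tile("gdr1_galactic", 3, 4, 2)   # computed
b = render_tile("gdr1_galactic", 3, 4, 2)   # served from cache
```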
While the server design is not tied to a specific hardware configuration, it pursues a scalable solution that can run on modest hardware (see Sect.~\ref{sec:deploy}).
The server was implemented as a Java EE application, designed to run in Apache Tomcat web containers. This application has two main functions, processing dynamic requests for interactive visualisation purposes and delivering static content to the user’s browser (images, CSS files, and client scripts).
The client side is a web application, designed in Javascript, HTML, and CSS to run in a web browser. Chrome and Firefox are the recommended platforms as these were the platforms used to test the service. Still, the client-side application should be compatible with any modern web browser.
The next two sections detail the server and client components of GAVS.
\subsection{Visualisation Server}
\label{sec:server}
The structure of the Visualisation Server is depicted in Fig. \ref{fig:server_diagram}. It is responsible for receiving and interpreting requests related to the different provided services (see Table \ref{tab:serv_service}) and responding accordingly.
The server's components are divided into two different levels: the Services and the Plots Backend. The Services component receives REST requests \citep{Fielding:2000}, and performs checks to ensure the validity of the request and of its parameters. Then, it converts these parameters from the received text format to the correct abstractions and makes the necessary calls to the Plots Backend. Finally, it processes the answers of the Backend and adapts the replies to the visualisation client. The Plots Backend component processes the requests interpreted by the Services at a lower level, and is responsible for processing data, generating static images and image tiles, and calling further libraries as needed.
Spatial Indexing is a specialised module for indexing data in a spatial way, supporting an arbitrary number of dimensions and data points. Each specific visualisation will have its own separate index pre-computed (e.g. a scatter plot of galactic longitude and latitude will have an index built from those two coordinates). This pre-computation is key for providing interactivity. Scalability tests with the current implementation of the indexing were performed, indicating the feasibility of treating more than $2\times10^9$ individual database entries using a normal computer on the server side (16 GB of RAM with a normal $\sim$500 MB/s SSD attached to the SATA III bus).
The indexing works in the following manner. First, the minimum and the maximum values are determined for each dimension of the data space being indexed. Based on this information, the root page of a tree is created. Then, data points are inserted into the root page one by one. If the number of data points in a page exceeds a certain configured threshold, the page is divided into children and the data points are also split among the child pages. The division of a page is performed by dividing each dimension by two; therefore, the number of children after a split will be $2^d$. In one dimension, each page is divided into two child pages, in two dimensions each page is divided into four child pages, and so on. When querying the index for data, only the pages that intersect the query range (in terms of area or volume, depending on the number of the dimensions of the index) are filtered, reducing significantly the amount of processing required.
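The page-splitting scheme described above can be sketched in Python as follows (a toy illustration, not the GAVS code, with an artificially small page capacity of four points; the production threshold is discussed next):

```python
class Page:
    """Toy version of the 2**d page-splitting index (illustration only)."""
    CAPACITY = 4   # production uses 6-12 x 10^4 points per page

    def __init__(self, lo, hi):
        self.lo, self.hi = list(lo), list(hi)   # bounding box per dimension
        self.points = []
        self.children = None

    def _mid(self, k):
        return (self.lo[k] + self.hi[k]) / 2.0

    def _child_for(self, p):
        # Child index: one bit per dimension (lower or upper half).
        idx = sum(1 << k for k in range(len(p)) if p[k] >= self._mid(k))
        return self.children[idx]

    def insert(self, p):
        if self.children is not None:
            self._child_for(p).insert(p)
            return
        self.points.append(p)
        if len(self.points) > Page.CAPACITY:    # split into 2**d children
            d = len(self.lo)
            self.children = []
            for idx in range(2 ** d):
                lo = [self._mid(k) if (idx >> k) & 1 else self.lo[k] for k in range(d)]
                hi = [self.hi[k] if (idx >> k) & 1 else self._mid(k) for k in range(d)]
                self.children.append(Page(lo, hi))
            pts, self.points = self.points, []
            for q in pts:                        # redistribute the points
                self._child_for(q).insert(q)

    def query(self, qlo, qhi):
        """Return the points inside the query box; pages that do not
        intersect the range are skipped entirely."""
        if any(h < a or l > b for l, h, a, b in zip(self.lo, self.hi, qlo, qhi)):
            return []
        if self.children is None:
            return [p for p in self.points
                    if all(a <= x <= b for x, a, b in zip(p, qlo, qhi))]
        hits = []
        for child in self.children:
            hits.extend(child.query(qlo, qhi))
        return hits

# Toy demo: index 25 points on a grid and select a sub-box.
root = Page([0.0, 0.0], [1.0, 1.0])
for i in range(5):
    for j in range(5):
        root.insert((i / 10.0, j / 10.0))
selection = root.query([0.0, 0.0], [0.2, 0.2])   # the 9 points with x, y <= 0.2
```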
The ideal threshold for the number of data points per page must be chosen taking into consideration that each page will always be retrieved from the database as a whole block. Accordingly, if this number is too high the amount of data retrieved from the database per request will be too big, even for spatial queries in small regions of the data space. On the other hand, if this threshold is too low, spatial queries will request a very high number of small blocks from the database. Both scenarios can hinder the performance of the application and prevent a satisfactory user experience on the client side. Our tests indicate that limiting the number of data points per page to the range $6-12\times10^4$ yields satisfactory response time for interactive visualisation.
Inside each page the data points are divided among different levels of detail. This is done for two main reasons: first, to prevent data crowding while producing the visual representation of the data (care is taken to avoid cropping or panning issues in the representation), and second, to limit the number of individual data points passed to the visualisation client and represented on the screen, so that the client side remains interactive.
Levels of detail are numbered from 0 to n, with 0 being the level of detail containing the fewest data points or, in our terminology, the lowest level of detail. In our representation, the levels of detail are cumulative, i.e. level $n+1$ includes all the data points of level $n$. Nonetheless, the data points are not repeated in our data structure, and any query processing just accumulates the data of each previously processed level up to the requested level of detail. The number of data points at each level of detail is $2^d$ times that of the previous one. For example, if level of detail 0 has 500 data points, level of detail 1 will have 2000, level of detail 2 will have 8000, and so on. There are several ways in which the selection of points can be performed, but the most direct one, simple random sampling, is known to present several advantages for visualisation. As discussed by \cite{4376143}, among other features, it keeps spatial information, it can be localised, and it is scalable.
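This bookkeeping can be sketched in a few lines of Python (our illustration; `lod_sizes` and `assign_levels` are hypothetical names): cumulative level sizes grow by $2^d$, and points are assigned to levels by simple random sampling, each point being stored only once.

```python
import random

def lod_sizes(n_total, d, base=500):
    """Cumulative number of points per level of detail: each level holds
    2**d times the points of the previous one, until all are covered."""
    sizes, n = [], base
    while True:
        sizes.append(min(n, n_total))
        if n >= n_total:
            return sizes
        n *= 2 ** d

def assign_levels(points, d, base=500):
    """Assign points to levels by simple random sampling; levels are
    cumulative, but each point is stored only in its lowest level."""
    pts = list(points)
    random.shuffle(pts)
    bounds, start, levels = lod_sizes(len(pts), d, base), 0, []
    for b in bounds:
        levels.append(pts[start:b])   # points that are new at this level
        start = b
    return levels
```

For the 2D example in the text (base 500, $d=2$), the cumulative sizes are 500, 2000, 8000, and so on, so the stored increments are 500, 1500, 6000, ...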
Storing the data structures and providing further querying functionalities require a final component, a Database Manager. The visualisation service described in this paper can use any Database Manager (e.g. MongoDB\footnote{\url{https://www.mongodb.com/}}, OrientDB\footnote{\url{http://orientdb.com/}}) that can provide at least the two most basic required functions, storing and retrieving data blocks. A data block is a string of bytes with a long integer number for identification. The internal data organisation within the database is irrelevant for indexing purposes. The present implementation of GAVS, tailored to DR1, adopts our own Java-optimised NoSQL Database.
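The required Database Manager contract is thus minimal; a dictionary-backed stand-in (our illustration only, not the GAVS code) already satisfies it:

```python
class DictBlockStore:
    """Minimal stand-in for the Database Manager: the index only needs
    put/get of opaque byte blocks keyed by an integer id; the internal
    layout of the backing database is irrelevant."""

    def __init__(self):
        self._blocks = {}

    def put(self, block_id: int, data: bytes) -> None:
        self._blocks[block_id] = data

    def get(self, block_id: int) -> bytes:
        return self._blocks[block_id]

store = DictBlockStore()
store.put(42, b"serialised page payload")
```

Any backend offering these two operations (and acceptable latency for block retrieval) can be plugged in.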
While tree indexation is common in multi-dimensional data retrieval, the specific indexing and data serving schemes developed here are, to the best of our knowledge, unique in systems for interactive visualisation of (large) astronomical tables.
\begin{table}[ht]
\caption{Role of the services provided by the server side.}
\label{tab:serv_service}
\begin{tabularx}{0.45\textwidth}{XX}
\\\hline
\noalign{\smallskip}
Service & Role \\\hline\noalign{\smallskip}
adql & ADQL query generation and validation \\ \noalign{\smallskip}\hline \noalign{\smallskip}
histogram1d & 1D histogram data manipulation and static 1D histogram generation \\ \noalign{\smallskip}\hline \noalign{\smallskip}
linkedviews & Linked views for point selections and data subsets \\ \noalign{\smallskip}\hline \noalign{\smallskip}
plotsinfo & Information on plots metadata (dimensions, axes names, axes limits, and more)
\\ \noalign{\smallskip}\hline \noalign{\smallskip}
plug-ins & Data from JS9 and Aladin plug-ins \\ \noalign{\smallskip}\hline \noalign{\smallskip}
scatterplot2d & 2D scatter plot image generation, both dynamic and static \\ \noalign{\smallskip}\hline \noalign{\smallskip}
search & Name search in external services (CDS/Sesame) \\ \noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabularx}
\end{table}
\subsection{Web client}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.80\textwidth]{clientDiagram.png}
\caption{Client architecture diagram providing the structure and context in GAVS. The Client is structured in a Model-View-Controller
design expressed in the Directives, Controllers, and Services modules.}\label{fig:cli_diagram}
\end{figure*}
\begin{table}[ht]
\caption{Role of the services, directives, and controllers that compose the architecture of the client side.}
\label{tab:cli_service}
\begin{tabularx}{0.45\textwidth}{XX}
\\\hline
\noalign{\smallskip}
Service & Role \\\hline\noalign{\smallskip}
adql & ADQL query requests\\
\noalign{\smallskip}
histogram1d & 1D histogram requests\\
\noalign{\smallskip}
createVisualizations & Creation of visualisations by user file requests\\
\noalign{\smallskip}
plug-ins & JS9 and Aladin lite plug-ins requests\\
\noalign{\smallskip}
scatterplot2d & 2D scatter plot requests\\
\noalign{\smallskip}
visualizationsService & Created visualisations request\\
\noalign{\smallskip}\hline \noalign{\smallskip}
Directive & Role\\\hline\noalign{\smallskip}
I/O modals & Allows user to communicate with the system\\
\noalign{\smallskip}
aladin & Shows an Aladin lite window\\
\noalign{\smallskip}
js9 & Shows a JS9 window\\
\noalign{\smallskip}
main & Creates the main page and the gridster windows\\
\noalign{\smallskip}
plotWithAxes & Creates an abstract plot that can assume any available type: Histogram 1D or Scatterplot 2D\\
\noalign{\smallskip}\hline \noalign{\smallskip}
Controller & Role\\
\hline\noalign{\smallskip}
adqlController & Controls the adql I/O\\
\noalign{\smallskip}
aladinLiteController & Controls the Aladin lite window\\
\noalign{\smallskip}
modalsControllers & Controls the I/O modals\\
\noalign{\smallskip}
histogram1DController & Controls the histogram 1D windows\\
\noalign{\smallskip}
js9Controller & Controls the js9 window\\
\noalign{\smallskip}
mainController & Controls the main window functions and the gridster windows\\
\noalign{\smallskip}
stateController & Controls the save and restore state\\
\noalign{\smallskip}
scatterPlot2DController & Controls the scatterplot 2D windows\\
\noalign{\smallskip}\hline
\end{tabularx}
\end{table}
The structure of the web client is depicted in Fig. \ref{fig:cli_diagram}. The client is responsible for the user interaction with the visualisation service and thus for the communication between the user's computer and the visualisation server.
The Client is a single-page application structured in a Model-View-Controller (MVC) design pattern. Accordingly, the Client is divided into three major components:
\begin{itemize}
\item the directives that manipulate the HTML and thus serve data to the client’s display;
\item the services that communicate directly with the server side through REST requests;
\item the controller that works as the broker between the services and the directives.
\end{itemize}
The components, available services, and the specifications of each individual role are described in Table~\ref{tab:cli_service}.
Grids of windows providing different functionalities can be created on the client web page using the gridster.js\footnote{\url{http://dsmorse.github.io/gridster.js/}} framework. Using these windows, the Visualisation Service deployed for DR1 provides the following types of plots: 1D histograms, 2D scatter plots, and the JS9\footnote{\url{http://js9.si.edu/}} (FITS viewer) and Aladin lite\footnote{\url{http://aladin.u-strasbg.fr/AladinLite/}} (HiPS viewer) plug-ins. Options and configurations for each plot are available in modal windows that appear superposed on the main web page when requested.
The 1D histograms are visualised (but not computed) using the d3.js\footnote{\url{https://d3js.org/}} library. This library provides tools for drawing the histogram bins and axes. For 1D histograms, the client requests the bin values from the Server, specifying the number of bins and the maximum and minimum limits over which to compute the histogram. The Server then calculates the number of points in each bin and returns these values to the client as a JSON object. Performance at the server side is improved by not counting every single data point in the data set: if the limits of a data page in the index fall within the limits of a bin, the pre-computed total number of points of the page is used instead of iterating over every data point. This provides quick response times, allowing bin limits and sizes to be changed interactively, even for the more than one billion points in DR1.
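The page-count shortcut can be sketched as follows (an illustrative Python version, not the server code): a page lying entirely inside one bin contributes its pre-computed count, while boundary pages fall back to per-point iteration.

```python
def histogram_counts(pages, lo, hi, nbins):
    """1D binning over index pages; each page is (pmin, pmax, count, points).
    A page entirely inside one bin contributes its pre-computed count."""
    width = (hi - lo) / nbins

    def bin_of(x):
        return min(max(int((x - lo) / width), 0), nbins - 1)

    bins = [0] * nbins
    for pmin, pmax, count, points in pages:
        if pmax < lo or pmin > hi:
            continue                          # page outside the histogram range
        if lo <= pmin and pmax <= hi and bin_of(pmin) == bin_of(pmax):
            bins[bin_of(pmin)] += count       # fast path: whole page in one bin
        else:
            for x in points:                  # slow path: per-point counting
                if lo <= x <= hi:
                    bins[bin_of(x)] += 1
    return bins

# Two toy pages with pre-computed counts and their raw points.
pages = [(0.0, 0.9, 3, [0.1, 0.5, 0.8]),
         (1.0, 1.9, 2, [1.2, 1.7])]
```

With two bins over $[0, 2]$ both pages take the fast path; with four bins, each page straddles a bin boundary and is counted point by point.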
The 2D scatter plots are produced using the leaflet\footnote{\url{http://leafletjs.com/}} interactive map library. This library is specialised in tile-based maps and has a small code footprint. The Server application generates the tiles from projections of the data based on client-side requests. The client side then uses these tiles via leaflet to display them to the user. The axes of the scatter plots are created following the same underlying logic and libraries as the 1D histograms, providing a homogeneous user experience throughout the visualisation service. Finally, the client-side application also supports additional overlays with interactive layers and vector objects.
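The tile addressing assumed by such a tile-based scheme can be illustrated as follows (our sketch; the actual server-side mapping may differ): at zoom level $z$ the plot is divided into $2^z \times 2^z$ tiles, and each tile corresponds to a data-space bounding box that the server renders from the spatial index.

```python
def tile_bounds(zoom, tx, ty, xmin, xmax, ymin, ymax):
    """Data-space bounding box of a leaflet-style tile (zoom, tx, ty);
    at zoom z the full plot is split into 2**z x 2**z tiles."""
    n = 2 ** zoom
    dx = (xmax - xmin) / n
    dy = (ymax - ymin) / n
    return (xmin + tx * dx, xmin + (tx + 1) * dx,
            ymin + ty * dy, ymin + (ty + 1) * dy)

# Example: tile (1, 1) at zoom 2 of an all-sky plot in degrees.
bounds = tile_bounds(2, 1, 1, -180.0, 180.0, -90.0, 90.0)
```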
\section{Deployment}
\label{sec:deploy}
The entire development and prototyping phase of the service was performed using a virtual machine infrastructure at ESAC, together with a physical set-up at the Universidade de Lisboa.
The visualisation web service is deployed at ESAC in a dedicated physical machine. This service came online together with the Gaia DR1. It has been in continuous operation since the moment the archive went public on September 14, 2016.
The service is accessible through the Gaia Archive portal\footnote{\url{http://gea.esac.esa.int/archive/}}, in a special pane dedicated to the online visual exploration of the data release contents. It can also be accessed via a direct link\footnote{\url{http://gea.esac.esa.int/visualisation}}.
The fundamental configuration and characteristics of the operational infrastructure are
\begin{itemize}
\item CPU: Intel(R) Xeon(R) E5-2670 v3 @ 2.30GHz, 16 cores;
\item Memory: 64 gigabytes;
\item Storage: 3 TB SSD;
\item Application server: Apache Tomcat 8;
\item Java version: 1.8.
\end{itemize}
The software and hardware deployed for the Visualisation Service have proved robust. The service has not crashed even once in the several months it has been online, despite several periods of heavy access, and even though it is sustained by a single physical machine.
In the first four hours after starting online operations, the visualisation service had already served more than 4286 unique users. These users created and interacted with 145 1D histograms and 5650 2D scatter plots, which triggered the generation of $>1.5\times10^6$ different tiles\footnote{The caching mechanism prevents a tile from being created twice.}.
By the end of the DR1 release day, over 7500 individual users had been logged and interacted with the visualisation service.
\section{Contents produced for DR1}
\label{sec:contents}
In the service deployed for DR1, the visualisation index pre-computations (Sect.~\ref{sec:server}) are determined by the GAVS operator. Hence, the GAVS portal serves a predefined list of scatter plots and histograms:
\begin{description}
\item 1D histograms
\begin{itemize}
\item GDR1 data: galactic latitude; galactic longitude; G mean magnitude; G mean flux
\item TGAS data: parallax; proper motion in right ascension; proper motion in declination; parallax error; proper motion modulus
\end{itemize}
\item 2D scatter plots
\begin{itemize}
\item GDR1 data: galactic coordinates; equatorial coordinates; ecliptic latitude and longitude
\item TGAS data: parallax error vs. parallax; proper motion in declination vs. proper motion in right ascension; colour magnitude diagram (G-Ks vs. G, with Ks from 2MASS)
\end{itemize}
\end{description}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.80\textwidth]{map_render.png}
\caption{Outline of the steps followed when creating visualisations for distribution and for viewing with external applications. It covers the creation of the DR1 poster image in various formats as well as HiPS and FITS files.}\label{fig:map_pipe}
\end{figure*}
In addition to the interactive scatter plots and histograms that can be explored in the visualisation portal, the service has also produced content for distribution or for viewing with other specialised software. It is made available at GAVS under the `Gallery' menu item. Here we list that other content and briefly describe how it was created:
\begin{itemize}
\item All-sky source density map in a plane projection. This is the DR1 poster image shown in \cite{2016A&A...595A...2G} and available in several sizes at \url{http://sci.esa.int/gaia/58209-gaia-s-first-sky-map/}, also with annotations;
\item A similar map, but for integrated (logarithmic) G-band flux. It is shown and discussed below;
\item Several zoom-ins of regions of interest, re-projected centred on those regions. Employed as presentation material. Two examples are shown and discussed below;
\item All-sky HiPS maps of source density and integrated (logarithmic) G-band flux for viewing with the Aladin Lite plug-in at the Archive Visualisation Service;
\item All-sky low-resolution FITS map in a cartesian projection with WCS header keywords for viewing with the JS9 plug-in in the Archive Visualisation Service;
\item FITS images of selected regions in orthographic projection with WCS header keywords;
\item Large format all-sky source density and integrated flux maps for projection in planetaria.
\end{itemize}
Images are produced with the pipeline presented in Fig.~\ref{fig:map_pipe}. The pipeline is written in Python.
The input data are stored in tabular form in the file system.
Schematically, the steps are as follows:
\begin{enumerate}
\item Data are read in blocks. This allows images to be produced from tables of arbitrary sizes, larger than would fit in memory, as long as there is enough disc storage space. The pipeline uses the Python package Pandas\footnote{\url{http://pandas.pydata.org}} in this process.
\item\label{item:hpx_map} The computation of the statistic to be visualised requires partitioning the celestial sphere in cells. The Healpy\footnote{\url{https://github.com/healpy/healpy}} implementation of the Hierarchical Equal Area isoLatitude Pixelation \citep[HEALPIX, ][]{gorski2005healpix} tessellation is used for this purpose. Each source is assigned a HEALPix from its sky coordinates.
The statistic can be simply the number of sources in the cell, the integrated luminous flux of sources in the cell, or any other quantity that can be derived from the source attributes listed in the input table. The statistics determined for the data blocks are added to a list of values for each HEALPix. Averaged or normalised statistics (e.g. number of sources per unit area) are only computed in the end to avoid round-off errors. Finally, the central sky coordinates of each HEALPix are determined and a list of the statistics for those coordinates is produced.
\item[3a.] The statistics determined on the sphere are represented on a plane. Because the sphere cannot be represented on a plane without distortion, many approaches exist for map projections \citep{Synder1993}. The Hammer projection, used to produce the DR1 image, is known to be an equal-area projection that reduces distortions towards the edges of the map. The projection results in x,y positions in a 2:1 ratio with x confined to (-1,1). The zoomed images use an orthographic--azimuthal projection, which is a projection of points onto the tangent plane.
\setcounter{enumi}{3}
\item The x,y coordinates of the map projection are re-sampled (scaled and discretised) onto a 2D matrix with a specified range (image dimensions in arcminutes) and number of pixels in each dimension. The number stored in each pixel corresponds to the combined statistics of the HEALPix that fall in the pixel. Because the HEALPix and pixels have different geometries, they will not match perfectly. It is thus important that the pixel area be substantially larger than the HEALPix area, i.e. that each pixel includes many HEALPix, to minimise artefacts due to differences in the areas covered by both surface decompositions. In the case of the Hammer projection, we have found that a pixel area 32 times larger than the HEALPix area will keep artefacts at the percent level. Given the 2:1 aspect ratio of the Hammer projection, this corresponds to an average of 8$\times$4 HEALPix per pixel.
\item The image is rendered from the matrix built in the previous step. There are many libraries available, but only a few produce images with 16 bit colour maps. This is required to produce high-quality images with enough colour levels (65536 levels of grey, compared to 256 levels for 8 bit images) to go through any post-processing that might be desirable for presentation purposes. Here the pyPNG\footnote{\url{https://pythonhosted.org/pypng/}} package is used. The output is a PNG image.
\item[3b.] HiPS and FITS image files are produced. Healpy can create FITS files with HEALPix support (embedded HEALPix list and specific header keywords) directly from the HEALPix matrix produced in step~\ref{item:hpx_map} of the image pipeline. HiPS images were then created from the HEALPix FITS file with the Aladin/Hipsgen code following the instructions in \url{http://aladin.u-strasbg.fr/hips/HipsIn10Steps.gml}. The input data were mapped into HiPS tiles, without resampling, using the command java -Xmx2000m -jar Aladin.jar in=`HealpixMap.fits'. While this method allows a quick and easy creation of HiPS files, it does not handle very high resolutions well. To illustrate the issue, for an nside of $2^{13}=8192$ the HEALPix array has a length of 805306368, while for an nside of $2^{14}$ it has a length of 3221225472. The DR1 HiPS maps provided in the Visualisation portal have a base nside of 8192.
Regarding JS9, a javascript version of the popular DS9 FITS viewer\footnote{\url{http://ds9.si.edu}}, the current version does not support HEALPix FITS files. For JS9, the HEALPIx map was directly projected on the cartesian plane and converted to FITS using the Astropy\footnote{\url{http://www.astropy.org/}} FITS module astropy.io.fits.
\end{enumerate}
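The core of steps 2, 3a, and 4 can be sketched in self-contained Python (a simplified illustration, not the pipeline code: the cell indices are assumed to be precomputed, whereas the real pipeline obtains them with healpy's ang2pix, and the rasterisation here targets a toy 80$\times$40 matrix):

```python
import math
from collections import defaultdict

# Step 2 (sketch): accumulate a per-cell statistic over streamed blocks.
# Here each block is a list of precomputed cell indices, one per source.
def accumulate_counts(blocks):
    counts = defaultdict(int)
    for block in blocks:          # blocks arrive chunk by chunk
        for pix in block:
            counts[pix] += 1      # source-count statistic per cell
    return counts

# Step 3a: Hammer projection, normalised so that x lies in (-1, 1) and
# y in (-1/2, 1/2), giving the 2:1 aspect ratio quoted in the text.
def hammer(lon_deg, lat_deg):
    l, b = math.radians(lon_deg), math.radians(lat_deg)
    z = math.sqrt(1.0 + math.cos(b) * math.cos(l / 2.0))
    return (math.cos(b) * math.sin(l / 2.0) / z,
            math.sin(b) / (2.0 * z))

# Step 4: rasterise projected cell centres onto a 2:1 pixel matrix,
# summing the statistic of every cell that falls inside each pixel.
def rasterise(cells, width=80, height=40):
    img = [[0] * width for _ in range(height)]
    for (lon, lat), value in cells:
        x, y = hammer(lon, lat)
        col = min(int((x + 1.0) / 2.0 * width), width - 1)
        row = min(int((y + 0.5) * height), height - 1)
        img[row][col] += value
    return img
```

The equal-area property of the Hammer projection is what makes the per-pixel statistic directly comparable across the map.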
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.8\textwidth]{GDR1_flux_2k.png}\\
\includegraphics[width=0.8\textwidth]{GDR1_density_2k.png}
\includegraphics[width=0.8\textwidth]{GDR1_mag20log_small.png}
\caption{All-sky maps of DR1: Integrated flux (top), density (middle), density for sources brighter than G=20 mag (bottom)}\label{fig:allsky}
\end{figure*}
The DR1 poster image \citep{2016A&A...595A...2G} is available in several sizes\footnote{\url{http://sci.esa.int/gaia/58209-gaia-s-first-sky-map/}}, also with annotations. It
is a Hammer projection of the Galactic plane represented in galactic coordinates. This specific projection was chosen in order to have the same area per pixel.
The images of different sizes are scaled versions of a baseline image of 8000$\times$4000 pixels, which corresponds to an area of $\sim 5.901283423$ arcmin$^2$ per pixel. As explained above, the plane projected images are created from higher resolution HEALPix matrices. In this case, an NSIDE = 8192 was used, which corresponds to an area of 0.184415106972 arcmin$^2$ per HEALPix, or ($\sim$ 8$\times$4) 32 HEALPix per pixel.
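These figures follow from the HEALPix geometry (12 NSIDE$^2$ equal-area cells over the full sky) and can be checked with a few lines of Python:

```python
import math

# Full sky in arcmin^2: 4*pi steradians, 1 rad = (180/pi)*60 arcmin.
SKY_ARCMIN2 = 4.0 * math.pi * (180.0 * 60.0 / math.pi) ** 2

nside = 8192
n_healpix = 12 * nside ** 2                # 805306368 equal-area cells
healpix_area = SKY_ARCMIN2 / n_healpix     # ~0.1844 arcmin^2 per HEALPix
pixel_area = 32 * healpix_area             # 8 x 4 HEALPix -> ~5.90 arcmin^2
```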
The greyscale represents the number of sources/arcmin$^2$. In \cite{2016A&A...595A...2G} a scale bar is presented together with the map. The maps mentioned above, which are available at the ESA website, are based on a logarithmic scale followed by some post-processing fine-tuning of the scale with an image editing program.
As noted in \cite{2016A&A...595A...2G}, the scales were adjusted in order to highlight the rich detail of Galactic plane and the signature of the Gaia scanning.
The maximum density is slightly higher than 260 sources/arcmin$^2$ ($\sim10^6$ sources/degr$^2$).
The minimum is 0, but this is mostly due to gaps in certain crowded regions where no sources have been included in DR1, noticeably the stripes close to the Galactic centre. Not considering these missing parts with zero density, the minimum at this resolution is about $\sim$300 sources/degr$^2$.
As mentioned above, an all-sky logarithmic integrated G flux map was also produced. It is shown in Fig.~\ref{fig:allsky} together with the density map for comparison.
While the density map highlights dense groups of stars, even of very faint stars at the limiting magnitude, the flux map can highlight sparse groups of bright stars.
This explains why the density map is so full of detail. Many very faint but dense star clusters and nearby galaxies are easily seen. Features in the dust distribution also become prominent as they create pronounced apparent underdensities of stars.
It also explains why the striking Gaia scanning patterns in the density map are mostly absent in the flux-based map. As discussed in \cite{2016A&A...595A...2G}, the patterns are an effect of incompleteness which affects mostly the faint end of the survey. This is confirmed with the density map in the lower panel of Fig.~\ref{fig:allsky}, which was built from sources brighter than G=20 mag and shows many fewer scanning footprints.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{LMC_dens_zoom.png}
\includegraphics[width=0.45\textwidth]{LMC_flux_zoom.png}
\caption{$12\degr\times 10\degr$ density (left) and integrated flux (right) maps of the LMC.}\label{fig:lmc}
\end{figure*}
This illustrates how these density and flux-based maps provide complementary views, where one reveals structures that are not seen in the other. This is further illustrated in Fig.~\ref{fig:lmc}, which is a zoom into a field of $\sim 12\degr\times 10\degr$ centred on the LMC. Here the LMC bar and arms are seen differently in the two images. The density map displays scanning artefacts, especially in the bar, but also reveals many faint star clusters and clearly delineates the extent of the arms. The structure of the bar and the 30 Doradus region (above the centre of the images) are better revealed by the bright stars that dominate the flux-based image. It is worth noting that despite its photo-realism, this is not a photograph, but a visualisation of specific aspects of the contents of the DR1 catalogue.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{orion_dens.png}
\includegraphics[width=0.45\textwidth]{orion_dss.png}\\
\includegraphics[width=0.45\textwidth]{orion_flux.png}
\includegraphics[width=0.45\textwidth]{orion_2mass.png}
\caption{$15\degr\times 11\degr$ field centred on the Orion-A region. Density (top left) and integrated flux (bottom left) maps with DR1 data. Coloured DSS (top right) and 2MASS (bottom right) images of the same field. The DR1 images reveal a cat-like structure created by an extinction patch, highlighting how the mere positions in DR1 can already reveal structures not seen in currently available optical and near-infrared images.}\label{fig:orion}
\end{figure*}
While the astrophysical interest of Orion cannot be overstated, it also illustrates how, even though Gaia is an optical mission, the mere positions of the stars published in DR1 can reveal structures not yet seen in other surveys. Figure~\ref{fig:orion} shows four panels of a $\sim 15\degr\times 11\degr$ field centred on the Orion-A cloud. The two panels on the left are DR1 density (upper) and flux (lower) maps. The panels on the right are coloured DSS (upper) and 2MASS (lower) images of the same field. The sources in the DR1 maps delineate a distinctive extinction patch that closely resembles a cat\footnote{At public presentations, some members of the audience have suggested that it is a fox. We are currently re-analysing the data and taking a deeper look into this issue.} flying or jumping, stretched from left to right with both paws to the front. This structure is not seen in currently available optical and near-infrared images, except for the shiny `nose' and the cat's `left eye'.
Finally, large 16384$\times$8192 pixel density and integrated flux maps in a Cartesian projection have been produced for display in planetaria. They are currently employed in several Digistar\footnote{\url{https://www.es.com/digistar/}} planetaria around the world.
\section{Workflows}
\label{sec:workflows}
An online GAVS Quick Guide can be consulted under the `Help' menu button. The guide describes the full set of functionalities offered by GAVS. It includes explanations of the basic capabilities such as adding new visualisation panels, types of visualisations, presets, and configurations. It also covers more advanced features such as creating (and sharing) geometrical shapes for marking regions of interest, overlaying catalogues of objects, and generating ADQL visual queries.
This section gives a few examples of workflows using GAVS. More examples and details on the user interface can be found in the online guide.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{oph_center_banner.png}
\includegraphics[width=0.45\textwidth]{oph_region.png}\\
\includegraphics[width=0.45\textwidth]{oph_adql.png}
\caption{Workflow for producing a visual ADQL query. Top panel: the user selects a region on which to centre the view. Middle panel: the selected polygonal region is shown in green, with the `Regions' menu and its ADQL functionality also displayed. Bottom panel: the resulting ADQL query.}\label{fig:adql_oph}
\end{figure}
Figure~\ref{fig:adql_oph} illustrates a workflow for centring on a field of interest (using the Sesame name resolver), marking a region, and producing an ADQL query that can be pasted into the archive query interface. In the first step (top panel) the user clicks on the lens icon in the top right corner of the window and enters the name of a region or object (in this example, Ophiuchus). The visualisation window will centre the field on the region if the CDS Sesame service can resolve the name. Alternatively, instead of a name, central coordinates can be used as input. Afterwards, the user clicks on the `Regions' menu of the visualisation window in the top left, and selects a polygonal region or rectangular region. The user then creates the region using the mouse, e.g. by clicking on each vertex of the polygon, and closing the polygon at the end (middle panel). Finally, the user clicks again on the `Regions' menu and selects the `ADQL' option. This will result in the creation of an ADQL query that is presented to the user (bottom panel). Behind the scenes, the software validates the resulting ADQL query before presenting it to the user, ensuring that it is correctly constructed. The user can now run this query as is at the Gaia Archive search facility, or can customise it (e.g. to select which data columns to retrieve from the table or to perform a table join) before submitting it to the archive.
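The exact query text emitted by GAVS is not reproduced here, but a polygon selection of this kind typically reduces to a standard ADQL CONTAINS/POLYGON constraint on the source table. The following sketch (an illustration, not the GAVS implementation; the vertex values are made up) builds such a query from a list of vertices:

```python
def polygon_adql(vertices, table="gaiadr1.gaia_source"):
    """Build an ADQL query selecting all sources inside a sky polygon.

    vertices -- list of (ra, dec) pairs in degrees, in drawing order.
    """
    coords = ", ".join(f"{ra}, {dec}" for ra, dec in vertices)
    return (f"SELECT * FROM {table} "
            f"WHERE 1 = CONTAINS(POINT('ICRS', ra, dec), "
            f"POLYGON('ICRS', {coords}))")

# A triangular region near Ophiuchus (illustrative coordinates)
query = polygon_adql([(246.0, -23.0), (248.5, -23.0), (247.0, -25.5)])
print(query)
```

As in the GAVS workflow, the resulting string can then be pasted into the Gaia Archive search facility and customised, e.g. by restricting the selected columns.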
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{oph_simbad.png}
\caption{Archive and Simbad searches around an object of interest.}\label{fig:simbad_oph}
\end{figure}
Figure~\ref{fig:simbad_oph} illustrates the functionality for archive and Simbad cone searches around objects of interest, which also generates an ADQL query-by-identifier for the selected object. In any visualisation panel, when the user clicks twice (not a double-click) on an object, the system displays a dialog box with some options. These options, shown in Fig.~\ref{fig:simbad_oph}, identify the selected Source ID from DR1 and give the possibility of generating an ADQL query with this source ID for retrieving further information from the archive. This dialog box also gives the option of retrieving more information from CDS/Simbad or of generating an ADQL cone search query centred on the selected source.
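In the same hedged spirit as above, the two dialog-box options can be mimicked by ADQL templates: a query-by-identifier and a cone search centred on the selected source. The table and column names follow the DR1 archive schema; the queries GAVS actually generates may differ in detail:

```python
def source_queries(source_id, ra, dec, radius=0.1, table="gaiadr1.gaia_source"):
    """Return (query-by-identifier, cone-search) ADQL strings; radius in degrees."""
    by_id = f"SELECT * FROM {table} WHERE source_id = {source_id}"
    cone = (f"SELECT * FROM {table} "
            f"WHERE 1 = CONTAINS(POINT('ICRS', ra, dec), "
            f"CIRCLE('ICRS', {ra}, {dec}, {radius}))")
    return by_id, cone

# Illustrative source identifier and coordinates
by_id, cone = source_queries(123456789, 246.5, -24.0)
print(by_id)
print(cone)
```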
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{MW_region_cats.png}
\caption{User catalogue of open clusters, globular clusters, and nearby dwarf galaxies overlaid on a scatter plot of the DR1 sources in galactic coordinates.}\label{fig:reg_cat}
\end{figure*}
Figure~\ref{fig:reg_cat} shows catalogues of open clusters \citep[][ Version 3.5, Jan 2016]{2002A&A...389..871D}, globular clusters \citep{1996AJ....112.1487H}, and nearby dwarf galaxies \citep{2012AJ....144....4M} uploaded by the user and overlaid on a scatter plot of the DR1 sources in Galactic coordinates. To upload a catalogue of regions, the user must click on the `Regions' menu at the top left of the visualisation window, then select the `Load Regions' option, and select the region file to be uploaded (we note that the user can save any region created earlier by using the `Save Regions' option). To overplot points, the region catalogues must be formatted as \emph{region} files with points. An example file can be copied from the region file built from the catalogue of open clusters\footnote{\url{https://gaia.esac.esa.int/gdr1visapp/help/MW_OpenClusters.reg}}.
\section{Other Gaia oriented visualisation tools}
\label{sec:other}
This section provides a small list of applications or services that were identified as having been developed or improved with the exploration of Gaia data in mind, and that offer features complementing those of the Archive Visualisation service. It is not intended to be a complete survey of visualisation tools for exploring Gaia data.
\paragraph{CDS/Aladin} The CDS has an area dedicated to Gaia\footnote{\url{http://cdsweb.u-strasbg.fr/gaia}}. In particular, a page for exploring a DR1 source density HiPS map in Aladin Lite, with optional overlay of individual sources from DR1, TGAS, and SIMBAD is offered\footnote{\url{http://cds.unistra.fr/Gaia/DR1/AL-visualisation.gml}}. The HiPS density map with different HEALPix NSIDE builds can also be downloaded\footnote{\url{http://alasky.u-strasbg.fr/footprints/tables/vizier/I_337_gaia}} .
\paragraph{ESASky}\footnote{\url{http://sky.esa.int}} Gaia catalogues are available in ESASky \citep{Baines2016}, allowing users to visually compare them with science catalogues from other ESA missions in an easy way. Users can overplot the Gaia DR1 and TGAS catalogues on top of any image from Gamma-ray to radio wavelengths, click on any single source in the image to identify it in a simplified results table below, and retrieve the selected table as a CSV file or as a VOTable. To cope with potentially slow retrieval times for large fields of view, the resulting table from any search is sorted by median G-magnitude; only the first 2000 sources are returned, and it is not possible to select more than 50,000 sources. In the future, ESASky will develop a more sophisticated way to display many sources for large fields of view.
\paragraph{Gaia-Sky}\footnote{\url{https://zah.uni-heidelberg.de/gaia/outreach/gaiasky/}} is a Gaia-focused 3D universe tool intended to run on desktop and laptop systems; its main aim is the production of outreach material. Gaia-Sky provides a state-of-the-art 3D interactive visualisation of the Gaia catalogue and offers a comprehensive way to visually explain different aspects of the mission. The latest version contains over 600,000 stars (the stars with relevant parallaxes in TGAS). The upcoming versions will be able to display the 1 billion sources of the final Gaia catalogue. The application features different object types such as stars (which can be displayed with their proper motion vectors), planets, moons, asteroids, orbit lines, trajectories, satellites, and constellations. The pace and direction of time can also be tuned interactively via a time warp factor. Graphically, it makes use of advanced rendering techniques and shading algorithms to produce appealing imagery. Internally, the system uses an easily extensible event-driven architecture and is scriptable via Python through a high-level Application Programming Interface (API). Different kinds of data sets and objects can also be loaded into the program in a straightforward manner thanks to a simple and human-readable JSON-based format. The system is 3D ready and features four different stereoscopic profiles (cross-eye, parallel view, anaglyphic, and 3DTV); it offers a planetarium mode able to render videos for full-dome systems and a newly added $360{\degr}$ panorama setting which displays the scene in all viewing directions interactively. Gaia-Sky is an open-source, multi-platform project; builds are provided for Linux (RPM, DEB, AUR), Windows (32 and 64 bit versions), and OS X.
\paragraph{TOPCAT}\citep{2005ASPC..347...29T} is a desktop Graphical User Interface (GUI) application for manipulation of source catalogues, widely used to analyse astronomical data\footnote{\url{http://www.star.bris.ac.uk/~mbt/topcat/}}. One of its features is a large and growing toolkit of highly flexible 1D, 2D, and 3D visualisation options, intended especially for interactive exploration of high-dimensional tabular data. It is suitable for interactive use with hundreds of columns and up to a few million rows on a standard desktop or laptop computer. It can thus work with the whole of the TGAS subset, but not the whole Gaia source catalogue. The visualisation capabilities are also accessible from the corresponding command-line package, STILTS \citep{2006ASPC..351..666T}, which can additionally stream data to generate visualisations from arbitrarily large data sets provided there is enough computer power.
None of TOPCAT's visualisation capabilities are specific to the Gaia mission, but part of the development work has been carried out within the DPAC, and has accommodated visualisation requirements arising from both preparation and anticipated exploitation of the Gaia catalogue. New features stimulated to date by the requirements of Gaia data analysis include improved control of colour maps; options for assembling, viewing, and exporting HEALPix maps with various aggregation modes; options to view pre-calculated 2D density maps, for instance produced by database queries; improved vector representations, for instance to depict proper motions; plots that trace requested quantiles of noisy data; and Gaussian fitting to histograms. Though developed within the context of Gaia data analysis, all these features are equally applicable to other existing and future data sets.
\paragraph{Vaex}\citep{2016arXiv161204183B} is a visualisation desktop/laptop tool written with the goal of exploring Gaia data\footnote{\url{https://www.astro.rug.nl/~breddels/vaex/}}. It can provide interactive statistical visualisations of over a billion objects in the form of 1D histograms, 2D density plots, and 3D volume renderings. It allows large data volumes to be visualised by computing statistical quantities on a regular grid and displaying visualisations based on those statistics. From the technical point of view, Vaex operates as an HDF5 viewer that exploits the possibilities of memory mapping those files and binning the stored data prior to rendering and display. However, the full exploration of over a billion objects requires that the variables of each plot all be loaded in memory. This means that high-end machines with large amounts of RAM are needed for multi-panel visual exploration of the full DR1. Vaex also operates as a Python library.
\paragraph{Glue}\citep{2015ASPC..495..101B} is a Python library for interactive visual data exploration. While not specifically developed for Gaia, a number of uncommon features make Glue deserve a special mention here. It supports the analysis of related data sets spread across different files: a common need of astronomers when analysing data from various sources, including their own observations. A key characteristic is the ability to create linked views between visualisations of different types of files (images and catalogues).
Glue offers what the authors call \emph{hackable user interfaces}. This means providing GUIs, which are better for interactive visual exploration, and an API, which is better suited to expressing and automating the creation of visualisations, allowing simple integration in Python notebooks, scripts, and programs. Among other features, Glue also provides advanced capabilities of 3D point cloud selection and support for plug-ins.
\section{Concluding remarks and future developments}
\label{sec:conclusions}
Online, fully interactive, and free visual exploration of Gaia-sized archives, down to each of the more than $10^9$ individual entries, was not offered by any service in the world. This scenario has changed with the Gaia Archive Visualisation Service for Data Release 1 presented in this paper. In addition to being used for scientific data exploration and public presentation, the GAVS has also been employed in the validation of DR1 \citep{2017A&A...599A..50A}.
The software architecture, design, and implementation have proved highly stable throughout the past months, and are capable of serving a fully interactive visual exploration environment of the Gaia archive to thousands of users.
This work has also extended the application of visual abstractions of astronomical data sets. It introduces for the first time the simple but powerful concept of a {\it visual query}, which directly generates ADQL queries from visual abstractions of the data and tables. It effectively enables any researcher to create complex queries, which can later be executed against ADQL-compliant databases such as the one provided by the Gaia Archive. This concept will be evolved in the future to enable even more complex queries, performed through multiple tables, to be built with no knowledge of the ADQL and SQL languages.
Gaia Data Release 2 will bring a multitude of new parameters. From the astrometric point of view, proper motions, and parallaxes for most of the more than one billion objects will be available. By building on top of concepts and prototypes developed during an exploratory ESA project (code name IVELA: Interactive Visualisation Environment for Large Archives), future versions of the Visualisation Service will provide 3D point cloud interactive visualisation, allowing a fully online 3D navigation and exploration of the release contents.
The future versions of GAVS will also bring useful features such as annotation tools, various image formats for exportation, and new plot types, such as 2D raster plots (e.g. histograms, density plots) and specialised panels for the analysis of time series, all prepared for very large data sets. Triggering of visualisation pre-computations by users is under analysis.
Also planned is the extension of GAVS to serve visualisation data (indexes, levels of detail, linked views, and more metadata) to other applications beyond the current web client/portal. The baseline is currently a REST API, with wrappers planned for Python and other languages.
Advanced data analysis will then become possible with tools (e.g. Glue) that otherwise would not be able to handle the volume of the Gaia Archive, but the data feeds do not have to be limited to data analysis frameworks. The success of DR1 has demonstrated the high level of interest among the general public in the Gaia mission. It would thus also be natural to feed education and outreach Universe exploration tools such as Gaia-Sky or the World Wide Telescope\footnote{\url{http://www.worldwidetelescope.org/}}.
Finally, further ahead, a future which includes a deeper articulation with virtual organisations such as the Virtual Observatory seems unavoidable. In the light of this paradigm, it is only natural that code should be brought close to the data, and not the other way around. Accordingly, there have been studies, designs, and developments of platforms such as the Gaia Added Value Platform (GAVIP) \citep{GAVIP2016}, which would allow codes, for instance Python, to run near the Gaia data. The Visualisation functionality of the GAVIP platform, known as the Gaia Added Value Interface for Data Visualisation (GAVIDAV), has been developed in close contact with the Gaia Archive Visualisation Services. This proximity will enable any application running on such a platform, and thus near the Gaia Archive, to profit from many of the large-data visualisation capabilities of the tools described in this paper, thus bringing the power of visually exploring billions of database entries to the hands of {\it any} astronomer or human being, regardless of the levels of resources available in their country or institution.
\begin{acknowledgements}
The authors greatly appreciated the constructive comments by the referee, Alyssa Goodman.
This work has made use of results from the European Space Agency (ESA) space mission Gaia, whose data were processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. The Gaia mission website is \url{http://www.cosmos.esa.int/gaia}. The authors are current or past members of the ESA Gaia mission team and of the Gaia DPAC. This work has received financial support from the European Commission's Seventh Framework Programme through the grant FP7-606740 (FP7-SPACE-2013-1) for the Gaia European Network for Improved data User Services (GENIUS); from the Portuguese Funda\c c\~ao para a Ci\^encia e a Tecnologia (FCT) through grants PTDC/CTE-SPA/118692/2010, PDCTE/CTE-AST/81711/2003, and SFRH/BPD/74697/2010; from the Portuguese Strategic Programmes PEstOE/AMB/UI4006/2011 for SIM, UID/FIS/00099/2013 for CENTRA, and UID/EEA/00066/2013 for UNINOVA; from the ESA contracts ESA/RFQ/3-14211/14/NL/JD and ITT-AO/1-7094/12/NL/CO Ref:B00015862. This research has made use of the Set of Identifications, Measurements, and Bibliography for Astronomical Data \citep{Wenger2000} and of the ``Aladin sky atlas'' \citep{2000A&AS..143...33B,2014ASPC..485..277B}, which are developed and operated at the Centre de Donn\'ees astronomiques de Strasbourg (CDS), France.
XL acknowledges support by the MINECO (Spanish Ministry of Economy) - FEDER through grant ESP2014-55996-C2-1-R, MDM-2014-0369 of ICCUB (Unidad de Excelencia `Mar\'ia de Maeztu').
This publication made use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Thomas Boch and Eric Mandel are warmly thanked for their always friendly assistance in integrating the Aladin Lite and JS9 plug-ins, respectively.
\end{acknowledgements}
\section{Introduction}
\setcounter{equation}{0}
The celebrated KAM theory was established by Kolmogorov \cite{Ko}, Arnold \cite{Ar}, and Moser \cite{Mo}. It mainly concerns the stability of motions or orbits in dynamical systems under small perturbations and indeed has a long history of over sixty years. So far, as is well known, KAM theory has been widely extended to various systems, such as volume-preserving flows due to Broer et al \cite{Br}, generalized Hamiltonian systems due to Parasyuk \cite{Pa} and Li and Yi \cite{L1,L2}, finitely differentiable Hamiltonians due to Salamon \cite{Sa}, Bounemoura \cite{Ab}, and Koudjinan \cite{CK}, Gevrey Hamiltonians due to Popov \cite{Po}, and multiscale dynamical systems due to Qian et al \cite{QW}. For other related results, see for instance \cite{QL,CQ, MR1938331,MR1302146,MR1483538,MR1843664,MR1354540}. In studying discrete dynamical systems, Moser \cite{Mo} first studied the persistence of invariant tori of twist mappings with a perturbation only in the action variable, namely of the following form
\[ \mathcal{G}:
\begin{cases}
\theta^{1}=\theta+r,\\
r^{1}=r+ g(\theta,r),
\end{cases}\]
where $ g $ is assumed to be of class $ C^{333} $. He stated a differentiable invariant curve theorem, which is also of great importance for the study of the stability of periodic solutions. An analytic invariant curve theorem was provided in \cite{Si} by Siegel and Moser. Concentrating on the finitely differentiable case, Svanidze established a KAM theorem for twist mappings in \cite{SV}. The version of class $C^{\alpha}$ with $\alpha>4$ and the optimal situation of class $C^{\alpha}$ with $\alpha>3$ for mappings on the annulus were
due to R\"ussmann \cite{Ru} and Herman \cite{He,H1}, respectively. For relevant works on the existence of invariant tori of voluming-preserving mappings, see Cheng and Sun \cite{CS} $(3{\rm -dimensional})$, Feng and Shang \cite{FS}, Shang \cite{MR1790659} and Xia \cite{XZ} $(n\geq 3)$. Cong et al \cite{CL} gave the persistence of the invariant tori when considering mappings with the intersection property, which has different numbers of actions and angular variables. Levi and Moser \cite{ML} gave a Lagrange proof of the invariant curve theorem for twist mappings using the method introduced by Moser \cite{M1}. The invariant curve theorem for quasi-periodic reversible mappings was studied by Liu \cite{LB}. For some reversible mappings, see Sevryuk's book \cite{Se}. Recently, Yang and Li \cite{YL}, Zhao and Li \cite{ZL} have extended the existence of invariant tori to resonance surfaces of twist mappings and multiscale mappings, respectively. Apart from above, Liu and Xing \cite{LX} presented a new proof of Moser's theorem for twist mappings with a parameter.
Among the KAM theory for Hamiltonian systems, the preservation of a prescribed frequency is also important in studying the invariance of dynamics under small perturbations, see Salamon \cite{Sa} for instance, especially without certain nondegeneracy conditions such as the Kolmogorov or R\"ussmann conditions, see Du et al \cite{DL} and Tong et al \cite{TD}. However, as is generally known, the frequency of a dynamical system may drift under the perturbations during the KAM iteration, and therefore frequency-preserving is indeed difficult to achieve.
\textit{To the best of our knowledge, there are no KAM results for twist mappings on this aspect; no one knows whether the prescribed frequency could be preserved for an invariant torus.} In this paper, we will touch this question. To this end, it is necessary to propose some transversality conditions involving a topological degree condition as well as a certain weak convexity condition to overcome the drift of frequency, see \cite{DL, TD} and the references therein. Based on them, we will establish the KAM persistence with frequency-preserving for twist mappings with the intersection property. The so-called intersection property is that any torus close to the invariant torus of the unperturbed system intersects its image under the mapping. More precisely, denote by $\mathbb{T}^{n}=\mathbb{R}^{n}/ 2\pi\mathbb{Z}^{n}$ the $n$-dimensional torus, and let $E\subset \mathbb{R}^{n}$ be a connected closed bounded domain with interior points. Then consider the twist mapping $\mathscr{F}:\mathbb{T}^{n}\times E\rightarrow \mathbb{T}^{n}\times\mathbb{R}^{n}$ with the intersection property
\begin{equation}\label{equation-1}
\mathscr{F}:
\begin{cases}
\theta^{1}=\theta+\omega(r)+\varepsilon f(\theta,r,\varepsilon),\\
r^{1}=r+\varepsilon g(\theta,r,\varepsilon),
\end{cases}
\end{equation}
where the perturbations $f$ and $g$ are real analytic in $(\theta,r)$ on $\mathbb{T}^{n}\times E$, $\omega$ is assumed to be only continuous in $r$ on $E$, and $ \varepsilon $ is a sufficiently small scalar. By introducing a parameter translation technique, we prove in Theorem \ref{theorem-1} the persistence of invariant tori of such a family of twist mappings with the frequency unchanged under small perturbations; as a byproduct, \textit{this gives rise to the first result for Moser's theorem with frequency-preserving.} Moreover, using a similar approach, we also investigate the perturbed mapping $\mathscr{F}:\mathbb{T}^{n}\times \Lambda\rightarrow \mathbb{T}^{n}$ with parameter
\begin{equation}\label{intro2}
\mathscr{F}: \theta^{1}=\theta+\omega(\xi)+\varepsilon f(\theta,\xi,\varepsilon),
\end{equation}
where $ \Lambda $ is a domain of the same type as $ E $, and $ \varepsilon $ is a sufficiently small scalar. The perturbation $ f $ is analytic in $ \theta $ on $ \mathbb{T}^n $, and \textit{only continuity} with respect to the parameter $ \xi \in \Lambda $ is assumed for $ f $ and $ \omega $. \textit{Under such weak settings, we show the unexpected frequency-preserving KAM persistence via transversality conditions in Theorem \ref{theorem-2}.} As an explicit example, one could deal with irregular perturbations, such as nowhere differentiable systems.
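To make the setting concrete, the following minimal numerical sketch (for illustration only; the trigonometric perturbations $f$ and $g$ below are arbitrary choices, not those of the theorems) iterates a one-dimensional instance of mapping \eqref{equation-1}. For $\varepsilon=0$ the action $r$ is an exact invariant of the twist map, while for $\varepsilon\neq 0$ it may drift, which is precisely the phenomenon that a frequency-preserving KAM theorem must control:

```python
import math

def twist_step(theta, r, eps):
    """One iteration of theta' = theta + omega(r) + eps*f, r' = r + eps*g
    with omega(r) = r and illustrative perturbations f = sin, g = cos."""
    theta1 = (theta + r + eps * math.sin(theta)) % (2 * math.pi)
    r1 = r + eps * math.cos(theta)
    return theta1, r1

def orbit_end(theta0, r0, eps, n=1000):
    th, r = theta0, r0
    for _ in range(n):
        th, r = twist_step(th, r, eps)
    return th, r

# Unperturbed twist map: the action is exactly invariant
_, r_end = orbit_end(0.3, 1.234, eps=0.0)
print(r_end)  # 1.234

# Perturbed map: the action drifts by O(eps) per step
_, r_pert = orbit_end(0.3, 1.234, eps=1e-3)
print(abs(r_pert - 1.234))
```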
This paper is organized as follows. Section \ref{SEC2} introduces some basic notions on the modulus of continuity. In Section \ref{SEC3}, we state Theorem \ref{theorem-1} on frequency-preserving for the twist mapping \eqref{equation-1} satisfying the intersection property. When $n=1$, we obtain Moser's invariant curve theorem with frequency-preserving, see Corollary \ref{cor-1}. Theorem \ref{theorem-2} concerns mapping \eqref{intro2} with only angular variables and shows the persistence of an invariant torus with frequency-preserving, where the perturbation $f(\theta,\xi,\varepsilon)$ is real analytic in $\theta$ and continuous in the parameter $\xi$, and the frequency $\omega(\xi)$ is also continuous in $\xi$. To emphasize the weak regularity, we provide Corollary \ref{COROMH}, in which the dependence on the parameter is nowhere H\"older continuous. Some discussions involving the \textit{parameter without dimensional limitation problem} are also given in Section \ref{SEC3}. One cycle of KAM steps, from the $\nu$-th to the $(\nu+1)$-th step, is shown in Section \ref{SEC4}. In more detail, instead of digging out a series of decreasing domains for the frequency, we construct a translation to keep the frequency unchanged during the iterative process. In addition, we have to construct a conjugate mapping to overcome the loss of the intersection property. Finally, Section \ref{SEC5} is devoted to the proof of our main results.
\section{Preliminaries}\label{SEC2}
\setcounter{equation}{0}
To describe only continuity, we first introduce some definitions in this section, involving the modulus of continuity and the norm based on it.
\begin{definition}\label{def-1}
A modulus of continuity $\varpi(x)$ is a strictly monotonically increasing continuous function on $\mathbb{R}^{+}$ that satisfies
\begin{equation*}
\lim_{x\rightarrow 0^{+}}\varpi(x)=0,
\end{equation*}
and
\begin{equation*}
\varlimsup_{x\rightarrow 0^{+}}\frac{x}{\varpi(x)}<+\infty.
\end{equation*}
\end{definition}
\begin{definition}
Let a modulus of continuity $ \varpi $ be given. A function $f(x)$ is said to be $\varpi$ continuous in $ x $ if
\begin{equation*}
|f(x)-f(y)|\leq \varpi(|x-y|),\qquad \forall\ 0<|x-y|\leq 1.
\end{equation*}
\end{definition}
It is well known that a mapping defined on a bounded connected closed set in a finite dimensional space must admit a modulus of continuity, see \cite{Herman3,KO}. For example, for a function $ f(x) $ defined on $ [0,1] \subset {\mathbb{R}^1} $, it automatically admits a modulus of continuity
\[{\varpi _{f }}\left( x \right): = \mathop {\sup }\limits_{s,t \in \left[ {0,1} \right],\,0 < \left| {s - t} \right| \leq x } \left| {f\left( s \right) - f\left( t \right)} \right|.\]
We therefore only concentrate on modulus of continuity throughout this paper, especially in Theorem \ref{theorem-2}.
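Such an automatic modulus can be estimated numerically. The sketch below (grid-based, hence only an approximation of the supremum) computes $\sup\{|f(s)-f(t)| : s,t\in[0,1],\ |s-t|\leq\delta\}$ for $f(x)=\sqrt{x}$, whose modulus of continuity is exactly $\sqrt{\delta}$ by concavity:

```python
import math

def modulus_estimate(f, delta, n=2000):
    """Grid estimate of sup |f(s)-f(t)| over s,t in [0,1] with |s-t| <= delta."""
    xs = [i / n for i in range(n + 1)]
    k = max(1, int(delta * n))        # number of grid steps within distance delta
    best = 0.0
    for i in range(n + 1):
        for j in range(i + 1, min(i + k, n) + 1):
            best = max(best, abs(f(xs[i]) - f(xs[j])))
    return best

# For f = sqrt the estimate matches sqrt(delta), attained at the pair (0, delta)
for delta in (0.25, 0.04, 0.01):
    print(delta, modulus_estimate(math.sqrt, delta), math.sqrt(delta))
```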
Next we will introduce the comparison relation between the strength and the weakness of modulus of continuity.
\begin{definition}\label{DE2.3}
Assume $\varpi_{1}$, $\varpi_{2}$ are two moduli of continuity. We say that $\varpi_{1}$ is not weaker than $\varpi_{2}$ if
\begin{equation*}
\varlimsup_{x\rightarrow 0^{+}}\frac{\varpi_{1}(x)}{\varpi_{2}(x)}<+\infty,
\end{equation*}
and denote it as $\varpi_{1}\leq \varpi_{2}$ $($or $\varpi_{2}\geq \varpi_{1})$.
\end{definition}
\begin{remark}\label{remark-1}
\begin{itemize}
\item[(a)]If the function $f$ is real analytic in $x$ on a bounded closed set, then $f$ is in particular continuously differentiable in $x$. Obviously, one has $|f(x)-f(y)|\leq c|x-y|$ for some $c>0$ independent of $x, y$; that is, there exists a modulus of continuity $\varpi_{1}(x)=x$ with
\begin{equation*}
|f(x)-f(y)|\leq c\varpi_{1}(|x-y|),\qquad \forall \ 0<|x-y|\leq 1.
\end{equation*}
\item[(b)]The classical $ \alpha $-H\"older case corresponds to the modulus of continuity $ \varpi_{\rm H}^\alpha(x) = x^\alpha $ with some $ 0<\alpha<1 $, while the Logarithmic Lipschitz case $\varpi_{\rm {LL}}(x)\sim (-\log x)^{-1}$ as $ x \to 0^+ $ is weaker than any $ \alpha $-H\"{o}lder continuity, that is, $ \varpi_{\rm H}^\alpha(x) \leq \varpi_{\rm {LL}}(x) $. Both of them characterise regularity weaker than Lipschitz.
\end{itemize}
\end{remark}
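As a quick check of the comparison in (b) via Definition \ref{DE2.3}, one computes
\begin{equation*}
\varlimsup_{x\rightarrow 0^{+}}\frac{\varpi_{\rm H}^{\alpha}(x)}{\varpi_{\rm {LL}}(x)}=\varlimsup_{x\rightarrow 0^{+}}x^{\alpha}(-\log x)=0<+\infty,
\end{equation*}
so indeed $\varpi_{\rm H}^{\alpha}\leq \varpi_{\rm {LL}}$ for every $0<\alpha<1$.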
It should be pointed out that, from the perspective of Baire category, the regularity of most functions is indeed very weak; for instance, they are nowhere differentiable, and the regularity can be even worse than that. More precisely, we present the following theorem constructing very weak continuity; see Theorem 7.2 in \cite{TD} for details.
\begin{theorem}\label{nonowh}
Given a modulus of continuity $ {\varpi _1} $, there exists a function $ f $ (actually, a family) on $ \mathbb{R} $ and a modulus of continuity $ {\varpi _2} \geq {\varpi _1}$, such that $ f $ is $ {\varpi _2} $ continuous, but nowhere $ {\varpi _1} $ continuous.
\end{theorem}
\begin{remark}
Such functions are usually constructed via trigonometric series admitting self-similarity, in the spirit of the Weierstrass function.
\end{remark}
\begin{remark}\label{noholder}
As a direct application, we can construct a family of functions which are nowhere Lipschitz, or even nowhere H\"older continuous.
\end{remark}
Finally, in order to specify the norm based on a modulus of continuity, we describe the domains of the variables in detail. Throughout this paper, let
\begin{align*}
D(h)&:=\{\theta\in\mathbb{C}^{n}: {\rm Re}\ \theta\in\mathbb{T}^{n},\ |{\rm Im}\ \theta|\leq h\},\\
G(s)&:=\{r\in \mathbb{C}^{n}:{\rm Re}\ r\in E, \ | {\rm Im}\ r|\leq s\}
\end{align*}
be the complex neighborhoods of $\mathbb{T}^{n}$ and $E$ for given $h,s>0$. For each vector $r=(r_{1},\cdots,r_{n})\in\mathbb{R}^{n}$, we denote by $|r|$ the $l^{1}$-norm of $r$:
\begin{equation*}
|r|=|r_{1}|+\cdots+|r_{n}|.
\end{equation*}
Also, for ease of notation, we write
$\mathcal{D}(h,s):=D(h)\times G(s)$. Next we introduce the norm defined as follows.
\begin{definition}
For a perturbation function $f(\theta,r,\varepsilon)$ which is real analytic about $(\theta,r)\in\mathcal{D}(h,s)$, and hence, by Remark \ref{remark-1}, $\varpi_{1}(x)=x$ continuous about $r$, define its norm as follows:
\begin{equation*}
||f||_{\mathcal{D}(h,s)}:=|f|_{\mathcal{D}(h,s)}+[f]_{\varpi_{1}},
\end{equation*}
where
\begin{equation*}
|f|_{\mathcal{D}(h,s)}=\sup_{(\theta,r)\in \mathcal{D}(h,s)}|f(\theta,r,\varepsilon)|,
\end{equation*}
and
\begin{equation*}
[f]_{\varpi_{1}}=\sup_{\theta\in D(h)}\sup_{\substack{r',r''\in G(s)\\0<|r'-r''|\leq 1}}\frac{|f(\theta,r',\varepsilon)-f(\theta,r'',\varepsilon)|}{\varpi_{1}(|r'-r''|)}.
\end{equation*}
\begin{remark}
As to weaker continuity described by a certain modulus of continuity $ \varpi \geq \varpi_1$, one only needs to replace $ \varpi_1 $ by $ \varpi $ in the norm accordingly.
\end{remark}
\end{definition}
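To illustrate the norm on a concrete model (chosen here only for illustration), take $n=1$ and the perturbation $f(\theta,r,\varepsilon)=r\sin\theta$. Since $|\sin\theta|\leq \cosh h$ on $D(h)$ and
\begin{equation*}
|f(\theta,r',\varepsilon)-f(\theta,r'',\varepsilon)|=|\sin\theta|\,|r'-r''|=|\sin\theta|\,\varpi_{1}(|r'-r''|),
\end{equation*}
one obtains $[f]_{\varpi_{1}}\leq \cosh h$ and $|f|_{\mathcal{D}(h,s)}\leq \cosh h\sup_{r\in G(s)}|r|$, hence $||f||_{\mathcal{D}(h,s)}\leq \cosh h\,\big(1+\sup_{r\in G(s)}|r|\big)$.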
\section{Main results}\label{SEC3}
This section is divided into two parts: we first state our main KAM results and then give some further discussions.
\subsection{Frequency-preserving KAM}
\setcounter{equation}{0}
Before starting, let us make some preparations. Following Remark \ref{remark-1}, there exists a modulus of continuity $\varpi_{1}(x)=x$ such that $f$ and $g$ are automatically $\varpi_{1}$ continuous about $r$. Besides, the following assumptions are crucial to our KAM theorems.
\begin{itemize}
\item[{\rm (A1)}] Let $p\in \mathbb{R}^{n}$ be given in advance and denote by $E^{\circ}$ the interior of $E$. Assume that
\begin{equation}\label{A1}
\deg\left(\omega(\cdot),E^{\circ},p\right)\neq 0.
\end{equation}
\item[{\rm (A2)}] Assume that $\omega(r_{*})=p$ with some $r_{*}\in E^{\circ}$ by \eqref{A1}, and
\begin{equation*}
|\langle k,\omega(r_{*})\rangle -k_{0}|\geq \frac{\gamma}{|k|^{\tau}},\qquad \forall\ k\in \mathbb{Z}^{n}\backslash \{0\},\ k_{0}\in\mathbb{Z}, \ |k_{0}|\leq M_{0}|k|,
\end{equation*} where $\gamma>0$, $\tau>n-1$ is fixed, and $M_{0}$ is assumed to be the upper bound of $|\omega|$ on $E$.\\
\item[{\rm (A3)}] Assume that $B(r_{*},\delta)\subset E^{\circ}$ with $ \delta>0 $ is a neighborhood of $r_{*}$. There exists a modulus of continuity $\varpi_{2}$
such that
\begin{equation*}
|\omega(r')-\omega(r'')|\geq \varpi_{2}(|r'-r''|),\qquad r',r''\in B(r_{*},\delta), \quad 0<|r'-r''|\leq 1.
\end{equation*}
\end{itemize}
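As an illustration of ${\rm (A1)}$-${\rm (A3)}$ (this model is not needed later), consider the frequency mapping $\omega(r)=r$ on $E=\overline{B(0,1)}$ and any $p\in E^{\circ}$ satisfying the Diophantine condition in ${\rm (A2)}$. Then
\begin{equation*}
\deg\left(\omega(\cdot),E^{\circ},p\right)=\deg\left({\rm id},E^{\circ},p\right)=1\neq 0,\qquad |\omega(r')-\omega(r'')|=|r'-r''|,
\end{equation*}
so ${\rm (A1)}$ holds, ${\rm (A2)}$ holds with $r_{*}=p$, and ${\rm (A3)}$ holds with $\varpi_{2}(x)=x$.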
Under these assumptions, we are now in a position to present the following KAM theorem for twist mappings with the intersection property, \textit{which is, to the best of our knowledge, the first frequency-preserving result on Moser's theorem.}
\begin{theorem}\label{theorem-1} Consider mapping \eqref{equation-1} with the intersection property. Assume that the perturbations are real analytic about $(\theta,r)$, the frequency $\omega$ is continuous about $r$, and ${\rm (A1)}$-${\rm (A3)}$ hold. Then there exists a sufficiently small $\varepsilon_{0}>0$ such that, for $0<\varepsilon<\varepsilon_{0}$, there is a transformation $\mathscr{W}$ conjugating $\mathscr{F}$ to $\hat{\mathscr{F}}$, where $\hat{\mathscr{F}}(\theta,r)=(\theta+\omega(r_{*}),r-\tilde{r})$ is the integrable rotation on $\mathbb{T}^{n}\times E$ with frequency $\omega(r_{*} )=p$. Here $\tilde{r}$ is the translation of the action $r$ resulting from the transformation $\mathscr{W}$, and the constant $\tilde{r} \rightarrow 0$ as $\varepsilon\rightarrow0$. That is, the following holds:
\begin{equation*}
\mathscr{W}\circ\hat{\mathscr{F}}=\mathscr{F}\circ\mathscr{W} .
\end{equation*}
\end{theorem}
When $n=1$, consider the area-preserving mapping of the form \eqref{equation-1}, which obviously satisfies the intersection property. Correspondingly, we obtain a frequency-preserving version of Moser's invariant curve theorem, as stated in the following corollary.
\begin{corollary}\label{cor-1}
Consider mapping \eqref{equation-1} for $n=1$. Assume that the perturbations are real analytic about $(\theta,r)$, the frequency $\omega$ is continuous and strictly monotonic with respect to $r$, and $({\rm A2})$ and $({\rm A3})$ hold. Then there exists a sufficiently small $\varepsilon_{0}>0$ such that, for $0<\varepsilon<\varepsilon_{0}$, there is a transformation $\mathscr{W}$ conjugating $\mathscr{F}$ to $\hat{\mathscr{F}}$, where $\hat{\mathscr{F}}(\theta,r)=(\theta+\omega(r_{*}),r-\tilde{r})$ is the integrable rotation on $\mathbb{T}\times E$ with frequency $\omega(r_{*})=p$ for fixed $p\in \omega(E^\circ)^\circ$. Here $\tilde{r}$ is the translation of $r$ resulting from the transformation $\mathscr{W}$, and the constant $\tilde{r}\rightarrow 0$ as $\varepsilon\rightarrow 0$. That is, the following holds:
\begin{equation*}
\mathscr{W}\circ\hat{\mathscr{F}}=\mathscr{F}\circ\mathscr{W} .
\end{equation*}
\end{corollary}
Besides twist mappings with action-angle variables, Herman \cite{He,H1} first considered smooth mappings that contain only angular variables. This inspires us to investigate perturbed mappings on $ \mathbb{T}^n $ as well. We therefore consider the following mapping $\mathscr{F}:\mathbb{T}^{n}\times \Lambda\rightarrow \mathbb{T}^{n}$ defined by
\begin{equation}\label{equation-3.1}
\theta^{1}=\theta+\omega(\xi)+\varepsilon f(\theta,\xi,\varepsilon),
\end{equation}
where $\theta\in\mathbb{T}^{n}=\mathbb{R}^{n}/ 2\pi\mathbb{Z}^{n}$, $\xi\in\Lambda\subset\mathbb{R}^{n}$ is a parameter, $\Lambda$ is a connected closed bounded domain with interior points, and $ \varepsilon $ is a sufficiently small scalar. Assume that the perturbation $f(\theta,\xi,\varepsilon)$ is real analytic about $\theta$, continuous about the parameter $\xi$, and the frequency $\omega(\xi)$ is continuous about $\xi$. We will prove that mapping \eqref{equation-3.1} has an invariant torus with the frequency unchanged during the iteration process. Moreover, the assumptions ${\rm (B1)}$-${\rm (B3)}$ corresponding to ${\rm (A1)}$-${\rm (A3)}$ are respectively
\begin{itemize}
\item[{\rm (B1)}] Let $q\in \mathbb{R}^{n}$ be given in advance and denote by $\Lambda^{\circ}$ the interior of the parameter set $\Lambda$. Assume that
\begin{equation}\label{eq-3.3}
\deg\left(\omega(\cdot),\Lambda^{\circ},q\right)\neq 0.
\end{equation}
\item[{\rm (B2)}] Assume that $\omega(\xi_{*})=q$ with some $\xi_*\in \Lambda^{\circ}$ by \eqref{eq-3.3}, and
\begin{equation*}
| \langle k,\omega(\xi_{*})\rangle-k_{0}|\geq \frac{\gamma}{| k|^{\tau}}, \qquad \forall k\in \mathbb{Z}^{n}\backslash\{0\},\ k_{0}\in\mathbb{Z}, \ |k_{0}|\leq M_{1}| k|,
\end{equation*}
where $\tau>n-1$, $\gamma>0$ and $M_{1}$ is assumed to be the upper bound of $|\omega|$ on $\Lambda$.\\
\item[{\rm (B3)}] Assume that $B(\xi_{*},\delta)\subset \Lambda^{\circ}$ with $\delta>0$ is a neighborhood of $\xi_{*}$. There exists a modulus of continuity $\varpi_{2}$ with $\varpi_{1}\leq \varpi_{2}$ such that
\begin{equation*}
| \omega(\xi')-\omega(\xi'')|\geq \varpi_{2}(| \xi'-\xi''|),\qquad \xi',\xi''\in B(\xi_{*},\delta),\qquad 0<|\xi'-\xi''|\leq1.
\end{equation*}
\end{itemize}
Similar to Theorem \ref{theorem-1}, we have the following theorem on $ \mathbb{T}^n $, {\textit{where the parameter-dependence of the perturbation is shown to be only continuous.}} This result is new, and unexpected, thanks to the parameter translation technique introduced in \cite{DL,TD}, as mentioned before.
\begin{theorem}\label{theorem-2}
Consider mapping \eqref{equation-3.1}. Assume that the perturbation $f(\theta,\xi,\varepsilon)$ is real analytic about $\theta$ on $D(h)$ and $\varpi_{1}$ continuous about $\xi$ on $\Lambda$, and that $\omega$ is continuous about $\xi$ on $\Lambda$. Moreover, ${\rm (B1)}$-${\rm (B3)}$ hold. Then there exists a sufficiently small $\varepsilon_{0}>0$ such that, for $0<\varepsilon<\varepsilon_{0}$, there is a transformation $\mathscr{U}$ conjugating $ \mathscr{F} $ to $ \hat{\mathscr{F}} $, where $\hat{\mathscr{F}}(\theta)=\theta+\omega(\xi_{*})$ is the integrable rotation on $\mathbb{T}^{n}\times \Lambda$ with frequency $\omega(\xi_{*})=q$. That is, the following holds:
\begin{equation}\label{UUUU}
\mathscr{U}\circ\hat{\mathscr{F}}=\mathscr{F}\circ\mathscr{U}.
\end{equation}
\end{theorem}
The main difference between Theorems $\ref{theorem-1}$ and $\ref{theorem-2}$ is that, in Theorem $\ref{theorem-1}$, the analyticity of the perturbation $f$ about $r$ can be used to ensure that $f$ is at least Lipschitz continuous about $r$, that is, it admits the modulus of continuity $\varpi_{1}(x)=x$. Indeed, if $f$ were assumed to be only continuous about $r$, we could not extend $r$ to complex strips in the KAM scheme. In Theorem $\ref{theorem-2}$, however, we consider the case where there is no action variable $r$ but only the parameter $\xi$. In this situation, continuity of the perturbation $f$ about $\xi$ is enough, and we employ condition $({\rm B3})$ directly in the proof of frequency-preserving. Explicitly, the parameter-dependence of $ f $ may be very weak: for instance, for arbitrary $ \alpha $-H\"{o}lder continuity $\varpi_{\rm H}^{\alpha}(x)=x^{\alpha}$ with any $0<\alpha<1$, taking $ \varpi_{2} $ in {\rm (B3)} of Logarithmic Lipschitz type $\varpi_{\rm {LL}}(x)\sim (-\log x)^{-1}$ as $ x\to 0^+ $ is admissible in Theorem \ref{theorem-2} due to Remark \ref{remark-1}. Actually, in view of Theorem \ref{nonowh}, we can even deal with perturbations of extremely weak regularity, at least nowhere differentiable. In order to show the wide applicability of Theorem \ref{theorem-2}, we directly give the following corollary.
\begin{corollary}\label{COROMH}
Consider mapping \eqref{equation-3.1}, where the perturbation $f(\theta,\xi,\varepsilon)$ is assumed to be real analytic about $\theta$ on $D(h)$ and continuous about $\xi$ on $\Lambda$, but nowhere H\"older continuous, and the frequency mapping $\omega(\xi)$ is continuous about $\xi$ on $\Lambda$. Besides, assume that ${\rm (B1)}$-${\rm (B3)}$ hold with a certain $ \varpi_2 $ weaker than the modulus of continuity $ \varpi_1 $ automatically admitted by $ f $ with respect to $ \xi $. Then the conjugacy \eqref{UUUU} in Theorem \ref{theorem-2} holds as long as $ \varepsilon>0 $ is sufficiently small.
\end{corollary}
\begin{remark}
One could construct explicit applications following Example 7.5 in \cite{TD}; we omit them here for simplicity.
\end{remark}
\subsection{Further discussions}
Here we make some further discussions, including how our approach handles parameters without dimension limitation, as well as the importance of weak convexity in preserving the prescribed frequency.
\subsubsection{Parameter without dimension limitation}
The problem of parameters without dimension limitation is well known to be fundamental and difficult in KAM theory, especially for the classical frequency-digging method. More precisely, both the angular variable and the action variable have dimension $n$, but the dimension of the parameter may be less than $n$. We address this question by employing our parameter translation technique. To this end, let us start with a discussion of the topological conditions {\rm (A1)} and {\rm (B1)}.
As can be seen in the proof, these conditions are proposed to ensure that the new parameters $\hat r_{\nu+1}$ and $\xi_{\nu+1}$ can be found in the next KAM step, while the prescribed frequencies remain unchanged due to the frequency equations \eqref{equation-4.122} and \eqref{eq-5.7}; see \eqref{DISA1} and \eqref{DISA11}, respectively. Here we have used the fact that a non-zero Brouwer degree does not change under the small perturbations arising from the KAM iteration, and therefore the solvability of the frequency equations \eqref{equation-4.122} and \eqref{eq-5.7} persists. Actually, the continuity of the frequency mapping $\omega(\xi)$ with respect to the parameter $\xi$ is enough to guarantee this; see the new \textit{range conditions} below, which can replace the topological conditions {\rm (A1)} and {\rm (B1)}:
\begin{itemize}
\item[{\rm (A1*)}] Let $ p=\omega(r_*)\in {\tilde \Omega } \subset \mathbb{R}^n $ satisfy the Diophantine condition in {\rm (A2)}, where $ {\tilde \Omega } $ is an open set of $ \omega(\Omega) $, and $\Omega\subset E \subset \mathbb{R}^n$ is open.
\item[{\rm (B1*)}] Let $ q=\omega(\xi_*)\in {\tilde \Omega } \subset \mathbb{R}^n$ satisfy the Diophantine condition in {\rm (B2)}, where $ {\tilde \Omega } $ is an open set of $ \omega(\Omega) $, and $\Omega\subset \Lambda \subset \mathbb{R}^m$ is open. Here $1 \leq m \leq +\infty$ could be different from $n$.
\end{itemize}
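For instance (a toy model given only to illustrate the dimension mismatch permitted by {\rm (B1*)}), take $m=1<n=2$, $\Lambda=[1,2]$ and $\omega(\xi)=(\sqrt{2},\xi)$, and choose $\xi_{*}\in\Lambda^{\circ}$ such that $q=(\sqrt{2},\xi_{*})$ satisfies the Diophantine condition in {\rm (B2)}. Since
\begin{equation*}
|\omega(\xi')-\omega(\xi'')|=|\xi'-\xi''|,
\end{equation*}
condition {\rm (B3)} holds with $\varpi_{2}(x)=x$, and {\rm (B1*)} holds with $\tilde{\Omega}$ a relatively open subarc of the curve $\omega(\Lambda^{\circ})\subset\mathbb{R}^{2}$ around $q$, whereas the Brouwer degree in {\rm (B1)} is not even defined since the dimensions of the domain and the range differ.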
One notices that $ {\omega ^{ - 1}}( {\tilde \Omega } ) $ is also an open set due to the continuity of $ \omega $. As a result, as long as the perturbations in KAM are sufficiently small, the solvability of the frequency equations \eqref{equation-4.122} and \eqref{eq-5.7} does not change thanks to the continuity of $ \omega $ (note that we avoid the boundary of the range), and the uniform convergence of $\{r_\nu\}$ and $\{\xi_\nu\}$ can still be obtained via the Cauchy criterion through the weak convexity conditions {\rm (A3)} and {\rm (B3)}. Besides, the Brouwer degree requires that the domain and the range of the mapping have the same dimension, while the range conditions {\rm (A1*)} and {\rm (B1*)} remove this limitation. Consequently, we directly give the following conclusion.
\begin{theorem}
Replace {\rm (A1)} and {\rm (B1)} with {\rm (A1*)} and {\rm (B1*)} respectively, leaving the other assumptions unchanged. Then the frequency-preserving KAM persistence in Theorem \ref{theorem-1}, Corollary \ref{cor-1}, Theorem \ref{theorem-2} and Corollary \ref{COROMH} still holds. In particular, for Theorem \ref{theorem-2} and Corollary \ref{COROMH}, which concern perturbed mappings with parameters, the dimension of the parameter may differ from that of the angular variable.
\end{theorem}
\subsubsection{Weak convexity}
We end this section with some comments on our weak convexity conditions {\rm (A3)} and {\rm (B3)}. Such conditions were first proposed in \cite{DL,TD} to keep the prescribed frequency in Hamiltonian systems unchanged, \textit{and were shown to be unremovable in the sense of frequency-preserving; see the counterexample constructed in \cite{DL}.} Although the KAM theorems in mapping form differ somewhat from the Hamiltonian ones, the weak convexity conditions still ensure frequency-preserving KAM persistence, as shown in Theorems \ref{theorem-1} and \ref{theorem-2}.
\section{KAM steps}\label{SEC4}
\setcounter{equation}{0}
In this section, we show the details of one cycle of KAM steps. Throughout this paper, $c$ is used to denote an intermediate positive constant, and $c_{1},\ldots,c_{4}$ are positive constants; all of them are independent of the iteration process.
\subsection{Description of the $0$-th KAM step}\label{section-2}
For sufficiently large integer $m$, let $\rho$ be a constant with $0<\rho<1$, and assume $\eta>0$ such that $(1+\rho)^{\eta}>2$. Define
\begin{equation*}
\gamma=\varepsilon^{\frac{1}{4(n+m+2)}}.
\end{equation*}
The parameters in the $0$-th KAM step are defined by
\begin{equation*}
h_{0}=h,\qquad s_{0}= s,\qquad \gamma_{0}= \gamma,\qquad \mu_{0}= \varepsilon^{\frac{1}{8\eta(m+1)}},
\end{equation*}
\begin{equation*}
D(h_{0})=\{\theta\in \mathbb{C}^{n}: {\rm Re}\ \theta\in\mathbb{T}^{n}, \ |{\rm Im}\ \theta|\leq h_{0}\},\qquad G(s_{0})=\{r\in \mathbb{C}^{n}:{\rm Re}\ r\in E, \ |{\rm Im}\ r|\leq s_{0}\},
\end{equation*}
where $0<s_{0},h_{0},\gamma_{0},\mu_{0}\leq 1$, and denote $\mathcal{D}_{0}:=\mathcal{D}(h_{0},s_{0})=D(h_{0})\times G(s_{0})$ for simplicity.
The mapping at $0$-th KAM step is
\begin{equation*}\mathscr{F}_{0}:
\begin{cases}
\theta^{1}_{0}=\theta_{0}+\omega_{0}(r_{0})+f_{0}(\theta_{0},r_{0},\varepsilon),\\
r^{1}_{0}=r_{0}+g_{0}(\theta_{0},r_{0},\varepsilon),
\end{cases}
\end{equation*}
where $\omega_{0}(r_{0})=\omega(r_{*})=p$, $f_{0}(\theta_{0},r_{0},\varepsilon)=\varepsilon f(\theta_{0},r_{0},\varepsilon)$, and $ g_{0}(\theta_{0},r_{0},\varepsilon)=\varepsilon g(\theta_{0},r_{0},\varepsilon)$.
The following lemma states the estimates on perturbations $f_{0}$ and $g_{0}$.
\begin{lemma}Assume $\varepsilon_{0}$ is sufficiently small so that
\begin{equation*}
\varepsilon^{\frac{3}{4}}(||f||_{\mathcal{D}_{0}}+||g||_{\mathcal{D}_{0}})\leq s^{m}_{0}\varepsilon^{\frac{1}{8\eta(m+1)}},
\end{equation*}
for $0<\varepsilon<\varepsilon_{0}$.
Then
\begin{equation*}
||f_{0}||_{\mathcal{D}_{0}}+ ||g_{0}||_{\mathcal{D}_{0}}\leq \gamma^{n+m+2}_{0}s^{m}_{0}\mu_{0}.
\end{equation*}
\end{lemma}
\begin{proof}Using $\gamma_{0}^{n+m+2}=\varepsilon^{\frac{1}{4}}$ and $\mu_{0}=\varepsilon^{\frac{1}{8\eta(m+1)}}$, one has
\begin{align*}
\gamma^{n+m+2}_{0}s^{m}_{0}\mu_{0}&= s^{m}_{0}\varepsilon^{\frac{1}{4}}\varepsilon^{\frac{1}{8\eta(m+1)}}\\
&\geq s^{m}_{0}\varepsilon^{\frac{1}{4}}\varepsilon^{\frac{1}{8\eta(m+1)}}s^{-m}_{0}\varepsilon^{\frac{3}{4}}\varepsilon^{-\frac{1}{8\eta(m+1)}}(||f||_{\mathcal{D}_{0}}+||g||_{\mathcal{D}_{0}})\\
&=\varepsilon(||f||_{\mathcal{D}_{0}}+||g||_{\mathcal{D}_{0}})\\
&=||f_{0}||_{\mathcal{D}_{0}}+ ||g_{0}||_{\mathcal{D}_{0}}.
\end{align*}
\end{proof}
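We remark that the smallness condition of the above lemma is indeed achievable. Since $0<\rho<1$ and $(1+\rho)^{\eta}>2$ force $\eta>1$, the exponent $\frac{3}{4}-\frac{1}{8\eta(m+1)}$ is positive, and hence
\begin{equation*}
\varepsilon^{\frac{3}{4}-\frac{1}{8\eta(m+1)}}(||f||_{\mathcal{D}_{0}}+||g||_{\mathcal{D}_{0}})\leq s^{m}_{0}
\end{equation*}
holds for all $0<\varepsilon<\varepsilon_{0}$ once $\varepsilon_{0}$ is small enough.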
\subsection{Description of the $\nu$-th KAM step }
We now define the parameters appearing in the $\nu$-th KAM step:
\begin{equation*}
h_{\nu}=\frac{h_{\nu-1}}{2}+\frac{h_{0}}{4},\quad s_{\nu}=\frac{s_{\nu-1}}{2},\quad \mu_{\nu}=\mu^{1+\rho}_{\nu-1},\quad \mathcal{D}_{\nu}=\mathcal{D}(h_{\nu},s_{\nu}).
\end{equation*}
After $\nu$ KAM steps, the mapping becomes
\begin{equation*}\mathscr{F}_{\nu}:
\begin{cases}
\theta^{1}_{\nu}=\theta_{\nu}+\omega_{0}(r_{0})+f_{\nu}(\theta_{\nu},r_{\nu},\varepsilon),\\
r^{1}_{\nu}=r_{\nu}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+g_{\nu}(\theta_{\nu},r_{\nu},\varepsilon).
\end{cases}
\end{equation*}
Moreover,
\begin{equation}\label{equation-4.4}
||f_{\nu}||_{\mathcal{D}_{\nu}}+ ||g_{\nu}||_{\mathcal{D}_{\nu}}\leq \gamma^{n+m+2}_{0}s^{m}_{\nu}\mu_{\nu}.
\end{equation}
Define
\begin{align*}
h_{\nu+1}&=\frac{h_{\nu}}{2}+\frac{h_{0}}{4},\\
s_{\nu+1}&=\frac{s_{\nu}}{2},\\
\mu_{\nu+1}&=\mu^{1+\rho}_{\nu},\\
K_{\nu+1}&=([\log\frac{1}{\mu_{\nu}}]+1)^{3\eta},\\
\mathscr{D}_{i}&=\mathcal{D}(h_{\nu+1}+\frac{i-1}{4}(h_{\nu}-h_{\nu+1}),is_{\nu+1}),\quad i=1,2,3,4,\\
\mathcal{D}_{\nu+1}&=\mathcal{D}(h_{\nu+1},s_{\nu+1}),\\
\hat{\mathcal{D}}_{\nu+1}&=\mathcal{D}(h_{\nu+2}+\frac{3}{4}(h_{\nu+1}-h_{\nu+2}),s_{\nu+2}),\\
\Gamma(h_{\nu}-h_{\nu+1})&=\sum_{0<|k|\leq K_{\nu+1}}|k|^{\tau}e^{-|k|\frac{h_{\nu}-h_{\nu+1}}{4}}\leq \frac{4^{\tau}\tau!}{(h_{\nu}-h_{\nu+1})^{\tau}}.
\end{align*}
For simplicity of notation, we also denote
\begin{align*}
&\mathscr{D}_{3}:=\mathscr{D}_{*}\times\mathscr{G}_{*}:=D(h_{\nu+1}+\frac{1}{2}(h_{\nu}-h_{\nu+1}))\times G(3s_{\nu+1}),\\
&\mathscr{D}_{4}:=\mathscr{D}_{**}\times\mathscr{G}_{**}:=D(h_{\nu+1}+\frac{3}{4}(h_{\nu}-h_{\nu+1}))\times G(4s_{\nu+1}).
\end{align*}
\subsubsection{Truncation}
The Fourier series expansion of $f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)$ is
\begin{equation}\label{equation-42}
f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)=\sum_{k\in\mathbb{Z}^{n}}f_{k,\nu}(r_{\nu+1})e^{{\rm i}\langle k,\theta_{\nu+1}\rangle},
\end{equation}
where $f_{k,\nu}(r_{\nu+1})=\int_{\mathbb{T}^n}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)e^{-{\rm i}\langle k,\theta_{\nu+1}\rangle}\,d\theta_{\nu+1}$ is the Fourier coefficient of $f_{\nu}$.
The truncation and remainder of $f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)$ are respectively
\begin{align*}
\mathcal{T}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)&=\sum_{0<|k|\leq K_{\nu+1}}f_{k,\nu}e^{{\rm i}\langle k,\theta_{\nu+1}\rangle},\\
\mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)&=\sum_{|k|> K_{\nu+1}}f_{k,\nu}e^{{\rm i}\langle k,\theta_{\nu+1}\rangle}.
\end{align*}
Thus, $f_{\nu}$ has an equivalent expression of the form
\begin{equation*}
f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)=f_{0,\nu}(r_{\nu+1})+\mathcal{T}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)+\mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon).
\end{equation*}
Furthermore, we have the following estimate.
\begin{lemma}
If
\begin{equation*}
({\rm H1})\qquad\int^{+\infty}_{K_{\nu+1}}l^{n}e^{-l\frac{h_{\nu}-h_{\nu+1}}{4}}\,dl\leq \mu_{\nu},
\end{equation*}
then there exists a constant $c_{1}$ such that
\begin{equation*}
|| \mathcal{R}_{K_{\nu+1}}f_{\nu}||_{\mathscr{D}_{3}}\leq c_{1}\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}.
\end{equation*}
\end{lemma}
\begin{proof}
Since the Fourier coefficients decay exponentially, one has
\begin{align*}
| f_{k,\nu}|_{\mathscr{G}_{*}}
&\leq | f_{\nu}| _{\mathscr{D}_{4}}e^{-| k| (h_{\nu+1}+\frac{3}{4}(h_{\nu}-h_{\nu+1}))}\\
&\leq \gamma^{n+m+2}_{0}s^{m}_{\nu}\mu_{\nu}e^{-| k| (h_{\nu+1}+\frac{3}{4}(h_{\nu}-h_{\nu+1}))},
\end{align*}
then
\begin{align*}
| \mathcal{R}_{K_{\nu+1}}f_{\nu}|_{\mathscr{D}_{3}}&\leq\sum_{| k|>K_{\nu+1}}| f_{k,\nu}|_{\mathscr{G}_{*}} e^{| k|(h_{\nu+1}+\frac{1}{2}(h_{\nu}-h_{\nu+1}))}\\
&\leq\sum_{| k|> K_{\nu+1}}| f_{\nu}| _{\mathscr{D}_{4}}e^{-| k| \frac{h_{\nu}-h_{\nu+1}}{4}}\\
&\leq\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu_{\nu}\sum_{| k|>K_{\nu+1}}e^{-| k|\frac{h_{\nu}-h_{\nu+1}}{4}}\\
&\leq\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu_{\nu}\int^{+\infty}_{K_{\nu+1}}l^{n}e^{-l\frac{h_{\nu}-h_{\nu+1}}{4}}\,dl\\
&\leq\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}.
\end{align*}
Moreover,
\begin{align*}
[ \mathcal{R}_{K_{\nu+1}}f_{\nu}]_{\varpi_{1}}&=\sup_{\theta_{\nu+1}\in \mathscr{D}_{*} }\sup_{\substack{ r'_{\nu+1},r''_{\nu+1}\in \mathscr{G}_{*}\\ r'_{\nu+1}\neq r''_{\nu+1}}}\frac{| \mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r'_{\nu+1},\varepsilon)-\mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r''_{\nu+1},\varepsilon)|}{\varpi_{1}(| r'_{\nu+1}-r''_{\nu+1}|)}\\
&=\sup_{\theta_{\nu+1}\in \mathscr{D}_{*} }\sup_{\substack{ r'_{\nu+1},r''_{\nu+1}\in \mathscr{G}_{*}\\ r'_{\nu+1}\neq r''_{\nu+1}}}\frac{\big|\sum\limits_{| k|>K_{\nu+1}} f_{k,\nu}(r'_{\nu+1})e^{{\rm i}\langle k,\theta_{\nu+1}\rangle}-\sum\limits_{| k|>K_{\nu+1}}f_{k,\nu}(r''_{\nu+1})e^{{\rm i}\langle k,\theta_{\nu+1}\rangle}\big|}{\varpi_{1}(| r'_{\nu+1}-r''_{\nu+1}|)}\\
&\leq\sup_{\theta_{\nu+1}\in \mathscr{D}_{*} }\sup_{\substack{ r'_{\nu+1},r''_{\nu+1}\in \mathscr{G}_{*}\\ r'_{\nu+1}\neq r''_{\nu+1}}}\frac{\sum\limits_{| k|>K_{\nu+1}}| f_{k,\nu}(r'_{\nu+1})-f_{k,\nu}(r''_{\nu+1})| e^{|k|(h_{\nu+1}+\frac{1}{2}(h_{\nu}-h_{\nu+1}))}}{\varpi_{1}(| r'_{\nu+1}-r''_{\nu+1}|)}\\
&\leq\sup_{\theta_{\nu+1}\in \mathscr{D}_{**} }\sup_{\substack{ r'_{\nu+1},r''_{\nu+1}\in \mathscr{G}_{**}\\ r'_{\nu+1}\neq r''_{\nu+1}}}\frac{| f_{\nu}(\theta_{\nu+1},r'_{\nu+1},\varepsilon)-f_{\nu}(\theta_{\nu+1},r''_{\nu+1},\varepsilon)| \sum\limits_{|k|>K_{\nu+1}} e^{-|k|\frac{h_{\nu}-h_{\nu+1}}{4}}}{\varpi_{1}(|r'_{\nu+1}-r''_{\nu+1}|)}\\
&\leq[f_{\nu}]_{\varpi_{1}}\sum_{| k|>K_{\nu+1}}e^{-|k|\frac{h_{\nu}-h_{\nu+1}}{4}}\\
&\leq[f_{\nu}]_{\varpi_{1}}\sum_{| k|>K_{\nu+1}}|k|^{n}e^{-|k|\frac{h_{\nu}-h_{\nu+1}}{4}}\\
&\leq[f_{\nu}]_{\varpi_{1}}\int^{+\infty}_{K_{\nu+1}}l^{n}e^{-l\frac{h_{\nu}-h_{\nu+1}}{4}}\,dl\\
&\leq\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu},
\end{align*}
i.e.,
\begin{equation*}
||\mathcal{R}_{K_{\nu+1}}f_{\nu}||_{\mathscr{D}_{3}}=|\mathcal{R}_{K_{\nu+1}}f_{\nu}|_{\mathscr{D}_{3}}+[\mathcal{R}_{K_{\nu+1}}f_{\nu}]_{\varpi_{1}}\leq c_{1}\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}.
\end{equation*}
\end{proof}
Similarly, we get
\begin{equation*}
||\mathcal{R}_{K_{\nu+1}}g_{\nu}||_{\mathscr{D}_{3}}\leq c_{1}\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}.
\end{equation*}
\subsubsection{Transformation}
For $\mathscr{F}_{\nu}$, on $\mathcal{D}_{\nu+1}$, introduce a transformation $\mathscr{U}_{\nu+1}:={\rm id}+(U_{\nu+1},V_{\nu+1})$ that satisfies
\begin{equation}\label{equation-4.2}
\mathscr{U}_{\nu+1}\circ\bar{\mathscr{F}}_{\nu+1}=\mathscr{F}_{\nu}\circ\mathscr{U}_{\nu+1},
\end{equation}
where ${\rm id}$ denotes the identity mapping. Since
\begin{equation*}
\mathscr{U}_{\nu+1}:
\begin{cases}
\theta^{1}_{\nu}=\theta^{1}_{\nu+1}+U_{\nu+1}(\theta^{1}_{\nu+1},r^{1}_{\nu+1}),\\
r^{1}_{\nu}=r^{1}_{\nu+1}+V_{\nu+1}(\theta^{1}_{\nu+1},r^{1}_{\nu+1}),
\end{cases}
\end{equation*}
and
\begin{equation*}\bar{\mathscr{F}}_{\nu+1}:
\begin{cases}
\theta^{1}_{\nu+1}=\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon),\\
r^{1}_{\nu+1}=r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon),
\end{cases}
\end{equation*}
with $r^{*}_{0}=0$, from the left side of \eqref{equation-4.2}, we can derive that
\begin{align*}
\theta^{1}_{\nu}=&\theta^{1}_{\nu+1}+U_{\nu+1}(\theta^{1}_{\nu+1},r^{1}_{\nu+1})\\
=&\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\\
+&U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1}),\\
r^{1}_{\nu}=&r^{1}_{\nu+1}+V_{\nu+1}(\theta^{1}_{\nu+1},r^{1}_{\nu+1})\\
=&r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\\
+&V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1}).
\end{align*}
Also, one has
\begin{equation*}\mathscr{F}_{\nu}:
\begin{cases}
\theta^{1}_{\nu}=\theta_{\nu}+\omega_{0}(r_{0})+f_{\nu}(\theta_{\nu},r_{\nu},\varepsilon),\\
r^{1}_{\nu}=r_{\nu}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+g_{\nu}(\theta_{\nu},r_{\nu},\varepsilon),
\end{cases}
\end{equation*}
and
\begin{equation*}\mathscr{U}_{\nu+1}:
\begin{cases}
\theta_{\nu}=\theta_{\nu+1}+U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}),\\
r_{\nu}=r_{\nu+1}+V_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}).
\end{cases}
\end{equation*}
By the right side of \eqref{equation-4.2}, we obtain
\begin{align*}
\theta^{1}_{\nu}=&\theta_{\nu}+\omega_{0}(r_{0})+f_{\nu}(\theta_{\nu},r_{\nu},\varepsilon)\\
=&\theta_{\nu+1}+U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})+\omega_{0}(r_{0})\\
+&f_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon),\\
r^{1}_{\nu}=&r_{\nu}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+g_{\nu}(\theta_{\nu},r_{\nu},\varepsilon)\\
=&r_{\nu+1}+V_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})-\sum\limits^{\nu}_{i=0}r^{*}_{i}\\
+& g_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon).
\end{align*}
Therefore,
\eqref{equation-4.2} implies that
\begin{align}\label{equation-43}
&\omega_{0}(r_{0})+\bar{f}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)+U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1})\notag\\
=&U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})+\omega_{0}(r_{0})+f_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon),\\
\label{equation-44}
&\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)+V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1})\notag\\
=&V_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})+g_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon).
\end{align}
Following the iteration process above, the $\omega_{0}(r_{0})$ on the right side of \eqref{equation-43} is actually
\begin{equation*}
\omega_{0}(r_{0})=\omega_{0}(r_{\nu})+\sum^{\nu-1}_{i=0}f_{0,i}(r_{\nu}).
\end{equation*}
Since the frequency $\omega_{0}$ is continuous about $r$, assume that there exists a modulus of continuity $\varpi_{*}$ such that
\begin{equation}\label{eq-46}
[\omega_{0}]_{\varpi_{*}}=\sup_{0<|r'-r''|\leq 1}\frac{|\omega_{0}(r')-\omega_{0}(r'')|}{\varpi_{*}(|r'-r''|)}<+\infty.
\end{equation}
The perturbation $f$ is $\varpi_{1}$ continuous about $r$, and $\varpi_{1}\leq \varpi_{*}$ since $\varpi_{1}(x)=x$; hence one has
\begin{equation}\label{eq-47}
[f_{0,i}]_{\varpi_{1}}=\sup_{0<|r'-r''|\leq 1}\frac{|f_{0,i}(r')-f_{0,i}(r'')|}{\varpi_{1}(|r'-r''|)}<+\infty,\qquad 0\leq i\leq \nu-1 .
\end{equation}
Consequently, from \eqref{eq-46} and \eqref{eq-47}, we may deduce that
\begin{equation*}
\omega_{0}(r_{\nu})=\omega_{0}(r_{\nu+1})+\mathcal{O}(\varpi_{*}(|V_{\nu+1}|)),
\end{equation*}
and
\begin{equation*}
f_{0,i}(r_{\nu})=f_{0,i}(r_{\nu+1})+\mathcal{O}(\varpi_{1}(|V_{\nu+1}|)).
\end{equation*}
Then \eqref{equation-43} and \eqref{equation-44} become the following:
\begin{align*}
&U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1})-U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})\notag\\
+&U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})-U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})\notag\\
+&\omega_{0}(r_{0})+\bar{f}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\notag\\
=&\omega_{0}(r_{\nu+1})+\sum^{\nu-1}_{i=0}f_{0,i}(r_{\nu+1})+\mathcal{O}(\varpi_{*}(|V_{\nu+1}|))+\mathcal{O}(\varpi_{1}(|V_{\nu+1}|))\notag\\
+&f_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon)-f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)+f_{0,\nu}(r_{\nu+1})+\mathcal{T}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\\
+&\mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon),\\
&V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1})-V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})\notag\\
+&V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})-V_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})+\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\notag\\
=&g_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon)-g_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)+g_{0,\nu}(r_{\nu+1})+\mathcal{T}_{K_{\nu+1}}g_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\\
+&\mathcal{R}_{K_{\nu+1}}g_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon).
\end{align*}
The transformation $\mathscr{U}_{\nu+1}={\rm id}+(U_{\nu+1},V_{\nu+1})$ needs to satisfy the homological equations
\begin{equation}\label{eq-4.8}
\begin{cases}
U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})-U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})=\mathcal{T}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon),\\
V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})-V_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})=\mathcal{T}_{K_{\nu+1}}g_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon).
\end{cases}
\end{equation}
The new perturbations are respectively
\begin{align}\label{eq-4.10}
\bar{f}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)&=f_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon)-f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\notag\\
&+\mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)+\mathcal{O}(\varpi_{*}(|V_{\nu+1}|))+\mathcal{O}(\varpi_{1}(|V_{\nu+1}|))\notag\\
&+U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i})\notag\\
&-U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1}),\\
\label{eq-4.11}
\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)&=g_{\nu}(\theta_{\nu+1}+U_{\nu+1},r_{\nu+1}+V_{\nu+1},\varepsilon)-g_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\notag\\
&+g_{0,\nu}(r_{\nu+1})+\mathcal{R}_{K_{\nu+1}}g_{\nu}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\notag\\
&+V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),r_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i})\notag\\
&-V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+\bar{f}_{\nu+1},r_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i}+\bar{g}_{\nu+1}).
\end{align}
The homological equations \eqref{eq-4.8}
are uniquely solvable on $\mathcal{D}_{\nu+1}$. Let us start by considering the first equation in \eqref{eq-4.8}.
Formally, write $U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})$ as the truncated Fourier series
\begin{equation*}
U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})=\sum_{0<|k|\leq K_{\nu+1}}U_{k,\nu+1}e^{{\rm i}\langle k,\theta_{\nu+1}\rangle },
\end{equation*}
substituting it into the first equation in \eqref{eq-4.8} yields
\begin{equation*}
\sum_{0<|k|\leq K_{\nu+1}}U_{k,\nu+1}e^{{\rm i}\langle k,\theta_{\nu+1}+\omega_{0}(r_{0})\rangle}-\sum_{0<|k|\leq K_{\nu+1}}U_{k,\nu+1}e^{{\rm i}\langle k,\theta_{\nu+1}\rangle}=\sum_{0<|k|\leq K_{\nu+1}}f_{k,\nu}e^{{\rm i}\langle k,\theta_{\nu+1}\rangle}.
\end{equation*}
By comparing the coefficients above, we have
\begin{equation}\label{equation-4.12}
U_{k,\nu+1}(e^{{\rm i}\langle k,\omega_{0}(r_{0})\rangle}-1)=f_{k,\nu}.
\end{equation}
The estimate for the solution of \eqref{equation-4.12} is given in the following lemma.
\begin{lemma}
The equation \eqref{equation-4.12} has a unique solution $U_{k,\nu+1}$ on $G(s_{\nu+1})$ satisfying the following estimate
\begin{equation*}
||U_{k,\nu+1}||_{\mathscr{G}_{*}}\leq c_{2}||f_{\nu}||_{\mathscr{D}_{4}}\gamma^{-1}_{0}|k|^{\tau}e^{-|k|(h_{\nu+1}+\frac{3}{4}(h_{\nu}-h_{\nu+1}))}.
\end{equation*}
\end{lemma}
\begin{proof}We notice that the coefficients $f_{k,\nu}$ decay exponentially, i.e.,
\begin{equation*}
||f_{k,\nu}||_{\mathscr{G}_{**}}\leq ||f_{\nu}||_{\mathscr{D}_{4}}e^{-|k|(h_{\nu+1}+\frac{3}{4}(h_{\nu}-h_{\nu+1}))}.
\end{equation*}
There exists a $k_{0}\in\mathbb{Z}$ satisfying $|\frac{\langle k,\omega_{0}(r_{0})\rangle-k_{0}}{2}|\leq \frac{\pi}{2}$ such that
\begin{align*}
||e^{{\rm i}\langle k,\omega_{0}(r_{0})\rangle}-1||_{\mathscr{G}_{**}}&\geq2||\sin\frac{\langle k,\omega_{0}(r_{0})\rangle-k_{0}}{2}||_{\mathscr{G}_{**}}\notag\\
&\geq\frac{4}{\pi}||\frac{\langle k,\omega_{0}(r_{0})\rangle-k_{0}}{2}||_{\mathscr{G}_{**}}\notag\\
&\geq c||\langle k,\omega_{0}(r_{0})\rangle-k_{0}||_{\mathscr{G}_{**}}\notag\\
&\geq\frac{c\gamma_{0}}{|k|^{\tau}}.
\end{align*}
Therefore,
\begin{align}\label{equation-46}
||U_{k,\nu+1}||_{\mathscr{G}_{*}}&\leq \frac{||f_{k,\nu}||_{\mathscr{G}_{**}}}{||e^{{\rm i}\langle k,\omega_{0}(r_{0})\rangle}-1||_{\mathscr{G}_{**}}}\notag\\
&\leq c_{2}||f_{\nu}||_{\mathscr{D}_{4}}\gamma^{-1}_{0}|k|^{\tau}e^{-|k|(h_{\nu+1}+\frac{3}{4}(h_{\nu}-h_{\nu+1}))}.
\end{align}
\end{proof}
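For completeness, the lower bound on the sine term used in the proof above rests on an elementary estimate; a sketch (standard, using the concavity of $\sin$ on $[0,\pi/2]$):

```latex
% Since \sin is concave on [0,\pi/2], its graph lies above the chord
% joining (0,0) and (\pi/2,1), i.e. \sin x \ge (2/\pi)x there. By symmetry,
\[
2|\sin x|\;\geq\;\frac{4}{\pi}\,|x|
\qquad\text{for } |x|\leq\frac{\pi}{2},
\]
% which, applied with x = (\langle k,\omega_{0}(r_{0})\rangle-k_{0})/2,
% gives the corresponding inequality in the displayed chain.
```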
In the same way, we get
\begin{eqnarray*}
V_{k,\nu+1}(e^{{\rm i}\langle k,\omega_{0}(r_{0})\rangle}-1)=g_{k,\nu}.
\end{eqnarray*}
Thus,
\begin{align}\label{equation-47}
||V_{k,\nu+1}||_{\mathscr{G}_{*}}&\leq\frac{||g_{k,\nu}||_{\mathscr{G}_{**}}}{||e^{{\rm i}\langle k,\omega_{0}(r_{0})\rangle}-1||_{\mathscr{G}_{**}}}\notag\\
&\leq c_{2}||g_{\nu}||_{\mathscr{D}_{4}}\gamma^{-1}_{0}|k|^{\tau}e^{-|k|(h_{\nu+1}+\frac{3}{4}(h_{\nu}-h_{\nu+1}))}.
\end{align}
\subsubsection{Translation}\label{subsec-4.2.3}
In the usual iteration process, one has to excise a decreasing sequence of domains on which the Diophantine condition fails. To avoid this, in this section we construct a translation that keeps the frequency unchanged. Consider the translation
\begin{equation*}
\mathscr{V}_{\nu+1}:\theta_{\nu+1}\rightarrow \theta_{\nu+1},\quad r_{\nu+1}\rightarrow r_{\nu+1}+r^{*}_{\nu+1}:=\hat{r}_{\nu+1},
\end{equation*}
where $\hat{r}_{\nu+1}\in B_{c\mu_{\nu}}(\hat{r}_{\nu})$. The translation $ \mathscr{V}_{\nu+1} $ shifts the action variable while leaving the angular variable unchanged. To preserve the frequency, we require that
\begin{equation*}
\omega_{0}(\hat{r}_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\hat{r}_{\nu+1})=\omega_{0}(r_{0}).
\end{equation*}
The solvability of this equation will be demonstrated in the following subsection. After the translation $\mathscr{V}_{\nu+1}$, the mapping $\bar{\mathscr{F}}_{\nu+1}$ becomes $\mathscr{F}_{\nu+1}=\bar{\mathscr{F}}_{\nu+1}\circ\mathscr{V}_{\nu+1}$, i.e.,
\begin{equation*}
\mathscr{F}_{\nu+1}:
\begin{cases}
\theta^{1}_{\nu+1}=\theta_{\nu+1}+\omega_{0}(r_{0})+f_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon),\\
\hat{r}^{1}_{\nu+1}=\hat{r}_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i}+g_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon).
\end{cases}
\end{equation*}
The corresponding new perturbations are
\begin{align}\label{equation-4.25}
f_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon)&=f_{\nu}(\theta_{\nu+1}+U_{\nu+1},\hat{r}_{\nu+1}+V_{\nu+1},\varepsilon)-f_{\nu}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon)\notag\\
&+\mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon)+\mathcal{O}(\varpi_{*}(|V_{\nu+1}|))+\mathcal{O}(\varpi_{1}(|V_{\nu+1}|))\notag\\
&+U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),\hat{r}_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i})\notag\\
&-U_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+f_{\nu+1},\hat{r}_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i}+g_{\nu+1}),
\end{align}
and
\begin{align}\label{equation-4.26}
g_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon)&=g_{\nu}(\theta_{\nu+1}+U_{\nu+1},\hat{r}_{\nu+1}+V_{\nu+1},\varepsilon)-g_{\nu}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon)\notag\\
&+g_{0,\nu}(\hat{r}_{\nu+1})+\mathcal{R}_{K_{\nu+1}}g_{\nu}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon)\notag\\
&+V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0}),\hat{r}_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i})\notag\\
&-V_{\nu+1}(\theta_{\nu+1}+\omega_{0}(r_{0})+f_{\nu+1},\hat{r}_{\nu+1}-\sum^{\nu}_{i=0}r^{*}_{i}+g_{\nu+1}).
\end{align}
\subsubsection{Frequency-preserving}
In this section, we show that the frequency remains unchanged during the iteration process under conditions ${\rm (A1)}$ and ${\rm (A3)}$. The topological degree condition $({\rm A1})$ guarantees that we can find an $\hat{r}_{\nu+1}$ such that the frequency is preserved, while the weak convexity condition $({\rm A3})$ ensures that $\{\hat{r}_{\nu}\}$ is a Cauchy sequence. The following lemma is crucial to our consideration.
\begin{lemma}\label{lemma-4.5}Assume that
\begin{equation*}
{(\rm H2)}\qquad||\sum^{\nu}_{i=0}f_{0,i}||_{G(s_{\nu+1})}\leq c\mu^{\frac{1}{2}}_{0}.
\end{equation*}
Then there exists a $\hat{r}_{\nu+1}\in B_{c\mu_{\nu}}(\hat{r}_{\nu})$ such that
\begin{equation}\label{equation-4.27}
\omega_{0}(\hat{r}_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\hat{r}_{\nu+1})=\omega_{0}(r_{0}).
\end{equation}
\end{lemma}
\begin{proof}
The proof is an induction on $\nu$. Obviously, $\omega_{0}(r_{0})=\omega_{0}(r_{0})$ when $\nu=0$. Now assume that for some $\nu\geq 1$, one has
\begin{equation}\label{equation-4.29}
\omega_{0}(\hat{r}_{j})+\sum^{j-1}_{i=0}f_{0,i}(\hat{r}_{j})=\omega_{0}(r_{0}), \qquad \hat{r}_{j}\in B_{c\mu_{j-1}}(\hat{r}_{j-1})\subset B(r_{0},\delta),\ 1\leq j\leq \nu.
\end{equation}
We need to find an $\hat{r}_{\nu+1}$ in a neighborhood of $\hat{r}_{\nu}$ that satisfies
\begin{equation}\label{equation-4.122}
\omega_{0}(\hat{r}_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\hat{r}_{\nu+1})=\omega_{0}(r_{0}).
\end{equation}
Since $\mu_{0}^{\frac{1}{2}}$ is sufficiently small and the condition ${\rm (A1)}$ holds, we have
\begin{equation}\label{DISA1}
\deg\Big(\omega_{0}(\cdot)+\sum^{\nu}_{i=0}f_{0,i}(\cdot),B(r_{0},\delta),\omega_{0}(r_{0})\Big)=\deg\left(\omega_{0}(\cdot),B(r_{0},\delta),\omega_{0}(r_{0})\right)\neq 0,
\end{equation}
where $ \omega_{0}(r_0)=p $, and $ p \in \mathbb{R}^n $ is given in advance. This shows that there exists at least one $\hat{r}_{\nu+1}\in B(r_{0},\delta)$ with some $\delta>0$ such that \eqref{equation-4.27} holds. Remark \ref{remark-1} tells us that there exists a modulus of continuity $\varpi_{1}(x)=x$ such that $f$ is $\varpi_{1}$ continuous with respect to $r$, and following \eqref{equation-4.4}, one has
\begin{equation*}
[f_{0,i}]_{\varpi_{1}}\leq c\mu_{i},\qquad 0\leq i\leq \nu,
\end{equation*}
i.e.,
\begin{equation*}
|f_{0,i}(\hat{r}_{\nu+1})-f_{0,i}(\hat{r}_{\nu})|\leq c\mu_{i}\varpi_{1}(|\hat{r}_{\nu+1}-\hat{r}_{\nu}|),\qquad 0\leq i\leq \nu.
\end{equation*}
Following Definitions \ref{def-1} and \ref{DE2.3}, and together with $({\rm A3})$, one has
\begin{equation*}
\varlimsup_{x\rightarrow 0^{+}}\frac{x}{\varpi_{2}(x)}<+\infty,
\end{equation*}
which means that $\varpi_{1}\leq \varpi_{2}$. The equations \eqref{equation-4.29} and \eqref{equation-4.122} imply that
\begin{equation*}
\omega_{0}(\hat{r}_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\hat{r}_{\nu+1})=\omega_{0}(\hat{r}_{\nu})+\sum^{\nu-1}_{i=0}f_{0,i}(\hat{r}_{\nu}).
\end{equation*}
Then
\begin{align*}
|f_{0,\nu}(\hat{r}_{\nu+1})|&=|\omega_{0}(\hat{r}_{\nu})-\omega_{0}(\hat{r}_{\nu+1})+\sum\limits^{\nu-1}_{i=0}(f_{0,i}(\hat{r}_{\nu})-f_{0,i}(\hat{r}_{\nu+1}))|\notag\\
&\geq|\omega_{0}(\hat{r}_{\nu})-\omega_{0}(\hat{r}_{\nu+1})|-\sum\limits^{\nu-1}_{i=0}|f_{0,i}(\hat{r}_{\nu})-f_{0,i}(\hat{r}_{\nu+1})|\notag\\
&\geq\varpi_{2}(|\hat{r}_{\nu}-\hat{r}_{\nu+1}|)-c(\sum\limits^{\nu-1}_{i=0}\mu_{i})\varpi_{1}(|\hat{r}_{\nu}-\hat{r}_{\nu+1}|)\notag\\
&\geq\frac{\varpi_{2}(|\hat{r}_{\nu}-\hat{r}_{\nu+1}|)}{2}.
\end{align*}
The last inequality holds because $\varepsilon$ is sufficiently small to ensure $c\sum\limits^{\nu-1}_{i=0}\mu_{i}\leq \frac{1}{2}$, together with $\varpi_1 \leq \varpi_2$. Therefore,
\begin{equation}\label{equation-4.33}
|\hat{r}_{\nu}-\hat{r}_{\nu+1}|\leq \varpi^{-1}_{2}(2|f_{0,\nu}(\hat{r}_{\nu+1})|)\leq \varpi^{-1}_{2}(2c\mu_{\nu})\leq c\varpi^{-1}_{1}(2c\mu_{\nu})\leq c\mu_{\nu},
\end{equation}
where the last inequality is due to Definition \ref{def-1}, i.e., $\varlimsup\limits_{x\rightarrow 0^{+}}\frac{x}{\varpi_{1}(x)}<+\infty$. This implies that $\{\hat{r}_{\nu}\}$ is a Cauchy sequence and $\hat{r}_{\nu+1}\in B_{c\mu_{\nu}}(\hat{r}_{\nu})$.
\end{proof}
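The Cauchy property asserted at the end of the proof can be made explicit; a sketch, telescoping the estimate \eqref{equation-4.33} and using the fact that $\mu_{\nu+1}=\mu_{\nu}^{1+\rho}$ in the iteration below (so that $\mu_{\nu}=\mu_{0}^{(1+\rho)^{\nu}}$ decays super-exponentially):

```latex
% Summing |\hat r_{j+1}-\hat r_j| \le c\mu_j over the tail:
\[
|\hat{r}_{\nu'}-\hat{r}_{\nu}|
\;\leq\;\sum_{j=\nu}^{\nu'-1}|\hat{r}_{j+1}-\hat{r}_{j}|
\;\leq\;c\sum_{j=\nu}^{\infty}\mu_{j}
\;=\;c\sum_{j=\nu}^{\infty}\mu_{0}^{(1+\rho)^{j}}
\;\longrightarrow\;0
\qquad(\nu\rightarrow\infty,\ \nu'>\nu).
\]
```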
\subsubsection{Estimates on new transformations}
According to \eqref{equation-46} and \eqref{equation-47}, the estimates on transformations are given in the lemma below.
\begin{lemma}\label{lemma-4.6}Assume that there exists a constant $c_{3}$ such that
\begin{equation*}
{(\rm H3)}\qquad c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\leq \min\{s_{\nu+1},\frac{h_{\nu}-h_{\nu+1}}{4}\}.
\end{equation*}
Then the following statements hold.
\begin{itemize}
\item[$({\rm i})$]$||\mathscr{U}_{\nu+1}-{\rm id}||_{\mathscr{D}_{3}}\leq c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})$.
\item[$(\rm ii)$] $\mathscr{U}_{\nu+1}:\mathcal{D}_{\nu+1}\rightarrow \mathcal{D}_{\nu}$.
\item[$(\rm iii)$] Set $\mathscr{W}_{\nu+1}:=\mathscr{U}_{\nu+1}\circ\mathscr{V}_{\nu+1}$, one has $\mathscr{W}_{\nu+1}:\mathcal{D}_{\nu+1}\rightarrow \mathcal{D}_{\nu}$, and
\begin{equation*}
||\mathscr{W}_{\nu+1}-{\rm id}||_{\mathcal{D}_{\nu+1}}\leq c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1}).
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
$(\rm i)$ Since $U_{\nu+1}(\theta_{\nu+1},r_{\nu+1}-\sum\limits^{\nu}_{i=0}r^{*}_{i})=\sum\limits_{0<|k|\leq K_{\nu+1}}U_{k,\nu+1}e^{{\rm i}\langle k,\theta_{\nu+1}\rangle}$, we get
\begin{align}\label{equation-4.36}
||U_{\nu+1}||_{\mathscr{D}_{3}}&\leq ||U_{k,\nu+1}||_{\mathscr{G}_{*}}\sum_{0<|k|\leq K_{\nu+1}}e^{|k|(h_{\nu+1}+\frac{1}{2}(h_{\nu}-h_{\nu+1}))}\notag\\
&\leq||f_{\nu}||_{\mathscr{D}_{4}}\gamma^{-1}_{0}\sum_{0<|k|\leq K_{\nu+1}}|k|^{\tau}e^{-|k|\frac{h_{\nu}-h_{\nu+1}}{4}}\notag\\
&\leq c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1}).
\end{align}
Besides,
\begin{align}\label{equation-4.35}
||V_{\nu+1}||_{\mathscr{D}_{3}}&\leq ||V_{k,\nu+1}||_{\mathscr{G}_{*}}\sum_{0<|k|\leq K_{\nu+1}}e^{|k|(h_{\nu+1}+\frac{1}{2}(h_{\nu}-h_{\nu+1}))}\notag\\
&\leq||g_{\nu}||_{\mathscr{D}_{4}}\gamma^{-1}_{0}\sum_{0<|k|\leq K_{\nu+1}}|k|^{\tau}e^{-|k|\frac{h_{\nu}-h_{\nu+1}}{4}}\notag\\
&\leq c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1}).
\end{align}
Thus, $(\rm i)$ is due to \eqref{equation-4.4}, \eqref{equation-4.36}, and \eqref{equation-4.35}.
$(\rm ii)$ For $(\theta_{\nu+1},r_{\nu+1})\in\mathscr{D}_{3}$, $(\rm H3)$ implies that
\begin{align*}
|\theta_{\nu}-\theta_{\nu+1}|&\leq||U_{\nu+1}||_{\mathscr{D}_{3}}\notag\\
&\leq c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\notag\\
&\leq\frac{h_{\nu}-h_{\nu+1}}{4},\\
|r_{\nu}-r_{\nu+1}|&\leq||V_{\nu+1}||_{\mathscr{D}_{3}}\notag\\
&\leq c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\notag\\
&\leq s_{\nu+1}.
\end{align*}
Thus, $\mathscr{U}_{\nu+1}:\mathcal{D}_{\nu+1}\subset\mathscr{D}_{4}\rightarrow \mathscr{D}_{3}\subset\mathcal{D}_{\nu}$.
$(\rm iii)$ now follows from $(\rm i)$ and $(\rm ii)$ immediately.
\end{proof}
\subsubsection{Estimates on the new perturbations}
In what follows, we estimate the new perturbations.
\begin{lemma}\label{lemma-4.7}Assume that
\begin{equation*}
{(\rm H4) } \qquad\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\leq \varpi^{-1}_{*}(\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu^{2}_{\nu}).
\end{equation*}
Then there exists a constant $c_{4}$ such that
\begin{align*}
&||\bar{f}_{\nu+1}||_{\mathcal{D}_{\nu+1}}+||\bar{g}_{\nu+1}||_{\mathcal{D}_{\nu+1}}\\
\leq& c_{4}\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}(\gamma^{n+m+1}_{0}s^{m}_{\nu}(h_{\nu+1}-h_{\nu+2})^{-1}\Gamma(h_{\nu}-h_{\nu+1})+\gamma^{n+m+1}_{0}s^{m-1}_{\nu}\Gamma(h_{\nu}-h_{\nu+1})+1).
\end{align*}
Moreover, if
\begin{equation*}
{(\rm H5)}\qquad 2^{m}c_{4}\mu^{1-\rho}_{\nu}(\gamma^{n+m+1}_{0}s^{m}_{\nu}(h_{\nu+1}-h_{\nu+2})^{-1}\Gamma(h_{\nu}-h_{\nu+1})+\gamma^{n+m+1}_{0}s^{m-1}_{\nu}\Gamma(h_{\nu}-h_{\nu+1})+1)\leq 1,
\end{equation*}
then
\begin{equation*}
||f_{\nu+1}||_{\mathcal{D}_{\nu+1}}+||g_{\nu+1}||_{\mathcal{D}_{\nu+1}}\leq \gamma^{n+m+2}_{0}s^{m}_{\nu+1}\mu_{\nu+1}.
\end{equation*}
\end{lemma}
\begin{proof}
Note that $\bar{f}_{\nu+1}$ and $\bar{g}_{\nu+1}$ are obtained from \eqref{eq-4.10} and \eqref{eq-4.11} by the implicit function theorem.
Thus,
\begin{align}\label{equation-4.38}
||\bar{f}_{\nu+1}||_{\mathcal{D}_{\nu+1}}&\leq c ||\partial_{\theta_{\nu+1}}f_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||U_{\nu+1}||_{\mathcal{D}_{\nu+1}}+c||\partial_{r_{\nu+1}}f_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||V_{\nu+1}||_{\mathcal{D}_{\nu+1}}\notag\\
&+c||\mathcal{R}_{K_{\nu+1}}f_{\nu}||_{\mathcal{D}_{\nu+1}}+c\varpi_{*}(|V_{\nu+1}|),
\end{align}
where $\varpi_{1}(|V_{\nu+1}|)\leq \varpi_{*}(|V_{\nu+1}|)$ is due to $\varpi_{1}\leq \varpi_{*}$.
The intersection property implies that for each $r^{0}_{\nu+1}\in G(s_{\nu+1})$ there exists a $\theta^{0}_{\nu+1}$ such that
\begin{equation*}
\bar{g}_{\nu+1}(\theta^{0}_{\nu+1},r^{0}_{\nu+1},\varepsilon)=0,
\end{equation*}
hence
\begin{align*}
&\sup_{\theta_{\nu+1}\in D(h_{\nu+1})}||\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)||\\
=&\sup_{\theta_{\nu+1}\in D(h_{\nu+1})}||\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)-\bar{g}_{\nu+1}(\theta^{0}_{\nu+1},r^{0}_{\nu+1},\varepsilon)||\\
=&\mathop{{\rm osc}}_{\theta_{\nu+1}\in D(h_{\nu+1})}\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)\\
=&\mathop{{\rm osc}}_{\theta_{\nu+1}\in D(h_{\nu+1})}(\bar{g}_{\nu+1}(\theta_{\nu+1},r_{\nu+1},\varepsilon)-\bar{h}),
\end{align*}
where $\bar{h}$ is a function of $r_{\nu+1}$ only. In particular, taking $\bar{h}=g_{0,\nu}(r_{\nu+1})$, one has
\begin{equation*}
\frac{1}{2}||\bar{g}_{\nu+1}||_{\mathcal{D}_{\nu+1}}\leq ||\bar{g}_{\nu+1}-g_{0,\nu}||_{\mathcal{D}_{\nu+1}}.
\end{equation*}
Therefore,
\begin{align}\label{equation-4.40}
||\bar{g}_{\nu+1}||_{\mathcal{D}_{\nu+1}}&\leq c||\bar{g}_{\nu+1}-g_{0,\nu}||_{\mathcal{D}_{\nu+1}}\notag\\
&\leq c||\partial_{\theta_{\nu+1}}g_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||U_{\nu+1}||_{\mathcal{D}_{\nu+1}}+c||\partial_{r_{\nu+1}}g_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||V_{\nu+1}||_{\mathcal{D}_{\nu+1}}\notag\\
&+c||\mathcal{R}_{K_{\nu+1}}g_{\nu}||_{\mathcal{D}_{\nu+1}}.
\end{align}
Following \eqref{equation-4.38}, \eqref{equation-4.40}, $({\rm H4})$, and estimates obtained earlier, we have
\begin{align*}
||\bar{f}_{\nu+1}||_{\mathcal{D}_{\nu+1}}+||\bar{g}_{\nu+1}||_{\mathcal{D}_{\nu+1}}&\leq c||\partial_{\theta_{\nu+1}}f_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||U_{\nu+1}||_{\mathcal{D}_{\nu+1}}+c||\partial_{\theta_{\nu+1}}g_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||U_{\nu+1}||_{\mathcal{D}_{\nu+1}}\notag\\
&+ c||\partial_{r_{\nu+1}}f_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||V_{\nu+1}||_{\mathcal{D}_{\nu+1}}+c||\partial_{r_{\nu+1}}g_{\nu}||_{\hat{\mathcal{D}}_{\nu+1}}||V_{\nu+1}||_{\mathcal{D}_{\nu+1}}\notag\\
&+ ||\mathcal{R}_{K_{\nu+1}}f_{\nu}||_{\mathcal{D}_{\nu+1}}+c||\mathcal{R}_{K_{\nu+1}}g_{\nu}||_{\mathcal{D}_{\nu+1}}+c\varpi_{*}(|V_{\nu+1}|)\notag\\
&\leq c\frac{\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu_{\nu}}{h_{\nu+1}-h_{\nu+2}}\cdot\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\notag\\
&+ c\frac{\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu_{\nu}}{s_{\nu+1}-s_{\nu+2}}\cdot\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\notag\\
&+ c\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}\notag\\
&\leq c_{4}\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}\cdot\gamma^{n+m+1}_{0}s^{m}_{\nu}(h_{\nu+1}-h_{\nu+2})^{-1}\Gamma(h_{\nu}-h_{\nu+1})\\
&+ c_{4}\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}\cdot\gamma^{n+m+1}_{0}s^{m-1}_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\\
&+ c_{4}\gamma^{n+m+2}_{0}s^{m}_{\nu}\mu^{2}_{\nu}.
\end{align*}
Finally, $(\rm H5)$ implies that
\begin{equation*}
||f_{\nu+1}||_{\mathcal{D}_{\nu+1}}+||g_{\nu+1}||_{\mathcal{D}_{\nu+1}}\leq \gamma^{n+m+2}_{0}s^{m}_{\nu+1}\mu_{\nu+1}.
\end{equation*}
\end{proof}
\subsection{The preservation of intersection property}
In Section \ref{subsec-4.2.3}, we constructed a translation $\mathscr{V}_{\nu+1}$ such that the frequency $\omega_{0}(r_{0})$ remains unchanged. The translation $\mathscr{V}_{\nu+1}$ turns $\bar{\mathscr{F}}_{\nu+1}$ into $\mathscr{F}_{\nu+1}=\bar{\mathscr{F}}_{\nu+1}\circ\mathscr{V}_{\nu+1}$, but may destroy the intersection property. To restore it, we construct a conjugation of $ \bar {\mathscr{F}}_{\nu+1} $ that has the same properties as $ \bar {\mathscr{F}}_{\nu+1} $. Denote by $\hat{\mathscr{F}}_{\nu+1}$ the conjugation of $\bar{\mathscr{F}}_{\nu+1}$, that is, $\hat{\mathscr{F}}_{\nu+1}=\mathscr{V}^{-1}_{\nu+1}\circ\bar{\mathscr{F}}_{\nu+1}\circ\mathscr{V}_{\nu+1}$, where
\begin{equation*}
\mathscr{V}^{-1}_{\nu+1}: \theta^{1}_{\nu+1}\rightarrow \theta^{1}_{\nu+1},\quad \hat{r}^{1}_{\nu+1}\rightarrow \hat{r}^{1}_{\nu+1}-r^{*}_{\nu+1}.
\end{equation*}
Therefore, $\hat{\mathscr{F}}_{\nu+1}$ has the form
\begin{equation*}\hat{\mathscr{F}}_{\nu+1}:
\begin{cases}
\theta^{1}_{\nu+1}=\theta_{\nu+1}+\omega_{0}(r_{0})+f_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon),\\
\hat{r}^{1}_{\nu+1}=\hat{r}_{\nu+1}-\sum\limits^{\nu+1}_{i=0}r^{*}_{i}+g_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon).
\end{cases}
\end{equation*}
This ensures that the mapping $\hat{\mathscr{F}}_{\nu+1}$ still has the intersection property.
\section{Proof of the main results}\label{SEC5}
In this section, we will show the proof of Theorem \ref{theorem-1}, Corollary \ref{cor-1} and Theorem \ref{theorem-2} successively.
\subsection{Proof of Theorem \ref{theorem-1}}
\setcounter{equation}{0}
\subsubsection{Iteration lemma}
The iteration lemma guarantees the inductive construction of the transformations in all KAM steps. Let $s_{0}$, $h_{0}$, $\gamma_{0}$, $\mu_{0}$, $\mathscr{F}_{0}$, $\mathcal{D}_{0}$ be given in Section \ref{section-2}, set $K_{0}=0$, $r^{*}_{0}=0$, and let $0<\rho<1$ be a constant. We define the following sequences inductively for all $\nu=0,1,2,\cdots$.
\begin{align*}
h_{\nu+1}&=\frac{h_{\nu}}{2}+\frac{h_{0}}{4},\\
s_{\nu+1}&=\frac{s_{\nu}}{2},\\
\mu_{\nu+1}&=\mu^{1+\rho}_{\nu},\\
K_{\nu+1}&=([\log\frac{1}{\mu_{\nu}}]+1)^{3\eta},\\
\mathcal{D}_{\nu+1}&=\mathcal{D}(h_{\nu+1},s_{\nu+1}),\\
\hat{\mathcal{D}}_{\nu+1}&=\mathcal{D}(h_{\nu+2}+\frac{3}{4}(h_{\nu+1}-h_{\nu+2}),s_{\nu+2}),\\
\Gamma(h_{\nu}-h_{\nu+1})&=\sum_{0<|k|\leq K_{\nu+1}}|k|^{\tau}e^{-|k|\frac{h_{\nu}-h_{\nu+1}}{4}}\leq \frac{4^{\tau}\tau!}{(h_{\nu}-h_{\nu+1})^{\tau}}.
\end{align*}
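A quick check on these sequences (a sketch; immediate from the recursions above) shows that the analyticity radii decrease to a positive limit while the smallness parameters decay super-exponentially:

```latex
% From h_{\nu+1} = h_\nu/2 + h_0/4 one gets, by induction,
\[
h_{\nu}=\frac{h_{0}}{2}+\frac{h_{0}}{2^{\nu+1}}\ \searrow\ \frac{h_{0}}{2},
\qquad
h_{\nu}-h_{\nu+1}=\frac{h_{0}}{2^{\nu+2}},
\qquad
s_{\nu}=\frac{s_{0}}{2^{\nu}},
\qquad
\mu_{\nu}=\mu_{0}^{(1+\rho)^{\nu}},
\]
% so the iteration never exhausts the angular analyticity domain, and the
% perturbation bound \mu_\nu tends to zero super-exponentially.
```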
\begin{lemma}
Consider mapping \eqref{equation-1} for $\nu=0,1,2,\cdots$. If $\varepsilon_{0}$ is sufficiently small such that $({\rm H1})$--$({\rm H5})$ hold, and
\begin{equation*}
||f_{\nu}||_{\mathcal{D}_{\nu}}+||g_{\nu}||_{\mathcal{D}_{\nu}}\leq \gamma^{n+m+2}_{0}s^{m}_{\nu}\mu_{\nu},
\end{equation*}
then the iteration process described above is valid, and the following properties hold.
\begin{itemize}
\item[$(\rm i)$]
There exists a real analytic transformation $\mathscr{W}_{\nu+1}:=\mathscr{U}_{\nu+1}\circ\mathscr{V}_{\nu+1}$ that satisfies $\mathscr{W}_{\nu+1}\circ\hat{\mathscr{F}}_{\nu+1}=\hat{\mathscr{F}}_{\nu}\circ\mathscr{W}_{\nu+1}$, where
\begin{equation*}\hat{\mathscr{F}}_{\nu+1}:
\begin{cases}
\theta^{1}_{\nu+1}=\theta_{\nu+1}+\omega_{0}(r_{0})+f_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon),\\
\hat{r}^{1}_{\nu+1}=\hat{r}_{\nu+1}-\sum\limits^{\nu+1}_{i=0}r^{*}_{i}+g_{\nu+1}(\theta_{\nu+1},\hat{r}_{\nu+1},\varepsilon).
\end{cases}
\end{equation*}
Also, the transformation $\mathscr{W}_{\nu+1}$ has the estimate
\begin{equation}\label{equation-51}
||\mathscr{W}_{\nu+1}-{\rm id}||_{\mathcal{D}_{\nu+1}}\leq c_{3}\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1}).
\end{equation}
\item[$(\rm ii)$] $\{\hat{r}_{\nu}\}$ is a Cauchy sequence and
\begin{equation*}\label{equation-5.1}
|\hat{r}_{\nu+1}-\hat{r}_{\nu}|\leq c\mu_{\nu}.
\end{equation*}
\item[$(\rm iii)$] The estimate on new perturbations is
\begin{equation*}\label{equation-5.2}
||f_{\nu+1}||_{\mathcal{D}_{\nu+1}}+||g_{\nu+1}||_{\mathcal{D}_{\nu+1}}\leq \gamma^{n+m+2}_{0}s^{m}_{\nu+1}\mu_{\nu+1}.
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
The proof is an induction on $\nu$. It is easy to see that we can take sufficiently small $\varepsilon_{0}$ to ensure that $(\rm H1)-(\rm H5)$ hold. Since
\begin{align*}
\hat{\mathscr{F}}_{\nu+1}&=\mathscr{V}^{-1}_{\nu+1}\circ\bar{\mathscr{F}}_{\nu+1}\circ\mathscr{V}_{\nu+1}\notag\\
&=\mathscr{V}^{-1}_{\nu+1}\circ(\mathscr{U}^{-1}_{\nu+1}\circ\hat{\mathscr{F}}_{\nu}\circ\mathscr{U}_{\nu+1})\circ\mathscr{V}_{\nu+1},
\end{align*}
and $\mathscr{W}_{\nu+1}=\mathscr{U}_{\nu+1}\circ\mathscr{V}_{\nu+1}$, we have that $\mathscr{W}_{\nu+1}\circ\hat{\mathscr{F}}_{\nu+1}=\hat{\mathscr{F}}_{\nu}\circ\mathscr{W}_{\nu+1}$. For the estimate \eqref{equation-51} in $(\rm i)$, see Lemma \ref{lemma-4.6}. In addition, we notice that $(\rm ii)$ is due to \eqref{equation-4.33}, and $({\rm iii})$ follows from Lemma \ref{lemma-4.7}.
\end{proof}
\subsubsection{Convergence}
Observe that
\begin{align}\label{equation-55}
\hat{\mathscr{F}}_{\nu+1}&=\mathscr{W}^{-1}_{\nu+1}\circ\hat{\mathscr{F}}_{\nu}\circ\mathscr{W}_{\nu+1}\notag\\
&=\mathscr{W}^{-1}_{\nu+1}\circ\mathscr{W}^{-1}_{\nu}\circ\hat{\mathscr{F}}_{\nu-1}\circ\mathscr{W}_{\nu}\circ\mathscr{W}_{\nu+1}\notag\\
&=\cdots\notag\\
&=\mathscr{W}^{-1}_{\nu+1}\circ\cdots\circ\mathscr{W}^{-1}_{1}\circ\mathscr{F}_{0}\circ\mathscr{W}_{1}\circ\cdots\circ\mathscr{W}_{\nu+1}.
\end{align}
Denote
\begin{align*}
\mathscr{W}^{\nu+1} &:= \mathscr{W}_{1}\circ\mathscr{W}_{2}\circ\cdots\circ\mathscr{W}_{\nu+1},
\end{align*}
then \eqref{equation-55} implies that
\begin{equation*}
\mathscr{W}^{\nu+1}\circ\hat{\mathscr{F}}_{\nu+1}=\mathscr{F}_{0}\circ\mathscr{W}^{\nu+1}.
\end{equation*}
The sequence of transformations $\{\mathscr{W}^{\nu}\}$ is convergent since
\begin{align}
|| \mathscr{W}^{\nu+1}-\mathscr{W}^{\nu}||_{\mathcal{D}_{\nu+1}}&=||\mathscr{W}_{1}\circ\cdots\circ\mathscr{W}_{\nu}\circ\mathscr{W}_{\nu+1}-\mathscr{W}_{1}\circ\cdots\circ\mathscr{W}_{\nu}||_{\mathcal{D}_{\nu+1}}\notag\\
&\leq||\mathscr{W}^{\nu}||_{\mathcal{D}_{\nu}}||\mathscr{W}_{\nu+1}-{\rm id}||_{\mathcal{D}_{\nu+1}}\notag\\
&\leq\prod\limits^{\nu}_{i=1}(1+c\gamma^{n+m+1}_{0}s^{m}_{i-1}\mu_{i-1}\Gamma(h_{i-1}-h_{i}))c\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1})\notag\\
&\leq c\gamma^{n+m+1}_{0}s^{m}_{\nu}\mu_{\nu}\Gamma(h_{\nu}-h_{\nu+1}).
\end{align}
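The convergence claimed here can be checked against the explicit sequences of the iteration lemma; a sketch, using $h_{\nu}-h_{\nu+1}=h_{0}/2^{\nu+2}$ (which follows from the recursion for $h_{\nu}$) together with the stated bound on $\Gamma$:

```latex
% Gamma(h_\nu - h_{\nu+1}) \le 4^\tau \tau! (2^{\nu+2}/h_0)^\tau grows at most
% geometrically in \nu, while \mu_\nu = \mu_0^{(1+\rho)^\nu} decays
% super-exponentially, hence
\[
\sum_{\nu\geq 0}\mu_{\nu}\,\Gamma(h_{\nu}-h_{\nu+1})
\;\leq\;4^{\tau}\tau!\sum_{\nu\geq 0}\Big(\frac{2^{\nu+2}}{h_{0}}\Big)^{\tau}\mu_{0}^{(1+\rho)^{\nu}}
\;<\;+\infty,
\]
% which gives both the uniform bound on the product over \nu and the
% convergence of the telescoping series for \mathscr{W}^{\nu}.
```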
Therefore, the limits $\mathscr{W}:=\lim\limits_{\nu\rightarrow \infty}\mathscr{W}^{\nu}$ and $\mathscr{F}_{\infty}:=\lim\limits_{\nu\rightarrow\infty}\hat{\mathscr{F}}_{\nu}$ exist, and we deduce that
\begin{equation*}
\mathscr{W}\circ\mathscr{F}_{\infty}=\mathscr{F}\circ\mathscr{W}.
\end{equation*}
It remains to consider the following convergence. By Lemma \ref{lemma-4.5}, one has
\begin{align}
&\omega_{0}(\hat{r}_{1})+f_{0,0}(\hat{r}_{1})=\omega_{0}(r_{0}),\notag\\
&\omega_{0}(\hat{r}_{2})+f_{0,0}(\hat{r}_{2})+f_{0,1}(\hat{r}_{2})=\omega_{0}(r_{0}),\notag\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~\vdots\notag\\
\label{equation-5.48}
&\omega_{0}(\hat{r}_{\nu})+f_{0,0}(\hat{r}_{\nu})+\cdots+f_{0,\nu-1}(\hat{r}_{\nu})=\omega_{0}(r_{0}).
\end{align}
Taking limits on both sides of \eqref{equation-5.48}, we obtain
\begin{equation*}
\omega_{0}(\hat{r}_{\infty})+\sum^{\infty}_{i=0}f_{0,i}(\hat{r}_{\infty})=\omega_{0}(r_{0})=\omega(r_{*}),
\end{equation*}
that is, for given $r_{*}\in E^{\circ}$, the mapping $\mathscr{F}_{\infty}$ on $\mathcal{D}_{\infty}$ becomes the integrable rotation
\begin{equation*}\mathscr{F}_{\infty}:
\begin{cases}
\theta^{1}_{\infty}=\theta_{\infty}+\omega(r_{*}),\\
r^{1}_{\infty}=r_{\infty}-\tilde{r},
\end{cases}
\end{equation*}
where $\omega(r_{*})=p$, and $ p $ is given in advance. Besides, $\tilde{r}=\sum\limits^{\infty}_{i=0}r^{*}_{i} \to 0$ as $ \varepsilon \to 0 $. This completes the proof of the frequency-preserving KAM persistence in Theorem \ref{theorem-1}.
\subsection{Proof of Corollary \ref{cor-1}}
This subsection is devoted to the proof of Corollary \ref{cor-1}. We will show that the assumption on $ \omega(r) $ here implies the transversality condition $({\rm A1})$. In fact, the frequency mapping $\omega(r)$ is injective on $E^{\circ}$, and consequently it is a bijection from $E^{\circ}$ onto $\omega({E^{\circ}})$. Therefore it is a homeomorphism, and by Nagumo's theorem the Brouwer degree satisfies $\deg(\omega,E^{\circ},p)=\pm 1$ for some $ p \in \omega(E^{\circ})^\circ $. Finally, applying Theorem \ref{theorem-1} directly yields the desired frequency-preserving KAM persistence in Corollary \ref{cor-1}.
\subsection{Proof of Theorem \ref{theorem-2}}
This subsection outlines the proof of Theorem \ref{theorem-2}, focusing on the parts that differ from the proof of Theorem \ref{theorem-1}. We shall see that $\varpi_{1}\leq \varpi_{2}$ is used directly through assumption $({\rm A3})$ in the proof of Lemma \ref{lemma-5.2}. Moreover, there is no need to construct a conjugation of $\bar{\mathscr{F}}_{\nu+1}$, since the mapping considered in Theorem \ref{theorem-2} does not have the intersection property. The parts omitted in this section are similar to those described in Section \ref{SEC4}.
Consider the mapping $\mathscr{F}:\mathbb{T}^{n}\times \Lambda\rightarrow \mathbb{T}^{n}$ defined by
\begin{equation*}
\theta^{1}=\theta+\omega(\xi)+\varepsilon f(\theta,\xi,\varepsilon),
\end{equation*}
where $\xi\in\Lambda\subset \mathbb{R}^{n}$ is a parameter and $\Lambda$ is a bounded, connected, closed domain with nonempty interior. For $\theta_{0}\in D(h_{0})$ and $\xi_{0}\in\Lambda_{0}:=\{\xi\in\Lambda|\ |\xi-\xi_{0}|<{\rm dist}\ (\xi_{0},\partial \Lambda)\}$, denote
\begin{equation*}
\theta^{1}_{0}=\theta_{0}+\omega(\xi_{0})+f_{0}(\theta_{0},\xi_{0},\varepsilon),
\end{equation*}
where $\omega(\xi_{0})=\omega(\xi_{*})=q$ for a given $ q\in \mathbb{R}^n $, and $f_{0}(\theta_{0},\xi_{0},\varepsilon)=\varepsilon f(\theta_{0},\xi_{0},\varepsilon)$. The estimate on $||f_{0}||_{D(h_{0})}$ is
\begin{equation*}
||f_{0}||_{D(h_{0})}\leq \gamma^{n+m+2}_{0}\mu_{0},
\end{equation*}
provided that
\begin{equation*}
\varepsilon^{\frac{3}{4}}\varepsilon^{-\frac{1}{8\eta(m+1)}}||f||_{D(h_{0})}\leq 1.
\end{equation*}
Set
\begin{equation*}
\Lambda_{\nu}:=\{\xi\in\Lambda_{\nu-1}:{\rm dist}\ (\xi,\partial\Lambda_{\nu-1}) <\mu_{\nu-1}\}, \;\; \nu \in \mathbb{N}^+.
\end{equation*}
Suppose that after $\nu$ KAM steps, for $\theta_{\nu}\in D(h_{\nu})$ and $\xi_{\nu}\in\Lambda_{\nu}$, the mapping becomes
\begin{equation*}
\theta^{1}_{\nu}=\theta_{\nu}+\omega(\xi_{0})+f_{\nu}(\theta_{\nu},\xi_{\nu},\varepsilon),
\end{equation*}
and one has
\begin{equation*}
||f_{\nu}||_{D(h_{\nu})}\leq \gamma^{n+m+2}_{0}\mu_{\nu}.
\end{equation*}
Introduce a transformation $\mathscr{U}_{\nu+1}:={\rm id}+U_{\nu+1}$ that satisfies $\mathscr{U}_{\nu+1}\circ\bar{\mathscr{F}}_{\nu+1}=\mathscr{F}_{\nu}\circ\mathscr{U}_{\nu+1}$. Then the conjugation $\bar{\mathscr{F}}_{\nu+1}$ of mapping $\mathscr{F}_{\nu}$ is
\begin{equation*}
\bar{\mathscr{F}}_{\nu+1}: \theta^{1}_{\nu+1}=\theta_{\nu+1}+\omega(\xi_{0})+\bar{f}_{\nu+1}(\theta_{\nu+1},\xi_{\nu},\varepsilon).
\end{equation*}
We obtain the homological equation
\begin{equation}\label{eq-56}
U_{\nu+1}(\theta_{\nu+1}+\omega(\xi_{0}))-U_{\nu+1}(\theta_{\nu+1})=\mathcal{T}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},\xi_{\nu},\varepsilon),
\end{equation}
and the new perturbation
\begin{align*}
\bar{f}_{\nu+1}(\theta_{\nu+1},\xi_{\nu},\varepsilon)&=f_{\nu}(\theta_{\nu+1}+U_{\nu+1},\xi_{\nu},\varepsilon)-f_{\nu}(\theta_{\nu+1},\xi_{\nu},\varepsilon)+\mathcal{R}_{K_{\nu+1}}f_{\nu}(\theta_{\nu+1},\xi_{\nu},\varepsilon)\\
&+U_{\nu+1}(\theta_{\nu+1}+\omega(\xi_{0}),\xi_{\nu},\varepsilon)-U_{\nu+1}(\theta_{\nu+1}+\omega(\xi_{0})+\bar{f}_{\nu+1},\xi_{\nu},\varepsilon).
\end{align*}
The homological equation \eqref{eq-56} is uniquely solvable on $D(h_{\nu+1})$, and the new perturbation $\bar{f}_{\nu+1}$ can be solved by the implicit function theorem.
To keep the frequency unchanged, we construct a translation
\begin{equation}\label{5.6}
\mathscr{V}_{\nu+1}: \theta_{\nu+1}\rightarrow \theta_{\nu+1},\quad \tilde{\xi}_{\nu}\rightarrow \tilde{\xi}_{\nu}+\xi_{\nu+1}-\xi_{\nu},
\end{equation}
where $\xi_{\nu+1}$ is to be determined. This translation changes the parameter alone, and the mapping becomes $\mathscr{F}_{\nu+1}=\bar{\mathscr{F}}_{\nu+1}\circ\mathscr{V}_{\nu+1}$, that is,
\begin{equation*}
\mathscr{F}_{\nu+1}: \theta^{1}_{\nu+1}=\theta_{\nu+1}+\omega(\xi_{0})+f_{\nu+1}(\theta_{\nu+1},\xi_{\nu+1},\varepsilon),
\end{equation*}
where the frequency $\omega(\xi_{0})=\omega(\xi_{\nu+1})+\sum\limits^{\nu}_{i=0}f_{0,i}(\xi_{\nu+1})$. The following lemma states that the frequency is preserved.
\begin{lemma}\label{lemma-5.2}
Assume that
\begin{equation*}
({\rm H6})\qquad ||\sum^{\nu}_{i=0}f_{0,i}(\xi_{\nu})||_{\Lambda_{\nu}}\leq c\mu^{\frac{1}{2}}_{0}.
\end{equation*}
Then there exists a $\xi_{\nu+1}\in B_{c\mu_{\nu}}(\xi_{\nu})\subset \Lambda_{0}$ such that
\begin{equation*}
\omega(\xi_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\xi_{\nu+1})=\omega(\xi_{0}).
\end{equation*}
\end{lemma}
\begin{proof}The proof is by induction on $\nu\in\mathbb{N}$. When $\nu=0$, the identity $\omega(\xi_{0})=\omega(\xi_{0})$ holds trivially. When $\nu\geq 1$, assume that
\begin{equation}\label{eq-5.6}
\omega(\xi_{j})+\sum^{\nu-1}_{i=0}f_{0,i}(\xi_{j})=\omega(\xi_{0}),\qquad j=1,2,\cdots,\nu;
\end{equation}
we must verify that
\begin{equation}\label{eq-5.7}
\omega(\xi_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\xi_{\nu+1})=\omega(\xi_{0}).
\end{equation}
By assumptions $({\rm B1})$ and $({\rm B3})$, one has
\begin{equation}\label{DISA11}
\deg\left(\omega(\xi_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\xi_{\nu+1}),\Lambda_{0},\omega(\xi_{0})\right)=\deg\left(\omega(\xi_{\nu+1}),\Lambda_{0},\omega(\xi_{0})\right)\neq0.
\end{equation}
This means that there exists at least one parameter $\xi_{\nu+1}\in\Lambda_{0}$ such that $\omega(\xi_{\nu+1})+\sum\limits^{\nu}_{i=0}f_{0,i}(\xi_{\nu+1})=\omega(\xi_{0})$ holds.
Next, let us verify that $\xi_{\nu+1}\in B_{c\mu_{\nu}}(\xi_{\nu})\subset \Lambda_{\nu}$. From \eqref{eq-5.6} and \eqref{eq-5.7}, one has
\begin{equation*}
\omega(\xi_{\nu})+\sum^{\nu-1}_{i=0}f_{0,i}(\xi_{\nu})=\omega(\xi_{\nu+1})+\sum^{\nu}_{i=0}f_{0,i}(\xi_{\nu+1}),
\end{equation*}
i.e.,
\begin{equation*}
f_{0,\nu}(\xi_{\nu+1})=\omega(\xi_{\nu})-\omega(\xi_{\nu+1})+\sum^{\nu-1}_{i=0}(f_{0,i}(\xi_{\nu})-f_{0,i}(\xi_{\nu+1})).
\end{equation*}
Since $||f_{i}||_{D(h_{i+1})}\leq \gamma^{n+m+2}_{0}\mu_{i}$ for each $ i\geq 0$, one has $[f_{0,i}]_{\varpi_{1}}\leq c\mu_{i}$. Therefore,
\begin{equation*}
| f_{0,i}(\xi_{\nu})-f_{0,i}(\xi_{\nu+1})|\leq c\mu_{i}\varpi_{1}(| \xi_{\nu}-\xi_{\nu+1}|).
\end{equation*}
By $({\rm A3})$, we have
\begin{align*}
| f_{0,\nu}(\xi_{\nu+1})|&=|\omega(\xi_{\nu})-\omega(\xi_{\nu+1})+\sum^{\nu-1}_{i=0}(f_{0,i}(\xi_{\nu})-f_{0,i}(\xi_{\nu+1}))|\\
&\geq|\omega(\xi_{\nu})-\omega(\xi_{\nu+1})|-\sum^{\nu-1}_{i=0}|f_{0,i}(\xi_{\nu})-f_{0,i}(\xi_{\nu+1})|\\
&\geq\varpi_{2}(| \xi_{\nu}-\xi_{\nu+1}|)-c\varpi_{1}(|\xi_{\nu}-\xi_{\nu+1}|)\sum^{\nu-1}_{i=0}\mu_{i}\\
&\geq\frac{\varpi_{2}(|\xi_{\nu}-\xi_{\nu+1}|)}{2}.
\end{align*}
The last inequality uses $\varpi_{1}\leq \varpi_{2}$ and the fact that $\varepsilon$ is sufficiently small that $c\sum\limits^{\nu-1}_{i=0}\mu_{i}\leq \frac{1}{2}$. Thus,
\begin{equation}\label{2.8}
|\xi_{\nu}-\xi_{\nu+1}|\leq \varpi^{-1}_{2}(2|f_{0,\nu}(\xi_{\nu+1})|)\leq \varpi^{-1}_{2}(2c \mu_{\nu})\leq \varpi^{-1}_{1}(2c\mu_{\nu})\leq c\mu_{\nu},
\end{equation}
which is similar to \eqref{equation-4.33}. Moreover,
\begin{equation*}
| \xi_{\nu+1}-\xi_{0}|\leq\sum^{\nu}_{i=0}| \xi_{i+1}-\xi_{i}|\leq c\sum^{\nu}_{i=0}\mu_{i}\leq 2c\mu_{0}.
\end{equation*}
This shows that $\{\xi_{\nu}\}$ is a Cauchy sequence and that $\xi_{\nu+1}\in B_{c\mu_{\nu}}(\xi_{\nu})\subset \Lambda_{0}$.
\end{proof}
We now summarize the standard convergence argument. Denote
\[\mathscr{U}^{\nu+1}:=\mathscr{U}_{1}\circ\cdots\circ\mathscr{U}_{\nu+1},\]
and
\[\mathscr{W}^{\nu+1}:=\mathscr{W}_{1}\circ\cdots\circ\mathscr{W}_{\nu+1}:=(\mathscr{U}_{1}\circ\mathscr{V}_{1})\circ\cdots\circ(\mathscr{U}_{\nu+1}\circ\mathscr{V}_{\nu+1}).\]
Both compositions converge, so one may define $\mathscr{U}:=\lim\limits_{\nu\rightarrow\infty}\mathscr{U}^{\nu}$ and $\mathscr{W}:=\lim\limits_{\nu\rightarrow\infty}\mathscr{W}^{\nu}$. Moreover, we get
\[\omega(\xi_{\infty})+\sum\limits^{\infty}_{i=0}f_{0,i}(\xi_{\infty})=\omega(\xi_{0})=\omega(\xi_{*})=q\]
for fixed $\xi_{*}\in \Lambda^{\circ}$. By \eqref{UUUU}, the conjugation for the dynamical system \eqref{equation-3.1} involves only the angular variable $ \theta \in \mathbb{T}^n $, and, as noted in \eqref{5.6}, the angular variable is unchanged under the translation. Therefore, one has
\begin{equation*}
\mathscr{U}\circ\mathscr{F}_{\infty}=\mathscr{F}\circ\mathscr{U},
\end{equation*}
where $\mathscr{F}_{\infty}$ denotes the limit of $\mathscr{F}_{\nu}$ on $\mathcal{D}_{\infty}$. More precisely, one obtains the integrable rotation
\begin{equation*}
\mathscr{F}_{\infty}:\theta^{1}_{\infty}=\theta_{\infty}+\omega(\xi_{*})
\end{equation*}
with frequency $ \omega(\xi_{*})=q $, where $ q $ is given in advance. In other words, this proves KAM persistence with the prescribed frequency preserved. This completes the proof of Theorem \ref{theorem-2}.
\section*{Acknowledgements}
This work was supported by National Basic Research Program of China (grant No. 2013CB834100), National Natural Science Foundation of China (grant No. 11571065, 11171132, 12071175), Project of Science and Technology Development of Jilin Province, China (grant No. 2017C028-1, 20190201302JC), and Natural Science Foundation of Jilin Province (grant No. 20200201253JC).
\section{Introduction}
Spatial birth and death processes in which the birth and
death rates depend on the configuration of the system
were first studied by Preston (1975). His approach was
to consider the solution of the backward Kolmogorov
equation, and he worked under the restriction that there
were only a finite number of individuals alive at any
time. Under certain conditions, the processes exist and
are temporally ergodic, that is, there exists a unique
stationary distribution. The more general setting
considered here requires only that the number of points
alive in any compact set remains finite at all times.
Specifically, we assume that our population is
represented as a countable subset of points in
a complete, separable
metric space $S$ (typically, $S\subset{\mathbb R}^d$).
We will identify the
subset with the counting measure $\eta$ given by
assigning unit mass to each point, that is, $\eta (B)$ is the
number of points in a set $B\in {\mathcal B}(S)$.
(${\mathcal B}(S)$ will denote the Borel subsets of
$S$.)
We will use the terms
point process and random counting measure
interchangeably.
With this identification in mind, let
${\mathcal N}(S)$ be the collection of counting measures on the
metric space $S$. The state space for
our process will be some subset of ${\mathcal N}(S)$. All processes
and random variables are defined on a complete probability space
$(\Omega ,{\mathcal F},P)$.
The spatial birth and death process is specified in terms
of non-negative functions $\lambda :S\times {\mathcal N}(S)\rightarrow
[0,\infty )$ and
$\delta :S\times {\mathcal N}(S)\rightarrow [0,\infty )$ and a reference measure $
\beta$ on $S$
(typically Lebesgue measure $m_d$, if $S\subset {\mathbb R}^d$). $
\lambda$ is the
birth rate and $\delta$ the death rate. If the point
configuration at time $t$ is $\eta\in {\mathcal N}(S)$, then the probability
that a point in a set $B\subset S$ is added to the configuration
in the next time interval of length $\Delta t$ is approximately
$\int_B\lambda (x,\eta )\beta (dx)\Delta t$ and the probability that a point $
x\in\eta$ is
deleted from the configuration in the next time interval
of length $\Delta t$ is approximately $\delta (x,\eta )\Delta t$. Under these
assumptions, the generator of the process should be of
the form
\begin{equation}AF(\eta )=\int (F(\eta +\delta_x)-F(\eta ))\lambda
(x,\eta )\beta (dx)+\int (F(\eta -\delta_x)-F(\eta ))\delta (x,\eta
)\eta (dx)\label{gener}\end{equation}
for $F$ in an appropriate domain.
Following the work of Preston, spatial birth and death
processes quickly found application in statistics when
Ripley (1977) observed that spatial point patterns could
be simulated by constructing a spatial birth and death
process having the distribution of the desired pattern as
its stationary distribution and then simulating the birth
and death process for a long time, a procedure now
known as Markov chain Monte Carlo.
The two best-known classes of spatial point processes
are Poisson random measures and Gibbs distributions.
\subsection{Poisson random measures}
Let $\beta$ be a $\sigma$-finite measure on $S$, $(S,d_S)$ a complete,
separable metric space. $\xi$ is
a Poisson random measure on $S$ with mean
measure $\beta$ if for each $B\in {\mathcal B}(S)$, $\xi (B)$ has a
Poisson distribution with expectation $\beta (B)$ and $\xi (B)$ and
$\xi (C)$ are independent if $B\cap C=\emptyset$. Taking $\lambda
=\delta\equiv 1$,
the Poisson random measure with mean measure $\beta$
gives the unique stationary distribution for the birth and
death process with generator
\begin{equation}AF(\eta )=\int (F(\eta +\delta_x)-F(\eta ))\beta
(dx)+\int (F(\eta -\delta_x)-F(\eta ))\eta (dx).\label{gen2}\end{equation}
Letting $\mu_{\beta}^0$ denote this distribution, the
stationarity can be checked by verifying that
\[\int_{{\mathcal N}(S)}AF(\eta )\mu_{\beta}^0(d\eta )=0.\]
This assertion follows from the standard identity
\begin{equation}E[\int_Sh(\xi -\delta_x,x)\xi (dx)]=E[\int_Sh(\xi
,x)\beta (dx)].\label{poisid}\end{equation}
See Daley and Vere-Jones (1988), p. 188, Equation (6.4.11).
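The identity \reff{poisid} (the Mecke equation for Poisson processes) can also be checked numerically. The following sketch, assuming a Poisson process on $[0,1]$ with intensity $2$ and the illustrative test function $h(\zeta ,x)=x(1+\zeta ([0,1]))$ (both choices are ours, not from the text), estimates each side of the identity by Monte Carlo; for this $h$ both sides equal $3$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_process(rate, n_samples):
    # each configuration: the points of a Poisson process on [0,1]
    # with constant intensity `rate`
    counts = rng.poisson(rate, size=n_samples)
    return [rng.uniform(0.0, 1.0, size=n) for n in counts]

def h(points, x):
    # illustrative test function h(zeta, x) = x * (1 + zeta([0,1]))
    return x * (1.0 + len(points))

def mecke_lhs(configs):
    # estimates E[ sum_{x in xi} h(xi - delta_x, x) ]
    return float(np.mean([sum(h(np.delete(pts, i), x)
                              for i, x in enumerate(pts))
                          for pts in configs]))

def mecke_rhs(configs, rate, n_grid=200):
    # estimates E[ int_0^1 h(xi, x) * rate dx ] by the midpoint rule
    xs = (np.arange(n_grid) + 0.5) / n_grid
    return float(np.mean([rate * np.mean(h(pts, xs)) for pts in configs]))
```

Calling `sample_poisson_process(2.0, 10000)` and evaluating both estimators returns values close to the common exact value $3$.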
\subsection{Gibbs distributions} Assume that $\beta (S)<\infty$.
Consider the class of spatial point processes specified
through a density (Radon-Nikodym derivative) with
respect to a Poisson point process with mean measure $\beta$,
that is, the distribution of the point process is given by
\begin{equation}\mu_{\beta ,H}(d\eta )=\frac 1{Z_{\beta ,H}}e^{-H
(\eta )}\mu_{\beta}^0(d\eta ),\label{ccol1}\end{equation}
where $H(\eta )$ is referred to as the {\em energy function}, $Z_{
\beta ,H}$ is a
normalizing constant, and $\mu_{\beta}^0$ is the law of a Poisson
process with mean measure $\beta$. Therefore, the
state space for this process is ${\mathcal S}=\{\eta\in {\mathcal N}(S);H
(\eta )<\infty \}$,
the set of configurations with positive density. We
assume that $H$ is hereditary in the sense of Ripley
(1977), that is, $H(\eta )<\infty$ and $\tilde{\eta}\subset\eta$ imply $H(\tilde{\eta })<\infty$.
Ripley showed that such a measure $\mu_{\beta ,H}$ is the
stationary
distribution of a spatial birth and death process. In fact,
there is more than one birth and death
process that has $\mu_{\beta ,H}$ as a
stationary distribution; we simply require that $\lambda (x,\eta
)>0$ if
$H(\eta +\delta_x)<\infty$ and that
$\lambda$ and $\delta$ satisfy
\begin{equation}\lambda (x,\eta )e^{-H(\eta )}=\delta (x,\eta +\delta_
x)e^{-H(\eta +\delta_x)}.\label{eqcol1}\end{equation}
This equation is a detailed balance condition which
ensures that births from $\eta$ to $\eta +\delta_x$ match deaths from
$\eta +\delta_x$ to $\eta$ and that the process is time-reversible with
\reff{ccol1} as its stationary distribution.
Again, this assertion can be verified by showing that
\[\int AF(\eta )\mu_{\beta ,H}(d\eta )=\frac 1{Z_{\beta ,H}}\int
AF(\eta )e^{-H(\eta )}\mu^0_{\beta}(d\eta )=0.\]
This
identity again follows from (\ref{poisid}).
Notice that equation (\ref{eqcol1}) says that any pair of
birth and death rates such that
\[\frac {\lambda (x,\eta )}{\delta (x,\eta +\delta_x)}=\exp\{-H(\eta
+\delta_x)+H(\eta )\}\]
will give rise to a process with stationary distribution
given by \reff{ccol1}. We can always
take $\delta (x,\eta )=1$, that is, whenever a point is added to the
configuration, it lives an exponential length of time
independently of the configuration of the process.
For example, consider a spatial point process on a
compact set $S\subset{\mathbb R}^d$ given by a Gibbs distribution with
pairwise interaction potential $\rho (x_1,x_2)\ge 0$, that is, for
$\eta =\sum_{i=1}^m\delta_{x_i}$,
\begin{eqnarray}
H_{\rho}(\eta )&=&\sum_{i<j}\rho (x_i,x_j)\label{eqcol1a}\\
&=&\frac 12[\int\int\rho (x,y)\eta (dx)\eta (dy)-\int\rho (x,x)\eta
(dx)]\nonumber\end{eqnarray}
and the distribution of the point process is absolutely
continuous with respect to the spatially homogeneous
Poisson process with constant intensity $1$ (or equivalently
with Lebesgue mean measure) on $S$.
Taking $\delta (x,\eta )\equiv 1$ and $\lambda (x,\eta )=\exp\{-\int
\rho (x,y)\eta (dy)\}$,
the distribution determined by
(\ref{eqcol1a}) is the stationary distribution for the birth
and death process
with infinitesimal generator
\begin{equation}\label{eqcol2}AF(\eta )=\int e^{-\int\rho (x,y)\eta
(dy)}(F(\eta +\delta_x)-F(\eta ))dx+\int (F(\eta -\delta_x)-F(\eta
))\eta (dx).\end{equation}
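The generator \reff{eqcol2} suggests a direct simulation scheme of the kind used by Ripley: wait an exponential time with rate equal to the total event rate, then perform a birth or a death with the appropriate probabilities. The sketch below is a minimal illustration on $S=[0,1]^2$ with a hypothetical soft-core potential $\rho (x,y)=\theta\,\mathbf{1}\{|x-y|<R\}$; approximating the total birth rate $\int_S\lambda (x,\eta )\,dx$ on a grid is a discretization choice for illustration, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(1)

THETA, R = 2.0, 0.1   # hypothetical potential parameters

def pair_potential(x, y):
    # rho(x, y) = THETA * 1{|x - y| < R}: soft repulsion within range R
    return THETA if np.linalg.norm(np.asarray(x) - np.asarray(y)) < R else 0.0

def birth_rate(x, points):
    # lambda(x, eta) = exp(- sum_{y in eta} rho(x, y))
    return np.exp(-sum(pair_potential(x, y) for y in points))

def simulate(t_end, grid_n=20):
    # Gillespie-type simulation on S = [0,1]^2 with unit death rate;
    # the total birth rate over S is approximated on a grid_n x grid_n grid.
    cell = 1.0 / grid_n
    centers = np.array([[(i + 0.5) * cell, (j + 0.5) * cell]
                        for i in range(grid_n) for j in range(grid_n)])
    points, t = [], 0.0
    while True:
        w = np.array([birth_rate(c, points) for c in centers]) * cell ** 2
        total_birth, total_death = w.sum(), float(len(points))
        t += rng.exponential(1.0 / (total_birth + total_death))
        if t > t_end:
            return points
        if rng.uniform() < total_birth / (total_birth + total_death):
            # birth: pick a grid cell by weight, place the point within it
            k = rng.choice(len(centers), p=w / total_birth)
            points.append(centers[k] + rng.uniform(-cell / 2, cell / 2, size=2))
        else:
            # death: each point dies at unit rate, so pick one uniformly
            points.pop(rng.integers(len(points)))
```

Running the chain for a long time yields an approximate draw from the pairwise-interaction Gibbs distribution, which is exactly the Markov chain Monte Carlo procedure described above.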
Another
example is the area-interaction point process introduced by
Baddeley and Van Lieshout (1995). This point process is
absolutely continuous with respect to the spatial Poisson
process with Lebesgue mean measure $m_d$ on $S\subset{\mathbb R}^d$ and
$H(\eta )=-\eta (S)\log\rho +m_d(\eta\oplus G)\log\gamma$, so the
Radon-Nikodym derivative is given by
\begin{equation}L(\eta )=\frac 1Z\rho^{\eta (S)}\gamma^{-m_d(\eta
\oplus G)}.\label{eqcol3}\end{equation}
Again, $Z$ is the normalizing constant, $\rho$ and $\gamma$ are
positive parameters, and $G$ is a compact (typically
convex) subset of ${\mathbb R}^d$ referred to as the {\em grain}. The set
$\eta\oplus G$ is given by
\[\eta\oplus G=\cup \{x\oplus G;x\in\eta \}.\]
The parameter $\gamma$ controls the area-interaction among
the points of $\eta$: the process is \emph{attractive} if
$\gamma >1$ and \emph{repulsive} if $\gamma <1$. (See Lemma
\ref{mono}.) If $\gamma =1$ the
point process is just the Poisson random measure with mean
measure $\rho m_d$. The case $\gamma >1$ is related
to the \emph{Widom--Rowlinson model} introduced by
Widom and Rowlinson (1970). The case of
\emph{area-exclusion} corresponds to a suitable limit
$\gamma\to 0$. A birth and death process with
stationary distribution given by the area-interaction
distribution can be obtained by taking the
unit death rate and the birth rate given by
\begin{equation}\label{eqcol4}\lambda (x,\eta )=\rho\,\gamma^{-m_d(
(x+G)\setminus (\eta\oplus G))}.\end{equation}
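The birth rate \reff{eqcol4} requires the area of the part of the translated grain $x+G$ not already covered by $\eta\oplus G$. For a disc-shaped grain, this sketch estimates that area by Monte Carlo; the disc grain and the sample size are illustrative choices, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def uncovered_area(x, points, radius, n_mc=20000):
    # Monte Carlo estimate of m_d((x + G) \ (eta + G)) for a disc grain G
    # of the given radius: the area of the disc around x not already
    # covered by the discs around the existing points.
    ang = rng.uniform(0.0, 2 * np.pi, n_mc)
    rad = radius * np.sqrt(rng.uniform(0.0, 1.0, n_mc))  # uniform in the disc
    samples = np.asarray(x) + np.column_stack([rad * np.cos(ang),
                                               rad * np.sin(ang)])
    if len(points) == 0:
        covered = np.zeros(n_mc, dtype=bool)
    else:
        d = np.linalg.norm(samples[:, None, :] - np.asarray(points)[None, :, :],
                           axis=2)
        covered = (d < radius).any(axis=1)
    return np.pi * radius ** 2 * np.mean(~covered)

def area_interaction_birth_rate(x, points, rho, gamma, radius):
    # lambda(x, eta) = rho * gamma^{- m_d((x + G) \ (eta + G))}
    return rho * gamma ** (-uncovered_area(x, points, radius))
```

With $\gamma =1$ the exponent is irrelevant and the rate reduces to the constant $\rho$, matching the Poisson case noted above; with $\gamma >1$ (attractive) the rate is largest where the new grain is already mostly covered.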
\subsection{Overview}
The spatial birth and death processes that correspond to
the Gibbs distributions discussed above involve finite
configurations and it is straightforward to see that they
are uniquely characterized by their birth and death
rates, for example, as solutions of the martingale
problem associated with the generator $A$ given in
(\ref{gener});
however, if the configurations are infinite and the
total birth and death rates are infinite,
the existence and uniqueness of the processes are not so
clear.
In Section \ref{section2}, we represent these processes
as solutions of a system of stochastic equations and give
conditions for existence and uniqueness of solutions for
the equations as well as for the corresponding
martingale problems. These equations are very
useful in studying the asymptotic behavior of the birth and
death processes,
including temporal and/or spatial-ergodicity and the speed of
convergence to the stationary distribution.
The uniqueness conditions given here are direct analogs
of Liggett's (1972) conditions for existence and uniqueness
for lattice indexed interacting particle systems.
Stochastic equations for lattice indexed systems were
formulated in Kurtz (1980) using time-changed Poisson
processes and existence and uniqueness given under
Liggett's conditions. Stochastic equations for spatial
birth and death processes of the type considered here
were formulated in Garcia (1995) using a spatial version
of the time-change approach. Existence and
uniqueness were again given under analogs of Liggett's
conditions.
One disadvantage to the time-change approach taken in
Kurtz (1980) and Garcia (1995) is that the filtration to
which the process is adapted depends on the solution.
A stochastic equation that
avoids this difficulty can be formulated by representing
the birth process as a thinning of a Poisson random
measure. Intuitively, the approach is analogous to the
rejection method for simulating random variables. The
fact that counting processes and more general marked
counting processes can be obtained by thinning Poisson
random measures is well known, particularly in the
context of simulation. (See, for example, Daley and
Vere-Jones (2003), Section 7.5.) Stochastic equations exploiting
this approach were formulated for
lattice systems in Kurtz and Protter (1996) and for
general spatial birth processes by Massouli\'e (1998). In
both cases, uniqueness was obtained under conditions
analogous to Liggett's.
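The thinning construction referred to above is easy to illustrate in the simplest setting: a point process on an interval with a bounded intensity function is realized by keeping each point of a dominating homogeneous Poisson process with the appropriate probability, exactly as in the rejection method. A minimal sketch (the specific intensity function below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(3)

def thin_poisson(intensity, bound, t_end):
    # Realize a point process on [0, t_end] with intensity function
    # `intensity` (assumed <= bound everywhere) by thinning a dominating
    # homogeneous Poisson process of rate `bound`: a candidate point x is
    # kept when its uniform mark u in [0, bound] satisfies u < intensity(x).
    n = rng.poisson(bound * t_end)
    candidates = rng.uniform(0.0, t_end, size=n)
    marks = rng.uniform(0.0, bound, size=n)
    return np.sort(candidates[marks < intensity(candidates)])
```

For example, with intensity $1+\cos x$ on $[0,2\pi ]$ and bound $2$, the mean number of retained points is $\int_0^{2\pi}(1+\cos x)\,dx=2\pi$, which a Monte Carlo average reproduces.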
Section \ref{sectiondeath} considers temporal
ergodicity for birth and death
processes in noncompact $S$ (more precisely, $S$ and $\beta$
with $\beta (S)=\infty$) and spatial ergodicity
for $S={\mathbb R}^d$ and translation invariant
birth rates.
We give conditions for ergodicity and
exponential convergence to the stationary distribution.
It is well known that these processes are
temporally and spatially ergodic if the birth and death
rates are constant (the stationary measure being
Poisson). More generally, in Theorem \ref{thte}, we show
that
if the birth rate
satisfies the conditions of Theorem \ref{thmeu} with
$M<1$ and the death rate is constant ($\delta\equiv 1$),
the system is temporally ergodic and for every initial
distribution, the distribution of the solution converges at an
exponential rate to the stationary distribution.
For $S={\mathbb R}^d$ and $\lambda$ translation invariant, from the
stochastic equation, we see that spatial ergodicity of the
initial distribution implies spatial ergodicity of the
solution at each time $0<t<\infty$.
Unfortunately, it is not
clear, in general, how to carry this
conclusion over to $t=\infty$, that
is, to the limiting distribution of the solution, although
in the case $M<1$,
spatial ergodicity holds for the unique stationary
distribution as well.
We give some additional
conditions under which spatial ergodicity of
the limiting distribution can be obtained.
Fern\'{a}ndez,
Ferrari and Garcia (2002) study ergodicity of spatial birth
and death processes using
a graphical representation
to construct the stationary distribution that is closely
related to the stochastic equations we consider here.
They give conditions for an exponential rate of
convergence to the stationary distribution and for
spatial ergodicity of the stationary distribution similar
to those given here, but for a more restricted class of
models.
Throughout, $\overline {C}(S)$ will denote the space of bounded
continuous functions on $S$ and ${\mathcal B}(S)$ the Borel subsets of
$S$.
The stochastic equations we consider will be driven by a Poisson
random measure $N$ on $U\times [0,\infty )$ for an appropriate space
$U$, having mean measure of the form $\nu\times m_1$, where $m_1$ is
Lebesgue measure on $[0,\infty )$. Then for $B\in {\mathcal B} (U)$ with
$\nu (B)<\infty$, $N(B,t)\equiv N(B\times [0,t])$ is just an ordinary
Poisson process with intensity $\nu (B)$. For a filtration $\{{\mathcal
F}_t \}$, we say that $N$ is {\em compatible\/} with $\{{\mathcal
F}_t\}$ if and only if for each $B\in {\mathcal B}(U)$ with $\nu
(B)<\infty$, $N(B,\cdot )$ is $\{{\mathcal F}_t\}$-adapted and
$N(B,t+s)-N(B,t)$ is independent of ${\mathcal F}_t$ for $s,t\geq 0$.
\setcounter{equation}{0}
\section{Spatial birth and death processes as solutions of
stochastic equations} \label{section2}
A birth and death process as described in the previous
section can be represented as the solution of a system
of stochastic equations. The approach is similar to
Garcia (1995) where such processes were obtained as
solutions of time-change equations. We assume that the
individuals in the birth and death process are
represented by points in a Polish space $S$. Typically, $S$
will be ${\mathbb R}^d$, ${\mathbb Z}^d$, or a subset of one of these, but we do
not rule out more general spaces. Let $K_1\subset K_2\subset\cdots$
satisfy $\cup_kK_k=S$, and let $c_k\in\overline {C}(S)$ satisfy $c_k\geq
0$ and
$\inf_{x\in K_k}c_k(x)>0$. ${\mathcal N}(S)$ will denote the collection of
counting measures on $S$ and ${\mathcal S}$ will denote
$\{\zeta\in {\mathcal N}(S):\int_Sc_k(x)\zeta (dx)<\infty ,k=1,2,\ldots
\}$. Without loss of
generality, we can assume that $c_1\leq c_2\leq\cdots$. Let
${\mathcal C}=\{f\in\overline {C}(S):|f|\leq ac_k\mbox{\rm \ for some }k\mbox{\rm \ and }
a>0\}$, and topologize ${\mathcal S}$
by the weak$*$ topology generated by ${\mathcal C}$, that is, $\zeta_
n\rightarrow\zeta$ if
and only if $\int_Sfd\zeta_n\rightarrow\int_Sfd\zeta$ for all $f\in
{\mathcal C}$. (Note that ${\mathcal C}$ is
linear and that, with this topology, ${\mathcal S}$ is Polish.) $D_{
{\mathcal S}}[0,\infty )$
will denote the space of cadlag ${\mathcal S}$-valued functions with
the Skorohod ($J_1$) topology. We assume that $\lambda$ and $\delta$ are
nonnegative,
Borel measurable functions on $S\times {\mathcal C}$.
Let $\beta$ be a $\sigma$-finite, Borel measure on $S$.
We assume
\begin{condition}\label{bound}For
each compact ${\mathcal K}\subset {\mathcal S}$, the birth rate $\lambda$ satisfies
\begin{equation}\sup_{\zeta\in {\mathcal K}}\int_Sc_k(x)\lambda (x,\zeta
)\beta (dx)<\infty ,\quad k=1,2,\ldots ,\label{brest}\end{equation}
and
\begin{equation}\delta (x,\zeta )<\infty ,\quad\zeta\in {\mathcal S},
\quad x\in\zeta .\label{ficond}\end{equation}
\end{condition}
We also assume that $\lambda$ and $\delta$ satisfy the following
continuity condition.
\begin{condition}\label{contcnd}
If
\begin{equation}\lim_{n\rightarrow\infty}\int_Sc_k(x)|\zeta_n-\zeta
|(dx)=0,\label{ptwise}\end{equation}
for each $k=1,2,\ldots$, then
\begin{equation}\lambda (x,\zeta )=\lim_{n\rightarrow\infty}\lambda
(x,\zeta_n),\quad\delta (x,\zeta )=\lim_{n\rightarrow\infty}\delta
(x,\zeta_n).\label{ccond}\end{equation}
\end{condition}
Note that since \reff{ptwise} implies $\zeta_n$ converges to $\zeta$
in ${\mathcal S}$, the continuity condition \reff{ccond} is weaker than
continuity in ${\mathcal S}$; however, we have the following
condition under which convergence in ${\mathcal S}$ implies
(\ref{ptwise}).
\begin{lemma}\label{varconv}
Suppose $\zeta_0,\zeta_1,\zeta_2,\ldots\in {\mathcal S}$ and $\zeta_
n\leq\zeta_0$, $n=1,2,\ldots$. If
$\zeta_n\rightarrow\zeta$ in ${\mathcal S}$, then (\ref{ptwise}) holds.
\end{lemma}
\begin{proof}
$\zeta_n\leq\zeta_0$ implies that, considered as a measure, $\zeta_
n<<\zeta_0$
and $\frac {d\zeta_n}{d\zeta_0}=\frac {\zeta_n(\{x\})}{\zeta_0(\{
x\})}\leq 1$, almost everywhere $\zeta_0$. Furthermore,
$\zeta_n\rightarrow\zeta$ in ${\mathcal S}$ implies $\zeta_n(\{x\})\rightarrow
\zeta (\{x\})$ for each $x\in\zeta_0$, since the
support of $\zeta_0$ consists of a countable collection of
isolated points. Consequently,
\[c_k(x)\geq c_k(x)|\frac {\zeta_n(\{x\})}{\zeta_0(\{x\})}-\frac {
\zeta (\{x\})}{\zeta_0(\{x\})}|\rightarrow 0,\]
and since $\int_Sc_k(x)\zeta_0(dx)<\infty$, the dominated convergence
theorem implies
\[\lim_{n\rightarrow\infty}\int_Sc_k(x)|\zeta_n-\zeta |(dx)=\lim_{
n\rightarrow\infty}\int_Sc_k(x)|\frac {\zeta_n(\{x\})}{\zeta_0(\{
x\})}-\frac {\zeta (\{x\})}{\zeta_0(\{x\})}|\zeta_0(dx)=0.\]
\end{proof}
\begin{lemma}
Suppose ${\mathcal H}\subset {\mathcal S}$ and $\zeta_0\in {\mathcal S}$ satisfy $
\zeta\leq\zeta_0$, $\zeta\in {\mathcal H}$. If $\delta$
satisfies (\ref{ficond}) and Condition \ref{contcnd}, then
\[\sup_{\zeta\in {\mathcal H}}\delta (x,\zeta )<\infty .\]
\end{lemma}
\begin{proof}
${\mathcal H}$ is relatively compact in ${\mathcal S}$, so any sequence $
\{\zeta_n\}\subset {\mathcal H}$
has a subsequence that converges in ${\mathcal S}$
and, by
Lemma \ref{varconv},
satisfies (\ref{ptwise}).
Fix $x\in S$, and
let $\{\zeta_n\}$ satisfy $\lim_{n\rightarrow\infty}\delta (x,\zeta_
n)=\sup_{\zeta\in {\mathcal H}}\delta (x,\zeta )$. Then there
is a subsequence that converges to some $\widehat{\zeta}\in {\mathcal S}$ and
hence, $\sup_{\zeta\in {\mathcal H}}\delta (x,\zeta )=\delta (x,\widehat{
\zeta })<\infty$.
\end{proof}
\begin{lemma}
Suppose that for each $x\in S$, there exists $k(x)$ such that
$\lambda (x,\zeta +\delta_y)=\lambda (x,\zeta )$ for $y\notin K_{
k(x)}$. Then $\lambda$ satisfies
Condition \ref{contcnd}\ and similarly for $\delta$.
\end{lemma}
\begin{proof} Note that $\zeta\in {\mathcal S}$ implies $\zeta (K_{k(x)})<\infty$ and that
$\zeta_n\rightarrow\zeta$ implies that for $n$ sufficiently large, $
\zeta_n$ restricted
to $K_{k(x)}$ coincides with $\zeta$ restricted to $K_{k(x)}$, so
$\lambda (x,\zeta_n)=\lambda (x,\zeta )$. \hfill
\end{proof}
Let $N$ be a
Poisson random measure on $S\times [0,\infty )^3$ with mean
measure $\beta (dx)\times ds\times e^{-r}dr\times du$.
Let $\eta_0$ be an ${\mathcal S}$-valued random variable independent of $
N$,
and let $\widehat{\eta}_0$ be the point process on $S\times [0,\infty
)$ obtained by
associating to each ``count'' in $\eta_0$ an independent, unit
exponential random variable, that is, for $\eta_0=\sum_{i=1}^{\infty}
\delta_{x_i}$,
set
\begin{equation}\widehat{\eta}_0=\sum_{i=1}^{\infty}\delta_{(x_i,\tau_
i)},\label{etahat}\end{equation}
where the $\{\tau_i\}$ are independent unit exponentials,
independent of $\eta_0$ and $N$. The birth and death process $\eta$
should satisfy a stochastic equation of the form
\begin{eqnarray}
\label{stoch1a}\eta_t(B)&=&\int_{B\times [0,t]\times [0,\infty )^
2}\one_{[0,\lambda (x,\eta_{s-})]}(u)\one_{(\int_s^t\delta (x,\eta_
v)\,dv,\infty )}(r)N(dx,ds,dr,du) \nonumber \\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(\int_0^t\delta (x,\eta_s)\,ds,\infty )}(r)\widehat{\eta}_0(d
x,dr).\end{eqnarray}
To be precise, let $\eta$ be a process with sample paths in
$D_{{\mathcal S}}[0,\infty )$ that is adapted to a filtration $\{{\mathcal F}_
t\}$ with respect
to which $N$ is compatible. (Note that \reff{brest}
ensures that the integral with respect to $N$ on the right
exists and determines an ${\mathcal S}$-valued random variable, and
the continuity condition \reff{ccond} and the finiteness
of $\delta (x,\zeta )$ ensure that $\delta (x,\eta_t)$ is a cadlag function of $
t$, so
that the integrals $\int_s^t\delta (x,\eta_v)dv$ exist.) Then $\eta$ is a
solution of \reff{stoch1a} if and only if the identity
\reff{stoch1a} holds almost surely for all $B\in {\mathcal B}(S)$ and
$t\geq 0$ (allowing $\infty =\infty$).
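For a finite configuration with unit death rate, equation \reff{stoch1a} can be solved pathwise by processing the atoms of $N$ in time order: an atom $(x,s,r,u)$ produces a birth at time $s$ if $u\leq\lambda (x,\eta_{s-})$, and the resulting point dies when its exponential clock $r$ runs out, that is, at time $s+r$ when $\delta\equiv 1$. The following sketch implements this reading under simplifying assumptions not made in the text: $S=[0,1]$ with Lebesgue $\beta$, the empty initial configuration, and a user-supplied constant bound dominating $\lambda$ so that only finitely many atoms of $N$ need to be sampled.

```python
import numpy as np

rng = np.random.default_rng(4)

def solve_birth_death(lam, lam_bound, t_end):
    # Pathwise solution of the birth-and-death equation for S = [0,1],
    # unit death rate, and empty initial configuration, driven by a
    # Poisson random measure with mean beta(dx) x ds x e^{-r}dr x du
    # restricted to u in [0, lam_bound]; lam_bound must dominate lam.
    n_atoms = rng.poisson(lam_bound * t_end)
    atoms = sorted(zip(rng.uniform(0.0, t_end, n_atoms),       # birth times s
                       rng.uniform(0.0, 1.0, n_atoms),         # locations x
                       rng.exponential(1.0, n_atoms),          # clocks r
                       rng.uniform(0.0, lam_bound, n_atoms)))  # thinning marks u
    alive = []                               # pairs (death time, location)
    for s, x, r, u in atoms:
        alive = [(d, y) for d, y in alive if d > s]     # drop points dead by s
        if u <= lam(x, [y for _, y in alive]):          # thinning test
            alive.append((s + r, x))                    # the point dies at s + r
    return [y for d, y in alive if d > t_end]           # configuration at t_end
```

As a sanity check, a constant birth rate $\lambda\equiv c$ with $\delta\equiv 1$ gives independent exponential lifetimes, so the population size at large $t$ is approximately Poisson with mean $c$, in line with the stationarity of the Poisson random measure noted in the introduction.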
\begin{lemma}
Suppose Condition \ref{bound} holds. If $\eta$ is a solution of
(\ref{stoch1a}), then for each $T>0$,
\begin{equation}\int_0^T\int_Sc_k(x)\lambda (x,\eta_s)\beta (dx)d
s<\infty\quad a.s.,\label{intbnd}\end{equation}
$\eta^{*}_T$ defined by
\[\eta^{*}_T(B)=\int_{B\times [0,T]\times [0,\infty )^2}\one_{[0,
\lambda (x,\eta_{s-})]}(u)N(dx,ds,dr,du)\]
is an element of ${\mathcal S}$,
\[\eta_t\leq\eta_T^{*}+\eta_0,\quad 0\leq t\leq T,\]
and
\[\lim_{s\rightarrow t+}\int_Sc_k(x)|\eta_s-\eta_t|(dx)=0,\quad t
\geq 0.\]
\end{lemma}
\begin{proof}
Since for almost every $\omega\in\Omega$, the closure of
$\{\eta_s:0\leq s\leq T\}$ is compact, Condition \ref{bound}\ implies
\[\sup_{s\leq T}\int_Sc_k(x)\lambda (x,\eta_s)\beta (dx)<\infty\quad
a.s.,\]
and hence (\ref{intbnd}). Letting
\[\tau_c=\inf\{t:\int_0^t\int_Sc_k(x)\lambda (x,\eta_s)\beta (dx)
ds>c\},\]
we have
\[E[\int_Sc_k(x)\eta^{*}_{T\wedge\tau_c}(dx)]=E[\int_0^{T\wedge\tau_
c}\int_Sc_k(x)\lambda (x,\eta_s)\beta (dx)ds]\leq c,\]
and since $\lim_{c\rightarrow\infty}\tau_c=\infty$ a.s., it follows that
$\int_Sc_k(x)\eta^{*}_T(dx)<\infty$, a.s. implying $\eta^{*}_T\in
{\mathcal S}$ a.s. The last
statement then follows by Lemma \ref{varconv}.
\end{proof}
If $\eta$ is a solution of \reff{stoch1a} and a point at $x$ was
born at time $s\leq t$, then the ``residual clock time''
$r-\int_s^t\delta (x,\eta_v)dv$ is an ${\mathcal F}_t$-measurable random variable. In
particular, the counting-measure-valued process given by
\begin{eqnarray}
\widehat{\eta}_t(B\times D)&=&\int_{B\times [0,t]\times [0,\infty )^2}\one_{
[0,\lambda (x,\eta_{s-})]}(u)\one_D(r-\int_s^t\delta (x,\eta_v)\,
dv)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty )}\one_
D(r-\int_0^t\delta (x,\eta_s)\,ds)\widehat{\eta}_0(dx,dr)\label{stoch2a}\end{eqnarray}
is $\{{\mathcal F}_t\}$-adapted.
Let $\widehat {{\mathcal S}}$ denote the collection of counting measures $
\zeta$ on
$S\times [0,\infty )$ such that $\zeta (\cdot\times [0,\infty ))\in
{\mathcal S}$. We can formulate an
alternative equation for the $\widehat {{\mathcal S}}$-valued process $\widehat{
\eta}$ by
requiring that
\begin{eqnarray}
&&\int_{S\times [0,\infty )}f(x,r)\widehat{\eta}_t(dx,dr)\label{stoch3a}\\
&&\qquad =\int_{S\times [0,\infty )}f(x,r)\widehat{\eta}_0(dx,dr)\nonumber\\
&&\qquad\qquad\qquad +\int_{S\times [0,t]\times [0,\infty )^2}f(x
,r)\one_{[0,\lambda (x,\eta_{s-})]}(u)N(dx,ds,dr,du)\nonumber\\
&&\qquad\qquad\qquad -\int_0^t\int_{S\times [0,\infty )}\delta (x
,\eta_s)f_r(x,r)\widehat{\eta}_s(dx,dr)ds,\nonumber\end{eqnarray}
for all $f\in\widehat {{\mathcal C}}$, where $\widehat {{\mathcal C}}$ is the collection of $
f\in\overline {C}(S\times [0,\infty ))$
such that $f_r\equiv\frac {\partial}{\partial r}f\in\overline {C}(S\times
[0,\infty ))$, $f(x,0)=0$,
$\sup_r|f(\cdot ,r)|,\sup_r|f_r(\cdot ,r)|\in {\mathcal C}$, and there exists $
r_f>0$ such
that $f_r(x,r)=0$ for $r>r_f$. Note that if $f\in\widehat {{\mathcal C}}$ and
\begin{equation}f^{*}(x,r)=\int_0^r|f_r(x,u)|du,\label{absderiv}\end{equation}
then $f^{*}\in\widehat {{\mathcal C}}$.
In \reff{stoch3a}, $\widehat{\eta}_0$ can be
any $\widehat {{\mathcal S}}$-valued random variable that is independent of $
N$.
\subsection{Martingale problems} Let ${\mathcal D}(\widehat {A})$ be the collection
of functions $F$ of the form
$F(\widehat{\zeta })=e^{-\int_{S\times [0,\infty )}f(x,r)\widehat{\zeta }
(dx,dr)}$, for non-negative $f\in\widehat {{\mathcal C}}$.
Suppose that $\widehat{\eta}$ is a solution of (\ref{stoch3a}) with
sample paths in $D_{\widehat {{\mathcal S}}}[0,\infty )$. Assuming Condition \ref{bound},
\begin{equation}\int_0^t\int_Sc_k(x)\lambda (x,\eta_s)\beta (dx)d
s<\infty ,\quad k=1,2,\ldots .\label{intest}\end{equation}
By It\^o's formula
\begin{eqnarray}
F(\widehat{\eta}_t)\hspace{-2mm}&=&\hspace{-2mm}F(\widehat{\eta}_0)+\int_{S\times [0,t]\times [0,\infty
)^2}F(\widehat{\eta}_{s-})(e^{-f(x,r)}-1)\one_{[0,\lambda (x,\eta_{s-}
)]}(u)N(dx,ds,dr,du)\nonumber\\
&&\qquad +\int_0^tF(\widehat{\eta}_s)\int_{S\times [0,\infty )}\delta
(x,\eta_s)f_r(x,r)\widehat{\eta}_s(dx,dr)ds,\label{ito1}\\
&=&\hspace{-2mm}F(\widehat{\eta}_0)+\int_{S\times [0,t]\times [0,\infty )^2}F(\widehat{
\eta}_{s-})(e^{-f(x,r)}-1)\one_{[0,\lambda (x,\eta_{s-})]}(u)\tilde {
N}(dx,ds,dr,du)\nonumber\\
&&\qquad +\int_0^tF(\widehat{\eta}_s)\Big(\int_{S\times [0,\infty )}\lambda
(x,\eta_s)(e^{-f(x,r)}-1)e^{-r}\beta (dx)dr\nonumber\\
&&\qquad\qquad\qquad\qquad +\int_{S\times [0,\infty )}\delta (x,\eta_
s)f_r(x,r)\widehat{\eta}_s(dx,dr)\Big)ds,\nonumber\end{eqnarray}
where
$\tilde {N}(dx,ds,dr,du)=N(dx,ds,dr,du)-\beta (dx)\times ds\times
e^{-r}dr\times du$.
It follows from \reff{intest} that the stochastic integral
term on the right is a local martingale, and since $f^{*}$
given by (\ref{absderiv}) is in $\widehat {{\mathcal C}}$, it follows that
\begin{equation}\int_0^t\int_{S\times [0,\infty )}\delta (x,\eta_
s)|f_r(x,r)|\widehat{\eta}_s(dx,dr)ds<\infty ,\quad t>0.\label{dthint}\end{equation}
Consequently,
defining
\begin{eqnarray}
\widehat {A}F(\widehat{\zeta })&=&F(\widehat{\zeta })\Big(\int_{S\times [0,\infty
)}\lambda (x,\zeta )(e^{-f(x,r)}-1)e^{-r}\beta (dx)dr\nonumber\\
&&\qquad\qquad +\int_{S\times [0,\infty )}\delta (x,\zeta )f_r(x,
r)\widehat{\zeta }(dx,dr)\Big),\label{gendef}\end{eqnarray}
any solution of \reff{stoch3a} must be a solution of the
local martingale problem for $\widehat {A}$. We say that $\widehat{\eta}$ is a
solution of the {\em local martingale problem\/} for $\widehat {A}$ if there
exists a filtration $\{{\mathcal F}_t\}$ such that $\widehat{\eta}$ is $\{
{\mathcal F}_t\}$-adapted and
\begin{equation}M_F(t)=F(\widehat{\eta}_t)-F(\widehat{\eta}_0)-\int_0^t\widehat {
A}F(\widehat{\eta}_s)ds\label{lmp}\end{equation}
is a $\{{\mathcal F}_t\}$-local martingale for each $F\in {\mathcal D}(\widehat {
A})$, that is, for
each $F$ of the form $F(\widehat{\zeta })=e^{-\int fd\widehat{\zeta}}$, $
f\in\widehat {{\mathcal C}}$, $f\geq 0$. In particular,
let $\overline {f}(x)=\sup_rf(x,r)$ and
\[\tau_{f,c}=\inf\{t:\int_0^t\int_S\overline {f}(x)\lambda (x,\eta_s)\beta
(dx)ds>c\}.\]
Then $M_F(\cdot\wedge\tau_{f,c})$ is a martingale. Note that $\tau_{
f,c}$ is a
$\{{\mathcal F}^{\eta}_t\}$-stopping time.
Conversely, if $\widehat{\eta}$ is a solution of the local martingale
problem for $\widehat {A}$ with sample paths in $D_{\widehat {{\mathcal S}}}
[0,\infty )$, then
under Condition \ref{bound}, (\ref{intest}) and (\ref{dthint})
hold. If $\gamma\in C_{{\Bbb R}}[0,\infty )$ has compact support and
$f(x,r)=c_k(x)\int_0^r\gamma (u)\,du$, then $f\in\widehat {{\mathcal C}}$ and it follows that
\[\int_0^t\int_{S\times [0,\infty )}c_k(x)\delta (x,\eta_s)\widehat{\eta}_s(dx,dr)ds<\infty
,\quad t>0.\]
To formulate the main theorem of this section, we need
to introduce the notion of a {\em weak solution\/} of a
stochastic equation.
\begin{definition}
{\rm A stochastic process $\tilde{\eta}$ with sample paths in $D_{\widehat {
{\mathcal S}}}[0,\infty )$ is
a {\em weak solution\/} of \reff{stoch2a} if there exists a
probability space $(\Omega ,{\mathcal F},P)$, a Poisson random measure $
N$
on $S\times [0,\infty )^3$ with mean measure $\beta (dx)\times ds
\times e^{-r}dr\times du$
and a stochastic process $\widehat{\eta}$ defined on $(\Omega ,{\mathcal F}
,P)$, such that
$\tilde{\eta}$ and $\widehat{\eta}$ have the same distribution on $D_{
\widehat {{\mathcal S}}}[0,\infty )$, $\widehat{\eta}$ is
adapted to a filtration with respect to which $N$ is
compatible, and $N$ and $\widehat{\eta}$ satisfy \reff{stoch2a}. }
\end{definition}
\begin{theorem}\label{equivthm}
Suppose that $\lambda$ and $\delta$ satisfy Conditions \ref{bound} and
\ref{contcnd}. Then each solution of the stochastic
equation \reff{stoch2a} (or equivalently, \reff{stoch3a})
is a solution of the local martingale problem for $\widehat {A}$
defined by \reff{gendef}, and
each solution of the local
martingale problem for $\widehat {A}$ is a weak solution of the
stochastic equation.
\end{theorem}
\begin{proof} The first part of the theorem follows from the
discussion above.
To prove the second part, we apply a Markov mapping
result of
Kurtz (1998). Let $\{D_i\}\subset {\mathcal B}(S\times [0,\infty )^2
)$ be countable, closed
under intersections, generate ${\mathcal B}(S\times [0,\infty )^2)$, and satisfy
$\int_{D_i}\beta (dx)e^{-r}\,dr\,du<\infty$. Then $N$ is completely determined
by $N(D_i,t)$. Define
\[Z_i(t)=Z_i(0)(-1)^{N(D_i,t)},\]
where $Z_i(0)$ is $\pm 1$. Note that
\begin{equation}N(D_i,t)=-\frac 12\int_0^tZ_i(s-)dZ_i(s),\label{cdef}\end{equation}
and if the $Z_i(0)$ are iid with
$P\{Z_i(0)=1\}=P\{Z_i(0)=-1\}=\frac 12$ and independent of $N$, then
for each $t\geq 0$, the $Z_i(t)$ are iid and independent of $N$.
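The identity \reff{cdef} is just the statement that each point of $N(D_i,\cdot )$ flips the sign of $Z_i$ and contributes $-\frac 12Z_i(s-)\Delta Z_i(s)=1$ to the integral. A toy script (illustrative only; the function name and parameters are ours, not part of the construction) checks this bookkeeping:

```python
def flip_process(n_jumps, z0=1):
    """Z(t) = z0 * (-1)^N(t); recover N(t) from the pathwise
    sum -(1/2) * Z(s-) dZ(s) over the jumps."""
    z, recovered = z0, 0
    for _ in range(n_jumps):
        z_minus = z
        z = -z                                  # a point of N flips the sign
        recovered += -(z_minus * (z - z_minus)) // 2
    return z, recovered

z, n = flip_process(7)
assert n == 7                                   # the integral counts the jumps
assert z == (-1) ** 7
```

Each jump gives $Z(s-)\Delta Z(s)=Z(s-)(-2Z(s-))=-2$, so every summand equals $1$ and the total recovers the jump count, as in \reff{cdef}.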
For $z\in \{-1,1\}^{\infty}$, we will let $(-1)^{{\bf 1}_D(x,r,u)}
z$ denote
\[(-1)^{{\bf 1}_D(x,r,u)}z=((-1)^{{\bf 1}_{D_1}(x,r,u)}z_1,(-1)^{
{\bf 1}_{D_2}(x,r,u)}z_2,\ldots ).\]
Then $Z=(Z_1,Z_2,\ldots )$ is a solution of the martingale
problems for
\[CG(z)=\int_{S\times [0,\infty )^2}(G((-1)^{{\bf 1}_D(x,r,u)}z)-
G(z))e^{-r}dr\,du\beta (dx).\]
We can take the domain for $C$ to be the collection of
functions that depend on only finitely many coordinates
of $z$. With this domain, the martingale problem for $C$ is
well-posed.
If $\widehat{\eta}$ is a solution of \reff{stoch3a}, then $(\widehat{\eta }
,Z)$ is a
solution of the local martingale problem for
\begin{eqnarray}
\lefteqn{\widehat{\mathbb A}(FG)(\widehat{\zeta },z) \, } \nonumber \\
&=&F(\widehat{\zeta })\Big(\int_{S
\times [0,\infty )^2}\Big(({\bf 1}_{[0,\lambda (x,\zeta )]}(u)e^{
-f(x,r)}+{\bf 1}_{(\lambda (x,\zeta ),\infty )}(u))G((-1)^{{\bf 1}_
D(x,r,u)}z) \nonumber \\
& & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\,-\,
G(z)\Big)e^{-r}\beta (dx)
dr\,du\nonumber\\
&&\qquad\qquad\qquad -G(z)\int_{S\times [0,\infty )}\delta (x,\zeta
)f_r(x,r)\widehat{\zeta }(dx,dr)\Big).\label{gendef2}\end{eqnarray}
Let $\tilde{\eta}$ be a solution of the local martingale problem for
$\widehat {A}$. For $a=(a_1,a_2,\ldots )$ with $a_k>0$, $k=1,2,\ldots$,
define
\begin{eqnarray*}
\lefteqn{\tau_a(t)\,=} \\
& & \hspace{-3mm}\inf\{u:\int_0^u1\vee\sum_{k=1}^{\infty}a_k\left[\int_
Sc_k(x)\lambda (x,\tilde\eta_s)\beta (dx)+\int_Sc_k(x)\delta
(x,\tilde\eta_s)\tilde\eta_s(dx)\right]ds\geq t\},
\end{eqnarray*}
\[H_a(\zeta )=1\vee\sum_{k=1}^{\infty}a_k\left[\int_Sc_k(x)\lambda
(x,\zeta )\beta (dx)+\int_Sc_k(x)\delta (x,\zeta )\zeta
(dx)\right],\]
and $\tilde{\eta}^a_t=\tilde{\eta}_{\tau_a(t)}$. Then $\tilde{\eta}^
a$ is a solution of the martingale
problem for
\begin{equation}\widehat {A}^a\equiv\frac 1{H_a}\widehat {A}.\label{truncgen}\end{equation}
For $F\in\!{\mathcal D}(\widehat {A})$, $\widehat {A}^aF$ is bounded, and we can
select $a^n=(a_1^n,a_2^n,\ldots )$ so that $a^n\geq a^{n+1}$
and $\tau_{a^n}(t)\rightarrow t$ a.s.
Let $\mu (dz)=\prod_{k=1}^{\infty}(\frac 12\delta_{\{-1\}}(dz_k)+\frac
12\delta_{\{1\}}(dz_k))$, and set
$c_G=\int Gd\mu$. Then
\[\int\widehat {{\Bbb A}}(FG)(\widehat{\zeta },z)\mu (dz)=c_G\widehat {A}F(\widehat{
\zeta }),\]
and more generally,
\[\int\frac 1{H_a}\widehat {{\Bbb A}}(FG)(\widehat{\zeta },z)\mu (dz)=c_
G\frac 1{H_a}\widehat {A}F(\widehat{\zeta }).\]
Applying Corollary 3.5 of Kurtz (1998) to $H_a^{-1}\widehat {{\Bbb A}}$ for
each $a$, we conclude that if $\tilde{\eta}$ is a solution of the local
martingale problem for $\widehat {A}$, then there exists a solution
$(\widehat{\eta },Z)$ of the local martingale problem for $\widehat {{\Bbb A}}$ such
that $\widehat{\eta}$ and $\tilde{\eta}$ have the same distribution. Finally,
applying \reff{cdef}, we can construct the corresponding
Poisson random measure $N$ and show that $\widehat{\eta}$ and $N$
satisfy \reff{stoch3a}. \hfill \end{proof}
The natural (local) martingale problem for $\eta$ is really the
martingale problem for $A$ given by \reff{gener};
however, there
will be solutions $\widehat{\eta}$ of the local martingale problem for $
\widehat {A}$
(and hence of the stochastic equation) such that the
corresponding $\eta$ is not a solution of the local martingale
problem for $A$. Intuitively, conditioned on
${\mathcal F}_t^{\eta}=\sigma (\eta_s:s\leq t)$, the residual clock times should be
independent unit exponentials, independent of ${\mathcal F}_t^{\eta}$. That
need not be the case, since we are free to pick the
residual clock times at time zero in any way we please.
It also need not be the case if the solution of the
martingale problem fails to be unique. The following
results clarify the relationship between the martingale
problems for $A$ and $\widehat {A}$.
\begin{proposition}\label{projprop}
Suppose that $\lambda$ and $\delta$ satisfy Conditions \ref{bound} and
\ref{contcnd}.
If $\widehat{\eta}$ is a solution of the local martingale
problem for $\widehat {A}$ and at each time $t$, the residual clock
times are independent of ${\mathcal F}^{\eta}_t$ and are independent unit
exponentials, then $\eta$ is a solution of the local
martingale problem for $A$.
\end{proposition}
\begin{proof}\ By assumption, we can write $\widehat{\eta}_t=\sum_i\delta_{
(X_i(t),R_i(t))}$,
where the $R_i(t)$ are independent unit exponentials,
independent of ${\mathcal F}^{\eta}_t$, and in particular, independent of $
\eta_t$.
For $f\in\widehat {{\mathcal C}}$ and $F(\widehat{\zeta })=e^{-\int_{S\times
[0,\infty )}f(x,r)\widehat{\zeta }(dx,dr)}$, since
(\ref{lmp}) can be localized by $\{{\mathcal F}_t^{\eta}\}$-stopping times, it
follows that
\[E[F(\widehat{\eta}_t)|{\mathcal F}_t^{\eta}]-E[F(\widehat{\eta}_0)|{\mathcal F}^{
\eta}_0]-\int_0^tE[\widehat {A}F(\widehat{\eta}_s)|{\mathcal F}_s^{\eta}]ds\]
is a $\{{\mathcal F}_t^{\eta}\}$-local martingale. By the independence of the
$R_i(t)$,
\[E[F(\widehat{\eta}_t)|{\mathcal F}_t^{\eta}]=\prod_i\int_0^{\infty}e^{-
f(X_i(t),r)}e^{-r}dr=e^{-\int_Sg(x)\eta_t(dx)}\equiv G(\eta_t),\]
where $g$ is defined so that $e^{-g(x)}=\int_0^{\infty}e^{-f(x,r)}
e^{-r}dr$.
Integrating by parts gives
\[\int_0^{\infty}e^{-f(x,r)}f_r(x,r)e^{-r}dr=1-\int_0^{\infty}e^{
-f(x,r)}e^{-r}dr=1-e^{-g(x)},\]
and hence
\begin{eqnarray*}
E[\widehat {A}F(\widehat{\eta}_s)|{\mathcal F}_s^{\eta}]&=&G(\eta_s)\int_{S\times [
0,\infty )}\lambda (x,\eta_s)(e^{-f(x,r)}-1)e^{-r}\beta (dx)dr\\
&&\qquad +\sum_j\left(\prod_{i\neq j}\int_0^{\infty}e^{-f(X_i(s),
r)}e^{-r}dr\right)\\
&&\qquad\qquad\int_0^{\infty}e^{-f(X_j(s),r)}\delta (X_j(s),\eta_
s)f_r(X_j(s),r)e^{-r}dr\\
&=&G(\eta_s)\int_S\lambda (x,\eta_s)(e^{-g(x)}-1)\beta (dx)\\
&&\qquad +\sum_j\left(\prod_{i\neq j}e^{-g(X_i(s))}\right)\delta
(X_j(s),\eta_s)\left(1-e^{-g(X_j(s))}\right)\\
&=&AG(\eta_s),\end{eqnarray*}
and the proposition follows.\hfill \end{proof}
We have the following converse for the previous
proposition.
\begin{theorem}\label{projthm}
Suppose that $\lambda$ and $\delta$ satisfy Conditions \ref{bound} and
\ref{contcnd}.
If $\eta$ is a solution of the local martingale problem for $A$,
then there exists a solution $\widehat{\eta}$ of the local martingale
problem for $\widehat {A}$ such that $\eta$ and $\widehat{\eta }(\cdot\times
[0,\infty ))$ have the same
distribution on $D_{{\mathcal S}}[0,\infty )$ and at each time $t\geq
0$, the
residual clock times are independent, unit exponentials
that are independent of ${\mathcal F}^{\eta}_t$.
\end{theorem}
\begin{proof}
For $\zeta =\sum_i\delta_{x_i}\in {\mathcal S}$, let $\alpha (\zeta ,
d\widehat{\zeta })$ denote the
distribution on $\widehat {{\mathcal S}}$ of $\sum_i\delta_{(x_i,\tau_i)}$, where the $
\tau_i$ are
independent, unit exponential random variables. Then,
by the calculation in the proof of Proposition
\ref{projprop},
\[G(\zeta )=\int_{\widehat {{\mathcal S}}}F(\widehat{\zeta })\alpha (\zeta ,d
\widehat{\zeta })\qquad AG(\zeta )=\int_{\widehat {{\mathcal S}}}\widehat {A}F(\widehat{
\zeta })\alpha (\zeta ,d\widehat{\zeta }),\]
for $F\in {\mathcal D}(\widehat {A})$. More generally,
$A^aG(\zeta )=\int_{\widehat {{\mathcal S}}}\widehat {A}^aF(\widehat{\zeta })\alpha
(\zeta ,d\widehat{\zeta })$, where $\widehat {A}^a$ is defined as in
\reff{truncgen}. The theorem then follows by Corollary
3.5 of Kurtz (1998).\hfill
\end{proof}
\begin{corollary}
Let $\nu\in {\mathcal P}({\mathcal S})$, and define $\widehat{\nu}\in {\mathcal P}
(\widehat {{\mathcal S}})$ by
\[\int_{\widehat {{\mathcal S}}}hd\widehat{\nu }=\int_{{\mathcal S}}\int_{\widehat {{\mathcal S}}}
h(\widehat{\zeta })\alpha (\zeta ,d\widehat{\zeta })\nu (d\zeta ).\]
If uniqueness holds for
the martingale problem for $(\widehat {A},\widehat{\nu })$, or equivalently, weak
uniqueness holds for the stochastic equation
\reff{stoch3a}, then uniqueness holds for the martingale
problem for $(A,\nu )$.
\end{corollary}
\begin{proof}
If $\eta$ is a solution of the martingale problem for
$(A,\nu )$, then Theorem \ref{projthm} gives a corresponding
solution of the martingale problem for $(\widehat {A},\widehat{\nu })$.
Uniqueness for the latter then implies uniqueness of the
former.\hfill \end{proof}
\subsection{Existence}
We now turn to the question of existence of solutions of
(\ref{stoch1a}). We assume that Conditions \ref{bound}\
and \ref{contcnd} hold. The pair $(\lambda ,\delta )$ will be called
{\em attractive\/} if $\zeta_1\subset\zeta_2$ implies $\lambda (x
,\zeta_1)\leq\lambda (x,\zeta_2)$ and
$\delta (x,\zeta_1)\geq\delta (x,\zeta_2)$. If $(\lambda ,\delta
)$ is attractive and we set
$\eta^0\equiv 0$, then $\eta^n$ defined by
\begin{eqnarray}
\eta^{n+1}_t(B)&=&\int_{B\times [0,t]\times [0,\infty )^2}\one_{[
0,\lambda (x,\eta^n_{s-})]}(u)\one_{(\int_s^t\delta (x,\eta^n_v)\,
dv,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(\int_0^t\delta (x,\eta^n_s)\,ds,\infty )}(r)\widehat{\eta}_0
(dx,dr)\label{stocheqe}\end{eqnarray}
is monotone increasing and either $\eta^n$ converges to a
process with values in ${\mathcal S}$, or
\begin{equation}\int_0^T\int_Sc_k(x)\lambda (x,\eta^n_s)\beta (dx
)ds\rightarrow\infty ,\label{diverg}\end{equation}
for some $T$ and $k$. To see this, let
\[\tau_c^n=\inf\{t:\int_0^t\int_Sc_k(x)\lambda (x,\eta^n_s)\beta
(dx)ds>c\}.\]
Then
\begin{eqnarray*}
&&E[\sup_{t\leq T\wedge\tau_c^n}\left(\int_Sc_k(x)\eta_t^{n+1}(dx
)-\int_{S\times [0,\infty )}c_k(x)\one_{(\int_0^t\delta (x,\eta^n_s)\,d
s,\infty )}(r)\widehat\eta_0(dx,dr)\right)]\\
&&\qquad\leq E[\int_0^{T\wedge\tau_c^n}\int_Sc_k(x)\lambda (x,\eta^
n_s)\beta (dx)ds]\\
&&\qquad\leq c,\end{eqnarray*}
and $\tau_c^1\geq\tau_c^2\geq\cdots$. Either
\begin{equation}\lim_{c\rightarrow\infty}\lim_{n\rightarrow\infty}
\tau_c^n=\infty\label{tauinf}\end{equation}
or
(\ref{diverg}) holds for some $T$.
If (\ref{tauinf}) holds almost surely, the limit $\eta^{\infty}$ is the
minimal solution of (\ref{stoch1a}) in the sense that any
other solution $\eta$ will satisfy $\eta^{\infty}_t(B)\leq\eta_t(
B)$ for all
$B\in {\mathcal B}(S)$ and $t\geq 0$.
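For an attractive pair, the monotone scheme above can be exercised numerically. The following sketch is illustrative only: it uses finitely many sites, $\delta \equiv 1$ (so each accepted point lives exactly its exponential residual time $r$), a fixed pool of driving points standing in for the dominating Poisson measure, and a toy nondecreasing rate. It re-thins the same driving noise at each iteration and checks that the configurations $\eta^n$ increase with $n$, as attractiveness guarantees.

```python
import random

random.seed(1)
T, LAM_MAX = 5.0, 3.0
# driving points (site x, birth time s, lifetime r, thinning mark u);
# a fixed pool standing in for the dominating Poisson measure
points = [(x, random.uniform(0, T), random.expovariate(1.0),
           random.uniform(0, LAM_MAX))
          for x in range(4) for _ in range(30)]

def lam(x, config):
    # toy attractive rate: nondecreasing in the configuration
    return min(LAM_MAX, 0.5 + 0.5 * len(config))

def alive(accepted, t):
    # configuration just before time t (eta_{t-})
    return [p for p in accepted if p[1] < t < p[1] + p[2]]

def iterate(prev):
    """One step eta^n -> eta^{n+1}: re-thin the driving points,
    accepting a birth at (x, s) when u <= lambda(x, eta^n_{s-})."""
    return [p for p in sorted(points, key=lambda q: q[1])
            if p[3] <= lam(p[0], alive(prev, p[1]))]

etas = [[]]                        # eta^0 = empty configuration
for _ in range(6):
    etas.append(iterate(etas[-1]))
for a, b in zip(etas, etas[1:]):   # monotone increasing, as claimed
    assert set(a) <= set(b)
```

Since the rate is nondecreasing, each iteration sees a configuration at least as large as the previous one and hence accepts at least the same driving points; with a finite driving pool the iterates stabilize at the minimal solution.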
For an arbitrary pair $(\lambda ,\delta )$ satisfying Conditions \ref{bound} and
\ref{contcnd}, we define an attractive pair by setting
\[\overline{\lambda }(x,\zeta )=\sup_{\zeta'\subset\zeta}\lambda (x,\zeta'
)\qquad\underline {\delta}(x,\zeta )=\inf_{\zeta'\subset\zeta}\delta
(x,\zeta').\]
Let $\eta_0$ be an
${\mathcal S}$-valued random variable independent of $N$, and let $\widehat{
\eta}_0$ be
defined as in (\ref{etahat}).
We assume that $\overline{\lambda}$ satisfies (\ref{brest}), which implies
\begin{equation}\int c_k(x)\overline{\lambda }(x,\zeta )\beta (dx)<\infty
,\quad\zeta\in {\mathcal S},\ k=1,2,\ldots ,\label{atbnd}\end{equation}
and that
there exists a solution $\overline{\eta}$ for the
pair $(\overline{\lambda },\underline {\delta})$.
We consider a different sequence of approximate
equations. Let $\{K_n\}$ be the sets in the definition of ${\mathcal C}$,
and let $\eta^n$ satisfy
\begin{eqnarray}
\lefteqn{\eta^n_t(B) \, =} \nonumber \\
&&\hspace{-2mm}\int_{B\times [0,t]\times [0,\infty )^2}\one_{[0,\lambda
(x,\overline{\eta}_{s-}\cap K_n\cap\eta^n_{s-})]}(u)\one_{(\int_s^t\delta
(x,\overline{\eta}_v\cap K_n\cap\eta^n_v)\,dv,\infty )}(r)N(dx,ds,dr,d
u)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(\int_0^t\delta (x,\overline{\eta}_v\cap K_n\cap\eta^n_v)\,dv,
\infty )}(r)\widehat{\eta}_0(dx,dr).\label{stocheqe2}\end{eqnarray}
Existence and uniqueness for this equation follow from
the fact that only finitely many births can occur in a
bounded time interval in $K_n$. Consequently, the equation
can be solved from one such birth to the next.
Since $\lambda (x,\overline{\eta}_{s-}\cap K_n\cap\eta^n_{s-})\leq\overline{
\lambda }(x,\overline{\eta}_{s-})$ and
$\delta (x,\overline{\eta}_s\cap K_n\cap\eta^n_s)\geq\underline {\delta}
(x,\overline{\eta}_s)$, it follows that $\eta^n_t\subset\overline{\eta}_t$ and hence
that
\begin{eqnarray}
\eta^n_t(B)&=&\int_{B\times [0,t]\times [0,\infty )^2}\one_{[0,\lambda
(x,K_n\cap\eta^n_{s-})]}(u)\one_{(\int_s^t\delta (x,K_n\cap\eta^n_
v)\,dv,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(\int_0^t\delta (x,K_n\cap\eta^n_v)\,dv,\infty )}(r)\widehat{
\eta}_0(dx,dr).\label{stocheqe2b}\end{eqnarray}
Also, note that for $g\in {\mathcal C}$,
\begin{equation}\int_0^t\int_Sg(x)\delta (x,K_n\cap\eta^n_s)\eta^
n_s(dx)ds\leq\int_{S\times [0,t]\times [0,\infty )^2}g(x)r\one_{[0,\overline{\lambda }(x,\overline{\eta}_{s
-})]}(u)N(dx,ds,dr,du)<\infty .\label{dthest}\end{equation}
Define
$F(\widehat{\zeta })=e^{-\int_{S\times [0,\infty )}f(x,r)\widehat{\zeta }
(dx,dr)}$, $f\in\widehat {{\mathcal C}}$ nonnegative. Setting
\begin{eqnarray}
\widehat {A}_nF(\widehat{\zeta })&=&F(\widehat{\zeta })\Big(\int_{S\times [0,
\infty )}\lambda (x,K_n\cap\zeta )(e^{-f(x,r)}-1)e^{-r}\beta (dx)
dr\nonumber\\
&&\qquad\qquad +\int_{S\times [0,\infty )}\delta (x,K_n\cap\zeta
)f_r(x,r)\widehat{\zeta }(dx,dr)\Big),\label{apgendef}\end{eqnarray}
as in (\ref{ito1}),
\[F(\widehat{\eta}^n_t)-F(\widehat{\eta}_0)-\int_0^t\widehat {A}_nF(\widehat{\eta}^
n_s)ds\]
is a local martingale.
Uniqueness for (\ref{stocheqe2b}) implies that the
residual clock times at time $t$ are conditionally independent,
unit exponentials given ${\mathcal F}^{\eta}_t$. Consequently, as in
Proposition \ref{projprop}, for $G(\zeta )=e^{-\int_Sg(x)\zeta (d
x)}$, $g\in {\mathcal C}$
nonnegative,
with $A_nG$ given by
\[A_nG(\zeta )=\int (G(\zeta +\delta_x)-G(\zeta ))\lambda (x,K_n\cap
\zeta )\beta (dx)+\int (G(\zeta -\delta_x)-G(\zeta ))\delta (x,K_
n\cap\zeta )\zeta (dx),\]
\begin{equation}G(\eta^n_t)-G(\eta^n_0)-\int_0^tA_nG(\eta^n_s)ds\label{lmgex}\end{equation}
is a local martingale.
Exploiting the fact that $\eta^n_t\subset\overline{\eta}_t$,
the relative compactness of $\{\eta^n\}$, in the sense of
convergence in distribution in $D_{{\mathcal S}}[0,\infty )$, follows.
\begin{proposition}Suppose that
Conditions \ref{bound}
and \ref{contcnd} hold.
If
$(x,\zeta )\rightarrow\lambda (x,\zeta )$ and $(x,\zeta )\rightarrow
\delta (x,\zeta )$ are continuous on $S\times {\mathcal C}$,
then $\zeta\rightarrow AG(\zeta )$ is continuous, and
any limit point of $\{\eta^n\}$ is a
solution of the local martingale problem for $A$, and hence
a weak solution of \reff{stoch2a}.
\end{proposition}
\begin{proof}
By (\ref{atbnd}), we can select $a_k$ so that
\[\Gamma (t)\equiv\int_0^t1\vee\sum_ka_k\int_Sc_k(x)\overline{\lambda }
(x,\overline{\eta}_s)\beta (dx)ds<\infty ,\quad\forall t>0\quad a.s.\]
and by (\ref{dthest}), it follows that
\[\tau_m=\inf\{t:\Gamma (t)\geq m\}\]
is a localizing sequence for (\ref{lmgex}) for all $g$ and $n$.
The estimates also give the necessary uniform
integrability to ensure that limit points of (\ref{lmgex})
are local martingales.
\end{proof}
\subsection{Existence and Uniqueness}
If $\sup_{\zeta\in {\mathcal S}}\int_S\lambda (x,\zeta )\beta
(dx)<\infty$, then a solution of
\reff{stoch1a} has only finitely many births per unit
time and it is easy to see that \reff{stoch1a} has a
unique solution. Condition \ref{bound}, however, only
ensures that there are finitely many births per unit
time in each $K_k$, and uniqueness requires
additional conditions. The conditions we use are
essentially the same as those used for existence and
uniqueness of the solution of the time change system in
Garcia (1995). From now on, we assume
that $\delta (x,\eta )=1$ for all $x\in S$ and $\eta\in {\mathcal S}$.
Let $N$ be a Poisson random measure on $S\times [0,\infty )^3$ with
mean measure $\beta (dx)\times ds\times e^{-r}dr\times du$. Let $
\eta_0$ be an
${\mathcal S}$-valued random variable independent of $N$, and let $\widehat{
\eta}_0$ be
defined as in (\ref{etahat}). Suppose $\{{\mathcal F}_t\}$ is a filtration such that
$\widehat{\eta}_0$ is ${\mathcal F}_0$-measurable and $N$ is $\{{\mathcal F}_
t\}$-compatible. We
consider the equation
\begin{eqnarray}
\eta_t(B)&=&\int_{B\times [0,t]\times [0,\infty )^2}\one_{[0,\lambda
(x,\eta_{s-})]}(u)\one_{(t-s,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(t,\infty )}(r)\widehat{\eta}_0(dx,dr).\label{stoch5}\end{eqnarray}
\begin{theorem}\
\label{thmeu}Assume Conditions \ref{bound} and
\ref{contcnd}. Suppose that
$$a(x,y)\geq\sup_{\eta}|\lambda (x,\eta +\delta_y)-\lambda (x,\eta
)|$$ and that there exists a
positive function $c$ such that
\[M=\sup_x\int_S\frac {c(x)a(x,y)}{c(y)}\beta (dy)<\infty .\]
Then, there exists a unique solution of \reff{stoch5}.
\end{theorem}\
\begin{example}{\rm Let $d(x,\eta )=\inf\{d_S(x,y):y\in\eta \}$, where
$d_S$ is a metric on $S$ such that $(S,d_S)$ is a complete
separable metric space. Suppose $\lambda (x,\eta )=h(d(x,\eta
))$. Then $a(x,y)=\sup_{r>d_S(x,y)}|h(r)-h(d_S(x,y))|$. If $h$ is
increasing, then $a(x,y)=h(\infty )-h(d_S(x,y))$ and
\[|\lambda (x,\eta^1)-\lambda (x,\eta^2)|\leq\int (h(\infty )-h(d_
S(x,y)))|\eta^1-\eta^2|(dy).\]
If $h$ is decreasing, then $a(x,y)=h(d_S(x,y))-h(\infty )$ and
\[|\lambda (x,\eta^1)-\lambda (x,\eta^2)|\leq\int (h(d_S(x,y))-h(
\infty ))|\eta^1-\eta^2|(dy).\]
}
\end{example}
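To see the contraction condition of Theorem \ref{thmeu} in a concrete case (our choice, not from the text): take $S={\mathbb R}$, $\beta$ Lebesgue, $c\equiv 1$, and the decreasing $h(r)=\theta e^{-r}$, so that $a(x,y)=\theta e^{-d_S(x,y)}$ and, by translation invariance, $M=\theta\int_{{\mathbb R}}e^{-|y|}dy=2\theta$. The theorem then applies whenever $\theta <1/2$. A quick Riemann-sum check:

```python
import math

theta = 0.4   # h(r) = theta * exp(-r), so a(x, y) = theta * exp(-|x - y|)
step = 0.001
# M = sup_x int a(x, y) dy with c == 1; by translation invariance
# the supremum is attained at every x, so evaluate at x = 0
M = sum(theta * math.exp(-abs(step * k)) * step
        for k in range(-50_000, 50_000))
assert abs(M - 2 * theta) < 1e-2  # exact value: 2 * theta
assert M < 1                      # contraction condition of the theorem
```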
Theorem \ref{thmeu}\ is a consequence of the following lemmas
that hold under the conditions of the theorem.
\begin{lemma}\
\label{lemmaTV} For any $\eta^1,\eta^2\in {\mathcal S}$ we have
\begin{equation}|\lambda (x,\eta^1)-\lambda (x,\eta^2)|\le\int_Sa
(x,y)\,|\eta^1-\eta^2|(dy).\label{eqcolc1}\end{equation}
\end{lemma}
\begin{proof}
Since $\eta^1$ and $\eta^2$ contain countably many points,
there exist $\{y_1,y_2,\ldots \}$ and $\{z_1,z_2,\ldots \}$ such that
\[\eta^2=\eta^1+\sum_{i=1}^I\delta_{y_i}-\sum_{j=1}^J\delta_{z_j}\]
(where $I$ and $J$ may be infinite) and hence
\[|\eta^1-\eta^2|(B)=\sum_{i=1}^I\delta_{y_i}(B)+\sum_{j=1}^J\delta_{
z_j}(B).\]
By the definition of $a$ and Condition \ref{contcnd}
\begin{eqnarray}
|\lambda (x,\eta^1)-\lambda (x,\eta^2)|&=&\lim_{n\rightarrow\infty}
|\lambda (x,\eta^1)-\lambda (x,\eta^1+\sum_{i=1}^{I\wedge n}\delta_{
y_i}-\sum_{j=1}^{J\wedge n}\delta_{z_j})|\label{lamest}\\
&\le&\lim_{n\rightarrow\infty}\left(\sum_{i=1}^{I\wedge n}a(x,y_i
)+\sum_{j=1}^{J\wedge n}a(x,z_j)\right)\nonumber\\
&\le&\int_Sa(x,y)\,|\eta^1-\eta^2|(dy).\nonumber\end{eqnarray}
\hfill \end{proof}
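The estimate \reff{eqcolc1} can be spot-checked numerically for the rates of the example above. The sketch below is illustrative only (our toy choices: $S=[0,10]$ with the Euclidean distance, $h(r)=e^{-r}$ decreasing with $h(\infty )=0$, so $a(x,y)=e^{-d_S(x,y)}$, and $\eta^2$ obtained from $\eta^1$ by adding points, so $|\eta^1-\eta^2|$ charges exactly the added points):

```python
import math
import random

random.seed(0)
h = lambda r: math.exp(-r)              # decreasing, h(inf) = 0
a = lambda x, y: h(abs(x - y))          # a(x, y) = h(d_S(x, y)) - h(inf)

def lam(x, eta):
    # lambda(x, eta) = h(d(x, eta)), d(x, eta) = min distance to eta
    return h(min((abs(x - y) for y in eta), default=math.inf))

for _ in range(1000):
    eta1 = [random.uniform(0, 10) for _ in range(random.randint(1, 5))]
    extra = [random.uniform(0, 10) for _ in range(random.randint(0, 3))]
    eta2 = eta1 + extra                 # eta2 differs by the added points
    x = random.uniform(0, 10)
    lhs = abs(lam(x, eta1) - lam(x, eta2))
    rhs = sum(a(x, y) for y in extra)   # integral of a against |eta1 - eta2|
    assert lhs <= rhs + 1e-12
```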
Define
\[\eta_0(B,t)=\int_{B\times [0,\infty )}\one_{(t,\infty )}(r)\widehat{
\eta}_0(dx,dr).\]
Let $\eta$ be $\{{\mathcal F}_t\}$-adapted with sample paths in $D_{{\mathcal S}}
[0,\infty )$.
Then by Condition \ref{bound}
\begin{equation}\label{c302}\Phi\eta_t(B)=\eta_0(B,t)+\int_{B\times
[0,t]\times [0,\infty )^2}\one_{[0,\lambda (x,\eta_s)]}(u)\one_{(
t-s,\infty )}(r)N(dx,ds,dr,du)\end{equation}
defines a process adapted to $\{{\mathcal F}_t\}$ with sample paths in
$D_{{\mathcal S}}[0,\infty )$.
\begin{lemma}\label{lemma29}
Let $\eta^1$ and $\eta^2$ be adapted to $\{{\mathcal F}_t\}$ and have sample paths
in $D_{{\mathcal S}}[0,\infty )$. Then
\begin{eqnarray}
&&\sup_xc(x)E[\int_Sa(x,y)|\Phi\eta^1(t)-\Phi\eta^2(t)|(dy)]
\label{eq7}\\
&&\qquad\le M\int_0^t\sup_x\,c(x)E[\int_Sa(x,y)|\eta^1_s-\eta^2_s
|(dy)]e^{-(t-s)}\,ds.\nonumber\end{eqnarray}
\end{lemma}
\begin{proof}
Let $\xi^i=\Phi\eta^i$. Then
\begin{eqnarray}
&&\sup_zc(z)E[\int_Sa(z,x)|\xi^1_t-\xi^2_t|(dx)]\nonumber\\
&&\quad\le\sup_zc(z)E[\int_{S\times [0,t]\times [0,\infty )^2}a(z
,x)|\one_{[0,\lambda (x,\eta^1_s)]}(u)-\one_{[0,\lambda (x,\eta^2_
s)]}(u)|\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\one_{(t-
s,\infty )}(r)N(dx,ds,dr,du)]\nonumber\\
&&\quad\le\sup_zc(z)E[\int_{S\times [0,t]}a(z,x)|\lambda (x,\eta^
1_s)-\lambda (x,\eta^2_s)|e^{-(t-s)}\beta (dx)ds]\nonumber\\
&&\quad\le\sup_zc(z)\,\int_{S\times [0,t]}a(z,x)E[\int_Sa(x,y)|\eta_
s^1-\eta^2_s|(dy)]\,e^{-(t-s)}\,\beta (dx)ds\nonumber\\
&&\quad\le\sup_zc(z)\,\int_S\frac {a(z,x)}{c(x)}\beta (dx)\,\int_
0^t\sup_x\,c(x)E[\int_Sa(x,y)|\eta^1_s-\eta^2_s|(dy)]e^{-(t-s)}\,
ds\nonumber\\
&&\quad\le M\,\int_0^t\sup_x\,c(x)E[\int_Sa(x,y)|\eta^1_s-\eta^2_
s|(dy)]e^{-(t-s)}\,ds.\label{lipest}\end{eqnarray}
\hfill \end{proof}
\begin{proof}\ (Theorem \ref{thmeu}) Uniqueness follows by
\reff{eq7} and Gronwall's inequality. To prove existence,
we proceed by iteration. Let $\eta^0_t=\eta_0(\cdot ,t)$, and for $
n\ge 1$,
define $\eta^{n+1}=\Phi\eta^n$. Then
\begin{eqnarray}
&&\sup_xc(x)E[\int_Sa(x,y)|\eta^{n+1}_t-\eta^n_t|(dy)]\nonumber\\
&&\quad\le M\int_0^t\sup_xc(x)E[\int_Sa(x,y)|\eta^n_s-\eta^{n-1}_
s|(dy)]e^{-(t-s)}\,ds\nonumber\\
&&\quad\le M^2\int_0^t\int_0^{s_1}\hspace{-2mm}\sup_xc(x)E[\int_Sa(x,y)|\eta^{
n-1}_{s_2}-\eta_{s_2}^{n-2}|(dy)]e^{-(s_1-s_2)}\,ds_2\,e^{-(t-s_1
)}\,ds_1\nonumber\\
&&\quad\le M^n\int_0^t\int_0^{s_1}\ldots\int_0^{s_{n-1}}\sup_xc(x)E
[\int_Sa(x,y)|\eta^1_{s_n}-\eta_{s_n}^0|(dy)]\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad e^{-(s_{n-1}-s_n)}ds_n\,\ldots\,
e^{-(t-s_1)}\,ds_1.\nonumber\end{eqnarray}
Therefore, there exists $C>0$ such that
\[\sup_xc(x)E[\int_Sa(x,y)|\eta^{n+1}_t-\eta^n_t|(dy)]\leq\frac {
C^nt^n}{n!}\sup_{s\leq t}\sup_xc(x)E[\int_Sa(x,y)|\eta_s^1-\eta^0_
s|(dy)],\]
and the convergence of $\eta^n$ to a solution of \reff{stoch5}
follows. \hfill \end{proof}
\setcounter{equation}{0}
\section{Ergodicity for spatial birth and death
processes} \label{sectiondeath}
\subsection{Temporal ergodicity}
The statement that a Markov process is ergodic can
carry several meanings. At a minimum, it means that
there exists a unique stationary distribution for the
process. Under this condition, the corresponding
stationary process is ergodic in the sense of triviality of
its tail $\sigma$-algebra. A second, stronger meaning of
ergodicity for Markov processes is that for all initial
distributions, the distribution
of the process at time $t$ converges to the (unique) stationary
distribution as $t\rightarrow\infty$.
One approach to the first kind of ergodicity involves
using the stochastic equation to construct a ``coupling
from the past.'' Following an idea of Kendall and M\o ller
(2000),
for
$\eta^1\subset\eta^2$,
define
\[\overline{\lambda }(x,\eta^1,\eta^2)=\sup_{\eta^1\subset\eta\subset\eta^
2}\lambda (x,\eta )\qquad\underline {\lambda}(x,\eta^1,\eta^2)=\inf_{
\eta^1\subset\eta\subset\eta^2}\lambda (x,\eta ).\]
Note that for $\eta^1\subset\eta^2$
\[|\overline{\lambda }(x,\eta^1,\eta^2)-\underline {\lambda}(x,\eta^1,
\eta^2)|\leq\int_Sa(x,y)|\eta^1-\eta^2|(dy).\]
We assume that $N$ is defined on $S\times (-\infty ,\infty )\times
[0,\infty )^2$, that
is, for all positive and negative time, and consider a
system starting from time $-T$, that is, for $t\geq -T$
\begin{eqnarray}
\eta_t^{1,T}(B)&=&\int_{B\times [-T,t]\times [0,\infty )^2}\one_{
[0,\underline {\lambda}(x,\eta^{1,T}_{s-},\eta^{2,T}_{s-}))}(u)\one_{
(t-s,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(t+T,\infty )}(r)\widehat{\eta}_{-T}^{1,T}(dx,dr)\nonumber\\
\eta_t^{2,T}(B)&=&\int_{B\times [-T,t]\times [0,\infty )^2}\one_{[
0,\overline{\lambda }(x,\eta^{1,T}_{s-},\eta_{s-}^{2,T})]}(u)\one_{(t-
s,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(t+T,\infty )}(r)\widehat{\eta}_{-T}^{2,T}(dx,dr),\label{coupsys}\end{eqnarray}
where we require $\eta^{1,T}_{-T}\subset\eta^{2,T}_{-T}$. Suppose $
\lambda (x,\eta )\leq\Lambda (x)$ for all $\eta$
and
\begin{equation}\int_Sc_k(x)\Lambda (x)\beta (dx)<\infty ,\qquad
k=1,2,\ldots ,\label{biglam}\end{equation}
which implies
Condition \ref{bound}, and suppose Condition \ref{contcnd}
holds.
Then we can obtain a
solution of (\ref{coupsys}) by iterating
\begin{eqnarray}
\eta_t^{1,T,n+1}(B)&=&\int_{B\times [-T,t]\times [0,\infty )^2}\one_{
[0,\underline {\lambda}(x,\eta^{1,T,n}_{s-},\eta^{2,T,n}_{s-}))}(
u)\one_{(t-s,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(t+T,\infty )}(r)\widehat{\eta}_{-T}^{1,T}(dx,dr)\nonumber\\
\eta_t^{2,T,n+1}(B)&=&\int_{B\times [-T,t]\times [0,\infty )^2}\one_{
[0,\overline{\lambda }(x,\eta^{1,T,n}_{s-},\eta_{s-}^{2,T,n})]}(u)\one_{
(t-s,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(t+T,\infty )}(r)\widehat{\eta}_{-T}^{2,T}(dx,dr),\label{coupsys2}\end{eqnarray}
where we take $\eta^{1,T,1}_t\equiv\emptyset$ and
\begin{eqnarray*}
\eta_t^{2,T,1}(B)&=&\int_{B\times [-T,t]\times [0,\infty )^2}\one_{
[0,\Lambda (x)]}(u)\one_{(t-s,\infty )}(r)N(dx,ds,dr,du)\\
&&\quad\quad\quad\quad\quad\quad\quad\quad +\int_{B\times [0,\infty
)}\one_{(t+T,\infty )}(r)\widehat{\eta}_{-T}^{2,T}(dx,dr).\end{eqnarray*}
Note that $\eta^{1,T,n}\subset\eta^{2,T,n}$, $\{\eta^{1,T,n}\}$ is monotone increasing,
and $\{\eta^{2,T,n}\}$ is monotone decreasing, and the limit, which
must exist, will be a solution of (\ref{coupsys}).
For $C\subset{\mathbb R}$, define $(C+t)=\{(s+t):s\in C\}$, and define the
time-shift of $N$ by
$R_tN(B\times C\times D\times E)=N(B\times (C+t)\times D\times E)$.
Taking $T=\infty$ in (\ref{coupsys2}),
the iterates
\begin{eqnarray}
\eta_t^{1,\infty ,n+1}(B)\hspace{-2mm}&=&\hspace{-2mm}\int_{B\times (-\infty ,t]\times [0,\infty
)^2}\hspace{-2mm}\one_{[0,\underline {\lambda}(x,\eta^{1,\infty ,n}_{s-},\eta^{
2,\infty ,n}_{s-})]}(u)\one_{(t-s,\infty )}(r)N(dx,ds,dr,du)\nonumber\\
\label{coupsys3}\\
\eta_t^{2,\infty ,n+1}(B)\hspace{-2mm}&=&\hspace{-2mm}\int_{B\times (-\infty ,t]\times [0,\infty
)^2}\hspace{-2mm}\one_{[0,\overline{\lambda }(x,\eta^{1,\infty ,n}_{s-},\eta_{s-}^{
2,\infty ,n})]}(u)\one_{(t-s,\infty )}(r)N(dx,ds,dr,du),\nonumber\end{eqnarray}
satisfy $\eta^{m,\infty ,n}_t=H^{m,n}(R_tN)$, $m=1,2$, for deterministic
mappings
$H^{m,n}:{\mathcal N}(S\times (-\infty ,\infty )\times [0,\infty )^2)
\rightarrow {\mathcal N}(S)$ and the limits $\eta_t^{m,\infty}$
satisfy
\begin{equation}\eta^{m,\infty}_t=H^m(R_tN),\label{map2}\end{equation}
where $H^m=\lim_{n\rightarrow\infty}H^{m,n}$. It
follows that $(\eta_t^{1,\infty},\eta_t^{2,\infty})$ is stationary and ergodic.
\begin{lemma}
Suppose that $\lambda$ satisfies (\ref{biglam}) and Condition
\ref{contcnd}.
Then
\[\eta^{1,\infty}_t\equiv\lim_{n\rightarrow\infty}\eta_t^{1,\infty
,n}\mbox{\rm \ and }\eta^{2,\infty}_t\equiv\lim_{n\rightarrow\infty}
\eta_t^{2,\infty ,n}\]
exist and are stationary.
\end{lemma}
By
Theorem \ref{equivthm}, any
stationary solution of the martingale problem can be
represented as a weak solution $\eta$ of the stochastic
equation on the doubly infinite time interval and hence
coupled to versions of $\eta^{1,\infty ,n}$ and $\eta^{2,\infty ,
n}$ so that
$\eta^{1,\infty ,n}_t\subset\eta_t\subset\eta^{2,\infty ,n}_t$, $
-\infty <t<\infty$. Consequently, we have
the following.
\begin{lemma}\label{uniquesd}
Suppose that $\lambda$ satisfies (\ref{biglam}) and Condition
\ref{contcnd}. If
\[\lim_{n\rightarrow\infty}\int_Sc_k(x)|\eta_t^{2,\infty ,n}-\eta^{
1,\infty ,n}_t|(dx)=0\quad a.s.\]
for $k=1,2,\ldots$, then $\eta\equiv\eta^{2,\infty}=\eta^{1,\infty}$ a.s. is a stationary
solution of (\ref{stoch5}) and the distribution
of $\eta_t^{2,\infty}$ is the unique stationary distribution for $
A$.
\end{lemma}
\begin{theorem}\
\label{contre} Let $\lambda :S\times {\mathcal N}(S)\rightarrow [0,\infty
)$ satisfy the conditions of
Theorem \ref{thmeu} with $M<1$.
Then $\eta\equiv\eta^{2,\infty}=\eta^{1,\infty}$ a.s. is a stationary
solution of (\ref{stoch5}) and the distribution
of $\eta_t^{2,\infty}$ is the unique stationary distribution for $
A$.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{thmeu},
\begin{eqnarray}
&&\sup_xc(x)E[\int_Sa(x,y)|\eta^{2,\infty ,n+1}_t-\eta^{1,\infty
,n+1}_t|(dy)]\nonumber\\
&&\quad\le M\int_{-\infty}^t\sup_xc(x)E[\int_Sa(x,y)|\eta^{2,\infty
,n}_s-\eta^{1,\infty ,n}_s|(dy)]e^{-(t-s)}\,ds\nonumber\\
&&\quad =M\sup_xc(x)E[\int_Sa(x,y)|\eta^{2,\infty ,n}_t-\eta^{1,\infty
,n}_t|(dy)],\nonumber\end{eqnarray}
where the equality follows by the stationarity of
$\eta^{2,\infty ,n}$ and $\eta^{1,\infty ,n}$. Since the expression on the left is
nonincreasing, its limit $\rho$ exists, and we have $0\leq\rho\leq
M\rho$.
But $M<1$, so $\rho =0$.
\end{proof}
\medskip
\begin{definition} {\rm
$\lambda (x,\cdot )$ is {\em nondecreasing\/} if $\eta_1\subset\eta_
2$ implies $\lambda (x,\eta_1)\le\lambda (x,\eta_2)$. }
\end{definition}
Note that if $\lambda$ is nondecreasing, then for $\eta_1\subset\eta_
2$,
$\overline{\lambda }(x,\eta_1,\eta_2)=\lambda (x,\eta_2)$ and $\underline {
\lambda}(x,\eta_1,\eta_2)=\lambda (x,\eta_1)$. The following
lemma is immediate.
\begin{lemma}
Let $\lambda$ be nondecreasing and satisfy (\ref{biglam}) and
Condition \ref{contcnd}. Then $\eta^{1,\infty}_t\equiv\lim_{n\rightarrow
\infty}\eta_t^{1,\infty ,n}$ and
$\eta^{2,\infty}_t\equiv\lim_{n\rightarrow\infty}\eta_t^{2,\infty
,n}$ are, respectively, the minimal and
maximal stationary solutions of the martingale problem
for $A$.
\end{lemma}
For $\lambda$ nondecreasing, the minimal stationary distribution
can also easily be obtained as a temporal limit.
\begin{lemma}\ \label{mono}
If uniqueness holds for (\ref{stoch5}) and $\lambda (x,\cdot )$ is nondecreasing,
then the process $\eta_t$ is attractive, that is
\begin{equation}\label{r2}\eta_0^1\subset\eta_0^2\quad\mbox{\rm implies}
\quad\eta_t^1\subset\eta_t^2\end{equation}
for all $t\ge 0$.
\end{lemma}
\begin{proof}
The conclusion is immediate from coupling the two
processes using the same underlying Poisson random
measure. \end{proof}
\begin{theorem}\ Suppose $\lambda$ satisfies (\ref{biglam}) and
Condition \ref{contcnd}.
If $\lambda (x,\cdot )$ is nondecreasing and $\eta_0=\emptyset$, then the
distribution of $\eta_t$ converges to the
minimal stationary
distribution.
\end{theorem}\
\begin{proof}
Note that, if we set $\eta^t_{-t}=\emptyset$, then $\eta_t$ has the same
distribution as $\eta^t_0$, and by Lemma \ref{mono}, $\eta^t_s\subset
\eta^{1,\infty}_s$
for $s\geq -t$. Since for each $s\geq -t$, $\eta_s^t$ is monotone
increasing in $t$, $\tilde{\eta}_s=\lim_{t\rightarrow\infty}\eta^
t_s$ exists and must be a
stationary process. Since $\eta^{1,\infty}$ is the minimal stationary
process, we must have $\tilde{\eta}_s=\eta^{1,\infty}_s$.\hfill \end{proof}
The same argument gives the following result on the
maximal stationary distribution.
\begin{theorem}\ Suppose $\lambda$ satisfies (\ref{biglam}) and
Condition \ref{contcnd}.
If $\lambda (x,\cdot )$ is nondecreasing and
\[\eta_0(B)=\int_{B\times (-\infty ,0]\times [0,\infty )^2}\one_{
[0,\Lambda (x)]}(u)\one_{(-s,\infty )}(r)N(dx,ds,dr,du),\]
then the
distribution of $\eta_t$ converges to the
maximal stationary
distribution.
\end{theorem}\
\begin{remark}
{\rm\ Note that $\eta_0$ is a Poisson random measure with
mean measure $\mu (B)=\int_B\Lambda (x)\beta (dx)$. }
\end{remark}
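A sketch of the computation behind this remark, assuming that $N$ has mean measure $\beta (dx)\,ds\,e^{-r}dr\,du$ (this form of the intensity, used in the construction of solutions of (\ref{stoch5}), is recalled here as an assumption): a point with birth time $s\leq 0$ and lifetime $r$ is present at time $0$ precisely when $r>-s$, and the $u$-integration produces the factor $\Lambda (x)$, so

```latex
E[\eta_0(B)]=\int_B\Lambda (x)\int_{-\infty}^0\Big(\int_{-s}^{\infty}e^{-r}\,dr\Big)\,ds\,\beta (dx)
=\int_B\Lambda (x)\int_{-\infty}^0e^{s}\,ds\,\beta (dx)=\int_B\Lambda (x)\,\beta (dx)=\mu (B)\,.
```

That $\eta_0$ is in fact a Poisson random measure follows from the mapping and restriction theorems for Poisson random measures.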
\medskip
We can also use the stochastic equation and estimates
similar to those used in the proof of uniqueness to give
conditions for ergodicity in the sense of convergence as
$t\rightarrow\infty$ for all initial distributions.
\begin{theorem}\
\label{thte} Let $\lambda :S\times {\mathcal N}(S)\rightarrow [0,\infty
)$ satisfy the conditions of
Theorem \ref{thmeu} with $M<1$. Then the process
obtained as a solution of the system of stochastic
equations \reff{stoch5} is temporally ergodic and the
rate of convergence is exponential.
\end{theorem}
\begin{proof}
Suppose $\eta^1$ and $\eta^2$ are solutions of the system
\reff{stoch5} with distinct initial configurations $\eta^1_0$ and $
\eta^2_0$
(equivalently, $\widehat{\eta}^1_0$ and $\widehat{\eta}^2_0$). Then, by exactly the same
argument as used for the proof of Lemma \ref{lemma29}
we obtain
\begin{eqnarray}
&&\sup_xc(x)\,E[\int_Sa(x,y)|\eta^1_t-\eta^2_t|(dy)]\label{cc311}\\
&&\qquad\le e^{-t}\sup_xc(x)E[\int_Sa(x,y)|\eta_0^1-\eta^2_0|(dy)
]\nonumber\\
&&\qquad\qquad +M\int_0^t\sup_x\,c(x)E[\int_Sa(x,y)|\eta^1_s-\eta^
2_s|(dy)]\,e^{-(t-s)}\,ds.\nonumber\end{eqnarray}
Multiply \reff{cc311} by $e^t$ and apply Gronwall's
inequality to obtain the exponential rate of convergence.
\end{proof}
\subsection{Spatial ergodicity}
In this section, we take $S={\mathbb R}^d$ and assume that $\lambda$ is
translation invariant in the following sense. For
arbitrary $x,y\in{\mathbb R}^d$ and $B\in {\mathcal B}({\mathbb R}^d)$, write
\[T_xy=x+y~~~\mbox{\rm and}~~~T_xB=B+x=\{y+x;y\in B\}.\]
Then, $T_x$ induces a transformation $S_x$ on ${\mathcal N}({\mathbb R}^d)$ through
the equation
\begin{equation}(S_x\eta )(B)=\eta (T_xB),~~~\eta\in {\mathcal N}({\mathbb R}^
d),B\in {\mathcal B}({\mathbb R}^d).\label{eqcolbsx}\end{equation}
Note that if $\eta =\sum\delta_{x_i}$, then $S_x\eta =\sum\delta_{
x_i-x}$.
\begin{definition} {\rm
We say that $\lambda$ is {\em translation invariant\/} if
$\lambda (x+y,\eta )=\lambda (x,S_y\eta )$ for $x,y\in{\mathbb R}^d,\eta\in
{\mathcal N}({\mathbb R}^d)$. }
\end{definition}
\begin{definition} {\rm
An ${\mathcal N}({\mathbb R}^d)$-valued random variable $\eta$ is
{\em translation invariant\/} if the distribution of $S_y\eta$ does not depend on
$y$. A probability distribution $\mu\in {\mathcal P}({\mathcal N}({\mathbb R}^d))$ is
{\em translation invariant\/} if $\int f(\eta )\mu (d\eta )=\int
f(S_y\eta )\mu (d\eta )$,
for all $y\in {\mathbb R}^d$ and all bounded, measurable functions $
f$. }
\end{definition}
\begin{definition}\label{spergdef}
Let $\eta$ be a translation
invariant,
${\mathcal N}({\mathbb R}^d)$-valued random variable.
A measurable subset $G\subset {\mathcal N}({\mathbb R}^d)$ is
{\em almost surely translation invariant\/} for $\eta$, if
\[\one_G(\eta )=\one_G(S_x\eta )\quad a.s.\]
for every $x\in{\mathbb R}^d$.
$\eta$ is {\em spatially ergodic\/} if $P\{\eta\in G\}$ is $0$
or $1$ for each almost surely translation invariant $G\subset {\mathcal N}
({\mathbb R}^d)$.
\end{definition}
Similarly, for $x\in{\mathbb R}^d$, we define $S_xN$ so that the spatial
coordinate of each point is shifted by $-x$. Almost sure
translation invariance of a set $G\subset {\mathcal N}({\mathbb R}^d\times [0
,\infty )^3)$ and
spatial ergodicity are defined analogously to Definition
\ref{spergdef}. Spatial ergodicity for $N$ follows from its
independence properties.
\begin{lemma}\label{trinv}
Suppose $\lambda$ is translation invariant. If $\eta_0$ is
translation invariant and spatially ergodic and the
solution of (\ref{stoch5}) is unique, then for each $t>0$,
$\eta_t$ is translation invariant and spatially ergodic.
\end{lemma}
\begin{proof}\ $\{S_y\eta_t,t\geq 0\}$ is the solution of (\ref{stoch5})
with $\eta_0$ replaced by $S_y\eta_0$ and $N$ replaced by $S_yN$. By
uniqueness, $S_y\eta_t$ must have the same distribution as $\eta_
t$
giving the stationarity. Also, by uniqueness, for
measurable $G\subset {\mathcal N}({\mathbb R}^d)$ there exists a measurable
$\widehat {G}\subset {\mathcal N}({\mathbb R}^d)\times {\mathcal N}({\mathbb R}^d\times [0,\infty
)^3)$ such that
\[\one_{\{S_x\eta_t\in G\}}=\one_{\{(S_x\eta_0,S_xN)\in\widehat {G}\}}
\quad a.s.\]
for all $x\in{\mathbb R}^d$. Consequently, spatial ergodicity for $\eta_t$
follows from the spatial ergodicity
of $(\eta_0,N)$.
\end{proof}
\begin{remark}
{\rm If $\eta$ is temporally ergodic and $\pi$ is the unique
stationary distribution, then it must be translation
invariant since $\{\eta_t\}$ stationary (in time) implies $\{S_x\eta_
t\}$ is
stationary.
}
\end{remark}
\begin{lemma}\label{sperg}
Suppose that $\lambda$ is translation invariant and
satisfies (\ref{biglam}) and Condition
\ref{contcnd}.
Then for each $t$, $\eta^{1,\infty}_t\equiv\lim_{n\rightarrow\infty}
\eta_t^{1,\infty ,n}$ and
$\eta^{2,\infty}_t\equiv\lim_{n\rightarrow\infty}\eta_t^{2,\infty
,n}$ are spatially ergodic.
\end{lemma}
\begin{proof}
As in (\ref{map2}), $\eta_t^{1,\infty}=H^1(R_tN)$
can be written as a
deterministic transformation $F(t,N)$ of $N$, and
$S_y\eta_t^{1,\infty}=F(t,S_yN)$. The spatial ergodicity of $\eta_
t^{1,\infty}$ then
follows from the spatial ergodicity of $N$.\hfill \end{proof}
\begin{corollary}
If in
addition to the conditions of Lemma \ref{sperg},
$\lambda$ satisfies the conditions of Lemma
\ref{uniquesd}, then the unique stationary distribution is
spatially ergodic. In particular, if $\lambda$ satisfies the
conditions of Theorem \ref{thmeu}\ with $M<1$, then the
unique stationary distribution is spatially ergodic.
\end{corollary}
\begin{corollary}
If in addition to the conditions of Lemma \ref{sperg}, $\lambda$ is
nondecreasing, then the minimal and maximal stationary
distributions are spatially ergodic.
\end{corollary}
\vskip5mm
\noindent {\bf Acknowledgments} This project was conducted
during several visits of NLG to the CMS - University of Wisconsin.
This project was partially supported by CNPq Grant 301054/93-2 and
FAPESP 1995/4996-3 (NLG) and DMS 02-05034 and DMS
05-03983 (TGK). This material is based upon work supported by, or
in part by, the U. S. Army Research Laboratory and the U. S. Army
Research Office under contract grant number DAAD19-01-1-0502, and by
NSF Grant DMS 02-05034. \\
\noindent {\bf References}
The goal of this brief communication is to illuminate the group theoretic origin of a certain
quadratic $r$-matrix structure on the associative algebra $\cG:= \gl(n,\R)$.
This Poisson structure is associated with the QU factorization
and it appeared in the theory of integrable systems \cite{LP,OR}.
Like the corresponding linear $r$-matrix bracket, it can be restricted to
the Poisson submanifolds consisting of symmetric and of tridiagonal symmetric matrices \cite{OR},
thereby producing the bi-Hamiltonian structures of the full symmetric and of the usual (open)
Toda lattices \cite{DLNT,SurB}.
It is well known (see e.g. \cite{STS2}) that the linear
$r$-matrix bracket on $\cG$ is a reduction of the canonical Poisson bracket
of the cotangent bundle of the group $G:= \GL(n,\R)$.
Our observation is that $T^*G$ carries also a quadratic Poisson bracket that
descends to the relevant quadratic bracket on $\cG$ via the same reduction
procedure which works in the linear case.
The idea arises from \cite{FeAHP,FeNlin}, where bi-Hamiltonian structures
for spin Sutherland models were obtained by reducing bi-Hamiltonian structures
on the cotangent bundle of $\GL(n,\C)$.
We now recall the necessary background information about linear and quadratic
$r$-matrix Poisson brackets on $\cG$.
This is a specialization of general results
found in \cite{LP,OR} (see also \cite{SurPLA}).
Let $R$ be a linear operator on $\cG$ that solves the modified classical Yang-Baxter
equation\footnote{For reviews on $r$-matrices and their use, one may consult, for example, \cite{STS2,SurB}.}.
Decompose $R$ as the sum of its anti-symmetric and symmetric parts, $R_a$ and $R_s$, with
respect to the non-degenerate bilinear form,
\be
\langle X,Y \rangle := \tr(XY), \quad \forall X,Y\in \cG,
\label{I1}\ee
and suppose that $R_a$ solves the same equation as $R$.
For a smooth real function $f$ on $\cG$, let $df$ denote its gradient defined using the trace form \eqref{I1},
and introduce the `left- and right-derivatives' $\nabla f$ and $\nabla'f$ by
\be
\nabla f(L):= L df(L), \qquad \nabla' f(L):= df(L) L.
\label{I2}\ee
Then the following formula defines a Poisson bracket on $\cG$:
\be
\{f,h\}_2 := \langle \nabla f, R_a \nabla h \rangle - \langle \nabla' f, R_a \nabla' h \rangle
+ \langle \nabla f, R_s \nabla' h \rangle - \langle \nabla' f, R_s \nabla h \rangle.
\label{I3}\ee
The Lie derivative of this quadratic $r$-matrix bracket along the vector field $V(L):= \1_n$
is the linear $r$-matrix bracket,
\be
\{ f, h \}_1(L) = \langle L, [Rdf(L),dh(L)] + [ df(L), R dh(L)]\rangle,
\label{I4}\ee
and thus the two Poisson brackets are compatible. The Hamiltonians $h_k(L) := \frac{1}{k} \tr(L^k)$
are in involution with respect to both brackets. They enjoy
the relation
\be
\{ f, h_k\}_2 = \{f, h_{k+1}\}_1, \qquad \forall f\in C^\infty(\cG),
\label{I5}\ee
and their Hamiltonian vector fields engender
bi-Hamiltonian Lax equations:
\be
\partial_{t_k}(L) := \{ L, h_k\}_2 = \{ L, h_{k+1}\}_1 = [R(L^k), L],
\qquad \forall k\in \N.
\label{I6}\ee
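For example (a standard verification, using $dh_k(L)=L^{k-1}$ and the cyclicity of the trace), the involutivity with respect to the linear bracket \eqref{I4} can be checked directly:

```latex
\{h_j,h_k\}_1(L)=\langle L,[R\,dh_j,dh_k]\rangle +\langle L,[dh_j,R\,dh_k]\rangle
=\tr\big(R(L^{j-1})[L^{k-1},L]\big)+\tr\big(R(L^{k-1})[L,L^{j-1}]\big)=0\,,
```

since $\tr (L[X,Y])=\tr (X[Y,L])=\tr (Y[L,X])$ and powers of $L$ commute; involutivity with respect to the quadratic bracket then follows from the recursion \eqref{I5}.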
Turning to the example of our interest, let us decompose any $X\in \cG$ as
\be
X= X_> + X_0 + X_<,
\label{I7}\ee
where $X_>$, $X_0$ and $X_<$ are the strictly upper triangular, diagonal and strictly lower triangular parts of the matrix $X$,
respectively.
Denote by $\cA < \cG$ the Lie subalgebra of skew-symmetric matrices and by $\cB < \cG$ the
subalgebra of upper triangular matrices.
They enter the vector space direct sum
\be
\cG = \cA + \cB,
\label{I8}\ee
and, using the projections $\pi_\cA$ onto $\cA$ and $\pi_\cB$ onto $\cB$,
yield the $r$-matrix
\be
R= \half (\pi_\cB - \pi_\cA).
\label{I9}\ee
In terms of the triangular decomposition \eqref{I7},
\be
\pi_\cA(X) = X_< - (X_<)^T, \quad
\pi_\cB(X) = X_> + X_0 + (X_<)^T,
\label{I10}\ee
and
\be
R(X) = \half (X_> + X_0- X_<) + (X_<)^T,
\,\,
R_a(X)=\half (X_> - X_<),
\,\,
R_s(X) =\half X_0 + (X_<)^T.
\label{I11}\ee
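For orientation, here is the $n=2$ case of \eqref{I11} written out explicitly (a small illustration, not used in the sequel): with $X$ a general $2\times 2$ real matrix,

```latex
X=\left(\begin{array}{cc} a & b\\ c & d\end{array}\right),\quad
R_a(X)=\frac{1}{2}\left(\begin{array}{cc} 0 & b\\ -c & 0\end{array}\right),\quad
R_s(X)=\left(\begin{array}{cc} a/2 & c\\ 0 & d/2\end{array}\right),\quad
R(X)=\left(\begin{array}{cc} a/2 & b/2+c\\ -c/2 & d/2\end{array}\right).
```

One checks on these expressions that $\langle R_aX,Y\rangle =-\langle X,R_aY\rangle$ and $\langle R_sX,Y\rangle =\langle X,R_sY\rangle$ with respect to the trace form \eqref{I1}.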
This $r$-matrix $R$ satisfies the conditions stipulated above, and
we are going to derive its quadratic bracket \eqref{I3} by reduction of a Poisson structure on $T^*G$.
\begin{remark}
The matrix space ${\mathrm{mat}}(n\times n, \R)$ is primarily an associative algebra, and the notation
$\gl(n,\R)$ is usually reserved for its induced Lie algebra structure.
In this paper $\gl(n,\R)$ is understood to carry
both algebraic structures, i.e., we identify $\gl(n,\R)$ with ${\mathrm{mat}}(n\times n,\R)$ when
using the associative product.
This should not lead to any confusion.
\end{remark}
\section{The $r$-matrix brackets from Poisson reduction}
We start with the manifold
\be
\fM:= G \times \cG = \{ (g,L)\mid g\in G, \, L\in \cG\},
\label{T1}\ee
which is to be viewed as a model of $T^*G$ obtained via right-translations
and the identification $\cG^* \simeq \cG$ given by the trace form.
For smooth real functions $F, H\in C^\infty(\fM)$,
the following formulae define two compatible Poisson brackets:
\be
\{ F,H\}_1(g,L) = \langle \nabla_1 F, d_2 H\rangle - \langle \nabla_1 H, d_2 F \rangle +
\langle L, [d_2 F, d_2 H]\rangle,
\label{PB1}\ee
and
\bea
&& \{ F, H\}_2(g,L) =
\langle R_a \nabla_1 F, \nabla_1 H \rangle - \langle R_a \nabla'_1 F, \nabla'_1 H \rangle
+\langle \nabla_2 F - \nabla_2' F, r_+\nabla_2' H - r_- \nabla_2 H
\rangle \nonumber \\
&& \qquad\qquad\qquad \quad
+\langle \nabla_1 F, r_+ \nabla_2' H - r_-\nabla_2 H \rangle
- \langle \nabla_1 H, r_+ \nabla_2' F - r_- \nabla_2 F \rangle,
\label{PB2}\eea
where
\be
r_\pm := R_a \pm \half \id.
\label{T4}\ee
The derivatives are taken at $(g,L)$,
\be
\langle \nabla_1 F(g,L), X\rangle = \dt F(e^{tX} g, L),
\quad
\langle \nabla_1' F(g,L), X\rangle = \dt F(ge^{tX}, L), \quad \forall X\in \cG,
\label{T5}\ee
and $\nabla_2 F(g,L) := L d_2 F(g,L)$, $\nabla_2' F(g,L) := d_2F(g,L) L$ with $d_2 F$ denoting
the gradient with respect to the second argument. The first bracket is just the canonical one.
The second one is obtained by a change of variables from the Heisenberg double \cite{STS1} of
the Poisson--Lie group $G$ equipped with the Sklyanin bracket that appears in the first two terms of \eqref{PB2}.
In the corresponding complex holomorphic case, this is explained in detail in \cite{FeAHP}.
The compatibility of the two brackets also follows by the same Lie derivative argument that works in the complex case \cite{FeAHP}.
We are interested in the restriction of the Poisson brackets \eqref{PB1} and \eqref{PB2} to
those functions on $\fM$ that are invariant with respect to the group
\be
S:= A \times B \quad \hbox{with}\quad A:= {\mathrm{O}}(n,\R),\,\, B:= \exp(\cB),
\label{T6}\ee
whose factors correspond to the Lie algebras $\cA$ and $\cB$ in \eqref{I8}. That is,
$B$ consists of the upper triangular elements of $G$ having positive diagonal entries.
The action of $S$ on $\fM$ is given by letting any $(a,b)\in A \times B$
act on $(g,L)\in \fM$ by the diffeomorphism
\be
(g,L) \mapsto (a g b^{-1}, a L a^{-1}).
\label{T7}\ee
Due to the QU factorization\footnote{That is, due to the fact that the matrix multiplication $m: A \times B \to G$
is a diffeomorphism.}, every $S$ orbit in $\fM$ admits a unique representative of
the form $(\1_n,L)$. Therefore, we may associate to any smooth, $S$ invariant functions $F,H$
on $\fM$ unique smooth functions $f,h$ on $\cG$ according to the rule
\be
f(L):= F(\1_n,L), \quad h(L):= H(\1_n,L).
\label{T8}\ee
Provided that the invariant functions close under the Poisson brackets on $\fM$, we
may define the reduced Poisson brackets on $C^\infty(\cG)$ by setting
\be
\{ f,h\}_i^\red (L) := \{ F, H\}_i(\1_n,L), \qquad i=1,2.
\label{T9}\ee
In other words,
in this situation the Poisson brackets descend to the quotient space $\fM/S \simeq \cG$.
The closure is obvious for the first Poisson bracket, and for the second one we prove it below.
\begin{proposition} If $F$ and $H$ are invariant with respect to the $S$ action \eqref{T7}, then
their second Poisson bracket \eqref{PB2} takes the simplified form
\be
2\{ F, H\}_2 =
\langle \nabla_2 F ,\nabla_2' H \rangle
- \langle \nabla_2 H ,\nabla_2' F \rangle
+ \langle \nabla_1 F, \nabla_2' H +\nabla_2 H \rangle
- \langle \nabla_1 H, \nabla_2' F + \nabla_2 F \rangle.
\label{PB2LR}\ee
This formula implies that the Poisson bracket of two $S$ invariant functions is again $S$ invariant.
\end{proposition}
\begin{proof}
The invariance of $F$ with respect to the action of one parameter subgroups of $A$ and $B$ leads to
the conditions
\be
\langle \nabla_1' F, X \rangle = 0, \quad \forall X\in \cB
\quad \hbox{and}\quad \langle \nabla_1 F + \nabla_2 F - \nabla_2'F, Y \rangle =0, \quad \forall Y\in \cA.
\label{T11}\ee
The first condition means that $\nabla_1' F(g,L)$ is strictly upper triangular, and since the same holds for $H$ we get
\be
\langle R_a \nabla_1' F, \nabla_1' H \rangle = 0.
\label{T12}\ee
By using the second condition in \eqref{T11}, we are going to show that the contributions
containing $R_a$ cancel from all other terms of \eqref{PB2} as well.
To do this, it proves useful to employ the direct sum decomposition
$\cG = \cA + \cA^\perp$,
where $\cA^\perp$ consists of the symmetric matrices in $\cG$.
Accordingly, we may decompose any element $Z\in \cG$ as
\be
Z = Z^+ + Z^-
\quad \hbox{with}\quad
Z^+\in \cA, \, Z^- \in \cA^\perp.
\label{T13}\ee
Then the second condition in \eqref{T11} means that
\be
(\nabla_1 F)^+ = (\nabla_2' F - \nabla_2 F)^+.
\label{T14}\ee
By using this together with the anti-symmetry of $R_a$ and that $R_a$ maps $\cA$ into $\cA^\perp$ and $\cA^\perp$ into $\cA$,
we derive the equalities,
\be
\langle R_a \nabla_1 F, \nabla_1 H\rangle =
\langle R_a (\nabla_1 H)^-, (\nabla_2 F - \nabla_2' F)^+ \rangle
-\langle R_a (\nabla_1 F)^-, (\nabla_2 H - \nabla_2' H)^+ \rangle,
\label{S1}\ee
and
\be
\langle \nabla_1 F, R_a (\nabla_2' H - \nabla_2 H) \rangle
= \langle R_a (\nabla_1 F)^-, (\nabla_2 H - \nabla_2' H)^+ \rangle
+ \langle R_a (\nabla_2' F - \nabla_2 F)^+, (\nabla_2 H - \nabla_2' H)^- \rangle.
\label{S2}\ee
By adding up \eqref{S1} and
the terms in \eqref{S2} together with (minus one times) their counterparts having $F$ and $H$ exchanged, one
precisely cancels
$\langle \nabla_2 F - \nabla_2' F, R_a (\nabla_2' H - \nabla_2 H) \rangle$ in \eqref{PB2}.
Then the formula \eqref{PB2LR} results directly from \eqref{PB2}.
Having derived \eqref{PB2LR}, one sees that the right-hand side of this expression is invariant under the action \eqref{T7} of $S$.
Indeed, this is a consequence of the fact that the derivatives of invariant functions are equivariant,
meaning for example that we have
\be
\nabla_1 F (a g b^{-1}, a L a^{-1}) =a (\nabla_1 F(g,L)) a^{-1},
\quad
\nabla_2 F(a g b^{-1}, aL a^{-1})= a (\nabla_2 F(g,L)) a^{-1}.
\label{T17}\ee
This and the conjugation invariance of the trace imply the claim.
\end{proof}
The following lemma will be important below.
\begin{lemma}
The $S$ invariant function $F$ on $\fM$ and the function $f$ on $\cG$ related by \eqref{T8} satisfy the relations
\be
\nabla_1 F(\1_n,L) = (r_+ - R_s)(\nabla' f(L) - \nabla f(L)),
\quad
d_2 F(\1_n,L) = df(L),
\label{rel**}\ee
where $R_s$ and $r_+$ are given by \eqref{I11} and \eqref{T4}.
\end{lemma}
\begin{proof}
The second relation is obvious, and it implies the identities $\nabla_2 F(\1_n,L) = \nabla f(L)$ and
$\nabla'_2 F(\1_n,L) = \nabla' f(L)$.
Since $\nabla_1 F(\1_n,L) = \nabla'_1 F(\1_n,L)$ by \eqref{T5}, we see from \eqref{T11} that
\be
\nabla_1 F(\1_n,L) = (\nabla_1 F(\1_n,L))_>,
\label{T19}\ee
where we applied the triangular decomposition \eqref{I7}.
Then, noting that the anti-symmetric part of any $X\in \cG$ is $X^+ = \frac{1}{2} ( X - X^T)$,
it follows from the equality \eqref{T14} that
\be
(\nabla_1 F(\1_n,L))_> = 2 ((\nabla_1 F(\1_n,L))^+)_> = (\nabla' f(L) - \nabla f(L))_> - ((\nabla' f(L) - \nabla f(L))_<)^T.
\label{T20}\ee
Because $r_+ X = X_> + \frac{1}{2} X_0$ and $R_s X = \frac{1}{2} X_0 + (X_<)^T$ by \eqref{I11} and \eqref{T4},
the statement \eqref{rel**} is obtained by combining \eqref{T19} and \eqref{T20}.
\end{proof}
We now prove our claim about the reduction origin of the quadratic bracket
\eqref{PB2}, which we could not find in the literature.
For completeness, we also show that the linear $r$-matrix bracket \eqref{I4} descends from \eqref{PB1},
which is a classical result \cite{STS2}.
\begin{theorem} The reductions \eqref{T9} of the Poisson brackets \eqref{PB1} and \eqref{PB2} on the cotangent bundle
$\fM\equiv T^*\GL(n,\R) $ \eqref{T1}
defined by taking quotient by the action \eqref{T7} of the group $S$ \eqref{T6}
give the linear \eqref{I4} and quadratic \eqref{I3} $r$-matrix brackets on $\cG = \gl(n,\R)$, respectively.
\end{theorem}
\begin{proof}
We have to evaluate the expressions \eqref{T9} for $f$ and $h$ related to the $S$ invariant functions $F$ and $H$ by \eqref{T8}.
We start with the second bracket, relying on \eqref{PB2LR}.
Substitution of the relations \eqref{rel**} into \eqref{PB2LR} gives
\be
\langle \nabla_1 F, \nabla'_2 H + \nabla_2 H\rangle - \cE(F,H) = \langle r_+(\nabla' f - \nabla f) - R_s (\nabla' f - \nabla f), \nabla' h + \nabla h\rangle
- \cE(f,h),
\ee
where $\cE(F,H)$ stands for the terms obtained by exchanging the roles of $F$ and $H$, and similarly for $f$ and $h$.
Writing $r_+ = R_a + \half \id$, we find
\be
\half \langle \nabla' f - \nabla f , \nabla' h + \nabla h\rangle - \cE(f,h) =
\langle \nabla' f ,\nabla h \rangle
-\langle \nabla f ,\nabla' h \rangle.
\ee
The terms containing $R_a$ and $R_s$ contribute
\be
\langle R_a(\nabla' f - \nabla f) , \nabla' h + \nabla h\rangle - \cE(f,h) =
2 \langle R_a \nabla' f ,\nabla' h \rangle
-2 \langle R_a\nabla f ,\nabla h \rangle,
\ee
and
\be
\langle R_s (\nabla f - \nabla' f) , \nabla' h + \nabla h\rangle - \cE(f,h) =
2 \langle R_s \nabla f ,\nabla' h \rangle
-2 \langle R_s\nabla' f ,\nabla h \rangle.
\ee
Plugging these identities into \eqref{PB2LR}, we obtain the result
\be
\{f,h\}_2^\red =
\langle \nabla f ,R_a\nabla h \rangle - \langle \nabla' f ,R_a \nabla' h \rangle
+ \langle \nabla f ,R_s \nabla' h \rangle
- \langle \nabla' f ,R_s \nabla h \rangle,
\ee
which reproduces the quadratic $r$-matrix bracket \eqref{I3}.
To continue, we evaluate \eqref{T9} for $i=1$.
Substitution of \eqref{rel**} now gives, at the appropriate arguments,
\be
\langle \nabla_1 F, d_2 H \rangle = \langle (R_a + \frac{1}{2}\id - R_s)[df,L], dh\rangle =
\langle L, [ df, R dh] \rangle - \frac{1}{2} \langle L, [df, dh]\rangle.
\ee
Here, $R= R_a + R_s$ and we used the standard invariance properties of the trace form \eqref{I1}.
Consequently, we get
\be
\langle \nabla_1 F, d_2 H \rangle - \langle \nabla_1 H, d_2 F \rangle + \langle L, [d_2 F, d_2 H]\rangle
= \langle L, [R df, dh] + [df, R dh]\rangle.
\ee
The right-hand side gives $\{f,h\}_1^\red$, which coincides with the linear $r$-matrix bracket \eqref{I4}.
\end{proof}
\begin{remark}
Let us recall \cite{LP,OR} that $\cG$ carries also a cubic $r$-matrix Poisson bracket which is compatible with the linear and quadratic ones.
It can be obtained from the linear bracket by performing the densely defined change of variables $L \mapsto L^{-1}$, and then extending
the result to the full of $\cG$. For completeness, we note that the same change of variables is applicable on $T^*G$, too, and the so-obtained
Poisson bracket then leads to the cubic bracket on $\cG$ by the reduction procedure described in the above.
\end{remark}
\section{Discussion}
We explained that the quadratic $r$-matrix bracket \eqref{I3} of the `generalized Toda hierarchy'
\eqref{I6} on $\gl(n,\R)$ is a reduction of a quadratic Poisson bracket on $T^* \GL(n,\R)$.
This observation escaped previous attention, probably because the convenient form \eqref{PB2} of the relevant parent Poisson bracket
came to light only recently \cite{FeAHP}.
The integrability of the system \eqref{I6} was thoroughly studied
in \cite{DLT} (see also \cite{LP}), together with two other related hierarchies.
These are of the form \eqref{I6}, but instead of $R$ \eqref{I11} use either $R'$ given by
$R'(X) := \frac{1}{2} (X_> + X_0 - X_<)$
or $R'':= R_a$ (which gives the anti-symmetric part of $R'$, too).
We can show that the quadratic $r$-matrix brackets obtained from \eqref{I3}
by replacing $R$ with $R'$ or $R''$ are also reductions of the bracket \eqref{PB2} on $\fM$,
similarly to how the linear $r$-matrix brackets descend \cite{STS2} from \eqref{PB1}.
In the case of $R'$ one may use the group $S':= A' \times B$, where $A'$ is the exponential of the strictly lower triangular subalgebra of $\cG$. In the case of $R''$ the reduction group is $S''< (G\times G)$ having elements of the form
$(a,b) = (e^{X_0} e^{X_<}, e^{-X_0} e^{Y_>})$
which act in the same way as \eqref{T7}. (The notation refers to \eqref{I7} with arbitrary $X,Y\in \cG$.)
To be precise, in these cases one needs to restrict the starting system to $T^* \check G$, where the leading
principal minors of the elements of $\check G$ are positive, otherwise the reduction procedure is identical to the presented case, even the crucial equations
\eqref{PB2LR} and \eqref{rel**} keep their form for the corresponding invariant functions.
The open Toda phase space is well known \cite{SurB} to be a Poisson submanifold with respect to the linear $r$-matrix brackets
for any of $R$, $R'$ and $R''$.
However, in contrast to the case of $R$ \eqref{I11}, it is not a Poisson submanifold
with respect to the quadratic brackets associated with $R'$ and $R''$.
It would be interesting to find
the reduction origin of the modified quadratic $r$-matrix brackets of Suris \cite{SurPLA,SurB} that are free from this difficulty.
Another open problem is to extend our treatment
of the quadratic brackets to spectral parameter dependent $r$-matrices.
\bigskip
\bigskip
\begin{acknowledgements}
We wish to thank Maxime Fairon for useful remarks on the manuscript.
This work was supported in part by the NKFIH research grant K134946.
\end{acknowledgements}
In this paper we continue the investigation of the infrared
structure of quantum electrodynamics based on an algebraic
model proposed earlier by one of us (see Ref.\ \cite{her98} and
papers cited therein; see also \cite{her05}). This model is
supposed to describe asymptotic fields in the quantum
Maxwell-Dirac system, including the Gauss' law constraint (as
opposed to the crossed product of free fields).
In a recent paper \cite{her08} this model was investigated in
respect of the localization properties of fields. It was shown
that one needs an extension of the localization regions:
infrared/charge structure is encoded in unbounded regions. It was
argued that from the point of view of scattering theory, the
natural choice for extended localization regions consists of
`fattened lightcones', unions of intersecting: a future- and a
past-lightcone. The test-functions of electromagnetic fields have
well-defined asymptotes encoding the information on the long
distance structure.
In the present article we show that the algebra can be
localized in any `time-slice' which is fattening under constant
inclination towards infinity. In addition, the localization of
electromagnetic field may be restricted to `fattened
symmetrical spacelike cones': the unions of a spacelike cone
and its reflection with respect to a~point in its inside.
Similar restriction seems to be ruled out, even asymptotically,
for charged fields. This seems to contradict general wisdom on the
expected behavior of fields in full electrodynamics, see e.g.\
the assumptions on which Buchholz \cite{bu82} bases his selection
criterion of representations in quantum electrodynamics. Whether this points to some incompleteness of the model is an open question; see the discussion at the beginning of Section \ref{locdir} below and in Conclusions. On the other hand, we show that in the present model, in agreement with the general expectation, one can superpose two appropriately ``dressed'' Dirac fields carrying opposite charges to obtain a local observable.
This article should be regarded as a continuation of Refs.
\cite{her98} and \cite{her08}, and we refer the reader to these
references for more detail and a wider background.
However, we briefly summarize notation and the formulation of the model in the next two sections. We obtain spacelike localization of fields in Sections 4 and 5, and discuss the results in concluding Section 6.
\setcounter{equation}{0}
\section{Geometrical preliminaries}
The geometry of the spacetime is given by the affine Minkowski
space $\M$. If a~ref\-erence point $O$ is chosen, then each
point $P$ in $\M$ is represented by a~vector $x$ in the
associated Minkowski vector space $M$ according to \mbox{$P=O+x$}. We mostly keep $O$ fixed and use this representation. The Minkowski product is denoted by a dot, $x\cdot y$, and we write $x^2=x\cdot x$. If a~Minkowski basis $(e_0,\ldots,e_3)$ in $M$ is chosen, then we
denote \mbox{$x=x^ae_a$}. We also then use the standard
multi-index notation $x^\al=(x^0)^{\al_0}\!\ldots
(x^3)^{\al_3}$, \mbox{$|\al|=\al_0+\ldots+\al_3$},
$D^\be=\p_0^{\be_0}\!\ldots\,\p_3^{\be_3}$, where $\p_a=\p/\p
x^a$. We associate with the chosen Minkowski basis a Euclidean
metric with unit matrix in that basis, and denote by $|x|$ the
norm of $x$ in that metric. We briefly recall the definitions
of test function spaces used in \cite{her08}. Let $\vp(x)$ be
a smooth tensor or spinor field (with vector representation of
points) and define for $\ka\geq0$, $l=0,1,\dots$ the seminorms
\begin{equation}
\|\vp\|_{\ka,l}=\sup(1+|x|)^\ka|D^\be\vp_j(x)|\,,
\end{equation}
where supremum is taken over $x\in M$, all $\be$ such that
$|\be|=l$ and $j$ running over the components of the field.
Then $\Sc_\ka$ is the space of all smooth fields of a given
geometrical type for which all seminorms $\|.\|_{\ka+l,l}$ with
fixed $\ka$ are finite. Denote moreover the operators on smooth
functions $H=x\cdot\partial$ and $H_\ka=H+\ka\id$. Then the
space $\Sc^\ka_{\ka+\ep}$ consists of all fields which under
the action of $H_\ka$ fall into $\Sc_{\ka+\ep}$. Each field
$\vp\in\Sc_{\ka+\ep}^\ka$ has an asymptote
\begin{equation}
\vp_\as(x)=\lim_{R\to\infty}R^\ka\vp(Rx)\,.
\end{equation}
The inversion formulas are
\begin{equation}
\vp(x)=\int_0^1u^{\ka-1}[H_\ka\vp](ux)\,du\,,\quad
\vp_\as(x)=\int_0^\infty u^{\ka-1}[H_\ka\vp](ux)\,du\,.
\end{equation}
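The first inversion formula follows from a one-line computation, sketched here for convenience (assuming $\ka >0$, so that the boundary term at $u=0$ vanishes):

```latex
\frac{d}{du}\big[u^\ka\vp (ux)\big]=u^{\ka -1}\big[\ka\vp (ux)+(H\vp )(ux)\big]
=u^{\ka -1}[H_\ka\vp ](ux)\,;
```

integrating over $u\in (0,1]$ gives the first formula, while integrating over $(0,R]$ gives $R^\ka\vp (Rx)$, and letting $R\to\infty$ gives the second.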
The subspaces $\Sc_\ka(\W)$, $\Sc_{\ka+\ep}^\ka(\W)$ consist of
functions supported in $\W$. All spaces, as well as asymptotes,
are independent of the choice of an origin and a basis.
Next, we recall some notation for Lorentz invariant
hypersurfaces. We denote by $l$ vectors on the future
lightcone, and we also introduce
\[
L_{ab}=l_a(\p/\p l^b)-l_b(\p/\p l^a)\,,
\]
which is an operator conveniently expressing differentiation on
the lightcone. We denote by $d^2l$ the invariant measure on the
set of null directions, which is applicable to functions $f(l)$
homogeneous of degree $-2$: the integral
\begin{equation}\label{d2l}
\int f(l)\,d^2l=\int f(e_0+\vec{\vspace{1pt}l})\,d\W(\vec{\vspace{1pt}l})\,,
\end{equation}
where $d\W(\vec{\vspace{1pt}l})$ is the solid angle measure in the
direction of the unit 3-vector $\vec{\vspace{1pt}l}$, is
independent of the choice of Minkowski basis, and satisfies
\begin{equation}\label{parts}
\int L_{ab}f(l)\,d^2l=0\,.
\end{equation}
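Identity \eqref{parts} can be obtained from the Lorentz invariance of the integral \eqref{d2l} (a standard argument, sketched here): for an antisymmetric matrix $\omega^{cd}$, the flow $l\mapsto e^{t\omega}l$ preserves the future lightcone and the homogeneity of $f$, so

```latex
0=\frac{d}{dt}\Big|_{t=0}\int f\big(e^{t\omega}l\big)\,d^2l
=-\frac{1}{2}\,\omega^{cd}\int L_{cd}f(l)\,d^2l\,,
```

and \eqref{parts} follows since $\omega$ is arbitrary.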
We denote by $H_+$ the hyperboloid $v^2=1$, $v^0>0$. The
differentiation within the hyperboloid is conveniently expressed
by the action of the operator $\delta_a$, and integration with the
use of invariant measure $d\mu$, defined respectively by
\[
\delta_b=v^a\big[v_a(\p/\p v^b)-v_b(\p/\p v^a)\big]\,,\quad
d\mu(v)=2\theta(v^0)\delta(v^2-1)\,d^4v\,.
\]
We note that for a differentiable function $f(v)$ vanishing for
$v^0\to\infty$ as $o((v^0)^{-3})$, we have
\begin{equation}\label{vsto}
\int(\delta-3v)f(v)\,d\mu(v)=0\,.
\end{equation}
For $x$ inside the future lightcone, one can write $x=\la v$, $\la>0$, and then differentiation and integration over the inside of the future lightcone may be written as
\begin{gather}
\p/\p x^a=v_a\p_\la+(1/\la)\delta_a\,,\label{vd}\\
\int F(x)\,d^4x=\int F(\la v)\la^3\,d\la\,d\mu(v)\,.\label{vi}
\end{gather}
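As a quick consistency check of \eqref{vd} (illustration only), apply both sides to the function $x^2=\la^2$: since $\delta_a$ acts only on $v$ and annihilates $\la$,

```latex
\[
\frac{\p}{\p x^a}\,x^2=2x_a\,,\qquad
\big[v_a\p_\la+(1/\la)\delta_a\big]\la^2=2\la v_a=2x_a\,.
\]
```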
Similarly, for the hyperboloid $H_-$ formed by $z^2=-1$, the
differentiation operator and the integration measure are defined,
respectively, by
\[
\delta_b=-z^a\big[z_a(\p/\p z^b)-z_b(\p/\p z^a)\big]\,,\quad
d\nu(z)=2\delta(z^2+1)\,d^4z\,.
\]
For $f(z)$ vanishing for $|\vec{\vspace{1pt}z}|\to\infty$ as
$o(|\vec{\vspace{1pt}z}|^{-3})$, we have
\begin{equation}\label{zsto}
\int(\delta+3z)f(z)\,d\nu(z)=0\,,
\end{equation}
and for $x=\la z$ ($\la>0$) running over the outside of the lightcone, the
analogues of \eqref{vd} and \eqref{vi} are
\begin{gather}
\p/\p x^a=-z_a\p_\la+(1/\la)\delta_a\,,\label{zd}\\
\int F(x)\,d^4x=\int F(\la z)\la^3\,d\la\,d\nu(z)\,.\label{zi}
\end{gather}
Finally, we define some spacetime sets used in the article. For
$\ga>0$ and $\delta\in(0,1)$ we shall denote by
$R_{\ga,\delta}$ the region
$|x^0|\leq\ga+\delta|\vec{\vspace{1pt}x}|$ and by $R_\delta$
the region $|x^0|\leq\delta|\vec{\vspace{1pt}x}|$. We note that
\begin{equation}\label{loreu}
-x^2\geq\frac{1-\delta^2}{1+\delta^2}\,|x|^2\quad\text{for}\quad
x\in R_\delta\,.
\end{equation}
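The bound \eqref{loreu} is a direct consequence of the definition of $R_\delta$: for $|x^0|\leq\delta|\vec{\vspace{1pt}x}|$ one has

```latex
\[
-x^2=|\vec{\vspace{1pt}x}|^2-(x^0)^2\geq(1-\delta^2)|\vec{\vspace{1pt}x}|^2\,,\qquad
|x|^2=(x^0)^2+|\vec{\vspace{1pt}x}|^2\leq(1+\delta^2)|\vec{\vspace{1pt}x}|^2\,,
\]
```

and the quotient of these two inequalities gives \eqref{loreu}.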
By a \emph{spacelike cone} we shall mean a closed (solid) cone
in $\M$ such that all vectors going from the apex to other
points of the cone are spacelike. A~\emph{symmetrical spacelike
cone} will be the union of such a cone with its reflection with
respect to its apex, and a \emph{fattened symmetrical spacelike
cone} -- the union of such a cone with its reflection with
respect to a point inside the cone. An open version of any of
the defined cones will be its interior.
\setcounter{equation}{0}
\section{The model}\label{alg}
We briefly summarize the model formulated in \cite{her98}. The
choice of the test function spaces is slightly modified.
\subsection{Electromagnetic test functions}
Let $V(s,l)$ be a real vector function of a real variable $s$ and a
future-pointing lightlike vector $l$. We shall understand
differentiability of functions $V_a$ in the sense of the action
of $L_{ab}$ and $\p_s=\p/\p s$, and denote
$\dV(s,l)=\p_sV(s,l)$. Let~$\V_\ep$ be the real vector space
of $\C^\infty$ functions $V_a(s,l)$ which satisfy the following
additional conditions:
\begin{gather}
V(\mu s,\mu l)=\mu^{-1}V(s,l)\,,\quad \mu>0\,,\label{hom}\\[.7ex]
l\cdot V(s,l)=0\,,\label{ortog}\\
|L_{b_1c_1}\ldots
L_{b_kc_k}\dV_a(s,l)|\leq\frac{\con(t,k)}{(t\cdot
l)^2(1+|s|/t\cdot l)^{1+\ep}}\,,\quad k\in\mN\,,\label{falloff1}\\[.5ex]
V(+\infty,l)=-V(-\infty,l)\equiv\tfrac{1}{2}\Delta V(l)\,,\label{infty}\\[.5ex]
L_{[ab}\Delta V_{c]}(l)=0\,,\label{DV}
\end{gather}
where the third condition holds for an arbitrarily chosen unit
timelike, future-pointing vector $t$; the bounds are then true
for any other such vector (with some other constants).
Moreover, with the use of homogeneity \eqref{hom}, the bounds
are generalized to
\begin{equation}\label{falloff2}
|L_{b_1c_1}\ldots L_{b_kc_k}\p_s^nV_a(s,l)|
\leq\frac{\con(t,n,k)}{(t\cdot l)^2(1+|s|/t\cdot l)^{n+\ep}}\,,\quad
n,k\in\mN\,.
\end{equation}
It follows from the property \eqref{DV} that
\begin{equation}\label{VPhi}
l_a\Delta V_b(l)-l_b\Delta V_a(l)=-L_{ab}\Phi_V(l)\,,
\end{equation}
where
\begin{equation}\label{Phi}
\Phi_V(l)=-\frac{1}{4\pi}\int\frac{l\cdot \Delta V(l')}{l\cdot l'}\,d^2l'
\end{equation}
is a smooth homogeneous function. If
$\Delta V(l)=l\al(l)$, then
\[
\Phi_V(l)=-\frac{1}{4\pi}\int\al(l')\,d^2l'=\con\,.
\]
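A simple normalization example (with an illustrative choice of $\al$): for $\al(l)=(t\cdot l)^{-2}$, with $t$ a unit timelike, future-pointing vector, the integrand is homogeneous of degree $-2$, and evaluating \eqref{d2l} in a basis with $e_0=t$ one finds

```latex
\[
\Phi_V(l)=-\frac{1}{4\pi}\int(t\cdot l')^{-2}\,d^2l'
=-\frac{1}{4\pi}\int d\W(\vec{\vspace{1pt}l})=-1\,.
\]
```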
We also note for later use that for $v\in H_+$, we have
\begin{equation}\label{intVPhi}
\int\frac{v\cdot\Delta V(l)}{v\cdot l}\,d^2l
=-\int\frac{\Phi_V(l)}{(v\cdot l)^2}\,d^2l\,.
\end{equation}
The spaces $\V_\ep$ form an increasing family for $\ep\searrow
0$, so their union is a~vector space,
\begin{equation}
\V=\bigcup_{\ep>0}\V_\ep\,.
\end{equation}
This vector space, when viewed as an Abelian group, allows the
following sub- and quotient groups:
\begin{gather}
\V^0_\as=\{V\in\V\mid l\wedge V(s,l)=0\ \text{and}\
\Phi_V(l)=n(2\pi/e),\ n\in\mZ\}\,,\label{V0as}\\
L=\V/\V^0_\as\,;\label{L}
\end{gather}
the elements of the latter will be denoted by $[V]$. The space
$\V$ is equipped with a symplectic form
\begin{equation}\label{symplectic}
\{V_1,V_2\}=\frac{1}{4\pi}\int(\dV_1\cdot V_2-\dV_2\cdot V_1)(s,l)\,ds\,d^2l\,,
\end{equation}
which is also consistently transferred to $L$.
For each $V\in\V$, the formula
\begin{equation}\label{freeV}
A(x)=-\frac{1}{2\pi}\int \dV(x\cdot l,l)d^2l
\end{equation}
gives the Lorenz-gauge potential of a free electromagnetic field
with well-defined null asymptotes:
\begin{equation}
\lim_{R\to\infty}RA(x\pm Rl)=\pm V(x\cdot l,l)
-\tfrac{1}{2}\Delta V(l)
\end{equation}
and a long-range tail of electric type. This is the class of
fields which are produced in typical scattering processes
\cite{her95}. For each spacelike $x$ and any fixed $a$, the
spacelike tail is given by
\begin{equation}\label{asymptote}
A^\as(x)=\lim_{R\to\infty}RA(a+Rx)=-\frac{1}{2\pi}
\int \Delta V(l)\,\delta(x\cdot l)\,d^2l=A^\as(-x)\,,
\end{equation}
where $\delta$ is the Dirac measure. Let $F^\as_{ab}$ be the
electromagnetic field of this asymptotic potential. The condition
\eqref{DV} implies that $x^{}_{[a}F^\as_{bc]}(x)=0$, so this field
is of electric type. If $F^\as=0$, we shall say that the field is
infrared-regular, otherwise it will be called infrared-singular.
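The role of the condition $l\wedge\Delta V\neq0$ may be illustrated by a simple example: take $\Delta V(l)=l\,(t\cdot l)^{-2}$, so that $l\wedge\Delta V=0$. Then \eqref{asymptote} gives

```latex
\[
\p_bA^\as_a(x)=-\frac{1}{2\pi}\int l_al_b\,\delta'(x\cdot l)\,(t\cdot l)^{-2}\,d^2l\,,
\]
```

which is symmetric in $(a,b)$, so $F^\as=0$ and the field is infrared-regular, although its tail does not vanish: for $t=e_0$ and $x=(0,\vec{\vspace{1pt}x})$ one finds $A^\as_0(x)=-\tfrac{1}{2\pi}\int\delta(\vec{\vspace{1pt}x}\cdot\vec{\vspace{1pt}l})\,d\W(\vec{\vspace{1pt}l})=-1/|\vec{\vspace{1pt}x}|$. An infrared-singular field thus requires $l\wedge\Delta V(l)\neq0$.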
The symplectic form \eqref{symplectic} is a natural extension, to
the class considered here, of the usual symplectic form of free,
infrared-regular electromagnetic fields.
\subsection{Matter test functions}
We denote by $\Sc(H_+)$ the space of smooth $4$-spinor functions on $H_+$ for
which all seminorms
\begin{equation}
\|f\|^{H_+}_{\al,\be}=\sup |v^\al \delta^\be f(v)|
\end{equation}
are finite (with the usual multi-index notation, and supremum
over $v$ and components of the field).
For $f\in \Sc(H_+)$, the Fourier representation given by
the formula
\begin{equation}\label{freef}
\psi(x)=\left(\frac{m}{2\pi}\right)^{3/2}
\int e^{\txt-imx\cdot v\,\gamma\cdot v}\gamma\cdot v f(v)\,d\mu(v)
\end{equation}
gives a smooth Dirac field, with the timelike asymptote
determined by
\begin{equation}
f(v)=\lim_{\la\to\infty}\la^{3/2}ie^{\txt i(m\la+\pi/4)\gamma\cdot v}\psi(\la v)\,.
\end{equation}
One has the usual scalar product in the space of these fields
\begin{equation}
(f_1,f_2)=\int\ov{f_1(v)}\gamma\cdot v f_2(v)\,d\mu(v)=
\int_\Sigma\ov{\psi_1}\gamma^a\psi_2(x)\,d\sigma_a(x)\,,\label{scalarpr}
\end{equation}
where the second integral is over any Cauchy surface $\Sigma$.
We denote by $\K$ the Hilbert space completion of $\Sc(H_+)$
with respect to this product.
\subsection{The algebra}\label{algebra}
The ${}^*$-algebra $\B$ of the model is generated by elements
$W([V])$, $[V]\in L$, which for simplicity will also be written as
$W(V)$, elements $\Psi(f)$, $f\in\Sc(H_+)$, and a unit $E$ by
\begin{gather}
\begin{split}
W(V_1)W(V_2)&= e^{\txt -\frac{i}{2}\{ V_1,V_2\}} W(V_1+V_2)\,,\\
W(V)^* &= W(-V)\,,\ W(0)=E\,,
\end{split}\label{weyl}\\[1ex]
[\Psi(f_1),\Psi(f_2)]_+ =0\,,\quad
[\Psi(f_1),\Psi(f_2)^*]_+ =(f_1, f_2)E\,,\label{ferm}\\[1ex]
W(V)\Psi(f)=\Psi(S_{\Delta V}f)W(V)\,,\label{com}
\end{gather}
where
\begin{equation}\label{S}
(S_{\Delta V} f)(v)=
\exp\left(\dsp -\frac{ie}{4\pi}
\int\frac{v\cdot\Delta V(l)}{v\cdot l}\,d^2l\right)\,f(v)\,.
\end{equation}
Note that the exponent function in the last formula is a multiplier in $\Sc(H_+)$,
so the operator $S_{\Delta V}$ is a~linear automorphism of
$\Sc(H_+)$. This can be easily seen: for $t\cdot l=1$ and
$v\in H_+$ one has
$|v\cdot l|^{-1}<|v^0|+|\vec{\vspace{1pt}v}|$, so $\left|\dsp
\int\frac{\Delta V^a(l)l^{\al}}{(v\cdot l)^{|\al|+1}}\,d^2l\,\right|$ is
polynomially bounded for any multi-index $\al$. Note also that, by the identity \eqref{intVPhi} and
definitions \eqref{V0as} and \eqref{L}, one has $S_{\Delta
V_2}=S_{\Delta V_1}$ for $V_2\in[V_1]\in L$, so the algebra is
properly defined.
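The consistency just mentioned may be made explicit in a simple special case: for $\Delta V(l)=l\al(l)$ the exponent in \eqref{S} is a constant,

```latex
\[
-\frac{ie}{4\pi}\int\frac{v\cdot l\,\al(l)}{v\cdot l}\,d^2l
=-\frac{ie}{4\pi}\int\al(l)\,d^2l=ie\,\Phi_V\,,
\]
```

so $S_{\Delta V}f=e^{ie\Phi_V}f$; if in addition $\Phi_V=n(2\pi/e)$, as required in \eqref{V0as}, then this phase equals $1$ and $S_{\Delta V}=\id$.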
The elements $\Psi(f)$ generate a subalgebra $\B^+$ of the
CAR type, and the elements $W(V)$ -- a subalgebra $\B^-$ of
the CCR type. We denote by $\beta_V$ the automorphisms of
$\B^+$ defined by
\begin{equation}
\beta_V(C)=W(V)CW(-V)\,,
\end{equation}
forming a group, $\beta_{V_1}\beta_{V_2}=\beta_{V_1+V_2}$.
Regular, translationally covariant, positive energy
representations of $\B$ are shown, up to a unitary equivalence, to
form a class defined in the following way. Let $\pi_F$ be the
standard positive energy Fock representation of $\B^+$ on $\Hc_F$
with the Fock vacuum vector $\W_F$, and $\pi_r$ be any regular,
translationally covariant, positive energy representation of
$\B^-$ on $\Hc_r$. Define operators $\pi(A)$ on
$\Hc=\Hc_F\otimes\Hc_r$ by
\begin{equation}\label{rep}
\begin{split}
\pi(C)&=\pi_F(C)\otimes\id_r\,,\quad C\in\B^+\,,\\
\pi(W(V))[\pi_F(B)\W_F\otimes\varphi]&=
\pi_F(\beta_VB)\W_F\otimes\pi_r(W(V))\varphi\,,\quad B\in\B^+\,.
\end{split}
\end{equation}
Then $\pi$ extends to a regular, translationally covariant
positive energy representation of $\B$. We add one further
demand to our selection criterion, that
$\pi_r(W(V_1))=\pi_r(W(V_2))$ whenever $l\wedge V_1=l\wedge
V_2$, which is related to the gauge invariance.
One shows that all representations from the class thus defined
determine the same $C^*$-norm on $\B$; the completion of
$\B$ in this norm is the $C^*$-algebra $\F$ of the model.
\setcounter{equation}{0}
\section{Spacelike localization of electromagnetic
fields}\label{locelm}
We now want to equip the elements of the algebra with spacetime
localization properties. We start with the electromagnetic
fields, which have direct observable status. The way to ascribe
spacetime properties to them is to represent the classical test
fields $A$ in \eqref{freeV} as
\begin{equation}\label{freeJ}
A(x)=4\pi\int D(x-y)\,J(y)\,d^4y\,.
\end{equation}
Here $J$ is a classical conserved test current field, and
$D(x)=D(0,x)$, with
\begin{equation}
D(m,x)=\frac{i}{(2\pi)^3}\int\sgn p^0\delta(p^2-m^2)e^{-ip\cdot x}\,dp\,.
\end{equation}
We want the supports of $J$ to be contained between two Cauchy
surfaces. This may be interpreted as a generalized time-slice
property.
We shall be concerned with conserved test currents $J$ which
are elements of $\Sc^3_{3+\ep}(R_{\ga,\delta})$. Then the
asymptote $J_\as$ has its support in $R_\delta$. For such
currents the integral in \eqref{freeJ} is absolutely convergent
and determines a~corresponding $A$. We want to find out whether
this potential is of the type given by \eqref{freeV}. We start
with a useful subsidiary result.
\begin{lem}
Let $J_\as$ be a homogeneous of degree $-3$ vector function, smooth
outside the origin, with support in $R_\delta$.
The following statements are equivalent.\\
\hspace*{1em}(i) The continuity equation
\begin{equation}\label{ascont}
\p\cdot J_\as(x)=0
\end{equation}
is satisfied distributionally.\\
\hspace*{1em}(ii) $J_\as$ satisfies the following conditions on
$H_-$
\begin{gather}
\delta\cdot J_\as(z)+3z\cdot J_\as(z)=0\,,\label{contdif}\\
\int z\cdot J_\as(z)\,d\nu(z)=0\,.\label{contzero}
\end{gather}
\hspace*{1em}(iii) $J_\as$ is an asymptote of some conserved
current $J\in\Sc^3_{3+\ep}(R_{\ga,\delta})$.
In particular, these conditions are satisfied for $J_\as$ of
the special form
\begin{equation}\label{zg}
J_\as(x)=xg(x)\quad \text{with}\quad
\int g(z)\,d\nu(z)=0\,,
\end{equation}
where $g$ is a scalar function homogeneous of degree $-4$, smooth outside the origin.
\end{lem}
\begin{proof}
The condition \eqref{contdif} is equivalent to \eqref{ascont}
for $x$ outside the origin (use \eqref{zd}). If it holds, then
we have for any test function $\vp$
\begin{multline}
\int J_\as^b(x)\p_b\vp(x)\,d^4x=
\lim_{\epsilon\to
0}\int_{x^2=-\epsilon^2}\vp(x)J_\as^b(x)\,d\sigma_b(x)\\
=\vp(0)\int z\cdot J_\as(z)\,d\nu(z)\,,
\end{multline}
which proves the equivalence of (i) and (ii). Let $\rho$ be a
smooth function with support in $|x|\leq\ga/\sqrt{2}$ and such
that \mbox{$\int\rho(x)\,d^4x=1$}. The vector function
\begin{equation}\label{Jrho}
J_\rho=\rho*J_\as
\end{equation}
is easily shown to be in $\Sc^3_{3+\ep}(R_{\ga,\delta})$ with
the asymptote $J_\as$, and if (i) is true, then it satisfies
the continuity equation. Conversely, if $J_\as$ is the
asymptote of a~conserved $J\in\Sc^3_{3+\ep}(R_{\ga,\delta})$,
then it is supported in $R_\delta$ and \eqref{contdif} is the
limit of the continuity equation $\p\cdot J(x)=0$ for
$x^2\to-\infty$. Integrating the latter equation over the
region $x^2\geq-R^2$ and taking the limit $R\to\infty$ one
arrives at \eqref{contzero}. The statement concerning
\eqref{zg} is easily checked.
\end{proof}
We note for future use that by \eqref{zsto} and \eqref{contdif}
one has for any continuously differentiable function $f(z)$
\begin{equation}\label{Jdot}
\int J_\as\cdot\delta f(z)\,d\nu(z)=0\,.
\end{equation}
\begin{thm}
Let $J\in\Sc^3_{3+\ep}(R_{\ga,\delta})$ be a conserved current.
Then the function
\begin{equation}\label{VR}
\dV(s,l)=\frac{1}{s}\big(V_0(s,l)-V_0(0,l)\big)\,,
\end{equation}
where
\begin{equation}\label{V0}
V_0(s,l)=\int\delta(s-x\cdot l)H_3J(x)\,d^4x\,,
\end{equation}
satisfies conditions \eqref{hom} and \eqref{ortog}, and $J$ and
$V$ generate the same $A$ according to \eqref{freeJ} and
\eqref{freeV} respectively. If the asymptote of $J$ is odd:
\begin{equation}
J_\as(-x)=-J_\as(x)\,,
\end{equation}
then $V_0(0,l)=0$, so $V$ satisfies also \eqref{falloff1}, and
it may be then obtained by
\begin{equation}\label{VlimR}
V(s,l)=\lim_{R\to\infty}V^R(s,l)\,,\quad
V^R(s,l)=\int_{x^2\geq-R^2}\delta(s-x\cdot l)J(x)\,d^4x
\end{equation}
with $V^R(s,l)$ uniformly bounded and with
\begin{equation}\label{DeltaV}
\Delta V(l)=\int\frac{J_\as(z)}{z\cdot l}\,d\nu(z)
\end{equation}
(the integral in the principal value sense). If in addition
$L_{[ab}\Delta V_{c]}(l)=0$, then $V\in\V_\ep$. This is, in
particular, fulfilled for $J_\as$ of the type given by \eqref{zg}
with even~$g(z)$.
If $J_1$ and $J_2$ are two currents satisfying all the above
assumptions, then
\begin{equation}\label{sympl}
\{V_1,V_2\}=\lim_{R\to\infty}\tfrac{1}{2}
\int_{x^2\geq-R^2}[J_1\cdot A_2-J_2\cdot A_1](x)\,d^4x\,.
\end{equation}
\end{thm}
\begin{proof}
We first observe that as $H_3J(x)$ vanishes as $|x|^{-3-\ep}$ at
infinity, the integral \eqref{V0} is absolutely convergent, and
relations \eqref{hom} and \eqref{ortog} are easily seen to hold for $V_0$. Moreover,
with $X_{ab}=x_a\p/\p x^b-x_b\p/\p x^a$, we have
\begin{multline}\label{estV0}
|L_{a_1b_1}\ldots L_{a_kb_k}V_0(s,l)|=\left|\int\delta(s-x\cdot l)
X_{a_1b_1}\ldots X_{a_kb_k}H_3J(x)\,d^4x\right|\\
\leq\con\int\delta(s-x\cdot l)(1+|x|)^{-3-\ep}\,d^4x
\leq\frac{\con}{t\cdot l(1+|s|/t\cdot l)^{\ep}}\,.
\end{multline}
If $A$ is generated by $J$, then one finds easily that $H_1A$ is
generated by $H_3J$. It is then also easily seen, using the
representation
\[
D(x)=-(1/8\pi^2)\int\delta'(x\cdot l)\,d^2l\,,
\]
that $\dV_0$ generates $H_1A$ by \eqref{freeV}. But then it
follows that $A$ may be obtained by \eqref{freeV} from $\dV$
defined by \eqref{VR}.
We want to obtain another form of $V_0$. For any $R>0$ we have
\begin{multline}
\p\cdot\bigg\{x\,\delta(s-x\cdot l)
\Big[J(x)-\theta(-x^2\!-R^2)J_\as(x)\Big]\bigg\}\\
=-s\,\delta'(s-x\cdot l)
\Big[J(x)-\theta(-x^2\!-R^2)J_\as(x)\Big]\\
+\delta(s-x\cdot l)H_3J(x)
-2R^2\delta(s-x\cdot l)\delta(x^2+R^2)J_\as(x)\,.
\end{multline}
The l.h.s.\ yields zero when integrated over the whole space, so we
find
\begin{multline}\label{V0R}
V_0(s,l)=s\,\p_s\int\delta(s-x\cdot l)
\Big[J(x)-\theta(-x^2\!-R^2)J_\as(x)\Big]\,d^4x\\
+\int\delta\Big(\frac{s}{R}-z\cdot l\Big)J_\as(z)\,d\nu(z)\,.
\end{multline}
Setting here $s=0$, we find
\begin{equation}
V_0(0,l)=\int\delta(z\cdot l)J_\as(z)\,d\nu(z)\,,
\end{equation}
so if $J_\as$ is odd, which we assume from now on, then
$V_0(0,l)=0$, and $V$ satisfies the bounds \eqref{falloff1} (use \eqref{estV0}).
We note that if $V_0(0,l)\neq0$, then $\dV(s,l)$ falls off only as
$1/|s|$ and is outside the class~$\V$.
We integrate \eqref{VR} with the use of \eqref{V0R}, and find
\begin{multline}\label{VsR}
V(s,l)-V(-\infty,l)=V^R(s,l)
+\int_{-\infty}^{s/R}\frac{1}{\tau}
\bigg\{\int\delta(\tau-z\cdot l)
J_\as(z)\,d\nu(z)\bigg\}\,d\tau\\
+\int_{x^2\leq-R^2}\delta(s-x\cdot l)
(J-J_\as)(x)\,d^4x\,,
\end{multline}
with $V^R$ as defined in \eqref{VlimR}. The last term vanishes
both in the limit \mbox{$R\to\infty$} and in the limit $|s|\to\infty$,
and $V^R(s,l)$ vanishes for $|s|\to\infty$; the uniform
boundedness of $V^R(s,l)$ is also easily seen. We write down the
limit versions of \eqref{VsR} for $R\to\infty$ and for $s\to\infty$,
respectively (remember that
$V(+\infty,l)=-V(-\infty,l)=\tfrac{1}{2}\Delta V(l)$)
\begin{equation}\label{VDeltaV}
V(s,l)+\tfrac{1}{2}\Delta V(l)=\lim_{R\to\infty}V^R(s,l)
+\int_{-\infty}^0\frac{1}{\tau}
\bigg\{\int\delta(\tau-z\cdot l)
J_\as(z)\,d\nu(z)\bigg\}\,d\tau\,,
\end{equation}
\begin{equation}\label{DeltaV'}
\Delta V(l)=\int_{-\infty}^{+\infty}
\frac{1}{\tau}
\bigg\{\int\delta(\tau-z\cdot l)
J_\as(z)\,d\nu(z)\bigg\}\,d\tau
=\lim_{\ep\to0}\int_{|z\cdot l|\geq\ep}
\frac{J_\as(z)}{z\cdot l}\,d\nu(z)\,.
\end{equation}
The last equation gives \eqref{DeltaV}. Due to the oddness of
$J_\as$ the second term on the r.h.s.\ of \eqref{VDeltaV} is
then $\tfrac{1}{2}\Delta V(l)$, and we thus obtain
\eqref{VlimR}. If \eqref{DV} is satisfied, then $V\in\V_\ep$. We
note that the differentiation on the cone is transferred to the
differentiation on the hyperboloid:
\begin{equation}
L_{ab}\int\frac{J_\as(z)}{z\cdot l}\,d\nu(z)=
\int\frac{(z_a\delta_b-z_b\delta_a)J_\as(z)}{z\cdot l}\,d\nu(z)\,,
\end{equation}
therefore $\Delta V$ is smooth, and for $J_\as=zg(z)$ the
condition \eqref{DV} is satisfied automatically.
The last point concerns the symplectic form. We have
\begin{multline}
\frac{1}{4\pi}\int(\dV_1\cdot V^R_2-\dV_2\cdot
V^R_1)(s,l)\,ds\,d^2l\\
=\frac{1}{4\pi}
\int_{x^2\geq-R^2}\Big[\dV_1(x\cdot l)\cdot J_2(x)-
\dV_2(x\cdot l)\cdot J_1(x)\Big]\,d^2l\,d^4x\\
=\frac{1}{2}\int_{x^2\geq-R^2}(J_1\cdot A_2-J_2\cdot A_1)(x)\,d^4x
\end{multline}
due to the representation \eqref{freeV}. As $V^R_i(s,l)$ are
uniformly bounded, by the Lebesgue theorem the l.h.s.\ has a
finite limit $\{V_1,V_2\}$ for $R\to\infty$, so also the
r.h.s.\ has a finite limit, and one arrives at \eqref{sympl}. We
note, however, that the integrand of the r.h.s.\ is not
absolutely integrable on the whole space. The mechanism of the
convergence in the limit relies on the fact that the asymptotes
of $J_i$ are odd, while those of $A_i$ are even, so their
products do not contribute, if integration is done in the above
sense.
\end{proof}
A particular test current
$J_\rho\in\Sc^3_{3+\ep}(R_{\ga,\delta})$ with the given
asymptote $J_\as$ supported in $R_\delta$ was given in
\eqref{Jrho}. We want to find its corresponding function
$V_\rho$. We start with the following geometrical observation:
for $y\in R_\delta$ and $|x-y|\leq\ga$ we have
\begin{equation}\label{theta}
|\theta(x^2+R^2)-\theta(y^2+R^2)|
\leq\theta\big(-y^2-(R-R_1)^2\big)
\theta\big(y^2+(R+R_2)^2\big)
\end{equation}
for $R\geq R_1$, with some $\ga$- and $\delta$-dependent
constants $R_1$, $R_2$. This seems rather intuitive, but we
give a formal proof in the Appendix. It is then easy to see that
instead of formula \eqref{VlimR} one can use
$V_\rho=\lim_{R\to\infty}V^{'R}_\rho$ with
\begin{equation}
V^{'R}_\rho(s,l)=\int\delta(s-w\cdot l-y\cdot l)\rho(w)
\theta(y^2+R^2)J_\as(y)\,d^4w\,d^4y\,.
\end{equation}
If we denote
\begin{gather}
H(s,l)=\int\sgn(s-x\cdot l)\rho(x)\,d^4x\,,\\
V_\as^R(s,l)=\int\delta(s-x\cdot l)\theta(x^2+R^2)J_\as(x)\,d^4x\,,
\end{gather}
then we have
\begin{equation}\label{VsR'}
V^{'R}_\rho(s,l)=\tfrac{1}{2}\int \dot{H}(s-\tau,l)V_\as^R(\tau,l)\,d\tau\,.
\end{equation}
Using in the following first step \eqref{zi} and the homogeneity
of $J_\as(x)$, and in the second step oddness of $J_\as(x)$, we
find
\begin{multline}
V_\as^R(\tau,l)=\int\theta\left(\frac{\tau}{z\cdot l}\right)
\theta\left(R-\frac{\tau}{z\cdot l}\right)
\frac{J_\as(z)}{|z\cdot l|}\,d\nu(z)\\
= \tfrac{1}{2}\sgn(\tau)
\int_{|z\cdot l|\geq\frac{|\tau|}{R}}\frac{J_\as(z)}{z\cdot l}\,d\nu(z)\,.
\end{multline}
Thus for $R\to\infty$ the absolute value of \eqref{VsR'} remains
bounded, and one finds
\begin{equation}\label{VrhoH}
V_\rho(s,l)=H(s,l)\tfrac{1}{2}\Delta V(l)\,,
\end{equation}
with $\Delta V(l)$ given by \eqref{DeltaV}. Note that
$H(\pm\infty,l)=\pm1$.
Assume now that $\Delta V(l)$ satisfies \eqref{DV} and is
therefore determined up to a gauge by $\Phi_V(l)$.
\begin{prop}\label{PhiJ}
For $\Delta V(l)$ given by \eqref{DeltaV} one has
\begin{equation}
\Phi_V(l)=\int z\cdot J_\as(z)\log|z\cdot l|\,d\nu(z)\,.
\end{equation}
\end{prop}
\begin{proof}
We observe that the formula \eqref{Phi} in fact defines a
continuous, homogeneous function $\Phi_V(x)$ for $x$ in the closed
future lightcone. For $x$ inside the cone, and with
$v=x/\sqrt{x^2}$, one finds
\begin{equation}\label{Phix}
\Phi_V(x)=-\int\frac{v\cdot J_\as(z)}{\sqrt{(v\cdot z)^2+1}}
\log\Big[\sqrt{(v\cdot z)^2+1}+v\cdot z\Big]\, d\nu(z)\,,
\end{equation}
where we used the following formula valid for $v^2=1$, $v^0>0$
and $z^2=-1$:
\begin{equation}
\int\frac{d^2l}{v\cdot l\,z\cdot l}=
\frac{4\pi}{\sqrt{(v\cdot z)^2+1}}
\log\Big[\sqrt{(v\cdot z)^2+1}+v\cdot z\Big]\,.
\end{equation}
We observe that $\delta_a^{(z)}(v\cdot z)=v_a+v\cdot z\, z_a$,
which allows us to write the integrand in \eqref{Phix} as
\begin{multline}
\tfrac{1}{2}J_\as(z)\cdot\delta\left(\log\Big[\sqrt{(v\cdot z)^2+1}+v\cdot z\Big]\right)^2\\
+z\cdot J_\as(z)\bigg[1-\frac{|v\cdot z|}{\sqrt{(v\cdot
z)^2+1}}\bigg]\log\Big[\sqrt{(v\cdot z)^2+1}+|v\cdot z|\Big]\\
-z\cdot J_\as(z)\log\tfrac{1}{2}\Big[\sqrt{(x\cdot z)^2+x^2}+|x\cdot z|\Big]
+z\cdot J_\as(z)\log\tfrac{1}{2}\sqrt{x^2}\,,
\end{multline}
where we used the fact that
$\xi\log\big[\sqrt{\xi^2+1}+\xi\big]=|\xi|\log\big[\sqrt{\xi^2+1}+|\xi|\big]$.
The first and the last terms give no contribution to the
integral (use \eqref{Jdot} and \eqref{contzero} respectively).
We consider the other terms in the limit $x\to l$. In this
limit \mbox{$|v\cdot z|$} tends to $+\infty$ almost everywhere,
and the second term remains bounded by \mbox{$\con|z\cdot
J_\as(z)|$} and tends to zero almost everywhere, so the
contribution to the integral vanishes in this limit. Finally,
the third term yields the claimed formula.
\end{proof}
The above result has an interesting consequence.
\begin{prop}\label{Jzg}
Let $J\in\Sc^3_{3+\ep}(R_{\ga,\delta})$ be a conserved current
with an odd asymptote $J_\as$ and the corresponding function $V(s,l)$.
Let $L_{[ab}\Delta V_{c]}(l)=0$, so that $V\in\V_\ep$.
Then there exists a current $J'$ of the same type, but whose
asymptote is of the particular form $J'_\as(z)=zg(z)$, such
that the corresponding function $V'(s,l)$ satisfies
\begin{equation}
l\wedge (V'\fm V)(s,l)=0\quad\quad\text{and}\quad\quad
\Phi_{V'}(l)=\Phi_V(l)\,.
\end{equation}
Thus, in particular, $V'(s,l)-V(s,l)\in\V^0_\as$ and
$[V']=[V]\in L$.
\end{prop}
\begin{proof}
We set $J'=J+\rho*(J'_\as-J_\as)$, where $J'_\as$ is
homogeneous of degree $-3$ and on the unit hyperboloid given by
$J'_\as(z)=-z\,z\cdot J_\as(z)$, which then indeed is the
asymptote of $J'$. Then by \eqref{VrhoH} we have
\begin{equation}
(V'\fm V)(s,l)=H(s,l)\tfrac{1}{2}(\Delta V'\fm \Delta V)(l)
\end{equation}
and by
Proposition~\ref{PhiJ}: $\Phi_{V'}(l)=\Phi_V(l)$. Therefore
$l\wedge(\Delta V'\fm\Delta V)(l)=0$, which completes the proof.
\end{proof}
The net result of the present section to this point is the
identification of a~class of currents giving rise to test
elements $[V]\in L$ of our electromagnetic Weyl algebra. Now we
want to show that the whole group $L$ is covered in this way,
and even more, that the class may still be narrowed. We start
with an auxiliary result.
\begin{lem}
Let a smooth function $W(s,l)$ be homogeneous of degree $n-2$,
$W(\mu s,\mu l)=\mu^{n-2}W(s,l)$ ($\mu>0$), and satisfy the falloff
conditions
\begin{equation}
|L_{b_1c_1}\ldots
L_{b_kc_k}W(s,l)|\leq\con(k)\,\frac{(t\cdot l)^{n-2}}
{(1+|s|/t\cdot l)^\ep}\,,\quad k\in\mN\,.
\end{equation}
Denote $W^{(k)}(s,l)=\p_s^kW(s,l)$ and set
\begin{equation}
K(x)=-\frac{1}{2\pi}\int W^{(n)}(x\cdot l,l)\,d^2l\,.
\end{equation}
Then for each fixed $\delta\in(0,1)$ one has in the region
$R_\delta$ the bounds
\begin{equation}
|K(a+x)|\leq\con(\delta)\,(1+|x|)^{-n-\ep}\,.
\end{equation}
\end{lem}
\begin{proof}
It is sufficient to show this for $a=0$, as the properties of
$W$ are conserved under translations. For $n=0$ and $x\in R_\delta$, we have
\begin{equation*}
|K(x)|\leq\con\int_{-1}^1
\frac{du}{\big(1+\big||x^0|+|\vec{\vspace{1pt}x}|u\big|\big)^\ep}
\leq\con(\delta)\,(1+|x|)^{-\ep}\,.
\end{equation*}
We proceed by induction with respect to $n$. If we denote
\[
\tilde{x}(t,l)=(t\cdot l)^{-1}x+(t\cdot l)^{-2}t\cdot x\,l\,,
\]
then we have the identity
\begin{multline}
L_{ab}\Big[t^a\tilde{x}^bW^{(n-1)}(x\cdot l,l)\Big]\\
= x^2W^{(n)}(x\cdot l,l)
+ \left[t^a\tilde{x}^b L'_{ab}
+\frac{x\cdot l}{(t\cdot l)^2}\right]W^{(n-1)}(x\cdot l,l)\,,
\end{multline}
where $L'_{ab}W^{(n-1)}(x\cdot
l,l)=L_{ab}W^{(n-1)}(s,l)|_{s=x\cdot l}$. The integral of the
l.h.s.\ over $l$ vanishes, so by induction we have
\begin{multline}
|K(x)|\leq\min\left\{\con,\con(\delta)\frac{|x|}{|x^2|}(1+|x|)^{-n+1-\ep}\right\}\\
\leq
\con(\delta)(1+|x|)^{-n-\ep}\,.
\end{multline}
\end{proof}
We can now prove our main result of this subsection.
\begin{thm}\label{AfromJ}
Let $A$ be given by the formula \eqref{freeV} with
$V\in\V_\ep$, and choose an arbitrary set of the type $R_{\ga,\delta}$.
Then:\\
\hspace*{1em}(i) There exists $V'\in\V_\ep$ such that
$[V']=[V]$ and the corresponding potential $A'$ may be
represented as a radiation potential of a test current \linebreak
$J'\in\Sc^3_{3+\ep}(R_{\ga,\delta})$ with the asymptote of the form $J'_\as(x)=x\rho(x)$, with \linebreak
\mbox{$\rho(-x)=\rho(x)$}, supported in~$R_\delta$.\\
\hspace*{1em}(ii) The test current $J'$ may be represented
as a sum of currents with the same properties, but in addition
each of the currents is supported in a fattened symmetrical
spacelike cone contained in $R_{\ga,\delta}$. For each cover of
the set $R_{\ga,\delta}$ with such cones there is a
corresponding split of $J'$.
\end{thm}
\begin{proof}
For a given $A$ and $V$, we define
\begin{equation}\label{C}
C^a(x)=-\frac{1}{2\pi}\int\frac{V^a(x\cdot l,l)}{t\cdot l}\,d^2l\,,\quad
B^{ab}=C^at^b-C^bt^a\,.
\end{equation}
Then $\Box B^{ab}(x)=0$ and $A^a(x)=\p_bB^{ab}(x)$. Moreover,
with the use of the above lemma one easily finds that for $x\in R_\delta$,
\begin{equation}\label{falloffC}
|D^\al H_0C(a+x)|\leq\con(a,\delta,\al)(1+|x|)^{-|\al|-\ep}\,.
\end{equation}
Let now $F$ be a smooth function on the spacetime which for
$|x|\geq\ga$,
for some $\ga>0$, satisfies:\\
\hspace*{1em} (i) $F(\mu x)=F(x)$ for all $\mu\geq1$ (homogeneity),\\
\hspace*{1em} (ii) $F(-x)=-F(x)$,\\
\hspace*{1em} (iii) $F(x)=1/2$ for $x^0\geq \delta|\vec{\vspace{1pt}x}|$ for
some $\delta\in(0,1)$.\\
Note that the supports of derivatives of $F$ are contained in
$R_{\ga,\delta}$. We claim that
\begin{equation}
B^{ab}(x)=4\pi\int D(x-y)\vp^{ab}(y)\,d^4y\,,
\end{equation}
where $\vp^{ab}(y)=\Box(F(y)B^{ab}(y))$. Indeed, the support of
$\vp$ is contained in $R_{\ga,\delta}$, and for $x$ in the
future of $R_{\ga,\delta}$ the r.h.s. may be written as
\begin{equation*}
4\pi\int D_\mathrm{ret}(x-y)\Box\Big([F(y)+\tfrac{1}{2}]B^{ab}(y)\Big)\,d^4y\,,
\end{equation*}
which yields the l.h.s. upon integration by parts. But both
sides satisfy the wave equation, so the equality holds
everywhere.
The fall-off properties \eqref{falloffC} now easily imply that
$\vp\in\Sc^2_{2+\ep}$. Moreover, the support of $\vp$ is contained
in $R_{\ga,\delta}$ and that of the asymptote $\vp_\as$ in
$R_\delta$, and the asymptote is even: $\vp_\as(-x)=\vp_\as(x)$.
The potential $A$ now has the representation \eqref{freeJ} with
the test current $J^a=\p_b\vp^{ab}$, which is an element of
$\Sc^3_{3+\ep}$, has support properties similar to those of $\vp$, and its
asymptote is odd: $J_\as(-x)=-J_\as(x)$. Thus $J$~satisfies all
the assumptions of Proposition~\ref{Jzg}. The current $J'$
defined in the proof of this proposition may be written in the
present case as $J'=J'_\reg+J'_\sing$ with
\begin{gather}
J^{'a}_\reg=\p_b\vp^{ab}_\reg\,,\quad
\vp_\reg=\vp-\rho*\vp_\as\,,\label{J'reg}\\
J'_\sing=\rho*J'_\as\,,\quad
J'_\as(x)=x\,\bigg(\frac{x_c\p_b\vp_\as^{cb}(x)}{x^2}\bigg)\,.\label{J'sing}
\end{gather}
This completes the proof of (i).
To show (ii), we apply the above construction to
$R_{\ga',\delta'}$ with $\ga'<\ga$, $\delta'<\delta$, and note that
the two parts $J'_\reg$ and $J'_\sing$ may be considered
separately. For the first part we note a rather obvious fact: for
each cover of $R_{\ga',\delta'}$ with open fattened symmetrical
spacelike cones contained in $R_{\ga,\delta}$ there exists a
decomposition of unity on $R_{\ga',\delta'}$ with smooth functions
$f_k$ supported in the respective fattened symmetrical cones, taking values
in $\<0,1\>$ and with all derivatives bounded. The currents
\mbox{$J^{'a}_{\reg,k}=\p_b(f_k\vp_\reg^{ab})$} have the claimed
properties. For the second part we note that the intersection of $H_-$
with $R_{\delta'}$ may be covered by arbitrarily small symmetrical
patches, which are open as subsets of $H_-$ and are contained in
$R_\delta$. For each such cover there exists a corresponding
decomposition of unity on $R_{\delta'}\cap H_-$ with smooth, even
functions $g_k(z)$ supported in the respective patches, taking
values in $\<0,1\>$ and with bounded derivatives. We extend these
functions by homogeneity and define
\begin{equation}
J'_{\sing,k}=\rho*J'_{\as,k}\,,\quad
J'_{\as,k}(x)=x\,
\bigg(\frac{x_c\p_b\vp_{\as,k}^{cb}(x)}{x^2}\bigg)\,,\quad
\vp_{\as,k}^{ab}=g_k\vp_\as^{ab}\,.\label{Jk'sing}
\end{equation}
The asymptotes $J'_{\as,k}$ are odd and satisfy
\begin{equation}
\int z\cdot J'_{\as,k}(z)\,d\nu(z)=
\int\delta_b\left(z_c\vp_{\as,k}^{cb}(z)\right)\,d\nu(z)=0
\end{equation}
by \eqref{zsto}, so $J'_{\sing,k}$ are conserved currents by
\eqref{contzero}. Their sum yields $J'_\sing$, which ends the
proof.
\end{proof}
\setcounter{equation}{0}
\section{Localization of Dirac fields and
observables}\label{locdir}
Fields carrying charge do not represent observables. Moreover, in
full electrodynamics they undergo local gauge transformations,
so to form an observable from them one has to
compensate not only the global, but also the local gauge scaling. If
$\Psi(x)$ and $A(x)$ represent `local quantum spacetime fields',
then a way to achieve this is to give a precise meaning (by
smearing, renormalization etc.) to the heuristically formed
quantities
$\ov{\Psi}(x)\exp\bigg(-ie\int\limits_x^yA(z)\cdot dz\bigg)\Psi(y)$.
Localization of this quantity, if it can be defined, should be
determined by spacetime points $x$ and $y$ and the integration
path between them.
Single fields creating or annihilating a physical charged
particle, on the other hand, interpolate between different
representations of observables. However, because of the Gauss
law they cannot be local. Staying at the adopted heuristic
level, the best that one can do is to cut the above quantity in
two and obtain
$\exp\bigg(-ie\int\limits_\infty^yA(z)\cdot dz\bigg)\Psi(y)$,
where the path goes to spacelike infinity. The expectation then
would be that the effect of this operation is invisible in the
region spacelike to the localization of the integration path.
The above naive picture has its more refined counterpart in the
algebraic analysis of the superselection sectors in quantum
electrodynamics made by Buchholz~\cite{bu82}. The idea behind the
selection criterion adopted in this analysis is that by an
appropriate choice of the `radiation cloud' superimposed on a
charged state one can concentrate at a given time the electric
flux at spacelike infinity in an arbitrarily chosen patch on the
2-sphere in the infinity of 3-space. The causal influence of the
presence of the charge in this state may be thus made to vanish in
the causal complement of some spacelike cone in Minkowski space.
We shall now investigate this question in the model defined here.
Our algebra is an algebra of fields, not only observables, thus we
formulate the problem in their terms. We shall ask whether, in
representations defined in Section \ref{algebra}, by composing the
charged field $\pi(\Psi(f))$ with some radiation cloud and a
subsequent rescaling (to push the cloud to spacelike infinity), one
can obtain a modified field restricted to a fattened symmetrical
spacelike cone. The infrared tails are symmetric in the class of
fields considered in the model, thus the replacement of spacelike
cones by fattened symmetrical spacelike cones is unavoidable.
We shall see that the answer to this question is negative for a
rather general construction reflecting in an obvious way the
above idea. This also seems to disagree with expectations based
on perturbative calculations in QED.
The `perturbative axiomatic' construction of the physical state
space by Steinmann \cite{ste} may be seen as the strongest
indication in this direction. We postpone the discussion of this
point to the concluding section.
On the other hand, the same construction will allow us to construct
local observables formed as products of `dressed' Dirac fields
and their adjoints.
\subsection{Spacelike test functions}
To ascribe localization to the elements $\Psi(f)$, we first have
to interpret test functions in spacetime terms; this is done in
the present subsection. However, this alone does not give the
full answer to the question, because of the noncommutativity with
the observables $W(V)$. The addition of the clouds is treated in
the subsequent subsections.
The first step is achieved, in analogy to the electromagnetic
case, by representing the classical test field $\psi$ in
\eqref{freef} as
\begin{equation}\label{freeX}
\psi(x)=\frac{1}{i}\int S(m,x-y)\chi(y)\,d^4y\,,
\end{equation}
where $\chi$ is a classical test 4-spinor field and
$S(m,x)=(i\gamma\cdot\p+m)D(m,x)$. We want the support of $\chi$
to be contained between two Cauchy surfaces.
It is easy to show that the Fourier representation of $S(m,x)$
may be written as
\begin{equation}
S(m,x)=i\left(\frac{m}{2\pi}\right)^3\int
e^{\txt-imx\cdot v\,\gamma\cdot v}\gamma\cdot v\,d\mu(v)
\end{equation}
and then the Fourier connection between $f(v)$ and $\chi(x)$ in
the integral representations of the Dirac field $\psi$ given
respectively by \eqref{freef} and \eqref{freeX} takes the form
\begin{equation}\label{fchi}
f(v)=\left(\frac{m}{2\pi}\right)^{3/2}\int
e^{\txt imv\cdot x\,\gamma\cdot v}\chi(x)\,d^4x\,.
\end{equation}
It is clear that if $\chi\in\Sc(\M)$, the space of Schwartz
functions, then $f\in\Sc(H_+)$. For the converse statement we note
first the following analogue of the `regular wave packet'
property.
\begin{prop}
If $f\in\Sc(H_+)$, then for each $\delta\in(0,1)$ the Dirac
field $\psi$ formed by \eqref{freef} satisfies in the region $R_\delta$ the bounds
\begin{equation}
|D^\beta \psi(x)|\leq\con(\delta,|\beta|,n)(1+|x|)^{-n}
\end{equation}
for each $\beta$ and each $n\in\mN$.
\end{prop}
\begin{proof}
The representation \eqref{freef} is proportional to the sum of
two terms $\int e^{\mp imv\cdot x}f_\pm(v)\,d\mu(v)$ with
$f_\pm=P_\pm(v)f(v)$, $P_\pm(v)=\tfrac{1}{2}(1\pm \ga\cdot v)$.
It is clear that application of $D^\beta$ only modifies
functions $f_\pm$. Now, for any $g\in\Sc(H_+)$ and $x^2<0$, we have the identity
\begin{multline}
\int e^{\pm imv\cdot x}g(v)\,d\mu(v)\\=
\Big(\frac{\pm i}{m}\Big)^n
\int \frac{e^{\pm imv\cdot x}}{[x^2-(v\cdot x)^2]^n}
\left[\,\prod_{k=1}^n
x\cdot\Big(\delta+(2k-3)v\Big)\right]g(v)\,d\mu(v)\,,
\end{multline}
where the operators under the product sign are ordered from
right to left with increasing $k$. This is easily shown by
induction with respect to $n$ (integrate the r.h.s.\ by parts
with the use of \eqref{vsto}). But using \eqref{loreu} we have
$|x^2-(v\cdot x)^2|\geq\con(\delta)\,|x|^2$ for $x\in R_\delta$.
This easily yields the claimed bounds.
\end{proof}
\begin{thm}
Let $\psi$ be given by the formula \eqref{freef} with
$f\in\Sc(H_+)$, and choose an arbitrary set of the type $R_{\ga,\delta}$
(here $\delta=0$ is also admitted).
Then there exists $\chi\in\Sc(R_{\ga,\delta})$ which generates
$\psi$ by \eqref{freeX} (and, therefore, generates $f$ by \eqref{fchi}).
\end{thm}
\begin{proof}
Let $F$ be the function defined in the proof of Theorem
\ref{AfromJ}, and set \mbox{$\chi=(\ga\cdot\p+im)(F\psi)$}. This
function has support in $R_{\ga,\delta}$, and with the use of the last proposition one then easily shows that it is a Schwartz function. Using the method employed in the proof of Thm.\,\ref{AfromJ}, one finds that $\chi$ generates~$\psi$.
\end{proof}
\subsection{`Dressed' charged fields}
We now want to add radiation clouds to the Dirac fields. We first
treat the problem heuristically, and write the Dirac field in the
`integrational' notation as
\mbox{$\Psi(f)=\int\ov{f(v)}\gamma\cdot v\,\Psi(v)\,d\mu(v)$}.
For each four-velocity of the particle $v$ we choose an
electromagnetic cloud profile $V_v(s,l)\in\V$, and form a modified
field
\mbox{$\Psi(f,V_*)=\int\ov{f(v)}\gamma\cdot v\,W(V_v)\Psi(v)\,d\mu(v)$}.
This, of course, has only a heuristic value, but one can expect
that this field can be constructed in the von Neumann algebra
of a representation (from the class defining the
$C^*$-algebra~$\F$). Let us write, still at this informal
level, the commutation relation of this field with the
electromagnetic field. We find
\begin{equation}\label{heur}
W(V_1)\Psi(f,V_*)=\Psi(S_{V_1,V_*}f,V_*)W(V_1)\,,
\end{equation}
where
$\big(S_{V_1,V_*}f\big)(v)=\exp\big[i\vp_{V_1,V_*}(v)\big]\,f(v)$
with
\begin{equation}\label{Smod}
\vp_{V_1,V_*}(v)
=-\frac{e}{4\pi}\int\frac{v\cdot\Delta V_1(l)}{v\cdot l}\,d^2l
+\{V_1,V_v\}\,.
\end{equation}
The problem of compensating the Coulomb field by the cloud field
in some region is now the problem of choosing $V_v$ so as to
compensate the first term in \eqref{Smod} by the second term,
for $V_1$ in some class. However, we note that the symplectic form
reduces to zero when restricted to any of the two subspaces of
functions $V(s,l)$ which are even or odd in $s$ respectively. But
$\Delta V_1(l)$ is the characteristic of the odd part of
$V_1(s,l)$. Thus the odd part of $V_v(s,l)$ has no influence on
this expected cancellation, and therefore may be assumed to
vanish. In consequence, $V_v$ has no long-range tail, and the
field $W(V_v)$ is infrared-regular. This brings in an important
simplification: in all representations in our class there is
$\pi(W(V_v))=\id_F\otimes\pi_r(W(V_v))$ and this operator is
independent of $\pi(\Psi(f))=\pi_F(\Psi(f))\otimes\id_r$. Our
informal modified field is now
$\dsp\Psi_\pi(f,V_*)
=\int\ov{f(v)}\gamma\cdot v\,\pi_F(\Psi(v))\otimes
\pi_r(W(V_v))\,d\mu(v)$.
The use of representations for the further construction is
unavoidable. We shall need some additional general assumptions on
their properties, as well as some conditions on the `cloud'
profiles $V_*$. We formulate these assumptions successively in
the present subsection, and test them in a large class of
representations in the next subsection.
\begin{assum}\label{measurable}
The profiles $V_v(s,l)\in\V$ are smooth functions of all their arguments $(v,s,l)$, even in $s$. For each pair of
vectors \mbox{$\vp,\chi\in\Hc_r$} the function
$v\mapsto(\vp,\pi_r(W(V_v))\chi)_r$ is measurable.
\end{assum}
\noindent Smoothness implies, in particular, that for each $V_1$ the function $\vp_{V_1,V_*}(v)$ in \eqref{Smod} is smooth, and the operator $S_{V_1,V_*}$ in \eqref{heur} is well defined in $\Sc(H_+)$.
Motivated by the above discussion we choose an orthonormal
basis $\{e_j\}$ of the Hilbert space $\K$ formed of functions $e_j\in\Sc(H_+)$, and `expand' $\pi_F(\Psi(v))$ in that basis. This
leads us to the definition
\begin{equation}\label{psipi}
\Psi_\pi(f,V_*)=\sum_{j=1}^\infty\pi_F(\Psi(e_j))\otimes
W_{\pi_r}(V_*,\ov{f}\,\Gamma e_j)\,,
\end{equation}
where $\Gamma$ is the operator defined by $(\Gamma
f)(v)=\gamma\cdot vf(v)$, and $W_{\pi_r}(V_*,\rho)$ is defined
by
\begin{equation}\label{Wweak}
W_{\pi_r}(V_*,\rho)
=\int\pi_r(W(V_v))\rho(v)\,d\mu(v)\,,
\end{equation}
with the integral understood in the weak sense: the operators
are sandwiched in $(\vp,.\,\chi)_r$ before integration. We note
that
$|(\vp,\pi_r(W(V_v))\chi)_r|\leq\|\vp\|_r\|\chi\|_r$, so it is
sufficient that $\rho$ be integrable. Note also that all operators
$W_{\pi_r}(V_*,\rho)$ commute with each other, as all $V_v$ are
even.
\begin{prop}\label{psipiconv}
The series defining $\Psi_\pi(f,V_*)$ by \eqref{psipi}
converges ${}^*$-strongly to a~bounded operator independent of the
choice of the basis $\{e_j\}$ in $\Sc(H_+)$.
\end{prop}
\begin{proof}
If we denote by $\Psi^{(n)}_\pi(f,V_*)$ the series
truncated to the first $n$ terms, set
$C_{mn}=\Psi^{(n)}_\pi(f,V_*)-\Psi^{(m)}_\pi(f,V_*)$,
and use the anticommutation relations for $\Psi(e_j)$, we find
\begin{equation}
C_{mn}C_{mn}^*+C_{mn}^*C_{mn}=\id_F\,\otimes\!\sum_{j=m+1}^nw_j^*w^{}_j\,,
\end{equation}
where $w_j=W_{\pi_r}(V_*,\ov{f}\,\Gamma e_j)$. Now, using
\eqref{Wweak} it is easy to see that
\begin{multline}\label{Wj}
\big(\vp,W_{\pi_r}(V_*,\ov{f}\,\Gamma e_j)\chi\big)_r
=\Big(f,\big(\vp,\pi_r(W(V_*))\chi\big)_re_j\Big)\\
=\Big(\ov{\big(\vp,\pi_r(W(V_*))\chi\big)_r}\,f,e_j\Big)\,,
\end{multline}
so if we choose any orthonormal basis $\vp_k$ of $\Hc_r$, we find
\begin{multline}
\sum_{j=1}^\infty(\chi,w_j^*w_j\chi)_r=\sum_{j,k=1}^\infty|(\vp_k,w_j\chi)_r|^2\\
=\sum_{k=1}^\infty\int|(\vp_k,\pi_r(W(V_v))\chi)_r|^2
\ov{f(v)}\gamma\cdot v f(v)\,d\mu(v)=\|f\|^2\|\chi\|_r^2\,,
\end{multline}
the last step by the Lebesgue theorem. As $\sum_{j=1}^nw_j^*w_j$
is an increasing sequence of operators, this calculation shows
that $\sum_{j=1}^\infty w_j^*w_j=\|f\|^2\id_r$ in the
$\sigma$-strong sense. This is sufficient for the ${}^*$-strong convergence of the series \eqref{psipi} and the bound of the norm of the limit. The independence of the basis follows from the action of the limit operator on product vectors. It is easy to see with the use of \eqref{Wj} that
\begin{equation}\label{psipi2}
(\xi_1\otimes\chi_1,\Psi_\pi(f,V_*)\,\xi_2\otimes\chi_2)
=\Big(\xi_1,\pi_F\Big(\Psi\big(\,\ov{(\chi_1,\pi_r(W(V_*))\chi_2)_r}f\,\big)\Big)\xi_2\Big)_F\,.
\end{equation}
\end{proof}
The (anti-) commutation relations of the `dressed' Dirac fields
are:
\begin{gather}
[\Psi_\pi(f,V_*),\Psi_\pi(f',V'_*)]_+=0\,,\label{psipsi}\\[1ex]
[\Psi_\pi(f,V_*),\Psi_\pi(f',V'_*)^*]_+
=\id_F\otimes W_{\pi_r}(V_*-V'_*,\ov{f'}\,\Gamma f)\,,\label{psipsistar}\\[1ex]
\pi(W(V_1))\Psi_\pi(f,V_*)=\Psi_\pi(S_{V_1,V_*}f,V_*)\pi(W(V_1))\,,\label{Wpsi}
\end{gather}
where $S_{V_1,V_*}$ is given, as in the heuristic introduction,
by \eqref{Smod}. These relations are straightforwardly
calculated with the use of the definition \eqref{psipi}. For
the second and third identity use the technique of the above
proof and the independence of basis $\{e_j\}$ respectively.
Setting $V'_*=V_*$ we find that dressed fields with a fixed
profile $V_*$ satisfy the usual {\rm CAR} relations among
themselves. It follows thus by a~standard argument (see e.g.\
\cite{bra}) that $\|\Psi_\pi(f,V_*)\|=\|f\|$.
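The standard argument may be recalled in a few lines: writing
$\Psi=\Psi_\pi(f,V_*)$, relation \eqref{psipsi} with $f'=f$ gives
$\Psi^2=0$, while \eqref{psipsistar} with $V'_*=V_*$, $f'=f$
gives \mbox{$\Psi\Psi^*+\Psi^*\Psi=\|f\|^2\,\id$}. Hence
\[
(\Psi^*\Psi)^2=\Psi^*\big(\|f\|^2\,\id-\Psi^*\Psi\big)\Psi
=\|f\|^2\,\Psi^*\Psi\,,
\]
so by the $C^*$-property
$\|\Psi\|^4=\|(\Psi^*\Psi)^2\|=\|f\|^2\|\Psi\|^2$, which for
$f\neq0$ yields $\|\Psi\|=\|f\|$.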
To investigate the long-range behaviour of the dressed fields, we
scale their radiation clouds. The profile $V_v$ in the element
$W(V_v)$ may be assumed to result from a conserved current $J_v$
supported in $R_{\gamma,\delta}$, having vanishing asymptote, even
with respect to the reflection: $J_v(-x)=J_v(x)$. As then, in
loose terms, $W(V_v)=\exp[-iA(J_v)]$ and $A(J_v)=\int
A(x)J_v(x)d^4x$, scaling the electromagnetic field observable to
spacelike infinity means replacing $J_v$ by
$J^R_v(x)=R^{-3}J_v(x/R)$ and taking the limit $R\to\infty$ (cf.\
\cite{bu86}). This scaling induces a simple scaling law for $V_v$.
Thus we set
\begin{equation}\label{VvR}
V^R_v(s,l)=V_v(s/R,l)\,.
\end{equation}
\begin{assum}\label{wlimit}
There exist weak limits
\begin{equation}\label{ass2}
\wlim_{R\to\infty}\pi_r(W(V^R_v))=\N_{\pi_r}(V_v)\,W_{\pi_r}^\infty(V_v)\,,
\end{equation}
such that $W_{\pi_r}^\infty(V_v)$ are unitary operators in
$\Hc_r$, and the real, positive functions
\mbox{$v\mapsto \N_{\pi_r}(V_v)>0$} are smooth and such that
$1/\N_{\pi_r}(V_v)$ are multipliers\linebreak in~$\Sc(H_+)$.
\end{assum}
Note that it follows from Assumptions \ref{measurable} and
\ref{wlimit} that $\N_{\pi_r}(V_v)\leq1$ and functions
$v\mapsto (\vp,W_{\pi_r}^\infty(V_v)\chi)$ are measurable for
all $\vp,\chi\in\Hc_r$. Also, the operators
$W_{\pi_r}^\infty(V_v)$ commute with each other.
Mimicking the definitions \eqref{Wweak} and \eqref{psipi} we
now define
\begin{gather}
W_{\pi_r}^\infty(V_*,\rho)
=\int W_{\pi_r}^\infty(V_v)\rho(v)\,d\mu(v)\,,\label{Wweakinf}\\
\Psi_\pi^\infty(f,V_*)=\sum_{j=1}^\infty\pi_F(\Psi(e_j))\otimes
W_{\pi_r}^\infty(V_*,\ov{f}\,\Gamma e_j)\,,\label{psipiinf}
\end{gather}
and note that also the analogue of \eqref{psipi2} holds:
\begin{equation}
(\xi_1\otimes\chi_1,\Psi_\pi^\infty(f,V_*)\,\xi_2\otimes\chi_2)
=\Big(\xi_1,\pi_F\Big(\Psi\big(\,\ov{(\chi_1,W_{\pi_r}^\infty(V_*) \chi_2)_r}f\,\big)\Big)\xi_2\Big)_F\,.
\end{equation}
The correctness and basis independence of the definition
\eqref{psipiinf} are shown as in the proof of Proposition
\ref{psipiconv}. It is now easy to show that (the order of limits
in the second relation is irrelevant)
\begin{gather}
\wlim_{R\to\infty}W_{\pi_r}(V^R_*,\rho/\N_{\pi_r}(V_*))
=W_{\pi_r}^\infty(V_*,\rho)\,,\label{Winf}\\[1ex]
\begin{split}
\wlim_{R\to\infty}\lim_{R'\to\infty}
W_{\pi_r}\big(V^R_*-V'_*{}^{R'}&,\rho/[\N_{\pi_r}(V_*)\N_{\pi_r}(V'_*)]\big)\\[-1ex]
&=\int W_{\pi_r}^\infty(V_v)W_{\pi_r}^\infty(V'_v)^*\,\rho(v)\,d\mu(v)\,,
\end{split}\label{WinfWinf}
\\[1ex]
\wlim_{R\to\infty}\Psi_\pi(f/\N_{\pi_r}(V_*),V^R_*)
=\Psi_\pi^\infty(f,V_*)\,;\label{psipiinf2}
\end{gather}
for the last relation use \eqref{psipi2} and the uniform
boundedness of the norms of the operators under the limit. To find
the (anti-) commutation relations of the dressed fields, we use
their representation \eqref{psipiinf2} and the relations
\eqref{psipsi} -- \eqref{Wpsi}, with the use of \eqref{WinfWinf}
on the r.h.s.\ of \eqref{psipsistar}. Setting now $V'_*=V_*$ we
find
\begin{gather}
[\Psi_\pi^\infty(f,V_*),\Psi_\pi^\infty(f',V_*)]_+=0\,,\\[1ex]
[\Psi_\pi^\infty(f,V_*),\Psi_\pi^\infty(f',V_*)^*]_+
=(f,f')_\K \id\,,\\[1ex]
\pi(W(V_1))\Psi_\pi^\infty(f,V_*)
=\Psi_\pi^\infty(S^\infty_{V_1,V_*}f,V_*)\pi(W(V_1))\,,\label{WPsiinf}
\end{gather}
where
\begin{gather}
\big(S^\infty_{V_1,V_*}f\big)(v)
=\exp\big[i\vp^\infty_{V_1,V_*}(v)\big]\,f(v)\,,\label{SVinfty}\\[1.5ex]
\vp^\infty_{V_1,V_*}(v)=-\frac{e}{4\pi}
\int\frac{v\cdot\Delta V_1(l)}{v\cdot l}\,d^2l
+\frac{1}{2\pi}\int V_v(0,l)\cdot\Delta V_1(l)\,d^2l\,.
\end{gather}
To show \eqref{WPsiinf}, one notes first that
$\dsp\lim_{R\to\infty}\vp_{V_1,V^R_*}(v)=\vp^\infty_{V_1,V_*}(v)$
and then observes that while taking the weak limit of
$\Psi_\pi(S_{V_1,V^R_*}f/\N_{\pi_r}(V_*),V^R_*)$ one can replace
$S_{V_1,V^R_*}$ by $S^\infty_{V_1,V_*}$ as the difference vanishes
in norm.
We note that the dependence of $\vp^\infty_{V_1,V_*}(v)$ on $V_1$
is only through its infrared tail $\Delta V_1$. In spacetime terms
it means that the dependence on the test current $J_1$ giving rise
to $V_1$ is only through its asymptote $J^\as_1$, which may be
assumed to be of the form $J^\as_1(z)=z\rho_1(z)$, in accordance
with Proposition~\ref{Jzg}. Thus using \eqref{DeltaV} we can write
\begin{gather}
\vp^\infty_{V_1,V_*}(v)=\int\rho_1(z) F_v(z)\,d\nu(z)\,,\\
F_v(z)=\frac{1}{2\pi}\int\frac{1}{z\cdot l}\,z\cdot\left[V_v(0,l)
-\frac{e\,v}{2\,v\cdot l}\right]\,d^2l\,,\label{Fv}
\end{gather}
the second integral in the principal value sense.
The negative result mentioned at the beginning of Section
\ref{locdir} is now the following.
\begin{thm}
There is no choice of the profiles $V_v(0,l)$ for which
$S^\infty_{V_1,V_*}f=f$ holds for every test function $f$ and
all $J^\as_1(z)=z\rho_1(z)$ supported in a given fixed
symmetrical spacelike cone.
\end{thm}
\begin{proof}
The asymptote $\rho_1(z)$ is subject to two conditions: it must
be an even function and satisfy $\int\rho_1(z)d\nu(z)=0$ (cf.\
\eqref{contzero}). The only way to achieve
\mbox{$\exp[i\vp^\infty_{V_1,V_*}(v)]=1$} for some $v$ and all admissible $\rho_1$ supported in a given symmetrical spacelike cone would be that $F_v(z)=\con.$ on the patch of hyperboloid defining this cone (note that $F_v(z)$ is also even). This, however, is impossible for the following reason. It is
easily seen that $F_v(z)$ extends naturally to an even,
homogeneous function $F_v(x)$ of degree $0$ for all $x^2<0$ (by
simply replacing $z$ by $x$ in \eqref{Fv}). Now $F_v(z)=\con.$ in
a patch iff $F_v(x)=\con.$ in the corresponding cone. This,
however, is impossible, as we shall see that $\Box F_v(x)=2e/x^2$.
To show this, we first use the result of Appendix A of
\cite{her08}. Each possible profile $V_v(0,l)$ must be orthogonal
to $l$ and thus satisfies the conditions on $V(l)$ of this
Appendix. Using Eq.\,(A.4) one finds
\begin{multline}
F^{(1)}_v(x)\equiv
\frac{1}{2\pi}\int\frac{x\cdot V_v(0,l)}{x\cdot l}\,d^2l\\
=-\frac{1}{2\pi}\int \p\cdot V_v(0,l)\,\log\frac{|x\cdot l|}{v\cdot l}\,d^2l
+\frac{1}{2\pi}\int\frac{v\cdot V_v(0,l)}{v\cdot l}\,d^2l\,.
\end{multline}
This implies $\Box F^{(1)}_v(x)=0$. On the other hand one
explicitly calculates
\begin{equation}
F^{(2)}_v(x)=-\frac{e\,v\cdot x}{4\pi}
\int\frac{d^2l}{x\cdot l\,v\cdot l}
=-e\, \frac{v\cdot x}{\sqrt{(v\cdot x)^2-x^2}}
\artanh\frac{v\cdot x}{\sqrt{(v\cdot x)^2-x^2}}
\end{equation}
and $\Box F^{(2)}_v(x)=2e/x^2$.
\end{proof}
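As an independent numerical check of the last step (not part of the formal argument), the relation $\Box F^{(2)}_v(x)=2e/x^2$ can be tested by finite differences. In the rest frame $v=(1,0,0,0)$ one has $v\cdot x=t$ and $(v\cdot x)^2-x^2=|\vec{x}|^2$, so that $F^{(2)}_v=-e\,u\artanh u$ with $u=t/|\vec{x}|$. A minimal Python sketch (function names ad hoc):

```python
import math

def F2(t, x, y, z, e=1.0):
    # F^{(2)}_v at the point (t, x, y, z), in the rest frame v = (1,0,0,0)
    # and with metric signature (+,-,-,-):  F2 = -e * u * artanh(u),
    # where u = (v.x)/sqrt((v.x)^2 - x^2) reduces to u = t / |vec x|.
    r = math.sqrt(x * x + y * y + z * z)
    u = t / r                      # |u| < 1 at spacelike points
    return -e * u * math.atanh(u)

def box_F2(t, x, y, z, h=1e-3):
    # d'Alembertian d_t^2 - d_x^2 - d_y^2 - d_z^2 by central differences.
    p = (t, x, y, z)
    def second_derivative(i):
        q_plus, q_minus = list(p), list(p)
        q_plus[i] += h
        q_minus[i] -= h
        return (F2(*q_plus) - 2.0 * F2(*p) + F2(*q_minus)) / h ** 2
    return (second_derivative(0) - second_derivative(1)
            - second_derivative(2) - second_derivative(3))
```

At a spacelike point such as $(t,\vec{x})=(0.3,\,1.2,\,0.5,\,-0.4)$ the finite-difference d'Alembertian reproduces $2e/x^2$ to the accuracy of the discretization.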
This result shows that it is impossible to choose $V_v(s,l)$ in
such a way that the exponential factor in \eqref{SVinfty} vanishes
for all test functions and test currents supported in symmetrical
spacelike cones. However, one can find $V_v(s,l)$ which makes this
exponential factor independent of $v$. Let
\begin{equation}
V_v(s,l)=
\left(\frac{e}{2}\right)\left(\frac{v}{v\cdot l}
-\frac{t}{t\cdot l}\right)\eta\left(\frac{s}{t\cdot l}\right)\,,\label{profiles}
\end{equation}
where $t$ is a timelike unit vector and $\eta(s)$ is a smooth
function satisfying: \mbox{$0\leq \eta(s)\leq 1$}, $\eta(0)=1$,
$\eta(s)=\eta(-s)$, and there exists $s_0>0$ such that
$\eta(s)=0$ for $s>s_0$. For this profile, if it satisfies
Assumptions \ref{measurable} and \ref{wlimit} (besides smoothness, which is obvious), it follows that:
\begin{equation}
\vp^\infty_{V_1,V_*}=-\frac{e}{4\pi}
\int\frac{t\cdot\Delta V_1(l)}{t\cdot l}\,d^2l\,,\qquad
S^\infty_{V_1,V_*}=\exp\big[i\vp^\infty_{V_1,V_*}\big]\id\,.
\end{equation}
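This is seen by direct substitution: since $\eta(0)=1$, the
profile \eqref{profiles} gives
\mbox{$V_v(0,l)=\frac{e}{2}\left(\frac{v}{v\cdot l}
-\frac{t}{t\cdot l}\right)$}, so that
\[
\frac{1}{2\pi}\int V_v(0,l)\cdot\Delta V_1(l)\,d^2l
=\frac{e}{4\pi}\int\frac{v\cdot\Delta V_1(l)}{v\cdot l}\,d^2l
-\frac{e}{4\pi}\int\frac{t\cdot\Delta V_1(l)}{t\cdot l}\,d^2l\,,
\]
and the first term on the r.h.s.\ cancels the $v$-dependent
contribution to $\vp^\infty_{V_1,V_*}(v)$.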
The commutation relation \eqref{WPsiinf} and its adjoint take
now the following simple form:
\begin{align}
\pi(W(V_1))\Psi_\pi^\infty(f,V_*)
&=e^{i\vp^\infty_{V_1,V_*}} \Psi_\pi^\infty(f,V_*)\pi(W(V_1))\,,\label{psipiV}\\
\pi(W(V_1))\Psi_\pi^\infty(f,V_*)^*
&=e^{-i\vp^\infty_{V_1,V_*}} \Psi_\pi^\infty(f,V_*)^*\pi(W(V_1))\,.\label{psipiVstar}
\end{align}
It is now possible to restrict the scope of test functions $f$
to those resulting from compactly supported four-spinor test
fields $\chi$ in \eqref{fchi}. Then the observables
$\Psi_\pi^\infty(f,V_*)^*\Psi_\pi^\infty(f',V_*)$ form a local
net commuting with the electromagnetic field, with localization
determined by the union of the supports of $\chi$ and $\chi'$.
These are the asymptotic incarnations, in our model, of the
quantities discussed at the beginning of Section \ref{locdir}.
\begin{assum}\label{VV'}
For any two profiles $V_v$, $V_v'$ of the form \eqref{profiles}
(with possibly different vectors $t$ and functions $\eta$) the
unitary operator
$W^\infty_{\pi_r}(V_v)W^\infty_{\pi_r}(V'_v)^*$ formed by the
operators defined by Assumption \ref{wlimit} is independent of
$v$.
\end{assum}
With this assumption it is now easy to see that the observables
defined above do not depend on a particular choice of the
profile $V_v$ in the class \eqref{profiles}.
\subsection{Special choice of representation}\label{repr}
In this subsection we show that Assumptions \ref{measurable},
\ref{wlimit} and \ref{VV'} are fulfilled for profiles
\eqref{profiles} in a class of representations $\pi_r$ in
\eqref{rep} constructed in earlier papers.
Consider the vector space of equivalence classes of real,
smooth vector functions $f_a(l)$ on the cone, homogeneous of
degree $-1$, with $l\cdot f(l)=0$. The equivalence relation is
introduced by: $f_{1}\sim f_2 \Leftrightarrow
f_{1a}(l)=f_{2a}(l)+\beta(l)l_a$. The completion of this space
with respect to the scalar product
\[
(f_1,f_2)_0=-\int f_1(l)\cdot f_2(l)\,d^2l
\]
is a~real Hilbert space denoted $\Hc_0$. The closure of the
subspace of (equivalence classes of) smooth functions satisfying
$L\wedge f=0$ forms a Hilbert space denoted by $\Hc_{IR}$. Let
$H(s,l)$ be a smooth function, homogeneous of degree $0$, such that
$\lim\limits_{s\rightarrow \pm\infty}H(s,l)=\pm1$ and
$\dot{H}(s,l)$ satisfies the falloff condition analogous to
\eqref{falloff1}. We denote $h(s,l)=\pi \dot{H}(s,l)$ and fix
notation for Fourier transform with respect to $s$ by
\begin{equation}
\ti{h}(\w,l)=\frac{1}{2\pi}\int e^{i\w s}h(s,l)\,ds\,,\label{fourier}
\end{equation}
so $\ti{h}(0,l)=1$. Following the notation of \cite{her98} and
\cite{her08} we set
\[
p(\dV)=\ti{\dV}(0,l)=\frac{1}{2\pi}\Delta V\,,
\]
the long range characteristic of $V(s,l)$, and denote by
$r_h(\dV)$ the orthogonal projection of
$\frac{1}{2}\int \dV H(s,l)ds$ onto~$\Hc_{IR}$.
We split the function $V(s,l)$ into its IR-regular and
IR-singular parts by setting:
\begin{multline}
\ti{\dV}(\w,l)
=\Big[\ti{\dV}(\w,l)-\ti{\dV}(0,l)\ti{h}(\w,l)\Big]
+\ti{\dV}(0,l)\ti{h}(\w,l)\\
=\ti{\dV}_\reg(\w,l)+\ti{\dV}(0,l)\ti{h}(\w,l)\,.\label{divideV}
\end{multline}
In particular, $p(\dV)=0$ means that $\dV$ is IR-regular, i.e.
the field has no `long range tail'.
Further, we consider the Weyl algebra generated by the
elements\linebreak \mbox{$w(g\oplus k)$}, where $g\oplus k$
belongs to the vector space $\C^{\infty}_{IR}\oplus\C^\infty_{IR}$
($\C^\infty_{IR}:=\C^\infty\cap\Hc_{IR}$, differentiability is
understood in the sense of $L_{ab}$) with the symplectic
structure:
\begin{equation}
\{g_1\oplus k_1,g_2\oplus k_2\}_{IR}
:= (g_1,k_2)_{IR}-(k_1,g_2)_{IR}\,.
\end{equation}
Algebraic relations satisfied by elements $w(g\oplus k)$ are
\begin{align}
w(g_1\oplus k_1)w(g_2\oplus k_2)
&=e^{-\frac{i}{2}\{g_1\oplus k_1,g_2\oplus k_2\}_{IR}}
w\big((g_1+g_2)\oplus(k_1+k_2)\big)\label{w1}\,,\\
w(g\oplus k)^*&=w(-(g\oplus k))\,.\label{w2}
\end{align}
Let $\pi_{\mathrm{sing}}$ be a cyclic representation of this
algebra derived by GNS construction from the state
\begin{gather}
\w_\mathrm{sing}\big(w(g\oplus k)\big)=
\exp\Big(-\tfrac{1}{4}(g,C^{-1}g)_{IR}-\tfrac{1}{4}(k,Ck)_{IR}\Big)\,,\label{wsing}
\end{gather}
with the corresponding Hilbert space $\Hc_\sing$ and the cyclic vector
$\W_\sing$.\linebreak Here $C$ is any positive, trace-class
operator such that $\C^\infty_{IR}\subset C^{1/2}\Hc_{IR}$,
$\ov{C^{-1/2}\C^\infty_{IR}}^{\Hc_{IR}}=\Hc_{IR}$. Denote by
$\pi_0$ the standard positive energy Fock representation of
infrared-regular fields, generated by GNS construction from the
vacuum state
\begin{gather}
\w_0(W(V_\reg))
=\exp\left(-\tfrac{1}{2}F(\dV_\reg,\dV_\reg)\right)\,,\\[1ex]
F(\dV_1,\dV_2)=\int_{\w\geq0}\left(-\ov{\ti{\dV}_1(\w,l)}\cdot
\ti{\dV}_2(\w,l)\right)\frac{d\w}{\w}d^2l\,, \label{FVV}
\end{gather}
with the corresponding Hilbert space $\Hc_\reg$ and cyclic
vector $\W_0$. Then the formula
\begin{equation}
\pi_r(W(V))=\pi_\sing\big(w(p(\dV)\oplus
r_h(\dV))\big)\otimes\pi_0(W(V_\reg)),\label{pir}
\end{equation}
determines a regular, translationally covariant positive energy
representation of $\B^-$ on
$\Hc_r=\Hc_{\mathrm{sing}}\otimes\Hc_\reg$ \cite{her98}. Now one
has to prove that Assumptions \ref{measurable}, \ref{wlimit} and
\ref{VV'} are fulfilled for this choice of $\pi_r$.
It was shown in \cite{her98} that the representation $\pi_r$
does not depend on the concrete shape of $H(s,l)$. Therefore,
for the convenience of the proof of Proposition \ref{exrep}, we
shall assume, from now on, a special choice of this function.
We put $H(s,l)=H_t\!\left(\frac{s}{t\cdot l}\right)$ for a
timelike unit vector $t$, and a smooth function $H_t$ such that
for some $u_0>0$ there is $H_t(u)=1$ for $u>u_0$, and
$H_t(u)=-1$ for $u<-u_0$.
\begin{prop}\label{exrep}
For the representation $\pi_r$ defined by \eqref{pir} and the
profiles $V_v$ given by \eqref{profiles}, Assumptions
\ref{measurable}, \ref{wlimit} and \ref{VV'} are satisfied.
\end{prop}
\begin{proof}
(Assumption \ref{measurable}) To prove the measurability, it
suffices to show that $(y,\pi_r(W(V_v))x)_r$ is continuous in
$v$ for vectors from a total set, those of the form
\begin{align*}
&x=\pi_\sing\big(w(g_1\oplus k_1)\big)\W_{\sing}\otimes \pi_0\big(W(V_1)\big)\W_0\,,\\
&y=\pi_\sing\big(w(g_2\oplus k_2)\big)\W_{\sing}\otimes \pi_0\big(W(V_2)\big)\W_0\,,
\end{align*}
where $V_i$, $i=1,2$, are IR-regular. As $V_v$ is also
IR-regular, we have
\begin{equation}
\pi_r(W(V_v))=\pi_\sing(w(0\oplus r_h(\dV_v)))\otimes\pi_0(W(V_v))\,.\label{piWVv}
\end{equation}
One obtains:
\begin{multline}
(y,\pi_r(W(V_v))x)_r
=\w_\sing\big(w(g_2\oplus k_2)^*w(0\oplus r_h(\dV_v))w(g_1\oplus k_1)\big)\times\\
\times \w_0\big(W(-V_2)W(V_{v})W(V_1)\big)\,.\label{yWVvx}
\end{multline}
From the algebraic relations it follows that:
\begin{multline}
\w_0\big(W(-V_2)W(V_v)W(V_1)\big)=\\
=\exp\Big[-\tfrac{1}{2}F(\dV_1-\dV_2+\dV_v,\dV_1-\dV_2+\dV_v)
-\tfrac{i}{2}\{V_v,V_1+V_2\}-\tfrac{i}{2}\{V_1,V_2\}\Big]\,.\label{comm}
\end{multline}
Since $F(\dV_v,\dV_v)$ and $F(\dV_v,\dV_k)$ ($k=1,2$) are, as is
easily shown, smooth in $v$, so is the r.h.s.\ of \eqref{comm}.
Now we turn to $\w_\sing$. Using \eqref{w1}, \eqref{w2} and
\eqref{wsing}, one finds:
\begin{multline}
\w_\sing\big(w(g_2\oplus k_2)^*w(0\oplus r_h(\dV_v))w(g_1\oplus k_1)\big)=\\
\exp\Big[-\tfrac{1}{4}\big(\Delta g,C^{-1}\Delta g\big)_{IR}-\tfrac{1}{4}\big(\Delta k+r_h(\dV_v),
C[\Delta k+r_h(\dV_v)]\big)_{IR}\Big]\\
\times \exp\Big[\tfrac{i}{2}\big(r_h(\dV_v),g_1+g_2\big)_{IR}
+\tfrac{i}{2}(g_2,k_1)_{IR}-\tfrac{i}{2}(g_1,k_2)_{IR}\Big]\,,\label{wsingx}
\end{multline}
where $\Delta g=g_1-g_2$, $\Delta k=k_1-k_2$.
To prove that the r.h.s.\ of \eqref{wsingx} is indeed a continuous
function in $v$, it suffices to show that terms of the form:
$\big(r_h(\dV_v),C\,r_h(\dV_v)\big)_{IR}$, $\big(r_h(\dV_v),k\big)_{IR}$,
$\big(k,C\,r_h(\dV_v)\big)_{IR}$ are continuous in $v$ for
$k\in\C_{IR}^{\infty}$. As $C$ is a bounded operator, it is
sufficient to show that $r_h(\dV_v)$, as an
element of $\Hc_{IR}$, is norm-continuous in $v$. Since
\begin{equation}\label{rhVv}
\tfrac{1}{2}\int \dV_v(s,l)H_t\left(\frac{s}{t\cdot l}\right)ds
=\frac{e}{4}\int\dot{\eta}(u)H_t(u)du\,
\Big(\frac{v}{v\cdot l}-\frac{t}{t\cdot l}\Big)=r_h(\dV_v)(l)\,,
\end{equation}
we have:
\begin{equation}
||r_h(\dV_v)-r_h(\dV_{v'})||_{IR}^2
=\left(\frac{e}{4}\int \dot{\eta}(u)H_t(u)du\right)^2
\bigg[-\int\left(\frac{v}{v\cdot l}-\frac{v'}{v'\cdot l}\right)^2 d^2l\,\bigg]\,.
\end{equation}
The last integral can be calculated explicitly:
\begin{multline}
-\int\Big(\frac{v}{v\cdot l}-\frac{v'}{v'\cdot l}\Big)^2d^2l
=\int\Big[2\frac{v\cdot v'}{(v\cdot l)(v'\cdot l)}
-\frac{1}{(v\cdot l)^2}-\frac{1}{(v'\cdot
l)^2}\Big]\,d^2l=\\[.5ex]
=8\pi\bigg\{\frac{v\cdot v'}{\sqrt{(v\cdot v')^2-1}}
\log\left(v\cdot v'+\sqrt{(v\cdot
v')^2-1}\right)-1\bigg\}\label{intd2l}\,.
\end{multline}
Because \eqref{intd2l} converges to $0$ for $v\rightarrow v'$,
$r_h(\dV_v)$ is norm-continuous. Finally we can conclude that
\eqref{wsingx} is a continuous function of $v$. This ends the
proof of Assumption \ref{measurable}.
\\
\hspace*{1em}(Assumptions \ref{wlimit} and \ref{VV'}) First we
show the existence of the weak limit
\mbox{$\wlim_{R\to\infty}\pi_r(W(V^R_v))$}. The norms of
$\pi_r(W(V^R_v))$ are uniformly bounded, so it is sufficient to
obtain the weak limit for operators sandwiched between vectors
from a total set chosen as in the proof of Assumption
\ref{measurable}. We have to investigate the limit of
the expressions \eqref{comm} and \eqref{wsingx} in which $V_v$ has been replaced by $V^R_v$, for $R\rightarrow\infty$.
From \eqref{VvR} and \eqref{fourier} one has $\ti{\dV}{}_v^R(\w,l)=\ti{\dV}_v(R\w,l)$.
As $\ti{\dV}_k(0,l)=0$, $k=1,2$, it follows by the Lebesgue dominated convergence theorem that $\lim\limits_{R\rightarrow\infty}F(\dV^{R}_{v},\dV_k)=0$ (see \eqref{FVV}), and
since $\{V^{R}_{v},V_k\}=2\,\mathfrak{Im}
\big(F(\dV^{R}_{v},\dV_k)\big)$, also
$\lim\limits_{R\rightarrow\infty}\{V^{R}_{v},V_k\}=0$.
On the other hand, by a change of the integration variable $\w$ one finds
\begin{equation}
F(\dV^{R}_{v},\dV^{R}_{v})
=F(\dV_{v},\dV_{v})\,.
\end{equation}
In this way, for the scaled version of \eqref{comm} we obtain:
\begin{equation}
\lim\limits_{R\rightarrow\infty}\w_0\left(W(-V_2)W(V^R_{v})W(V_1)\right)
=\N_{\pi_r}(V_v)\,\w_0(W(-V_2)W(V_1))\,,
\end{equation}
where
\begin{equation}\label{Nmod}
\N_{\pi_r}(V_v)=\exp\Big(\tfrac{1}{2}
\int_{\w\geq0}\ov{\ti{\dV}_v(\w,l)}\cdot\ti{\dV}_{v}(\w,l)\frac{d\w}{\w}d^2l\Big)\,.
\end{equation}
Thus
\begin{equation}
\wlim\limits_{R\to\infty}\pi_0(W(V^R_v))=\N_{\pi_r}(V_v)\,\id\,.
\end{equation}
For the IR-singular part we note that
\begin{equation}
\lim_{R\to\infty}\|r_h(\dV_v^R)+V_v(0,.)\|_{IR}=0\,,
\end{equation}
which is easily shown with the use of \eqref{rhVv}. Thus using the scaled version of \eqref{wsingx} we find
\begin{equation}
\wlim\limits_{R\to\infty}\pi_\sing(w(0\oplus r_h(\dV^R_v)))
=\pi_\sing(w(0\oplus-V_v(0,.)))\,.
\end{equation}
Therefore, we can finally conclude that the relation
\eqref{ass2} is satisfied, with $\N_{\pi_r}$ given by
\eqref{Nmod}, and
\begin{equation*}
W_{\pi_r}^\infty(V_v)=\pi_\sing\big(w(0\oplus-V_{v}(0,.\,))\big)\otimes\id\,.
\end{equation*}
This form of these operators ensures that Assumption \ref{VV'} is satisfied. After a suitable change of variables one finds that the factor function has the form
\begin{equation}
\N_{\pi_r}(V_v)\,
=\exp\left(\frac{e^2}{8}\int_{u\geq0}u|\ti{\eta}(u)|^2du\,
\int\Big(\frac{v}{v\cdot l}-\frac{t}{t\cdot l}\Big)^2d^2l\right)\,,
\end{equation}
where $\ti{\eta}$ is the Fourier transform of $\eta$ defined as
in \eqref{fourier}. Using \eqref{intd2l} we obtain:
\begin{equation}
\N_{\pi_r}(V_v)\,
=\exp\bigg\{-c\bigg[\frac{v\cdot t}{\sqrt{(v\cdot t)^2-1}}
\log\left(v\cdot t+\sqrt{(v\cdot t)^2-1}\right)-1\bigg]\bigg\}\,,
\end{equation}
where $c> 0$ is a constant. The function \mbox{$v\mapsto
\N_{\pi_r}(V_v)$} is smooth and for\linebreak $v^0\rightarrow
\infty$ we have: \mbox{$1/\N_{\pi_r}(V_v)\sim \con(v^0)^c$}, with similar estimates for derivatives.
This proves that $1/\N_{\pi_r}(V_v)$ are multipliers in
$\Sc(H_+)$.
\end{proof}
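As a numerical cross-check of the explicit angular integral \eqref{intd2l} used in the proof above (not part of the argument), one may take $v=(1,0,0,0)$ and $v'=(\cosh\chi,0,0,\sinh\chi)$, so that $v\cdot v'=\cosh\chi$ and the r.h.s.\ of \eqref{intd2l} reduces to $8\pi\big[\chi\cosh\chi/\sinh\chi-1\big]$. A minimal Python sketch (function names ad hoc):

```python
import math

def lhs(chi, n=100000):
    # Midpoint-rule evaluation of  -int (v/(v.l) - v'/(v'.l))^2 d^2l  over
    # the sphere of null directions l = (1, n), with v = (1,0,0,0) and
    # v' = (cosh chi, 0, 0, sinh chi), so v.v' = cosh chi.  Using
    # v^2 = v'^2 = 1, the integrand expands to
    #   2 v.v' / [(v.l)(v'.l)] - 1/(v.l)^2 - 1/(v'.l)^2 ,
    # and azimuthal symmetry reduces the integral to one over c = cos(theta).
    ch, sh = math.cosh(chi), math.sinh(chi)
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        c = -1.0 + (k + 0.5) * h      # midpoint in c
        vl, vpl = 1.0, ch - sh * c    # v.l and v'.l
        total += 2.0 * ch / (vl * vpl) - 1.0 / vl ** 2 - 1.0 / vpl ** 2
    return 2.0 * math.pi * h * total

def rhs(chi):
    # Closed form (intd2l):  8 pi [ chi cosh(chi)/sinh(chi) - 1 ].
    return 8.0 * math.pi * (chi * math.cosh(chi) / math.sinh(chi) - 1.0)
```

Agreement to the accuracy of the midpoint rule confirms both the sign conventions and the overall factor.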
\setcounter{equation}{0}
\section{Conclusions}
The algebra proposed earlier for the description of asymptotic
fields in spinor electrodynamics incorporates Gauss' law and thus
has good chances to form (at least a substantial part of)
a~consistent model of the long-range
behavior of QED. We have found here how to give the elements of
this field algebra localization in regions contained in an
arbitrarily chosen time slice `fattening towards edges'. Compact
localization regions may be chosen only for infrared-regular
electromagnetic fields. Both infrared-singular electromagnetic
fields as well as charged fields have always localization regions
extending to spacelike infinity. However, the infrared-singular
electromagnetic fields may be decomposed into fields localized in
arbitrarily `thin' fattened symmetrical spacelike cones. On the
other hand we have found that there is no way of attaching an
infrared cloud to the charged field so as to localize it in such
region, at least in a wide class of representations which satisfy
some natural general conditions. Nevertheless, we have also shown
that compactly supported observables may be formed by simple
multiplication of appropriately dressed charged fields with
compensating charges.
The lack of spacelike-cone localization of dressed Dirac fields in the present model seems to be nonstandard, as already mentioned in the Introduction and in Section~5. One could object that the model, although it incorporates global Gauss' law, still lacks some additional asymptotic electromagnetic variables. The construction of the model suggests that in such a case the variables would have to originate as limits of gauge-dependent local electromagnetic potentials. However, whether the model is indeed incomplete can only be decided by finding its place in a formulation of fully interacting electrodynamics. In particular, it would be interesting to formulate a perturbative electrodynamics incorporating some nonperturbative infrared aspects of the present model.
On the other hand, we would like to stress once more a physically important aspect of the model considered here. Our fundamental fermion fields are genuinely charged, satisfying Gauss' law even before `dressing'. Dressing is considered for the sake of inducing a
certain localization of these fields, as well as an auxiliary step
in the construction of bi-fermion observables. Simplified as the
model is, it is at the same time non-perturbative.
This is to be contrasted with all forms of `dressing' of fermion
fields in local formulations of QED. There, in the indefinite
metric space (Gupta-Bleuler), local Dirac fields cannot carry
physical charge, as they commute with the electric flux at spatial
infinity. After constructing a perturbative solution of an initial
theory formulated in such space, one attempts then, by the
addition of Lorenz condition and nonlocal dressing of charged
fields, to restore Maxwell equations and transport the theory into
a Hilbert space of physical vector states. The dressing takes the
form of a formal local gauge transformation in which the gauge
function is constructed with the use of electromagnetic potential
(see e.g. \cite{sym}). In an Ansatz put forward by Dirac this has
the following form:
\begin{equation}\label{dress}
\Psi(x)=\exp[ieG(x)]\psi(x)\,,\quad
\mathcal{A}(x)=A(x)-\partial G(x)\,,
\end{equation}
where $G(x)=\int r^a(x-y)A_a(y)d^4y$; here $r^a(x)$ is a~vector
distribution satisfying $\partial_ar^a(x)=\delta^4(x)$. Within
perturbative approach to QED this idea has been implemented most
rigorously in the `axiomatic perturbative' formulation by
Steinmann~\cite{ste}. In this approach the above tentative
transformation is carried out not on the level of fields, but
rather Wightman functions. As argued by Steinmann, the results are
insensitive to a choice of a particular form of the distribution
$r^a$. And as among such distributions are some with supports in
spacelike cones, one can argue that in this way charged fields may
be pushed into such regions.
These constructions, rigorous as they are within the limits of the
procedure followed in this approach, are not without weak points.
First, not only the local interaction, but also the dressing
exponent is treated perturbatively; this is admitted by Steinmann
himself to be an obstacle to a completely reliable representation
of the infrared problems. Secondly, the dressing transformation
\eqref{dress} is infrared-singular and cannot be performed in this
form even at the level of Wightman functions; the actual way it
is done, is via an effective spatial truncation followed by an
adiabatic limit. However, precisely these two points are of
critical importance for the infrared problem.
Finally, we want to comment on our choice of representations. One cannot exclude that the use of some more infrared singular representations would modify our results. That localization may be improved `in front of' infravacua (KPR-type representations \cite{KPR}) has been shown by Kuhnhardt \cite{Kun} in a scalar model due to Buchholz \emph{et al.} \cite{BDMRS}. One of the main motivations for the introduction of such more singular representations of free electromagnetic fields is the fact that they may be stable under the addition of radiation fields produced in scattering processes. However, in this connection we want to mention two facts on the asymptotic model considered here. First, it has been shown in \cite{her08} that representations discussed above in Section \ref{repr} do suffice to absorb radiation fields produced by a classical current. Second, in this model the \mbox{asymptotic} fields are not completely decoupled, and the electric flux at spatial infinity is due both to free as well as Coulomb parts. However, the electric flux of the total field at infinity is an invariant characteristic of the process, not changing with time (the asymptotic flux depends on the spacelike direction, but, in fact, is invariant under any finite spacetime translation of the point from which we go to spacelike infinity). This is a fact in classical theory, and should be also expected in the full quantum theory.
\section*{Appendix}
\section{Introduction}
\label{sect:intro}
The field of \textit{warm dense matter} (WDM) rests in the transitional regime
between traditional condensed phase systems and strongly-correlated,
fully-ionized plasmas. As such, it draws from the complexity of both fields
while showing its own special fundamental and, as we show here, pragmatic
challenges. A significant and growing literature exists on the electronic
structure,\cite{Gregori2003,Gregori2006,Trickey2011,Trickey2012,Plagemann2012}
thermodynamics\cite{Gregori2008} and hydrodynamics\cite{MacFarlane2006} of WDM,
in addition to a range of applications in fusion energy science\cite{Lindl2004}
and laboratory astrophysics.\cite{Remington2006,Wilson2012b} Here, however, we
investigate a particular difficulty of the WDM regime: assessing the accuracy
of the experimental determination of the basic state variables of the system,
such as temperature, density and ionization state. Reliable inference of these
quantities is central to the clearly-needed improvements in the equation of
state of materials under, for example, the entire range of conditions leading
from ambient matter to inertial confinement fusion.\cite{Moses2009}
The high optical opacity of WDM requires the use of penetrating probe
radiation, i.e., x-ray photons with energies of a few to a few tens of keV.
Unfortunately, with only few counterexamples\cite{Doeppner2009} the inference
of state-variables using these methods is limited by the degree of
understanding of the electronic structure of WDM and its relationship to the
state variables themselves. Faintly circular co-dependencies of this type are
not uncommon in emergent fields of experimental science (e.g., consider the
many years of effort needed to establish accurate and precise pressure sensing
in the Mbar range in opposed-anvil pressure
cells\cite{Forman1972,Piermarini1975,Gupta1990,Ragan1992}), and a firm
foundation for such methodologies can follow from any of several developments.
Foremost among such developments are: experimental data of sufficient
information content to itself strongly constrain the constituent theories for
electronic structure; broad programs to assess accuracy by cross-comparison of
different metrologies; and, finally, an eventual comparison to international
standards. The experimental determination of state variables in the WDM regime
is seeing only the earliest such examples, including notably the rare
experiments using detailed balance in x-ray Thomson
scattering\cite{Doeppner2009} or the recent checks in consistency between
conclusions drawn from the elastic and inelastic components of x-ray
scattering.\cite{Fortmann2012}
The present paper reports a first step in evaluating the accuracy, rather than
precision, of the methods used for state variable determination in the WDM
regime. To this end, we investigate the various available treatments for the
core-electron, or bound-free, contribution to x-ray Thomson scattering (XRTS,
also called nonresonant inelastic x-ray scattering, NIXS\cite{Schuelke}) with a
special emphasis on the plane-wave form-factor approximation (PWFFA) of
Schumacher, et al.\cite{Schumacher1975}. We will show that this approximation,
which has seen extensive use in XRTS studies of shock-compressed
matter,\cite{Riley2007,Sawada2007,Sahoo2008,Glenzer2009,Kritcher2009,Kritcher2011,Fortmann2012} is fundamentally flawed and presents a source of systematic
uncertainty in inferred quantities.
Based on these observations we come to three main conclusions. First, looking
to the near future, when XRTS studies of WDM with improved energy resolution
will be performed at the Linac Coherent Light Source\cite{HauRiege2012} and
the National Ignition Facility\cite{Moses2009}, the errors implicit in the use
of the PWFFA must be avoided if physically meaningful information on the
equation of state in the WDM regime is to be determined. Second, while it is
important to note that a faulty theoretical treatment has been used, and that
some reevaluation of experimental results may be called for, the more lasting
conclusion is that the information content in the measured XRTS spectra for
WDM has been insufficient to alert the experimenters to the presence of an
unphysical model for the electronic structure. This strongly suggests the
need for cross-comparison with alternative methods of WDM state variable
determination, e.g., x-ray fluorescence
thermometry\cite{Zastrau2010,Sengebusch2009,Stambulchik2009,Nilson2010,Levy2012,Vinko2012}.
Third, we find that stronger connections between the synchrotron x-ray and
WDM-XRTS communities provide important experimental and theoretical synergies.
The wealth of very high-resolution studies at synchrotron light sources of
both the free-free (valence)\cite{Cooper,Huotari2007,Volmer2007,Huotari2010,Huotari2009,Hakala2004,Okada2012,Sakurai2011} and bound-free (core) \cite{Schuelke,Bradley2011,Bradley2010,Bradley2010b,Sakko2010,Feroughi2010,Sternemann2008,Fister2008,Gordon2008,Balasubramanian2007,Fister2006,Feng2004,Lee2008,Bergmann2004}
contributions to XRTS provide important benchmarks both for comparison to
WDM-specific theory and also for validation of experimental protocol
including, e.g., instrument-specific backgrounds. Further, as we have
illustrated here, there will be cases where theoretical methods already in use
for synchrotron studies may be beneficially transported to WDM-XRTS studies.
In Sect.~\ref{sect:theory} we discuss four theoretical treatments of bound-free
XRTS: the impulse approximation (IA), which is valid at large energy transfer;
a hydrogenic model (HM); an extension of the IA to incorporate binding energies
(PWFFA); and a real-space Green's function method (RSGF). The first three are
essentially atomic (with varying degrees of approximation), while the latter
treats the condensed solid, and is based on methods broadly used for many years
in the interpretation of several x-ray spectroscopic techniques.
\cite{RehrAlbers2000,Rehr2009}
Since high-resolution XRTS data from WDM at known thermodynamic conditions is
currently unavailable, we instead compare each of the above theories with very
high-quality XRTS spectra collected under ambient conditions at a synchrotron x-ray
source. This provides a baseline validation for the core contribution: a
theoretical treatment which fails under these conditions is certain to form a
weak foundation when including the further complexities of continuum lowering and
partial ionization present in WDM.
Experimental details are described in Sect.~\ref{sect:experiment} and the
comparison of theory and experiment is made in Sect.~\ref{sect:comparison}.
The relative success of even atomic treatments at describing the condensed
solid suggests that these methods should be extensible to the WDM regime with
only minor modifications. However, we find that the PWFFA, which has been used
for a few years in the interpretation of WDM measurements,\cite{Riley2007,Sawada2007,Sahoo2008,Glenzer2009,Kritcher2009,Kritcher2011,Fortmann2012} is in
stark disagreement with the ambient experimental data. In
Sect.~\ref{sect:pwffa_in_detail}, we show that this disagreement is due to
internal inconsistency in the PWFFA that leads to unphysical results. Next, in
Sect.~\ref{sect:implications}, we consider the implications of using the PWFFA
to model the bound-free contribution to WDM XRTS data; namely, the likelihood
of previously-undiagnosed systematic errors in extracted thermodynamic
quantities. These observations then motivate a discussion of best future
practice in Sect.~\ref{sect:future_practice}, after which we conclude in
Sect.~\ref{sect:conclusions}.
\section{Theory}
\label{sect:theory}
The fundamental observable in XRTS/NIXS is the \textit{dynamic structure
factor} $S(\vec{q},\omega)$, which separates into independent contributions
from electrons in different shells. We will focus on the contribution from
tightly bound core electrons, i.e., the \textit{bound-free} contribution.
The theoretical description of bound-free XRTS begins with the
Kramers-Heisenberg formula for the first-Born approximation to the
double-differential scattering cross-section (DDSCS):\cite{Schuelke}
\begin{eqnarray}
\label{eq:DDSCS}
\frac{d^2\sigma}{d\omega d\Omega} &=& \frac{\omega_2}{\omega_1} r_o^2 \left|\hat{\epsilon}_1 \cdot \hat{\epsilon}_2^*\right|^2 S(\vec{q}, \omega) \\
S(\vec{q}, \omega) &=& \sum_I P_I(T) \sum_F \Big| \bra{F} \sum_j e^{i\vec{q}\cdot\vec{r}_j} \ket{I} \Big|^2 \nonumber \\
&\times& \delta(E_F - E_I - \omega)
\label{eq:Sfull}
\end{eqnarray}
Here, $\omega_{1,2}$ and $\hat{\epsilon}_{1,2}$ are initial and final photon
energies and polarizations; $r_0$ is the Thomson scattering length;
$\ket{I}$,$\ket{F}$ are initial and final many-body states with energies
$E_I$,$E_F$; $P_I(T)$ is the temperature-dependent Boltzmann factor; $\vec{q}$
is the momentum transfer; $\omega = \omega_1 - \omega_2$ is the energy
transfer and $j$ indexes individual electrons. In this and subsequent
formulae, we use Hartree atomic units ($\hbar = m_e = 1$).
In the independent particle approximation, Eq.~(\ref{eq:Sfull}) can be written as
\begin{eqnarray}
S(\vec{q}, \omega) &=& \sum_i n_i S_i(\vec{q}, \omega)
\nonumber \\
\label{eq:Si}
S_i(\vec{q}, \omega) &=& \sum_f (1-n_f) \left| \bra{f} e^{i\vec{q}\cdot\vec{r}} \ket{i} \right|^2 \delta(E_f - E_i - \omega),
\end{eqnarray}
where $\ket{i}$,$\ket{f}$ are initial and final \textit{single-particle}
states with energies $E_i$,$E_f$ and thermally-averaged occupation numbers $n_i$, $n_f$.
\subsection{The impulse approximation}
In the limit of large energy-transfer $\omega$ relative to the initial state
binding energy $E_B$, known as the impulse approximation (IA), the XRTS spectrum
is completely determined by the
initial-state electronic momentum distribution; the binding energy of the scattering
electron plays no
role.\cite{Eisenberger1970}
For $\omega \gg E_B$, only unoccupied final states contribute to
Eq.~(\ref{eq:Si}), so we set $n_f = 0$. Next, following Eisenberger and
Platzman,\cite{Eisenberger1970} we expand the $\delta$-function using the
standard Fourier representation
\begin{equation}
\delta(\omega) = \int \frac{dt}{2\pi} e^{i\omega t}.
\label{eq:deltaFourier}
\end{equation}
After rearranging slightly and using the fact that $\ket{i}$ and $\ket{f}$ are
eigenstates of the single-particle Hamiltonian $H$, we find
\begin{eqnarray}
S_i(\vec{q}, \omega) &=& \int \frac{dt}{2\pi} \sum_f e^{i\omega t} \bra{i}e^{iHt} e^{-i\vec{q}\cdot\vec{r}} e^{-iHt}\ket{f} \bra{f}e^{i\vec{q}\cdot\vec{r}}\ket{i} \nonumber\\
&=& \int \frac{dt}{2\pi} e^{i\omega t} \bra{i}e^{iHt} e^{-i\vec{q}\cdot\vec{r}} e^{-iHt}e^{i\vec{q}\cdot\vec{r}}\ket{i}.
\label{eq:IA_pre_approx}
\end{eqnarray}
In the second line, we have used completeness to remove the sum over final
states. The IA corresponds to replacing $H$ by the free-particle Hamiltonian
$H_0$, which can be justified in the limit of $(\omega/E_B)^2 \gg 1$ (see
Section III of Ref.~\onlinecite{Eisenberger1970}). After inserting a complete
set of momentum eigenstates and integrating over the direction of momentum, we
obtain
\begin{equation}
S_i(\vec{q}, \omega) = (2\pi/q)\int_{|\omega/q - q/2|}^{\infty} p\, dp\, \rho_i(p),
\label{eq:IA}
\end{equation}
where $\rho_i(p) = (2\pi)^{-3}|\braket{i}{p}|^2$ is the initial-state momentum
density, which is here assumed to be isotropic (e.g.\ $s$-shell, or sum over a
filled subshell).
This formula can be interpreted as describing XRTS from a gas of free electrons
with the same initial-state momentum distribution. The scattering spectrum
consists of a line centered at the free-particle Compton shift $\omega_c =
q^2/2$ that is Doppler broadened by the projection of the momentum distribution
along the direction of $\vec{q}$. The integral over the momentum distribution
in Eq.~(\ref{eq:IA}) simply counts electrons with a given momentum-projection
$p_q$ determined by the energy transfer.
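As a concrete illustration of Eq.~(\ref{eq:IA}), the IA profile can be evaluated in closed form for a hydrogenic $1s$ momentum density. The following sketch (Python; the values of $Z$ and $q$ are illustrative choices of the right order for the present experiment, not fitted quantities) checks numerically that the profile integrates to one electron and satisfies the Bethe $f$-sum rule when integrated over the full real line:

```python
import numpy as np

# Hedged numerical sketch of the impulse approximation, Eq. (IA), assuming a
# hydrogenic 1s momentum density rho(p) = 8 Z^5 / (pi^2 (p^2 + Z^2)^4)
# (Hartree atomic units). Z and q below are illustrative values only.
Z = 3.685      # effective nuclear charge (assumed)
q = 5.3        # momentum transfer, a.u. (assumed)

def S_IA(omega):
    """(2 pi / q) * int_{p_min}^infty p rho(p) dp, with the p-integral done in
    closed form: int_a^infty p/(p^2+Z^2)^4 dp = 1/(6 (a^2+Z^2)^3)."""
    p_min = np.abs(omega / q - q / 2.0)
    return (2 * np.pi / q) * (8 * Z**5 / np.pi**2) / (6 * (p_min**2 + Z**2)**3)

# Integrate over the full real line: the small weight at omega < 0 is the
# unphysical low-energy tail of the IA mentioned later in the text.
omega = np.linspace(-30 * q**2, 30 * q**2, 600001)
dw = omega[1] - omega[0]
S = S_IA(omega)
norm = S.sum() * dw                          # electron count, should be ~1
fsum = (2 / q**2) * (omega * S).sum() * dw   # Bethe f-sum, should be ~1
print(norm, fsum)
```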
We now turn to methods that take the binding energy into account.
\subsection{Hydrogenic Model}
It is possible to evaluate Eq.~(\ref{eq:Si}) analytically using hydrogenic initial and final states. For a $1s$ initial state, the result is\cite{Eisenberger1970}
\begin{equation}
S_i(q,\omega) = \int \frac{d^3p}{(2\pi)^3} |\bra{f}e^{i\vec{q}\cdot\vec{r}}\ket{i}|^2 \delta(\omega - E_B - p^2/2),
\label{eq:hydrogenic1}
\end{equation}
with
\begin{eqnarray}
|\bra{f}e^{i\vec{q}\cdot\vec{r}}\ket{i}|^2 &=&
\frac{\pi^28^3a^2}{p}(1 - e^{-2\pi/pa})^{-1} \nonumber\\
&\times& \exp\left[\frac{-2}{pa}\tan^{-1}\left(\frac{2pa}{1+(q^2a^2)-p^2a^2}\right)\right] \nonumber \\
&\times& \left[ q^4a^4 + (1/3)q^2a^2(1+p^2a^2) \right] \nonumber\\
&\times& \left[(q^2a^2 + 1 - p^2a^2)^2 + 4 p^2a^2\right]^{-3}.
\label{eq:hydrogenic}
\end{eqnarray}
Here, $p$ is the final-state momentum, $a = 1/Z$ where $Z$ is the effective
nuclear charge\cite{Clementi1963}, and $E_B = Z^2/2$ is the binding energy.
In this expression, contributions from bound final states have been neglected.
Expressions for shells other than the $1s$ are included in Schumacher, et al.,\cite{Schumacher1975} where this method is referred to as the \textit{hydrogenic form-factor approximation}. We will, however, refer to it simply as the hydrogenic model (HM).
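As a quick order-of-magnitude check, a short calculation (effective charge assumed as above; conversion constant standard) shows that the hydrogenic binding energy implied by a Clementi-type effective charge substantially exceeds the measured Be $1s$ binding energy of about 112~eV:

```python
# Sketch: hydrogenic 1s binding energy E_B = Z^2/2 (Hartree atomic units)
# for an assumed Clementi-type effective charge, converted to eV.
Z = 3.685                 # effective nuclear charge (assumed)
HARTREE_EV = 27.211386    # eV per Hartree
E_B_hm = 0.5 * Z**2 * HARTREE_EV
print(E_B_hm)             # ~185 eV, well above the ~112 eV Be K edge
```

This overestimate of the edge position is the origin of the near-edge deficit of the HM noted in the comparison with experiment below.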
\subsection{Plane-wave form-factor approximation}
\label{sect:pwffa}
The \textit{plane-wave form-factor approximation} (PWFFA) is an attempt to
improve the IA by including the binding energy in the kinematics. The final
states are assumed to be momentum eigenstates, which is conceptually appealing
in the context of dense plasmas where the jellium model has found wide
application. However, as we demonstrate in Sect.~\ref{sect:pwffa_in_detail},
a fundamental difficulty arises: this approximation effectively evaluates the
initial-state energy using the atomic Hamiltonian, while evaluating the
final-state energy using the free-particle Hamiltonian. This inconsistent
treatment violates energy conservation, resulting in violations of the Bethe
$f$-sum rule and deviations from experimental results that, although small in
the original context of gamma-ray scattering, are quite large under the
kinematic conditions typical of XRTS measurements. It is the use of the PWFFA
in the interpretation of recent XRTS experiments on WDM\cite{Riley2007,Sawada2007,Sahoo2008,Glenzer2009,Kritcher2009,Kritcher2011,Fortmann2012} that
motivates the present paper.
Schumacher's derivation of the PWFFA\cite{Schumacher1975} makes the
following assumptions:
\begin{eqnarray}
\ket{f} &=& \ket{\vec{p}+\vec{q}} \nonumber\\
\sum_f &\rightarrow& \int \frac{d^3p }{(2\pi)^3} \nonumber\\
E_f &=& E_{\vec{p}+\vec{q}} = \frac{(\vec{p}+\vec{q})^2}{2} \nonumber\\
E_i &=& -E_B
\label{eq:assumptions}
\end{eqnarray}
where $E_B$ is the initial-state binding energy, $\vec{p}$ is the initial-state
momentum and $\vec{p}+\vec{q}$ is the final-state momentum. Assuming that $T=0$
and applying (\ref{eq:assumptions}) to (\ref{eq:Si}), we find
\begin{eqnarray}
S_i(\vec{q}, \omega) &=& \int \frac{d^3p}{(2\pi)^3} \left| \bra{\vec{p} + \vec{q}} e^{i\vec{q}\cdot\vec{r}} \ket{i} \right|^2 \delta(E_{\vec{p}+\vec{q}} + E_B - \omega) \nonumber\\
&=& \int \frac{d^3p}{(2\pi)^3} \left| \braket{\vec{p}}{i} \right|^2 \delta(E_{\vec{p}+\vec{q}} + E_B - \omega) \nonumber \\
&=& \int d^3p \rho_i(\vec{p}) \delta(E_{\vec{p}+\vec{q}} + E_B - \omega)
\label{eq:derivation}
\end{eqnarray}
In the second line we use the fact that $e^{i\vec{q}\cdot\vec{r}}$ is a
momentum translation operator. The last line uses the definition of the
momentum density $\rho_i(\vec{p}) =
(2\pi)^{-3}\left|\braket{\vec{p}}{i}\right|^2$. Furthermore, if we again
restrict ourselves to an isotropic momentum density (e.g., $s$-shell, or sum
over a filled subshell), we can perform the angular integrals to obtain
\begin{equation}
S_i(\vec{q}, \omega) = (2\pi/q) \int_{|\sqrt{2(\omega-E_B)}-q|}^{\sqrt{2(\omega-E_B)}+q} p\, dp\, \rho_i(p)
\label{eq:PWFFA}
\end{equation}
This expression, along with (\ref{eq:DDSCS}), differs from
Schumacher's\cite{Schumacher1975} Eqs. (5) and (21) only in that it does not
contain the relativistic prefactor $\sqrt{1+(p/mc)^2}$, which for the
experimental conditions under consideration differs negligibly from unity.
Eq.~(\ref{eq:PWFFA}) has the same form as the IA expression Eq.~(\ref{eq:IA}),
differing only by the bounds on the integration over the momentum density.
Given this similarity and the well-tested validity of the IA in its regime of
applicability, one would expect that for $\omega \gg E_B$ the PWFFA would
reproduce the IA\@. This is, however, not the case --- despite claims in the
literature to the contrary.\cite{Schumacher1975,Glenzer2009} We will return to
this point in Sect.~\ref{sect:results}, after comparison with experiment.
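The failure to reduce to the IA can already be made concrete with a small numerical sketch (Python; hydrogenic $1s$ density with assumed $Z$ and $q$, and the experimental Be $1s$ binding energy). The PWFFA peak is displaced above the free-particle Compton shift $q^2/2$ by essentially $E_B$, and the PWFFA tail remains orders of magnitude above the IA even for $\omega \gg E_B$:

```python
import numpy as np

# Sketch comparing Eq. (IA) and Eq. (PWFFA) for the same hydrogenic 1s
# momentum density. Z and q are illustrative assumed values (a.u.).
Z, q = 3.685, 5.3
E_B = 112.0 / 27.211386        # Be 1s binding energy, Hartree
C = 8 * Z**5 / (6 * np.pi**2)  # closed form: int_a^b p rho dp = C/(a^2+Z^2)^3 - C/(b^2+Z^2)^3

def S_IA(w):
    a = np.abs(w / q - q / 2.0)
    return (2 * np.pi / q) * C / (a**2 + Z**2)**3

def S_PWFFA(w):
    k = np.sqrt(2.0 * np.clip(w - E_B, 0.0, None))
    lo, hi = np.abs(k - q), k + q
    s = (2 * np.pi / q) * C * (1.0 / (lo**2 + Z**2)**3 - 1.0 / (hi**2 + Z**2)**3)
    return np.where(w >= E_B, s, 0.0)

w = np.linspace(1e-3, 30 * q**2, 200001)
ia, pw = S_IA(w), S_PWFFA(w)
print(w[ia.argmax()], w[pw.argmax()])  # peaks differ by ~E_B
print(pw[-1] / ia[-1])                 # PWFFA tail >> IA tail at large omega
```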
\subsection{The real-space Green's function method}
The prior methods have all treated XRTS for an isolated atom to various
degrees of approximation. The next method we describe treats an arbitrary
cluster of atoms using a real-space Green's function (RSGF) formalism
implemented in recent versions of the x-ray spectroscopy code
\texttt{FEFF}.\cite{Soininen2005} This formalism, which can treat complex, aperiodic
systems, has been extensively applied to condensed-matter systems, where
XRTS/NIXS provides a bulk-sensitive alternative to, and extension of, soft
x-ray absorption spectroscopy.\cite{Soininen2005,Sternemann2007,Sternemann2007b,Balasubramanian2007,Feng2008,Fister2009,Pylkkanen2010}
The RSGF approach has recently been extended to treat the valence
contribution.\cite{Mattern2012}
Starting with a description of atomic species and locations, an effective
one-particle Green's function for the valence electrons in the cluster of atoms
is calculated in the muffin-tin approximation, including the effects of full
multiple scattering\cite{Rehr2009}. This Green's function implicitly contains the excited
electronic states that are the final states in the scattering experiment. In
terms of a spectral density matrix defined by $\rho(E) = \sum_f
\ket{f}\bra{f}\delta(E-E_f)$, which is related to the Green's function by
$\bra{\vec{r}}\rho(E)\ket{\vec{r}'} = -
(1/\pi)\,\textrm{Im}\,G(\vec{r}',\vec{r},E)$, Eq. (\ref{eq:Si}) can be recast
as
\begin{equation}
S_i(\vec{q}, \omega) = \bra{i}e^{-i\vec{q}\cdot\vec{r'}}P\rho(E) P e^{i\vec{q}\cdot\vec{r}}\ket{i}.
\label{eq:Srsms}
\end{equation}
Here $E = \omega + E_i$ is the photoelectron energy and $P$ projects the final
states (which are calculated in the presence of a core hole) onto the
unoccupied states of the initial-state Hamiltonian (which has no core
hole).\cite{Soininen2005} The Green's function can be separated into
contributions from the central atom and from scattering off other atoms in the
cluster. Likewise, the dynamic structure factor can be factored as
\begin{equation}
S_i(\vec{q}, \omega) = S_0(q, \omega)[1 + \chi_{\vec{q}}(\omega)],
\end{equation}
where $S_0(q,\omega)$ is a smoothly varying, isotropic atomic background and
$\chi_{\vec{q}}$ is the fine structure due to all orders of photoelectron
scattering from the environment.\cite{Soininen2005,Fister2006} Implicit in the
fine structure is information about nearest-neighbor distances and thus also
density\cite{Soininen2005}. However, at the poor experimental resolution
typical of WDM measurements\cite{Glenzer2009}, this structure will be washed
out. Thus, for our purposes, we will include only the atomic background
contribution, $S_0(q, \omega)$.
\section{Experiment}
\label{sect:experiment}
\begin{figure}
\begin{center}
\includegraphics{fig1}
\end{center}
\caption{(Color online.) XRTS from polycrystalline beryllium under ambient
conditions at a fixed $171^\circ$ scattering angle and 9890-eV scattered
photon energy\cite{Mattern2012}. Here, the energy transfer $\omega$ is
the difference between the incident and scattered photon energies. In the
upper panel, the data are shown along with a combined real-space Green's
function (RSGF) valence and core calculation. The data have been scaled
as described in the text. In the lower panel, the valence contribution and
linear background have been subtracted to give the core contribution alone.}
\label{fig:expt}
\end{figure}
Although we are ultimately interested in the elevated temperatures and
densities of WDM, it is important to first validate theoretical methodology
against spectra taken under known thermodynamic conditions and at higher
resolution than presently typical of WDM experiments. To this end,
experimental XRTS data for polycrystalline Be at ambient temperature and
pressure were collected using the lower energy resolution inelastic x-ray
(LERIX) spectrometer at beamline 20-ID of the Advanced Photon
Source.\cite{LERIX} Scattered photons with $\omega_2 = (9891.7 \pm 0.2)$~eV
were analyzed by a single spherically bent Si crystal located at a fixed
$171^\circ$ scattering angle while scanning the incident photon energy. From
the elastic peak width the total instrumental resolution was determined to be
1.3 eV. The data, which have previously been reported\cite{Mattern2012}, are
shown in Fig.~\ref{fig:expt}. The graph is labelled $S(q_\theta, \omega)$ to
indicate that the data are collected at fixed scattering angle, and thus $q$
is a weak function of $\omega$.
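For reference, the fixed-angle kinematics can be sketched directly (Python; only fundamental constants plus the stated analyzer energy and scattering angle enter). Over the first 1000~eV of energy transfer, $q$ grows by only about 5\%:

```python
import numpy as np

# Sketch of the momentum transfer at fixed scattering angle and fixed
# analyzed photon energy w2, as a function of energy transfer omega = w1 - w2.
C_AU = 137.035999       # speed of light in Hartree atomic units
HARTREE_EV = 27.211386  # eV per Hartree

def q_of_omega(omega_eV, w2_eV=9891.7, theta_deg=171.0):
    k1 = (w2_eV + omega_eV) / HARTREE_EV / C_AU  # incident photon momentum, a.u.
    k2 = w2_eV / HARTREE_EV / C_AU               # scattered photon momentum, a.u.
    th = np.radians(theta_deg)
    return np.sqrt(k1**2 + k2**2 - 2.0 * k1 * k2 * np.cos(th))

print(q_of_omega(0.0), q_of_omega(1000.0))  # ~5.29 and ~5.56 a.u.
```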
After normalizing to the incident flux, a small linear background and a single
scale factor were fit in order to match the RSGF core calculation in the tail
region ($800 < \omega < 1500$ eV) and thus put the data into absolute units.
The results of this fit have been used to scale the experimental data in
Fig.~\ref{fig:expt}. Additionally, theoretical core and valence calculations,
the fit linear background, and the sum of these three are shown.
The experimental generalized oscillator strength $\int (2/q^2) \omega
S(q,\omega) d\omega$ matches that for the combined theoretical RSGF spectrum to
within 1\%. Since the measurement was performed at fixed scattering angle, this
value is slightly larger than the Bethe $f$-sum
rule\cite{Schuelke,Inokuti1978,Wang1999} value of $N=4$ (which only holds for
experiments performed at fixed $q$).
The theoretical valence profile, calculated from the RSGF,\cite{Mattern2012}
was then subtracted to obtain the experimental core profile, shown in the
lower panel of Fig.~\ref{fig:expt}. The small peak visible for $550 < \omega <
650$ eV is a result of the theoretical valence profile underestimating the
actual contribution in this region, as was also seen in comparisons with higher
momentum-transfer data (see Figs. 4 and 5 of Mattern, et al.\cite{Mattern2012}).
\section{Results and Discussion}
\label{sect:results}
\begin{figure}[]
\begin{center}
\includegraphics{fig2}
\end{center}
\caption{(Color online.) Comparison of extracted core-shell XRTS with theoretical
calculations using the RSGF, the hydrogenic model (HM), impulse
approximation (IA), and plane-wave form-factor approximation (PWFFA). The
energy transfer $\omega$ is the difference between the incident and
scattered photon energies. \textit{All calculations are in absolute
units.} Vertical guides are shown at the $1s$ binding energy (112 eV) and
the free-particle Compton shift (396 eV).}
\label{fig:be_theory}
\end{figure}
\begin{figure}
\includegraphics{fig3}
\caption{(Color online.) On the left are shown the integration bounds for both the PWFFA (upper and lower bounds) and IA (lower only, upper bound is $\infty$) calculations. On the right, plotted vertically, is the integrand $p\rho(p)$. Both the offset of the peak by the binding energy (112 eV) and the lack of convergence of the PWFFA to the IA at large $\omega$ are apparent.}
\label{fig:bounds}
\end{figure}
\subsection{Comparison of Experiment and Theory}
\label{sect:comparison}
The extracted experimental core profile is compared in Fig.~\ref{fig:be_theory}
to each of the four theoretical calculations discussed in
Section~\ref{sect:theory}. All calculations are in absolute units and have
taken the weak dependence of $q$ on $\omega$ into account. Vertical guides are
included at the $1s$ binding energy (112 eV) and the free-particle Compton
shift (396 eV).
The IA and PWFFA calculations use the ground-state Dirac-Fock Be $1s$
wavefunction calculated using \texttt{FEFF}'s atomic solver as the initial state, and
thus only differ in their treatment of the final states and energy
conservation. For the RSGF calculation, the Dirac-Fock wavefunction is
calculated in the presence of a $1s$ core-hole. We have included only the
atomic background contribution $S_0(q, \omega)$. The fine structure visible in
the experimental data for $\omega \lesssim 200$ eV is not included, although it
has been treated elsewhere.\cite{Soininen2005} The HM calculation uses
hydrogenic wavefunctions with an effective nuclear charge
($Z=3.685$)\cite{Clementi1963} for the initial and final states.
We focus on three regions for comparison: the vicinities of the $K$-edge binding
energy ($\omega\sim 112$~eV), the peak ($\omega\sim 396$~eV), and the tail
($\omega \gtrsim 600$~eV). The RSGF calculation matches the data reasonably
well in all three regions (with the exception of immediately above the $K$ edge,
where interference effects have been omitted).
The HM accurately describes the peak and tail regions, but shows a large
deficit above the experimental binding energy due to the larger binding energy
of the hydrogenic state. The IA, which ignores the binding energy, has an
unphysical tail at low energy transfers. The peak region is reasonably well
described by the IA, while the high-$\omega$ tail region is quite accurate.
This is expected since the conditions of applicability of the IA are well
satisfied for large energy transfer.
By contrast, although the PWFFA of Schumacher, et al.\cite{Schumacher1975}
vanishes below the $K$ edge, it exhibits only a gradual onset and further shows
strong quantitative and qualitative disagreement with the experimental data
everywhere else. Given its application in the interpretation of several XRTS
experiments\cite{Riley2007,Sawada2007,Sahoo2008,Glenzer2009,Kritcher2009,Kritcher2011,Fortmann2012} on WDM, and its evident failure to describe
high-resolution synchrotron measurements, we now turn our focus to the
PWFFA\@. We will first look in more detail at the source of the
approximation's error, and then briefly discuss the possible implications for
interpretation of experiment and future best practice.
\subsection{A closer look at the PWFFA}
\label{sect:pwffa_in_detail}
Although others have previously observed that the PWFFA gives results with
unphysical features that are in disagreement with experimental
data,\cite{Currat1971,Bell1986} we are unaware of a discussion of the origin of
the approximation's inconsistency. We now consider this point in detail.
An alternative route to obtain the PWFFA is to follow the IA
derivation up to Eq. (\ref{eq:IA_pre_approx}). At this point, if one makes the
\textit{ad hoc} approximation of replacing only the second $H$ by $H_0$,
\begin{equation}
S_i(\vec{q}, \omega) \approx \int \frac{dt}{2\pi} e^{i\omega t}
\bra{i}e^{iHt} e^{-i\vec{q}\cdot\vec{r}}
e^{-iH_0t}e^{i\vec{q}\cdot\vec{r}}\ket{i}. \label{eq:PWFFA_flaw}
\end{equation}
then, instead, the PWFFA result (\ref{eq:PWFFA}) follows. Thus, the assumptions
(\ref{eq:assumptions}) correspond to making the uncontrolled approximation of
evaluating the initial-state energy using $H$ and the final-state energy using
$H_0$. This effectively violates energy conservation, opening the possibility
of unphysical results.
As we mentioned in Sect.~\ref{sect:pwffa}, the IA~(\ref{eq:IA}) and
PWFFA~(\ref{eq:PWFFA}) results differ only in the bounds of the integration
over the momentum density. In the left panel of Fig.~\ref{fig:bounds}, we show the
integration bounds as a function of $\omega$ for both theories. For the IA,
there is no upper bound, and for the PWFFA the upper bound is only relevant for
small $\omega$, beyond which it is well above the integrand's region of
support. The right panel shows the integrand (rotated so that the abscissa runs
vertically). The scattering spectra can be seen to peak when the lower
integration bound vanishes. For the PWFFA, this is offset to higher energy
transfer by the binding energy of the initial state. Furthermore, the failure of the
PWFFA to reduce to the IA at large $\omega$ can be clearly seen here. The
offset of the PWFFA peak relative to the IA appears to be in conflict with the
calculations presented in Fig. 3 of Riley, et al.\cite{Riley2007}.
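The role of the lower integration bound can be illustrated with a brief numerical sketch (an illustrative hydrogenic model in atomic units, not the Dirac-Fock profiles used for the figures; the momentum transfer is an arbitrary choice): the IA spectrum built from the analytic $1s$ momentum density peaks exactly where the lower bound vanishes, i.e., at the free-particle Compton shift $\omega = q^{2}/2$.

```python
import numpy as np

Z = 3.685   # effective nuclear charge for the Be 1s state (Clementi value above)

def compton_profile(pz):
    """Hydrogenic 1s Compton profile J(p_z) = 8 Z^5 / (3 pi (p_z^2 + Z^2)^3),
    normalized to one electron (atomic units)."""
    return 8.0 * Z**5 / (3.0 * np.pi * (pz**2 + Z**2) ** 3)

def s_ia(q, w):
    """IA spectrum S(q, w) = J((w - q^2/2)/q) / q; the profile's region of
    support sets the effective integration range over the momentum density."""
    pz = (w - 0.5 * q**2) / q
    return compton_profile(pz) / q

q = 5.0                              # momentum transfer (a.u.), illustrative
w = np.linspace(0.0, 60.0, 6001)     # energy-transfer grid (hartree)
s = s_ia(q, w)
w_peak = w[np.argmax(s)]
print(w_peak, 0.5 * q**2)            # peak at the free-particle Compton shift
```

In this sketch the PWFFA behavior described above would correspond to shifting the peak to higher energy transfer by the initial-state binding energy.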
\begin{figure}
\begin{center}
\includegraphics{fig4}
\end{center}
\caption{(Color online.) Theoretical core profiles for the experimental conditions of Fortmann, et al.\cite{Fortmann2012}. The inaccuracy of the PWFFA remains, even after substantial broadening.}
\label{fig:modelWDM}
\end{figure}
\begin{figure}
\includegraphics{fig5}
\caption{
(Color online.) Theoretical (RSGF) XRTS from polycrystalline Be compressed
to $3.6\times$ ambient density. The valence contribution is broader than
under ambient conditions (\textit{cf.} Fig.~\ref{fig:expt}, top panel).
Consequently, the core contribution (assumed here to be independent of
density) is relatively larger in the region of the peak.
}
\label{fig:compressed}
\end{figure}
Given the much lower energy resolution typical of WDM experiments, it is
natural to ask whether the discrepancies seen above are relevant in that
context. In Fig.~\ref{fig:modelWDM}, we compare RSGF, IA and PWFFA
calculations at the representative experimental conditions of
Fortmann, et al.\cite{Fortmann2012}. The IA calculation has been truncated at the
empirical binding energy. In addition to the absolute-unit PWFFA calculation,
we have included a curve scaled to match the $f$-sum of the other theoretical
curves. The unbroadened calculations are shown in the upper panel. The curves
in the middle panel are broadened by 115 eV (FWHM) to match the experimental
resolution of Ref.~\onlinecite{Fortmann2012}. Finally, for illustrative
purposes, the lower panel contains curves broadened by 500 eV. Even in the
latter, admittedly extreme case, the PWFFA offset and subsequent overestimation
of the high-$\omega$ tail is still quite prominent. The \textit{ad hoc} $f$-sum
scaling results in a tail that is only slightly overestimated, but at the cost
of an extreme deficit beneath the peak.
At the higher densities typical of WDM, the core contribution becomes even more
important. As the density is increased, the Fermi level increases relative to
the bottom of the valence band, resulting in a broader valence contribution.
The core wavefunction, on the other hand, is only weakly dependent on density
(at least for modest compression, where the cores from neighboring sites have
negligible overlap). Consequently, as shown in Fig.~\ref{fig:compressed}
(\textit{cf.} Fig.~\ref{fig:expt}, top panel), the core contribution is
relatively larger in the peak region, increasing the importance of a
numerically accurate theoretical treatment.
\subsection{Implications}
\label{sect:implications}
We now turn to the implications of an incorrect core treatment for the
interpretation of XRTS spectra from WDM\@. It is important to recognize the
difficulty of these experiments and their analysis. The intrinsic width of
backlighter x-ray sources fundamentally limits the energy resolution obtainable
in this measurement technique. The low flux and need for single-shot
measurement requires the use of low-resolution spectrometers with limited
spectral range, further decreasing the resolution while also complicating background
characterization. This uncertainty in the background subtraction makes $f$-sum
normalization especially difficult, and thus the spectra often cannot be
reliably placed into absolute units. Furthermore, the highest likelihood
background is necessarily dependent upon assumptions made about the core
contribution to the spectrum.
The complicated interplay between the various degrees of freedom present in
such fits makes it difficult to state the exact implications of using the
PWFFA in the extraction of thermodynamic state variables in published
work.\cite{Riley2007,Sawada2007,Sahoo2008,Glenzer2009,Kritcher2009,Kritcher2011,Fortmann2012} It is likely that, in order for a good fit in the
high-$\omega$ tail to be obtained, the ionization state must be overestimated.
This could explain the discrepancy noted by Fortmann, et
al.,\cite{Fortmann2012} between their best-fit ionization state found using
the PWFFA and earlier work that appears to use the
HM.\cite{Lee2009,Glenzer2010} Beyond that, the net effect on extracted
thermodynamic parameters is unclear. However, due to this previously
undiagnosed systematic uncertainty, re-evaluation of existing experimental
data\cite{Riley2007,Sawada2007,Sahoo2008,Glenzer2009,Kritcher2009,Kritcher2011,Fortmann2012} using a more appropriate core calculation and a maximum
likelihood treatment of the background is necessary.
\begin{figure}
\includegraphics{fig6}
\caption{(Color online.) Comparison of theoretical core contribution to Be $K$-edge XRTS calculated using RSGF and truncated IA methods as a function of momentum transfer. Vertical arrows are located at the free-particle Compton shifts.}
\label{fig:be_vs_q}
\end{figure}
\begin{figure}
\includegraphics{fig7}
\caption{(Color online.) Same as Fig.~\ref{fig:be_vs_q}, except for Al $L_1$- and $L_{2,3}$-edge XRTS\@.}
\label{fig:al_vs_q}
\end{figure}
\begin{figure}
\includegraphics{fig8}
\caption{(Color online.) Comparison of RSGF and IA calculations for $L_1$ ($2s$) and $L_{2,3}$ ($2p$) subshells, along with the combined spectra at high $q$. While the RSGF and IA differ for individual subshells, agreement is recovered for the combined spectra. }
\label{fig:al_by_edge}
\end{figure}
\subsection{Future Practice}
\label{sect:future_practice}
The limitations of the PWFFA bring up the question of best future practice for
fitting the core-shell XRTS from WDM\@. This will become particularly
important when higher resolution WDM-XRTS experiments are performed at the
Linac Coherent Light Source and the National Ignition Facility. If such
experiments are to reach their full scientific potential, errors on the scale
of those given by the PWFFA must be avoided. An ideal treatment
would include self-consistent determination of occupied and unoccupied
electronic states including condensed-phase effects. Also, the decrease of
ionization potentials with increased density (i.e., \textit{continuum
lowering}) should be either implicitly present in the calculation, or tunable
using models from plasma physics.\cite{Ecker1963,Stewart1966,Zimmerman1980}
Of the methods we have presented, the RSGF calculation most closely describes
the ambient experimental data. Condensed-phase effects are included explicitly in
final states. Although a frozen atomic core wavefunction is
used, this is a common feature of all techniques under consideration, and
should be sufficient at modest densities where core overlap is expected to be
negligible. The primary limitation of the RSGF approach is that it is currently
unclear how to incorporate continuum-lowering effects.
Alternatively, one can use an \textit{ad hoc} modification of the low-energy-transfer
tail of the IA
\begin{equation}
S_{\rm tr-IA}(q,\omega) = S(q,\omega) \left(1 - \frac{1}{e^{\beta(\omega-E_B)} + 1}\right).
\label{eq:S_tr-IA}
\end{equation}
We will refer to this approach, which for $T=0$ simply truncates the spectrum below the binding energy,
as the \textit{truncated IA} (tr-IA).
This allows straightforward application of
continuum-lowering models to adjust the binding energy. A similar
approach, using screened hydrogenic wavefunctions for the initial state instead
of Dirac-Fock is discussed in Gregori, et al.\cite{Gregori2004}, but only in the
context of smaller $q$, where the core contribution is relatively small. In
Figs.~\ref{fig:be_vs_q},~\ref{fig:al_vs_q},~and~\ref{fig:al_by_edge}, we explore the accuracy and
applicability of the tr-IA at $T=0$ by comparing it with RSGF calculations for a range
of momentum transfers for the $K$ shell of Be and $L$ shell of Al. Note that all
calculations are in absolute units.
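The truncation factor of Eq.~(\ref{eq:S_tr-IA}) reduces to a sharp step at $\omega = E_B$ in the $T=0$ limit, as the following sketch shows (the profile $S$ here is an arbitrary stand-in, not a Dirac-Fock IA calculation):

```python
import numpy as np

def fermi_cutoff(w, e_b, beta):
    """The factor 1 - 1/(exp(beta (w - e_b)) + 1) from Eq. (S_tr-IA);
    clipping keeps the exponential in a numerically safe range."""
    x = np.clip(beta * (w - e_b), -60.0, 60.0)
    return 1.0 - 1.0 / (np.exp(x) + 1.0)

w = np.linspace(0.0, 30.0, 301)                 # energy transfer (arbitrary units)
s = np.exp(-0.5 * ((w - 14.0) / 4.0) ** 2)      # stand-in for an IA profile
e_b = 4.0                                       # binding energy, adjustable to
                                                # include continuum lowering
s_tr = s * fermi_cutoff(w, e_b, beta=50.0)      # large beta, i.e. low temperature

# In the T -> 0 limit the factor is a step: zero below e_b, unity above.
print(s_tr[w < e_b].max(), np.max(np.abs(s_tr - s)[w > e_b + 1.0]))
```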
As long as the free-particle Compton shift (shown by vertical arrows) is a few
times the edge energy, the tr-IA is in reasonable agreement with the RSGF
calculation. Since the IA satisfies the $f$-sum rule by construction, the
truncation of the low $\omega$ tail results in slight $f$-sum violations
($\lesssim 5\%$ for Al, $q \ge 6$ \AA$^{-1}$). We also note that for the Al
$L$ shell, the tr-IA and RSGF calculations differ for the individual subshells
(Fig.~\ref{fig:al_by_edge}, upper two panels). However, agreement is recovered
after combining to form the total $L$-shell contribution
(Fig.~\ref{fig:al_by_edge} lower panel).
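The size of the $f$-sum violation caused by the truncation can be estimated with a simple hydrogenic sketch (atomic units; $Z=3.685$ and the empirical Be $K$ binding energy are taken from above, while the grid and momentum transfer are arbitrary choices). For the IA, $\int \omega\, S(q,\omega)\, d\omega = q^{2}/2$ per electron, and a sharp cutoff at $E_B$ changes this by only a few percent at large $q$:

```python
import numpy as np

Z = 3.685                    # effective nuclear charge (Clementi value above)
q = 5.0                      # momentum transfer in a.u. (illustrative)
e_b = 112.0 / 27.211         # empirical Be K binding energy in hartree

def s_ia(w):
    """IA spectrum S(q, w) = J(p_z)/q for a hydrogenic 1s state (a.u.)."""
    pz = (w - 0.5 * q**2) / q
    return 8.0 * Z**5 / (3.0 * np.pi * (pz**2 + Z**2) ** 3) / q

w = np.linspace(0.5 * q**2 - 60.0, 0.5 * q**2 + 60.0, 120001)
dw = w[1] - w[0]
ws = w * s_ia(w)

fsum_full = np.sum(ws) * dw              # IA: equals q^2/2 per electron
fsum_trunc = np.sum(ws[w > e_b]) * dw    # sharp (T = 0) truncation at E_B
print(fsum_full / (0.5 * q**2), fsum_trunc / fsum_full)
```

Note that in this sketch the truncation removes the unphysical low-$\omega$ tail of the IA, so the truncated $f$-sum can differ from the full one in either direction.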
Recently, another approach to modeling XRTS from WDM has been discussed by
Johnson, et al.\cite{Johnson2012} They use an \textit{average-atom} model,
which gives a significant improvement in the treatment of the free electrons
compared to a simple jellium model. Unfortunately, the average-atom binding
energies disagree significantly with experiment, limiting the accuracy of the
bound-free contribution to the XRTS spectrum. Johnson, et al.\cite{Johnson2012}
also consider applying the PWFFA to the average-atom $1s$ state for Be and find
qualitative discrepancies similar to those we have discussed here.
In summary, for XRTS experiments with modest energy resolution at high
momentum transfer it should be sufficient to treat the bound-free contribution
with a truncated IA, where the truncation energy is adjusted to include the
effects of continuum lowering. However, the IA ceases to be accurate at lower
momentum transfers. If continuum-lowering shifts and temperatures are
negligible compared to the desired energy resolution, then the RSGF approach
can be immediately applied at any momentum transfer. Further investigation is
needed to determine if continuum lowering can be calculated or included
empirically within the RSGF framework.
\section{Conclusion}
\label{sect:conclusions}
We have discussed several techniques for calculating the core-shell
contribution to XRTS and compared with experimental data collected from
polycrystalline Be under ambient conditions. Of the techniques considered,
the real-space Green's function method best describes the data. However, a
simple \textit{ad hoc} truncation of the impulse approximation is reasonably
accurate at higher momentum transfers and allows more straightforward
inclusion of continuum-lowering effects. The accuracy of this truncated IA as a
function of $q$ has been explored by comparing with RSGF calculations for both
the Be $K$-shell and Al $L$-shell. On the other hand, the plane-wave
form-factor approximation, which has been used in the interpretation of
several WDM experiments\cite{Riley2007,Sawada2007,Sahoo2008,Glenzer2009,Kritcher2009,Kritcher2011,Fortmann2012} is quantitatively and qualitatively
inaccurate due to an inconsistent treatment of the single-particle
Hamiltonian. Re-evaluation of the experimental data using a more accurate
core calculation and maximum likelihood background subtraction is recommended.
More importantly, an accurate treatment of the bound-free XRTS from WDM will
be necessary when higher resolution experiments are performed at the Linac
Coherent Light Source and the National Ignition Facility. We believe we have
also motivated the need for cross-method comparisons and the usefulness of
exchange of both theoretical and experimental techniques between the
condensed-matter and dense-plasma communities.
\begin{acknowledgments}
This work was supported by the US Department of Energy, Office of Science,
Fusion Energy Sciences and the National Nuclear Security Administration,
through grant DE-SC0008580. We thank C. Fortmann, S. Glenzer, G. Gregori, T.
Doeppner, J. Rehr, J. Kas, F. Vila and D. Riley for many enlightening
discussions.
\end{acknowledgments}
\begin{onehalfspace}
\section{Introduction}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}Dielectric elastomers (DEs) are materials that deform
under electrostatic excitation. Due to their light weight, flexibility
and availability these materials can be used in a wide variety of
applications such as artificial muscles \citep{barc01book}, energy-harvesting
devices \citep{mcka&etal10apl,Springhetti01102014}, micropumps \citep{rudy&etal12ijnm},
and tunable wave guides \citep{0964-1726-22-10-104014,Shmuel2012307},
among others. The principle of the actuation is based on the attraction
between two oppositely charged electrodes attached to the faces of
a thin soft elastomer sheet. Due to Poisson's effect, the sheet expands
in the transverse direction. \citet{toup56arma}, in his theoretical
work, found that this electromechanical coupling is characterized
by a quadratic dependence on the applied electric field. This was
later verified experimentally by \citet{kofo&etal03jimss}. However,
DEs have a low energy-density in comparison with other actuators such
as piezoelectrics and shape memory alloys \citep{barc01book}. Furthermore,
their feasibility is limited due to the high electric fields ($\sim100\,\mathrm{MV/m}$)
required for a meaningful actuation as a result of the relatively
low ratio between the dielectric and elastic moduli \citep{Pelrine200089,Pelrine04022000}.
Specifically, common flexible polymers have low dielectric moduli
while polymers with high dielectric moduli are usually stiff. Nevertheless,
a few recent works suggest that this ratio may be improved. \citet{huan&etal04aple}
demonstrated experimentally that organic composite EAPs (electro-active
polymers) experience more than 8\% actuation strain in response to
an activation field of $20\,\mathrm{MV/m}$. The experimental work
of \citet{stoy&etal10jmatchem} showed that the actuation can be dramatically
improved by embedding conducting particles in a soft polymer. In parallel,
theoretical works dealing with the enhancement of coupling in composites
also hint at the possibility of improved actuation with an appropriate
adjustment of their microstructure \citep{tian12jmps,Galipeau20121,:/content/aip/journal/apl/102/15/10.1063/1.4801775,0964-1726-22-10-104014,lopez2014elastic}.
The above findings motivate an in-depth multiscale analysis of the
electromechanical coupling in elastic dielectrics which is inherent
from their microstructure. In this work we consider the class of polymer
dielectrics. A polymer is a hierarchical structure of polymer chains
each of which is a long string of repeating monomers. We start by
analyzing the behavior of a single monomer in a chain. Next, the response
of a chain is obtained by a first level integration from the single-monomer
level to the chain level. Finally, the macroscopic behavior of the
polymer is obtained by a higher level summation over all chains. In
this work, we utilize existing constitutive models for the chains
and concentrate on the higher level summation from the chain level
to the macroscopic-continuum level. To this end, the physically motivated
micro-sphere technique, which enables one-dimensional models to be extended
to three-dimensional ones by an appropriate integration over the orientation
space, is exploited \citep{bavzant1986efficient,Carol2004511}. Accordingly,
this method lends itself to the characterization of polymer networks
since the single chain is often treated as a 1-D object which is aligned
along the chain's end-to-end vector \citep{Miehe20042617,0964-1726-21-9-094008}.
The response of a polymer subjected to purely mechanical loadings
was extensively investigated at all length scales. A \emph{macroscopic
level }analysis and models describing the behavior of soft materials
undergoing large deformation, such as polymers, were developed by
\citet{ogden97book}. The \emph{microscopic level }analysis of \citet{kuhn1942beziehungen}
yielded a Langevin based constitutive relation and paved the way to
various multiscale models such as the 3-chain model \citep{wang&guth52jcp},
the tetrahedral model \citep{:/content/aip/journal/jcp/11/11/10.1063/1.1723791,TF9464200083}
and the 8-chain model \citep{arru&boyc93jmps}. Corresponding micro-sphere
implementation of the Langevin model was carried out by \citet{Miehe20042617}.
The electric response of dielectrics to electrostatic excitation was
examined by \citet{tier90book} and \citet{hutt&etal06book}, among
others, macroscopically as well as through their microstructure. Starting
with the examination of a single charge under an electric field, the
relations between the different electric macroscopic quantities, such
as the electric displacement, the electric field and the polarization,
and microscopic quantities, such as the free and bound charge densities
and the dipoles were defined and analyzed.
The study of the response of dielectrics to a coupled electromechanical
loading began with the pioneering work of \citet{toup56arma},
who performed a theoretical analysis at the \emph{macroscopic level.
}Later on, an invariant-based representation for the constitutive
behavior of EAPs was introduced by \citet{dorf&ogde05acmc}. Subsequently,
\citet{Ask2012156,Ask20129}, \citet{0964-1726-22-10-104014} and
\citet{jimenez2013deformation} investigated the possible influence
of the deformation and its rate on the electromechanical coupling.
\citet{0964-1726-21-9-094008} made use of a corresponding micro-sphere
technique at the chain level. By employing macroscopic constitutive
models for the mechanical and the electrical behavior of the polymer
chains, a few boundary value problems were solved by means of the
numerical implementation of the micro-sphere technique and the finite
element method. Initial multiscale analyses of the electromechanical
response were performed by \citet{Cohen14b,Cohen2014}. The present
work focuses on the implementation of different electromechanical
models to EAPs experiencing homogeneous deformations under various
types of boundary conditions and examination of their predicted response.
We begin this work with the detailed description of the different
macroscopic and microscopic models and the presentation of the micro-sphere
framework. Next, the micro-sphere technique is used to compute the
macroscopic behavior of dielectrics with randomly oriented and uniformly
distributed dipoles experiencing the macroscopic rotation. In section
4 we determine and compare the behavior of a polymer according to
three electromechanical models under different boundary conditions.
The conclusions are gathered in section 5. \vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\section{Theoretical background}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}
Consider the deformation of a hyperelastic dielectric body subjected to electro-mechanical loading from a referential configuration to a current one.
In the reference configuration, the body occupies a region $V_{0}\in{\mathbb{R}^{3}}$ with a boundary $\partial V_{0}$. The referential location of a material point is $\refposT$.
In the current configuration, the body occupies a region $V\in{\mathbb{R}^{3}}$ with a boundary $\partial V$, and we denote the location of a material point by $\curposT$.
The mapping of positions of material points from the reference to the current configurations is $\curposT=\mapping(\refposT)$, and the corresponding deformation gradient is
\begineq{deformation gradient}
\defgT = \nabla_{\refposT}\mapping,
\fineq
where the operation $\nabla_\refposT$ denotes the gradient with respect to the referential coordinate system.
The right and left Cauchy-Green strain tensors are
$\CGstrainT = \defgT ^{T} \defgT$
and
$\CGleftstrainT = \defgT \, \defgT^{T}$.
The ratio between the volumes of an infinitesimal element in the current and the reference configurations, $J = \det{\defgT}$, is strictly positive. In the case of incompressible materials, which are of interest in the present work, we have $J=1$.
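As a concrete instance of these kinematic quantities (an illustrative sketch, not tied to any boundary-value problem considered below), an incompressible uniaxial stretch gives:

```python
import numpy as np

lam = 1.5  # principal stretch along the first direction (illustrative)

# Incompressible uniaxial extension: F = diag(lam, lam^{-1/2}, lam^{-1/2})
F = np.diag([lam, lam**-0.5, lam**-0.5])

C = F.T @ F            # right Cauchy-Green tensor
B = F @ F.T            # left Cauchy-Green tensor (equal to C here, F diagonal)
J = np.linalg.det(F)   # volume ratio; J = 1 for an isochoric motion

print(J, np.trace(C))  # trace(C) = I1 = lam^2 + 2/lam
```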
The induced electric field $\EfieldT$ on the body satisfies the governing equation
\begineq{curl free E}
\nabla_{\curposT}\times\EfieldT=\mathbf{0},
\fineq
in the entire space, where $\nabla_\curposT$ denotes the gradient with respect to the current coordinate system.
Consequently, we can define a scalar field, the electric potential $\electricpotential$, such that $\EfieldT= -\nabla_{\curposT} \electricpotential$.
The electric displacement field is
\begineq{dielectric displacement}
\DfieldT = \vacpermittivity\EfieldT + \polarizationT,
\fineq
where $\vacpermittivity$ is the permittivity of the vacuum and $\polarizationT$ is the polarization, or the electric dipole-density.
We recall that in vacuum $\polarizationT = \mathbf{0}$.
In the absence of free charges the electric displacement field is governed by the local equation
\begineq{divergence free d}
\nabla_{\curposT}\cdot\DfieldT=0.
\fineq
In the work of \cite{dorf&ogde05acmc}, the referential counterparts $\EfieldTref$ and $\DfieldTref$ of the electric field and the electric displacement were determined. Specifically,
\begineq{referential electric field}
\EfieldTref=\defgT^{T}\EfieldT,
\fineq
\begineq{referential electric displacement}
\DfieldTref=J\defgT^{-1}\DfieldT,
\fineq
where $\nabla_\refposT\times\EfieldTref=\mathbf{0}$ and $\nabla_\refposT\cdot\DfieldTref=0$.
We note that unlike $\EfieldTref$ and $\DfieldTref$, the referential polarization is not uniquely defined.
In order to ensure that the referential polarization is energy conjugate to the referential electric field such that $\frac{1}{J}\EfieldTref\cdot\polarizationTref=\EfieldT\cdot\polarizationT$, we adopt the definition
\begineq{referential polarization}
\polarizationTref=J\defgT^{-1}\polarizationT.
\fineq
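The conjugacy property motivating Eq. (\ref{referential polarization}) can be verified numerically for an arbitrary invertible deformation gradient (the matrices and vectors below are illustrative choices):

```python
import numpy as np

# Illustrative (arbitrary) deformation gradient, current field and polarization.
F = np.array([[1.10, 0.20, 0.00],
              [0.00, 0.90, 0.30],
              [0.10, 0.00, 1.05]])
E = np.array([1.0, -2.0, 0.5])
P = np.array([0.3, 0.4, -1.2])

J = np.linalg.det(F)
E0 = F.T @ E                    # referential electric field, F^T E
P0 = J * np.linalg.solve(F, P)  # referential polarization, J F^{-1} P

# (1/J) E0 . P0 = E . P, so P0 is energy conjugate to E0.
print(np.dot(E0, P0) / J, np.dot(E, P))
```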
In accordance with our assumption that the dielectric solid can be treated as a hyperelastic material, its constitutive behavior can be characterized in terms of a scalar electrical enthalpy per unit volume function $\energy$.
We further assume that $\energy$ can be decomposed into mechanical and coupled contributions, i.e.
$\energy\left(\defgT,\EfieldT\right)=
\energy_{0}\left(\defgT\right)+\energy_{c}\left(\defgT,\EfieldT\right)$,
where $\energy_{0}\left(\defgT\right)$ characterizes the material response in the absence of electric excitation and $\energy_{c} \left(\defgT,\EfieldT\right)$ accounts for the difference between $\energy$ with and without electric excitation \citep{mcme&land05jamt, mcme&etal07ijnm, Cohen2014, Cohen14b}.
Accordingly, the polarization is determined via
\begineq{polarization definition}
\polarizationT = -\frac{1}{J} \frac{\partial \energy_{c}} {\partial \EfieldT},
\fineq
and the stress developing in the material can be written as the sum
\begineq{stress decomposition}
\stressT = \mechanicalstressT + \electricstressT + \maxwellstressT,
\fineq
where
\begineq{mechanical stress}
\mechanicalstressT=\frac{1}{J}\, \frac{\partial{\energy\left(\defgT\right)}}{\partial{\defgT}}\,\defgT^{T},
\fineq
is the mechanical stress due to the deformation of the material,
\begineq{polarization stress}
\electricstressT = \EfieldT \otimes \polarizationT,
\fineq
is the polarization stress stemming from the applied electric field in the dielectric, and
\begineq{maxwell stress}
\maxwellstressT = \vacpermittivity \left[\EfieldT\otimes\EfieldT -
\frac{1}{2} \left[\EfieldT\cdot\EfieldT \right] \mathbf{I} \right],
\fineq
is the Maxwell stress in vacuum, where $\mathbf{I}$ is the second order identity tensor \citep{mcme&land05jamt}.
We emphasize that this decomposition is purely modelling-based, since in an experiment only the total stress can be measured and the contributions of the individual components cannot be distinguished.
In this work, we consider incompressible materials that undergo homogeneous deformations, and therefore a pressure-like term $p\, \mathbf{I}$, which is determined from the boundary conditions, is added to the total stress.
Assuming no body forces, the stress satisfies the local equilibrium equation
\begineq{equilibrium}
\nabla_{\curposT}\cdot\stressT=\mathbf{0}.
\fineq
The electrical boundary conditions are given in terms of either the electric potential or the charge per unit area $\rho_a$, such that $\DfieldT\cdot\hat{\mathbf{n}} = -\rho_a$,
where $\hat{\mathbf{n}}$ is the outward pointing unit normal.
Practically, in EAPs, $\rho_a$ is the charge on the electrodes.
The mechanical boundary conditions are given in terms of the displacement or the mechanical traction $\tractionT$.
Due to the presence of the electric field the stress in the vacuum outside of the body does not vanish.
Therefore, the mechanical traction at the boundary is
$\left[\stressT-\maxwellstressT \right]\cdot \hat{\mathbf{n}}=\tractionT$, where the expression for $\maxwellstressT$, the Maxwell stress outside the material, is given in \Eq{maxwell stress} in terms of the electric field in the vacuum.
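A minimal numerical sketch of the vacuum Maxwell stress in \Eq{maxwell stress} (the field magnitude is an illustrative choice of the order quoted in the Introduction):

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def maxwell_stress(E):
    """Vacuum Maxwell stress: eps0 [E (x) E - (E.E)/2 I]."""
    return EPS0 * (np.outer(E, E) - 0.5 * np.dot(E, E) * np.eye(3))

E = np.array([1.0e8, 0.0, 0.0])   # 100 MV/m, the order of typical actuation fields
sigma_m = maxwell_stress(E)

# Tension eps0 E^2 / 2 along the field, equal compression transverse to it.
print(sigma_m[0, 0], sigma_m[1, 1], sigma_m[2, 2])
```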
\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsection{Existing models for the behavior of dielectrics}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}Within the framework of finite deformation elasticity
the simplest constitutive model is the well-known neo-Hookean model
that requires only one material parameter. The corresponding strain
energy-density function (SEDF) is
\begin{equation}
\energy_{0}^{\,\, nH}\left(\defgT\right)=\frac{\shear}{2}\,\left[I_{1}-3\right],\label{eq:neo-Hookean}
\end{equation}
where $\shear$ is the shear modulus and $I_{1}=\mathrm{tr}\left(\CGstrainT\right)$
is the first invariant of the right Cauchy-Green strain tensor. This
model does not capture the \emph{lock-up} effect observed in experiments,
which corresponds to a significant stiffening of the material at large
strains. \citet{gent96rc&t} proposed a phenomenological constitutive
model in which this effect is accounted for. The SEDF for this model
\begin{equation}
\energy_{0}^{\,\, G}\left(\defgT\right)=-\frac{\shear J_{m}}{2}\ln\left(1-\frac{I_{1}-3}{J_{m}}\right),\label{eq:gent}
\end{equation}
depends on two parameters, $\shear$ and $J_{m}$. The latter is the
lock-up parameter such that $J_{m}+3$ is the value of $I_{1}$ at
the lock-up stretch. Thus, the expression in Eq. (\ref{eq:gent})
becomes unbounded at $I_{1}=J_{m}+3$, which captures this phenomenon.
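The relation between the two models can be checked with a short numerical sketch (incompressible uniaxial stretch, so $I_{1}=\stretch^{2}+2/\stretch$; the parameter values are illustrative): for $J_{m}\rightarrow\infty$ the Gent SEDF (\ref{eq:gent}) reduces to the neo-Hookean one (\ref{eq:neo-Hookean}), while near $I_{1}=J_{m}+3$ it diverges.

```python
import numpy as np

def I1_uniaxial(lam):
    """First invariant for an incompressible uniaxial stretch lam."""
    return lam**2 + 2.0 / lam

def W_nH(lam, mu=1.0):
    """Neo-Hookean SEDF of Eq. (neo-Hookean)."""
    return 0.5 * mu * (I1_uniaxial(lam) - 3.0)

def W_gent(lam, mu=1.0, Jm=30.0):
    """Gent SEDF of Eq. (gent); valid for I1 - 3 < Jm."""
    return -0.5 * mu * Jm * np.log(1.0 - (I1_uniaxial(lam) - 3.0) / Jm)

# For Jm -> infinity the Gent model reduces to the neo-Hookean one:
print(W_gent(2.0, Jm=1e8), W_nH(2.0))

# Approaching lock-up (I1 -> Jm + 3) the Gent energy grows without bound:
for lam in (2.0, 4.0, 5.5, 5.7):
    print(lam, W_nH(lam), W_gent(lam, Jm=30.0))
```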
With regard to the response of the dielectric to electrical excitation
a quadratic dependence of $\energy_{c}$ on $\EfieldT$ is commonly
assumed, leading to the linear relation
\begin{equation}
\polarizationT=\susceptibility\EfieldT,
\end{equation}
where $\susceptibility$ is the susceptibility of the material \citep{blyt&bloo08book}.
Consequently, the electric displacement is
\begin{equation}
\DfieldT=\permittivity\EfieldT,\label{eq:linear_relation_displacement}
\end{equation}
where $\permittivity=\vacpermittivity+\susceptibility$ is the permittivity
of the material. We note that this linear relation is in agreement
with the invariant based representation of \citet{dorf&ogde05acmc}.
Furthermore, an experiment carried out by \citet{:/content/aip/journal/jap/111/2/10.1063/1.3676201}
on VHB 4910 showed that this assumption is fairly accurate.
Recent experiments with various types of polymers imply that the permittivity,
and therefore the relation between the polarization and the electric
field, is deformation dependent \citep{choi2005effects,wissler2007electromechanical,mcka,qiang2012experimental}.
A possible explanation for this dependency of the susceptibility on
the deformation is related to the alteration of the inner structure
of the polymer \citep{Cohen2014,Cohen14b}.
In the polymer, the monomers in the chain can move or rotate relative
to their neighbors, thus providing the chains with the freedom to deform
\citep{flor53book}. In order to better understand the response of
polymers to an electro-mechanical loading, their microstructure should
be accounted for. This can be accomplished in terms of a multiscale
analysis consisting of three stages: the first involves the examination
of the behavior of the monomers, the second includes analysis of the
response of the chains, and the third deals with the polymer behavior
at the continuum level.
\citet{trel43atfaradaysoc} and \citet{:/content/aip/journal/jcp/11/11/10.1063/1.1723791}
carried out a multiscale analysis of a polymer subjected to mechanical
loading. It was assumed that the directions of the monomers, or links,
composing a chain are random, and consequently it was found that the
chains are distributed according to a Gaussian distribution. Based
on statistical considerations and the laws of thermodynamics, the
variation in the entropy of the chain due to its deformation was determined.
The overall variation in the entropy of the polymer is computed by
summing the entropies of the chains. Remarkably, their result recovered
the macroscopic neo-Hookean behavior. Furthermore, a comparison between
the micro and macro analyses related the macroscopic shear modulus
to the number of chains per unit volume $\chainspervolume_{0}$. Specifically,
it was found that
\begin{equation}
\shear=k\, T\,\chainspervolume_{0},\label{eq:shear_modulus_micro}
\end{equation}
where $k$ and $T$ are the Boltzmann constant and the absolute temperature,
respectively.
Due to the assumptions that lead to the use of Gaussian statistics,
the\emph{ }lock-up\emph{ }effect was not captured in the above mentioned
analysis. A more rigorous examination of the polymer behavior by \citet{kuhn1942beziehungen}
revealed that this phenomenon is a result of the finite extensibility
of the chains. According to this analysis the SEDF associated with
a polymer chain is
\begin{equation}
\energy_{_{0}}^{\,\, LC}=k\, T\,\numberoflinks\left[\invlangevin\left(\frac{\chainlencur}{\numberoflinks\,\linklen}\right)\,\frac{\chainlencur}{\numberoflinks\,\linklen}+\ln\left(\frac{\invlangevin\left(\frac{\chainlencur}{\numberoflinks\,\linklen}\right)}{\sinh\left(\invlangevin\left(\frac{\chainlencur}{\numberoflinks\,\linklen}\right)\right)}\right)\right],\label{eq:first langevin}
\end{equation}
where $\linklen$ is the length of a link, $\numberoflinks$ is the
number of links in a chain, $\chainlencur$ is the distance between
the two ends of the chain, and $\invlangevin\left(\bullet\right)$
is the inverse of the Langevin function
\begin{equation}
\langevin\left(\beta\right)\equiv\coth\left(\beta\right)-\frac{1}{\beta}=\frac{\chainlencur}{\numberoflinks\,\linklen}.
\end{equation}
Assuming that all chains undergo the macroscopic deformation, i.e.
$\chainveccur=\defgT\,\chainvecref$ where $\chainveccur$ and $\chainvecref$
are the current and referential end-to-end vectors, respectively,
the stress associated with a chain is derived from Eq. (\ref{eq:first langevin}),
\begin{equation}
\mechanicalstressT^{\,\, LC}=k\, T\,\sqrt{\numberoflinks}\:\frac{\chainlenref}{\chainlencur}\,\invlangevin\left(\frac{\chainlencur}{\numberoflinks\,\linklen}\right)\,\defgT\hat{\chainveccur}_{0}\otimes\defgT\hat{\chainveccur}_{0},\label{eq:Langevin stress}
\end{equation}
where $\hat{\chainveccur}_{0}$ is a unit vector in the direction
of $\chainvecref$ and Eq. (\ref{mechanical stress}) is used. Here,
\begin{equation}
\chainlenref=\linklen\,\sqrt{\numberoflinks},\label{eq:referential_chain_len}
\end{equation}
is the average length of the referential end-to-end vectors \citep{flor53book,trel75book}.
The quantity $\frac{\chainlencur}{\numberoflinks\,\linklen}\leq1$
describes the ratio between the end-to-end and the contour lengths
of the chain, and from Eq. (\ref{eq:referential_chain_len}) it follows
that
\begin{equation}
\frac{\chainlencur}{\numberoflinks\,\linklen}=\sqrt{\defgT\hat{\chainveccur}_{0}\cdot\defgT\hat{\chainveccur}_{0}}\,\frac{\chainlenref}{\numberoflinks\,\linklen}=\sqrt{\defgT\hat{\chainveccur}_{0}\cdot\defgT\hat{\chainveccur}_{0}}\,\frac{1}{\sqrt{\numberoflinks}}.
\end{equation}
It can be shown that the limit $\frac{\chainlencur}{\numberoflinks\,\linklen}\rightarrow1$
results in $\invlangevin\left(\frac{\chainlencur}{\numberoflinks\,\linklen}\right)\rightarrow\infty$,
thus capturing the experimentally observed lock-up phenomenon. The
lock-up stretch is associated with the chain undergoing the largest
extension such that the stretch ratio of its end-to-end vector is
\begin{equation}
\stretch_{max}=\sqrt{\numberoflinks}.\label{eq:Langevin_lock_up}
\end{equation}
It is important to note that the first term in the Taylor series expansion
of Eq. (\ref{eq:Langevin stress}) about $\frac{\chainlencur}{\numberoflinks\,\linklen}=0$
reproduces the Gaussian model \citep{kuhn1942beziehungen,trel75book}.
A few works proposed models that consider specific finite networks
of chains. The 3-chain model by \citet{wang&guth52jcp} examines a
network of 3 chains which are located along the axis of the principal
directions of the deformation gradient. \citet{:/content/aip/journal/jcp/11/11/10.1063/1.1723791}
and \citet{TF9464200083} proposed a network of four chains that are
linked together at the center of a regular tetrahedron, and their
other ends are located at the vertices of the tetrahedron. The tetrahedron
deforms according to the macroscopic deformation while the chains
experience different stretches. In the model proposed by \citet{arru&boyc93jmps},
8 representative chains in specific directions relative to the principal
system of the macroscopic deformation gradient are used to determine
the macroscopic behavior. An anisotropic worm-like chain model in
which no inherent alignment between the chosen and the principal coordinate
systems is assumed was considered by \citet{doi:10.1080/14786430500080296}.
A multiscale level analysis of the response of polymers with Gaussian
distribution to electro-mechanical loading was carried out by \citet{Cohen14b}.
In this study, the changes in the magnitudes of the dipolar monomers
due to the applied electric field and their rearrangement due to the
mechanical deformation were accounted for. Following a model described
in \citet{stockmayer1967dielectric}, \citet{Cohen2014} considered
the class of uniaxial dipoles in which the dipole is aligned with
the line segment between the two contact points of a monomer to its
neighbors. Thus, taking $\dipoledir$ to be the unit vector along
this line segment the dipole moment of a monomer is
\begin{equation}
\dipoleT_{u}=\constant\left[\dipoledir\otimes\dipoledir\right]\EfieldT,\label{eq:uniaxial_dipole}
\end{equation}
where $\constant$ is a material constant. The dipole of a chain composed
of $\numberofdipoles$ uniaxial dipoles with an end-to-end vector
in the direction $\hat{\chainveccur}$ is \citep{Cohen14b}
\begin{equation}
\dipoleT_{c}\approx\frac{\constant\,\numberofdipoles}{3}\left[\mathbf{I}+\frac{16}{3\,\pi^{2}}\left[\frac{\chainlencur}{\numberofdipoles\,\linklen}\right]\left[\mathbf{I}-3\hat{\chainveccur}\otimes\hat{\chainveccur}\right]\right]\EfieldT.\label{eq:long_chains_stress}
\end{equation}
If we assume that all chains undergo the macroscopic deformation,
then $\hat{\chainveccur}=\frac{\defgT\,\chainvecref}{\sqrt{\left[\defgT\,\chainvecref\right]\cdot\left[\defgT\,\chainvecref\right]}}$.
In accordance with a second type of dipoles discussed in \citet{stockmayer1967dielectric},
\citet{Cohen2014} proposed an expression for a transversely isotropic
(TI) dipole, where the dipole is aligned with the projection of the
electric field on the plane perpendicular to $\dipoledir$,
\begin{equation}
\dipoleT_{t}=\frac{\constant}{2}\left[\mathbf{I}-\dipoledir\otimes\dipoledir\right]\EfieldT.\label{eq:TI dipoles}
\end{equation}
An expression for the dipole of a chain made out of transversely isotropic
dipoles is determined by repeating the steps of the derivation
of Eq. (\ref{eq:long_chains_stress}) for the uniaxial dipoles.
The resulting polarization of the polymer is determined by summing
the dipoles of the chain in a representative volume element of a volume
$V^{R}$ via \citep{blyt&bloo08book}
\begin{equation}
\polarizationT=\frac{1}{V^{R}}\sum_{i}\dipoleT_{c},\label{eq:polarization_definition}
\end{equation}
and the resulting polarization stress is computed via Eq. (\ref{polarization stress}).\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsection{The micro-sphere technique}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}Consider a unit sphere whose surface represents the
directions of the referential end-to-end vectors. The directional
averaging of a quantity $\bullet$ over the unit sphere can be approximated
with a discrete summation
\begin{equation}
\left\langle \bullet\right\rangle =\frac{1}{4\pi}\intop_{A}\,\bullet\,\d A\approx\sum_{i=1}^{m}\bullet^{\left(i\right)}w^{\left(i\right)},\label{eq:micro_sphere_averaging}
\end{equation}
where the index $i=1,...,m$ refers to a unit direction vector $\indexunit$,
$\bullet^{\left(i\right)}$ is the value of the quantity $\bullet$
in the direction $\indexunit$, and $w^{\left(i\right)}$ is an appropriate
non-negative weight function constrained by $\sum_{i=1}^{m}w^{\left(i\right)}=1$
\citep{bavzant1986efficient,Miehe20042617,menzel2009microsphere}.
In general, Eq. (\ref{eq:micro_sphere_averaging}) can be combined
with a more general anisotropic distribution function \citep{Alastru2009178,NME:NME2577}.
In the case of polymers the vectors $\indexunit$ represent the directions
of the end-to-end vectors of the polymer chains or, from a numerical
point of view, the integration directions in orientation space. For
randomly and isotropically oriented chains, or rather isotropic integration
schemes, the vectors $\indexunit$ satisfy
\begin{equation}
\sum_{i=1}^{m}\,\indexunit\, w^{\left(i\right)}=\mathbf{0},\label{eq:random 1}
\end{equation}
and
\begin{equation}
\sum_{i=1}^{m}\,\indexunit\otimes\indexunit\, w^{\left(i\right)}=\frac{1}{3}\,\mathbf{I}.\label{eq:random vectors tensor multiplication}
\end{equation}
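As a concrete illustration of these two constraints, the following Python sketch (not part of the paper) verifies them for the simplest isotropic scheme, the six signed coordinate directions with equal weights; the computations in this work use the richer 42-direction scheme discussed below, which satisfies the same identities:

```python
# Minimal isotropic scheme: the six +/- coordinate directions, equal weights.
DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
W = [1.0 / 6.0] * 6

# first moment: sum_i w_i u_i, which should vanish (isotropy)
first_moment = [sum(w * u[k] for w, u in zip(W, DIRS)) for k in range(3)]

# second moment: sum_i w_i u_i (x) u_i, which should equal I/3
second_moment = [[sum(w * u[r] * u[c] for w, u in zip(W, DIRS))
                  for c in range(3)] for r in range(3)]

print(first_moment)    # [0.0, 0.0, 0.0]
print(second_moment)   # diagonal entries 1/3, off-diagonal entries 0
```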
In view of Eqs. (\ref{eq:random 1}) and (\ref{eq:random vectors tensor multiplication})
the micro-sphere technique naturally lends itself to the calculation
of the macroscopic polarization and stress. Specifically, for a polymer
with chains composed of $\numberofdipoles$ dipoles and $N_{0}$ chains
per unit referential volume, Eq. (\ref{eq:polarization_definition})
may be written as
\begin{equation}
\polarizationT=\frac{N_{0}}{J}\left\langle \dipoleT_{c}\right\rangle ,\label{eq:polarization_approximation}
\end{equation}
and the macroscopic stress according to Eq. (\ref{eq:Langevin stress})
as
\begin{equation}
\mechanicalstressT^{\,\, L}=\frac{N_{0}}{J}\left\langle \mechanicalstressT^{\,\, LC}\right\rangle ,\label{eq:stress_approximation}
\end{equation}
where we use the notation suggested in Eq. (\ref{eq:micro_sphere_averaging}).
\citet{bavzant1986efficient} demonstrated that a specific choice
of $42$ directions guarantees sufficient accuracy for the application
discussed in their work. We adopt this scheme, with the integration
directions and the corresponding weight functions given in Table
1 of \citet{bavzant1986efficient}. We note that other integration
schemes are available, as demonstrated by \citet{Waffenschmidt20121928,NME:NME4601}
and the references cited therein.\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\section{Dielectrics with randomly distributed monomers}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}Consider a model of a dielectric composed of $n_{0}$
monomers per unit referential volume, which are treated as mechanical
rods and electric dipoles. The dielectric is subjected to a mechanical
deformation, locally represented by $\defgT$, and an electric field
$\EfieldT$. We assume that the electric field induced on a monomer
by its neighbors is small in comparison with the applied electric
field \citep{Cohen2014}. We examine first a dielectric with uniaxial
monomers, the behavior of which is governed by the quadratic form
in Eq. (\ref{eq:uniaxial_dipole}). If all of the dipoles experience
the macroscopic rotation, i.e. $\dipoledir=\rotation\,\dipoledirref$
where $\rotation=\defgT\,\CGstrainT^{-1/2}$ is a proper rotation
tensor, then the polarization according to Eq. (\ref{eq:polarization_approximation})
is
\begin{equation}
\polarizationT=n_{0}\left\langle \dipoleT_{c}\right\rangle =n_{0}\,\constant\,\rotation\,\sum_{i=1}^{42}\dipoledirref^{\left(i\right)}\otimes\dipoledirref^{\left(i\right)}\, w^{\left(i\right)}\,\rotation^{T}\,\EfieldT=\frac{n_{0}\,\constant}{3}\,\EfieldT,\label{eq:Puniaxial}
\end{equation}
where Eqs. (\ref{eq:micro_sphere_averaging}) and (\ref{eq:random vectors tensor multiplication})
are used. The corresponding polarization stress is
\begin{equation}
\electricstressT=\EfieldT\otimes\polarizationT=\frac{n_{0}\,\constant}{3}\,\EfieldT\otimes\EfieldT.\label{eq:stress_dielectric}
\end{equation}
In the case of a dielectric composed of $n_{0}$ TI dipolar monomers
per unit referential volume, cf. Eq. (\ref{eq:TI dipoles}), which
mechanically act as rigid rods, the same assumptions that led to Eq.
(\ref{eq:Puniaxial}) lead to
\begin{equation}
\polarizationT=n_{0}\left\langle \dipoleT_{c}\right\rangle =n_{0}\,\frac{\constant}{2}\sum_{i=1}^{42}\left[\mathbf{I}-\rotation\,\dipoledirref^{\left(i\right)}\otimes\dipoledirref^{\left(i\right)}\rotation^{T}\right]w^{\left(i\right)}\EfieldT=\frac{n_{0}\,\constant}{3}\,\EfieldT.\label{eq:PTI}
\end{equation}
Accordingly, the expression for the polarization stress is given in
Eq. (\ref{eq:stress_dielectric}). We note that the polarization and
polarization stress calculated in Eqs. (\ref{eq:Puniaxial}), (\ref{eq:PTI})
and (\ref{eq:stress_dielectric}) are identical to the exact expressions
obtained by \citet{Cohen2014}.\vspace{7 mm}
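The result that both dipole types yield the same isotropic polarization can be reproduced numerically. The sketch below (Python; the values of $n_{0}$, $\constant$ and the field are arbitrary illustrative constants, and the minimal six-direction scheme stands in for the 42-direction one) applies an arbitrary rotation to the referential dipole directions and evaluates both averages:

```python
import math

# Hypothetical numerical values for the chain density n0, the material
# constant c, and the applied field E (along y) -- illustration only.
N0, C = 2.0, 0.5
E = (0.0, 1.0, 0.0)

DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
W = [1.0 / 6.0] * 6

def rot_z(t):
    """A proper rotation about the z-axis by angle t."""
    return [[math.cos(t), -math.sin(t), 0.0],
            [math.sin(t),  math.cos(t), 0.0],
            [0.0, 0.0, 1.0]]

def matvec(A, v):
    return tuple(sum(A[r][c] * v[c] for c in range(3)) for r in range(3))

R = rot_z(0.7)                       # an arbitrary macroscopic rotation
ru = [matvec(R, u) for u in DIRS]    # dipole directions follow the rotation

# uniaxial dipoles: P = n0*c * sum_i w_i (u_i (x) u_i) E
P_uni = [N0 * C * sum(w * u[r] * sum(u[c] * E[c] for c in range(3))
                      for w, u in zip(W, ru)) for r in range(3)]

# transversely isotropic dipoles: P = n0*(c/2) * sum_i w_i (I - u_i (x) u_i) E
P_ti = [N0 * (C / 2.0) * sum(w * ((1.0 if r == c else 0.0) - u[r] * u[c]) * E[c]
                             for w, u in zip(W, ru) for c in range(3))
        for r in range(3)]

print(P_uni, P_ti)   # both equal (n0*c/3)*E, i.e. ~[0, 1/3, 0]
```

Both averages reduce to $\frac{n_{0}\,\constant}{3}\,\EfieldT$ independently of the rotation, mirroring the analytical result.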
\end{onehalfspace}
\begin{onehalfspace}
\section{Dielectric elastomers}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}We examine the behaviors of incompressible dielectric
elastomers according to three different models under various homogeneous
electromechanical loading conditions and compare their predicted
responses. To facilitate the comparison we assume that in the limit
of infinitesimal deformations and small electric excitations all three
models admit the same behavior. Specifically we assume that the initial
shear modulus and electric susceptibility are $\shear=0.1\,\mathrm{MPa}$
and $\susceptibility=3\,\vacpermittivity$. In those models in which
the lock-up effect is accounted for we choose the model parameters
such that under purely mechanical biaxial loading the lock-up stretch
is $\stretch^{lu}=5$. The precise models and the numerical values
assumed for their parameters are as follows:
\end{onehalfspace}
\begin{enumerate}
\begin{onehalfspace}
\item The \textit{macroscopic} model - the mechanical behavior is characterized
by the Gent model (\ref{eq:gent}) with the aforementioned shear modulus
and $J_{m}=47$. The electric behavior is determined according to
the linear model (\ref{eq:linear_relation_displacement}) with the
initial permittivity $\permittivity=4\,\vacpermittivity$.
\item The \textit{microscopic} model - the Langevin model (\ref{eq:Langevin stress})
is utilized in order to describe the mechanical behavior, where $\numberoflinks=25$
is chosen to fit the assumed lock-up stretch according to Eq. (\ref{eq:Langevin_lock_up})
and $N_{0}=\frac{\shear}{k\, T}$. We employ the long-chains model
(\ref{eq:long_chains_stress}) with chains that are composed of $\numberofdipoles=100$
uniaxial dipoles (Eq. \ref{eq:uniaxial_dipole}) to characterize the
dielectric response of the polymer. The material constant $\constant$
is chosen such that $\frac{\constant\, N_{0}\,\numberofdipoles}{3}=3\,\vacpermittivity$
to ensure that the referential polarization is identical to the one
admitted by the macroscopic model.
\item The \textit{Gaussian} model - the neo-Hookean model (\ref{eq:neo-Hookean})
with (\ref{eq:shear_modulus_micro}) is used, where $N_{0}=\frac{\shear}{k\, T}$,
in conjunction with the long-chains model (\ref{eq:long_chains_stress})
to characterize the mechanical and the electrical behaviors, respectively.
We assume that a chain is composed of $\numberofdipoles=100$ uniaxial
dipoles (Eq. \ref{eq:uniaxial_dipole}), where the chosen long-chains
model constant is identical to the one determined for the microscopic
model.\end{onehalfspace}
\end{enumerate}
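The stated biaxial lock-up stretch can be checked with a short numerical sketch (Python; an illustration, not the paper's implementation). The standard Gent Cauchy-stress form $\boldsymbol{\sigma}=\mu\,\frac{J_{m}}{J_{m}-(I_{1}-3)}\,\mathbf{B}-p\,\mathbf{I}$ is assumed here, with the pressure eliminated through the traction-free condition $\sigma_{yy}=0$ under equibiaxial stretch:

```python
MU = 0.1e6      # initial shear modulus [Pa], as assumed in the text
JM = 47.0       # Gent parameter of the macroscopic model

def i1_biaxial(lam):
    # equibiaxial stretch: lambda_x = lambda_z = lam, lambda_y = lam**-2
    return 2.0 * lam**2 + lam**-4

def stress_nh(lam):
    # neo-Hookean sigma_xx with the pressure eliminated via sigma_yy = 0
    return MU * (lam**2 - lam**-4)

def stress_gent(lam):
    # standard Gent form: the neo-Hookean stress amplified by Jm/(Jm-(I1-3))
    return MU * JM / (JM - (i1_biaxial(lam) - 3.0)) * (lam**2 - lam**-4)

# lock-up: I1 - 3 -> Jm; bisect 2*lam**2 + lam**-4 - 3 = Jm for lam
lo, hi = 1.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if i1_biaxial(mid) - 3.0 < JM:
        lo = mid
    else:
        hi = mid
lam_lu = 0.5 * (lo + hi)
print("biaxial lock-up stretch:", lam_lu)   # ~5, consistent with J_m = 47
print("Gent/neo-Hookean stress ratio at lam = 4.8:",
      stress_gent(4.8) / stress_nh(4.8))
```

The Gent amplification factor grows without bound as the lock-up stretch is approached, while the neo-Hookean stress remains finite.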
\begin{onehalfspace}
In the following, we examine a thin layer of a polymer whose opposite
faces are covered with flexible electrodes with negligible stiffness.
The electrodes are charged with opposite charges so that the difference
in the electric potential induces an electric field across the layer.
From a mechanical point of view we consider four boundary conditions.
In the first two cases different displacements are prescribed at the
boundary and consequently the deformation gradient is defined. In
the following two representative cases we set the traction on the
boundaries. We choose a Cartesian coordinate system in which the referential
electric field is aligned with the $\mathbf{\hat{y}}$-axis and calculate
the macroscopic polarization according to the microscopic and the
Gaussian models via Eq. (\ref{eq:polarization_approximation}). The
polarization stress is computed according to Eq. (\ref{polarization stress}).
The mechanical stress is computed via Eqs. (\ref{eq:gent}), (\ref{eq:neo-Hookean})
and (\ref{eq:Langevin stress}) for the macroscopic, the Gaussian
and the microscopic models, respectively. The pressure term is determined
from the traction free boundaries to which the electrodes are attached
and subsequently the total stress is computed via Eq. (\ref{stress decomposition}).
\global\long\def\dimStress{\bar{\sigma}}
\global\long\def\dimEfield{\bar{\Efield}}
\global\long\def\dimDfield{\bar{D}}
\global\long\def\dimEref{\bar{\Efield}^{\left(0\right)}}
\global\long\def\dimDref{\bar{D}^{\left(0\right)}}
For convenience we define the dimensionless normal stress along the
$\mathbf{\hat{x}}$-axis $\dimStress=\frac{1}{\shear}\,\mathbf{\hat{x}}\cdot\stressT\,\mathbf{\hat{x}}$
and the dimensionless referential electric field and referential electric
displacement along the $\mathbf{\hat{y}}$-direction $\dimEref=\sqrt{\frac{\permittivity}{\shear}}\,\EfieldTref\cdot\hat{\mathbf{y}}$
and $\dimDref=\frac{1}{\sqrt{\permittivity\,\shear}}\,\DfieldTref\cdot\hat{\mathbf{y}}$,
respectively. In the following examples the current configuration
counterparts of $\dimEref$ and $\dimDref$ are $\dimEfield=\sqrt{\frac{\permittivity}{\shear}}\,\EfieldT\cdot\hat{\mathbf{y}}$
and $\dimDfield=\frac{1}{\sqrt{\permittivity\,\shear}}\,\DfieldT\cdot\hat{\mathbf{y}}$,
respectively, as follows from Eqs. (\ref{referential electric field})
and (\ref{referential electric displacement}).\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsection{Equibiaxial stretching perpendicular to the electric field\label{sub:Biaxial}}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}In this case the material is stretched along the axes
$\mathbf{\hat{x}}$ and $\mathbf{\mathbf{\hat{z}}}$ such that $\stretch_{x}=\stretch_{z}=\stretch$.
As stated previously, the $\mathbf{\hat{y}}$-axis is aligned with
the referential electric field and due to the assumed incompressibility
$\stretch_{y}=\frac{1}{\stretch^{2}}$. This setting is common in
various experiments with EAPs \citep{choi2005effects,wissler2007electromechanical,mcka,qiang2012experimental}.
Figs. (\ref{fig:biaxial_stretch}a) and (\ref{fig:biaxial_stretch}b)
depict $\dimStress$ and $\dimDfield$ as functions of $\stretch^{2}$
and $\dimEfield$, respectively. The curves with the squared marks
correspond to the macroscopic model, the curves with the hollow circle
marks to the Gaussian model, and the curves with the filled circle
marks to the microscopic model. The applied referential electric field
is $\EfieldRef=50\,\mathrm{\frac{MV}{m}}$. We point out that at $\stretch^{2}=1$
the dimensionless stress according to the three models is not zero
but very small. Fig. (\ref{fig:biaxial_stretch}a) illustrates the
stress increase at the lock-up stretch according to the macroscopic
and the microscopic models. As expected, this effect is not observed
when the Gaussian model is employed. Furthermore, since the electric
field tends to stretch the material in the transverse plane, as long
as the prescribed stretch is smaller than the electrically induced
stretch, the overall stress is compressive. We emphasize that since
the potential is held fixed, as the layer is stretched the current
electric field increases, and hence so does the electromechanically induced
stress. This gives rise to different types of loss of stability phenomena
which are outside the scope of the current work. The reader is referred
to the works by, e.g., \citet{Dorfmann20101,Bertoldi201118,:/content/aip/journal/apl/102/15/10.1063/1.4801775}
and \citet{Shmuel2012307}. Only when the prescribed stretches are
large enough does the total stress become tensile.
In Fig. (\ref{fig:biaxial_stretch}b) we observe a linear dependence
of the electric displacement on the electric field according to the
macroscopic model, as follows from Eq. (\ref{eq:linear_relation_displacement})
and the assumed constant permittivity. Since the graph is plotted
in terms of the dimensionless quantities its slope is unity. In contrast,
the Gaussian and the microscopic models predict a stronger than linear
increase in the electric displacement as we stretch the material.
This is a result of the predicted increase in the permittivity due
to the stretching of a polymer with chains made up of uniaxial dipoles
\citep{Cohen14b}.
\begin{figure}
\hspace*{\fill}\includegraphics[scale=0.45]{bi_strVSstr}\qquad{}
\includegraphics[scale=0.45]{bi_EvsD}\hspace*{\fill}
\protect\caption{The dimensionless stress $\protect\dimStress$ (a) and electric displacement
$\protect\dimDfield$ (b) versus the stretch of the transverse plane
and the dimensionless electric field according to the macroscopic
model (the curve with the square marks), the Gaussian model (the curve
with the hollow circle marks) and the microscopic model (the curve
with the filled circle marks). \label{fig:biaxial_stretch}}
\end{figure}
\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsection{Pure shear deformation in the plane of the electric field\label{sub:Pure shear}}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}Once again we analyze a thin layer of a polymer whose
opposite faces are covered with flexible electrodes with negligible
stiffness. As before the $\mathbf{\hat{y}}$-axis is aligned with
the electric field, but in this case the deformation of the material
along the $\mathbf{\mathbf{\hat{z}}}$-axis is constrained such that
$\stretch_{z}=1$. The layer is stretched along the $\mathbf{\hat{x}}$-axis
such that $\stretch_{x}=\stretch$ and the incompressibility condition
yields $\stretch_{y}=\frac{1}{\stretch}$.
Fig. (\ref{fig:pure_shear}a) depicts the dimensionless normal stress
that develops along the $\mathbf{\hat{x}}$-axis as a function of
the stretch according to the macroscopic model (the curve with the
square marks), the Gaussian model (the curve with the hollow circle
marks) and the microscopic model (the curve with the filled circle
marks) under the applied referential electric field $\EfieldRef=100\,\mathrm{\frac{MV}{m}}$.
The inability of the neo-Hookean model to capture the lock-up stretch
is again clearly depicted. We also notice that there is a difference
between the lock-up stretches predicted by the macroscopic and the
microscopic models. This is because the lock-up stretch according
to the Langevin model is determined by the maximum eigenvalue of the
deformation gradient as seen from Eq. (\ref{eq:Langevin_lock_up}),
whereas according to the Gent model it depends on the first invariant
of the right Cauchy-Green strain tensor. \citet{trel75book} presented
experimental results demonstrating that polymers lock-up at different
values under different types of deformations, and therefore we conclude
that in this aspect the Gent model may be a better predictor. We wish
to point out that the lock-up stretch according to the microscopically
motivated 8-chain model of \citet{arru&boyc93jmps}, in which the
chain behaves according to Eq. (\ref{eq:Langevin stress}), depends
on $I_{1}$ as well and is able to capture the different lock-up stretch
values under various states of deformation. A comparison of different
models and their calibration according to the data reported by \citet{trel75book}
was carried out by \citet{steinmann&al2012}.
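The deformation-mode dependence of the two lock-up criteria noted above can be made concrete with a short numerical sketch (Python; parameter values from the models defined earlier, with the Gent lock-up taken as the stretch at which $I_{1}-3$ reaches $J_{m}$):

```python
import math

JM = 47.0        # Gent parameter (macroscopic model)
N_LINKS = 25     # links per chain (microscopic model)

# Langevin-chain lock-up: largest principal stretch = sqrt(n), mode-independent
lam_langevin = math.sqrt(N_LINKS)

def bisect_lockup(i1_minus_3):
    """Solve i1_minus_3(lam) = JM for lam on (1, 20]."""
    lo, hi = 1.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if i1_minus_3(mid) < JM:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam_gent_biaxial = bisect_lockup(lambda L: 2*L**2 + L**-4 - 3.0)      # lam_y = L**-2
lam_gent_shear   = bisect_lockup(lambda L: L**2 + 1.0 + L**-2 - 3.0)  # lam_z = 1

print(lam_langevin, lam_gent_biaxial, lam_gent_shear)   # 5.0, ~5.0, ~7.0
```

With $J_{m}$ calibrated to the biaxial case, the Gent model locks up near a stretch of $7$ in pure shear, whereas the Langevin criterion gives $\sqrt{25}=5$ in both modes.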
The curves with the square, hollow and filled circle marks in Fig.
(\ref{fig:pure_shear}b) correspond to the macroscopic model, the
Gaussian model and the microscopic model, respectively. Here, the
dependency of $\dimDfield$ on $\dimEfield$ is illustrated. Due to
the constant permittivity, we again note the linear dependency predicted
by the macroscopic model. The Gaussian and the microscopic models,
which are based on the electric long-chains model, predict a change
in the permittivity as a result of the mechanical stretch, in agreement
with the experimental findings of \citet{choi2005effects,wissler2007electromechanical,mcka,qiang2012experimental}.
Since the deformation is dictated by the boundary condition, the Gaussian
and the microscopic models predict the same electric behavior.
\begin{figure}
\hspace*{\fill}\includegraphics[scale=0.45]{ps_strVSstr}\qquad{}\includegraphics[scale=0.45]{ps_EvsD}\hspace*{\fill}
\protect\caption{The dimensionless stress $\protect\dimStress$ (a) and electric displacement
$\protect\dimDfield$ (b) versus the axial stretch and the dimensionless
electric field according to the macroscopic model (the curve with
the square marks), the Gaussian model (the curve with the hollow circle
marks) and the microscopic model (the curve with the filled circle
marks). \label{fig:pure_shear}}
\end{figure}
\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsection{Equibiaxial actuation normal to the electric field\label{sub:Trac Free}}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}We once again examine a thin layer of a polymer whose
opposite faces are covered with flexible electrodes with negligible
stiffness. This time, however, the circumferential boundary of the
layer is traction free, thus allowing the layer to expand in the plane
transverse to the direction of the electric field in response to the
electric excitation. We choose the same system of axes as in subsection
\ref{sub:Biaxial} and, thanks to the symmetry of the loading, the
deformation gradient is diagonal with $\stretch_{x}=\stretch_{z}=\stretch$.
Due to the assumed incompressibility we have $\stretch_{y}=\frac{1}{\stretch^{2}}$.
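In the constant-permittivity, neo-Hookean (ideal-dielectric) limit this loading admits a closed-form relation between the referential field and the induced stretch; the sketch below (Python, an illustrative simplification rather than the full models of this work) locates its maximum, which marks the onset of the instability:

```python
import math

# Constant-permittivity, neo-Hookean limit of traction-free equibiaxial
# actuation. Setting sigma_xx = sigma_yy = 0 and eliminating the pressure
# gives eps*E^2 = mu*(lam^2 - lam^-4), and with E = E0*lam^2 the
# dimensionless referential field is E0(lam) = sqrt(lam**-2 - lam**-8).
def e0_biaxial(lam):
    return math.sqrt(lam**-2 - lam**-8)

lam_peak = 4.0 ** (1.0 / 6.0)           # stationary point of lam**-2 - lam**-8
print(lam_peak, e0_biaxial(lam_peak))   # ~1.26 and ~0.687
```

The maximum near $0.69$ at a planar stretch of about $1.26$ is consistent with the peak reported for this loading.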
Fig. (\ref{fig:Traction_free}a) displays the dimensionless referential
electric field $\dimEref$ as a function of the induced stretch of
the transverse plane according to the macroscopic model (the curve
with the square marks), the Gaussian model (the curve with the hollow
circle marks) and the microscopic model (the curve with the filled
circle marks). The loss of stability discussed in the works of \citet{Dorfmann20101,Bertoldi201118,raey}
and \citet{Shmuel2012307} is demonstrated again. We note that after
the peak at $\dimEref\approx0.7$, even though the current electric
field increases monotonically, the Gaussian model predicts a decrease
in the referential electric field with an increase of the stretch.
In an experiment where the referential electric field is controlled,
the macroscopic and microscopic models predict a jump in the planar
stretch. This effect of a transition between two states is known as
snap-through (\citealp{goulbourne2005nonlinear}; \citealp{rudy&etal12ijnm}).
The curves with the square, hollow and filled circle marks in Fig.
(\ref{fig:Traction_free}b) correspond to the macroscopic, the Gaussian
and the microscopic models, where the predicted dependencies of $\dimDref$
on $\dimEref$ in the direction of the electric field are depicted.
Essentially, this plot illustrates the amount of charge per unit referential
surface area as a function of the potential difference divided by
the initial thickness of the layer. Initially, we observe an increase
of the surface charge with an increase of the electric potential.
However, beyond the peak at $\dimEref\approx0.7$ there is a reversed
trend where, at equilibrium, the electric potential drops while the
surface charge increases. This occurs in conjunction with the uncontrollable
increase in the area of the actuator as shown in Fig. (\ref{fig:Traction_free}a).
From a practical viewpoint this implies that beyond the peak, in a
manner reminiscent of an electrical short-circuit, excessive current
flows from the system's electric source while the electric potential
drops. We stress that due to the thinning of the layer the current
electric field increases and may result in a failure of the DE due
to electric breakdown. We also note that even though the same electric
model is used in both the Gaussian and the microscopic models, there
is a difference in the relations between the electric field and the
electric displacement. This is due to the different mechanical deformations
resulting from the applied electric field according to the two different
models.
\begin{figure}
\hspace*{\fill} \includegraphics[scale=0.45]{TracFree_EvsStr}\qquad{}\includegraphics[scale=0.45]{TracFree_DvsE}\hspace*{\fill}
\protect\caption{The dimensionless referential electric field $\protect\dimEref$ (a)
and electric displacement $\protect\dimDref$ (b) versus the stretch
of the transverse plane and the dimensionless referential electric
field according to the macroscopic model (the curve with the square
marks), the Gaussian model (the curve with the hollow circle marks)
and the microscopic model (the curve with the filled circle marks).
\label{fig:Traction_free}}
\end{figure}
\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsection{Uniaxial actuation normal to the electric field \label{sub:Uniaxial-actuation-normal} }
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}We consider a setting reminiscent of the one considered
in subsection \ref{sub:Trac Free}, but this time the layer is free
to expand only along the $\mathbf{\hat{x}}$-direction. Consequently,
the deformation gradient components are $\stretch_{z}=1$, $\stretch_{x}=\stretch$,
and $\stretch_{y}=\frac{1}{\stretch}$.
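In the same constant-permittivity, neo-Hookean limit as before (an illustrative simplification, not the full models), this case also admits a closed form, sketched below. The referential field increases monotonically and saturates at unity, which suggests that the decrease beyond the peak predicted by the Gaussian model stems from the deformation-dependent permittivity of the electrical long-chains model rather than from the mechanics:

```python
import math

# Constant-permittivity, neo-Hookean limit of the uniaxial actuation case:
# sigma_xx = sigma_yy = 0 gives eps*E^2 = mu*(lam^2 - lam^-2), and with
# E = E0*lam the dimensionless referential field is
#   E0(lam) = sqrt(1 - lam**-4),
# which increases monotonically and saturates at 1.
def e0_uniaxial(lam):
    return math.sqrt(1.0 - lam**-4)

for lam in (1.5, 2.0, 4.0, 10.0):
    print(lam, e0_uniaxial(lam))
```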
Fig. (\ref{fig:actuation_strain}a) shows the dimensionless referential
electric field as a function of the stretch according to the macroscopic
model (the curve with the square marks), the Gaussian model (the curve
with the hollow circle marks) and the microscopic model (the curve
with the filled circle marks). As mentioned previously, the predicted
lock-up stretches are $\stretch^{lu}=5$ and $\stretch^{lu}\approx7$
according to the microscopic and the macroscopic models, respectively.
Unlike the biaxial case described in subsection \ref{sub:Trac Free},
this loading does not admit loss of stability according to the macroscopic
and the microscopic models. The Gaussian model, which is based on the
mechanical neo-Hookean model, reaches a peak at $\dimEref\approx1$
and then monotonically decreases.
Fig. (\ref{fig:actuation_strain}b) depicts the dependence of $\dimDref$
on $\dimEref$. According to the Gaussian model, represented by the
curve with the hollow circle marks, no significant changes in the
surface charge are observed as we initially increase the electric
potential difference. However, beyond the peak at $\dimEref\approx1$,
this model predicts an unstable behavior as a result of the electrical
long-chains model. In order to maintain equilibrium according to the
macroscopic and the microscopic models an increase in the charge on
the electrodes requires an increase in the voltage difference between
them. Thus, the Gaussian model admits a behavior that is qualitatively
different from the behaviors of the other two models.
\begin{figure}
\hspace*{\fill}\includegraphics[scale=0.45]{ActStr_EvsStr}\qquad{}
\includegraphics[scale=0.45]{ActStr_DvsE}\hspace*{\fill}
\protect\caption{The dimensionless referential electric field $\protect\dimEref$ (a)
and electric displacement $\protect\dimDref$ (b) versus the stretch
and the dimensionless referential electric field according to the
macroscopic model (the curve with the square marks), the Gaussian
model (the curve with the hollow circle marks) and the microscopic
model (the curve with the filled circle marks). \label{fig:actuation_strain}}
\end{figure}
\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\section{Concluding remarks}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{7 mm}We determined the behavior of an incompressible polymer
undergoing homogeneous deformations according to three electromechanical
models under four types of boundary conditions. The first model incorporates
well-known macroscopically motivated constitutive relations for the
mechanical and the electrical behaviors. The second, microscopic model,
combines mechanical and electrical models stemming from the microstructure
of the polymer. The third model assumes a Gaussian distribution of
the polymer chains and accordingly the mechanical and the electrical
behaviors are determined. We comment that the material parameters
$\numberofdipoles$ and $\numberoflinks$, which denote the number
of dipoles and links, respectively, are used as fitting parameters
and therefore the electrical long-chains model and the Langevin model
are not consistent. Further investigation in this regard is needed.
In the first two representative examples we apply a referential electric
field by setting the potential difference between the electrodes and
controlling the deformation. In the following two examples we apply
a referential electric field and set the traction on the boundaries
of the polymer. In order to determine the polarization and the stress
according to the microscopic models we make use of the micro-sphere
technique. A comparison between the results shows that the macroscopically
and the microscopically motivated models predict different behaviors.
Therefore, this work encourages a further, more rigorous investigation
into the connection between the two scales aimed at deepening our
understanding of the micro-macro relations and the mechanisms which
control the actuation. Moreover, analysis of this type may open the
path to the design and manufacturing of polymers with microstructures
that enable improved electromechanical coupling.\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsubsection*{Competing interests}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{2 mm}We have no competing interests.\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsubsection*{Authors' contributions}
\end{onehalfspace}
\begin{onehalfspace}
\vspace{2 mm}All authors carried out the research and the analysis
jointly and N.C. was responsible for the programming. All authors
gave final approval for publication.\vspace{7 mm}
\end{onehalfspace}
\begin{onehalfspace}
\subsubsection*{Funding }
\end{onehalfspace}
\begin{onehalfspace}
\vspace{2 mm}The first author would like to acknowledge the financial assistance
of the Minerva Foundation. Additionally, partial financial support
for this work was provided by the Swedish Research Council (Vetenskapsrådet)
under grant 2011-5428, and is gratefully acknowledged by the second
author. Lastly, the first and the last authors wish to acknowledge
the support of the Israel Science Foundation founded by the Israel
Academy of Sciences and Humanities (grant 1246/11).
\bibliographystyle{unsrtnat}
\section{Introduction}
Simulating the dynamics of quantum systems is a major potential application of quantum computers, and is the original motivation for quantum computation~\cite{Feynman1982}. Moreover, the existence of efficient classical algorithms for this problem is unlikely as it is \textsc{BQP}-complete~\cite{Osborne2012}.
The first explicit quantum simulation algorithm by Lloyd~\cite{Lloyd1996universal} considered Hamiltonians described by a sum of non-trivial local interaction terms, which is relevant to many realistic systems. This was later extended by Aharonov and Ta-Shma~\cite{Aharonov2003Adiabatic} to sparse Hamiltonians, which generalizes local Hamiltonians and is a natural model for designing other quantum algorithms~\cite{Childs2003Exponential,Harrow2009}.
This paper considers the problem of simulating a $d$-sparse Hamiltonian $H\in\mathbb{C}^{N\times N}$ with at most $d$ nonzero entries in any row. That is, given a description of $H$, an evolution time $t>0$, and a precision $\epsilon>0$, the Hamiltonian simulation problem is concerned with approximating the time-evolution operator $e^{-iHt}$ with an error at most $\epsilon$. In the standard setting following~\cite{Berry2012}, a $d$-sparse $H$ is described by black-box unitary oracles that compute the positions and values of its nonzero matrix entries. These are more precisely defined as follows.
\begin{restatable}[Sparse matrix oracles]{definition}{sparseoracles}
\label{def:Sparse_Oracle}
A $d$-sparse matrix $H\in \mathbb{C}^{N\times N}$ has a black-box description if there exist the following two black-box unitary quantum oracles.
\begin{itemize}
\item ${O}_{H}$ is queried by row index $i\in[N]$, column index $k\in[N]$, and $z\in\{0,1\}^b$ and returns ${H}_{ik}=\bra{i}{H}\ket{k}$ represented as a $b$-bit number:
\begin{align}
{O}_{H}\ket{i}\ket{k}\ket{z}=\ket{i}\ket{k}\ket{z\oplus {H}_{ik}}.
\end{align}
\item ${O}_{F}$\footnote{In the dense or non-sparse case, the oracle ${O}_{F}$ is replaced by identity, which implements an $N$-sparse matrix $H$.} is queried by row index $i\in[N]$ and an index $l\in[d]$, and returns, in-place, the column index $f(i,l)$ of the $l^{\text{th}}$ non-zero matrix entry in the $i^{\text{th}}$ row:
\begin{align}
{O}_{F}\ket{i}\ket{l}=\ket{i}\ket{f(i,l)}.
\end{align}
\end{itemize}
\end{restatable}
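As a point of reference, the classical functions computed by these oracles can be modeled directly. The following sketch (our own illustration, not part of any quantum circuit; the toy matrix and 0-based indexing are assumptions) implements $f(i,l)$ and the entry lookup $H_{ik}$ for a small 2-sparse Hermitian matrix.

```python
import numpy as np

# Toy classical model of the functions computed by the oracles O_F and O_H:
# f(i, l) returns the column of the l-th nonzero entry in row i, and
# H_{ik} is the matrix entry returned (XORed into a b-bit register quantumly).
H = np.array([[0.0, 0.5, 0.0, 0.2],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.0],
              [0.2, 0.0, 0.0, 0.0]])  # a real symmetric 2-sparse example

def oracle_F(i, l):
    """Column index f(i, l) of the l-th nonzero entry in row i."""
    cols = np.flatnonzero(H[i])
    return int(cols[l])

def oracle_H(i, k):
    """Value of the matrix entry H_{ik}."""
    return float(H[i, k])

row = 0
nonzeros = [(oracle_F(row, l), oracle_H(row, oracle_F(row, l)))
            for l in range(np.count_nonzero(H[row]))]
print(nonzeros)  # [(1, 0.5), (3, 0.2)]
```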
The complexity of $d$-sparse Hamiltonian simulation in terms of these black-box queries has been studied extensively. Early approaches~\cite{Berry2007Efficient,Childs2011,Childs2012} based on Lie product formulas achieved scaling like $\mathcal{O}(\op{poly}(t,d,\|H\|,1/\epsilon))$. Steady improvements based on alternate techniques such as quantum walks~\cite{Childs2010}, fractional queries~\cite{Berry2014}, linear-combination-of-unitaries~\cite{Berry2015Hamiltonian,Berry2016corrected}, and quantum signal processing~\cite{Low2016}, have recently culminated in an algorithm~\cite{Low2016HamSim} with optimal scaling $\mathcal{O}(td\|H\|_\mathrm{max}+\log{(1/\epsilon)})$\footnote{For readability, we use this instead of the precise complexity~$\Theta(t d\|H\|_\mathrm{max}+\frac{\log{(1/\epsilon)}}{\log{(e+\log{(1/\epsilon)}/(td\|H\|_\mathrm{max}))}})$~\cite{Gilyen2018quantum}.} with respect to all parameters. Nevertheless, the possibility of further improvement is highlighted by~\cite{Low2017USA} which uses uniform spectral amplification to obtain a more general result scaling like $\tilde{\mathcal{O}}(t\sqrt{d\|H\|_\mathrm{max}\|H\|_1}\log{(1/\epsilon)})$, and~\cite{Low2018IntPicSim} which uses the interaction picture to obtain logarithmic scaling with respect to the diagonal component of $H$.
Depending on the application, it is more natural to express complexity in terms of one choice of parameters over another. To illustrate, Hamiltonians with unit spectral norm $\|H\|=1$ are simulated as a subroutine in quantum algorithms that solve systems of linear equations~\cite{Childs2015LinearSystems}. In particular, the algorithm that is optimal in only $t,d$, and max-norm $\|H\|_\mathrm{max} = \max_{ik}|H_{ik}|$ is unable to exploit this prior information. Though the spectral norm is a natural parameter in many problems of interest,~\cite{Childs2010Limitation} have ruled out sparse simulation algorithms scaling like $\mathcal{O}(\op{poly}(\|H\|)t)$. To date, the best simulation algorithm that exploits knowledge of $\|H\|$ has query complexity $\tilde{\mathcal{O}}((t\|H\|)^{4/3}d^{2/3}/\epsilon^{1/3})$~\cite{Berry2012}, but the best lower bound, based on quantum search, is $\Omega(\sqrt{d})$.
While some recent developments~\cite{Wang2018NonSparse,Chakraborty2018BlockEncoding} achieve better scaling like $\tilde{\mathcal{O}}(t\sqrt{N}\|H\|)$ for non-sparse Hamiltonians, this is in terms of queries to a very strong quantum-RAM~\cite{Giovannetti2009qRAM} oracle. This model is incomparable to standard black-box queries as it enables the ability to prepare arbitrary quantum states with $\mathcal{O}(\log{(N)})$ queries, which is at odds with the standard $\Omega(\sqrt{N})$ quantum search query lower bound. Thus identifying the optimal trade-off between $t,d,\|H\|$ in the black-box setting remains a fundamental open problem with useful applications.
We present an algorithm for simulating sparse Hamiltonians with a complexity trade-off between time, sparsity, and spectral norm, that is optimal up to subpolynomial factors. The query complexity of our algorithm is as follows.
\begin{restatable}[Sparse Hamiltonian simulation with dependence on $\|H\|_{1 \shortto 2}$]{theorem}{HamSimSubordinate}
\label{thm:HamSim_sparse}
Let $H\in \mathbb{C}^{N\times N}$ be a $d$-sparse Hamiltonian satisfying all the following:
\begin{itemize}
\item There exist oracles of~\cref{def:Sparse_Oracle} for the positions of nonzero matrix entries, and their values to $b$ bits of precision.
\item An upper bound on the subordinate norm $\Lambda_{1 \shortto 2}\ge \|H\|_{1 \shortto 2} = \max_{k}\sqrt{\sum_i|H_{ik}|^2}$ is known.
\end{itemize}
Let $\tau=t\sqrt{d}\Lambda_{1 \shortto 2}$. Then the time-evolution operator $e^{-iHt}$ may be approximated to error $\epsilon$ for any $t>0$ with query complexity $C_\mathrm{queries}[H,t,\epsilon]$ and $C_\mathrm{gates}[H,t,\epsilon]$ additional arbitrary two-qubit gates where
\begin{align}
C_\mathrm{queries}[H,t,\epsilon]&=\mathcal{O}\left(\tau\lr{\log{\lr{\tau/\epsilon}}}^{\mathcal{O}(\sqrt{\log{d}})}\right)=\mathcal{O}\left(\tau\left(\tau/\epsilon\right)^{o(1)}\right),
\\\nonumber
C_\mathrm{gates}[H,t,\epsilon]&=\mathcal{O}\left(C_\mathrm{queries}[H,t,\epsilon](\log{(N)}+b)\right).
\end{align}
\end{restatable}
The scaling of our algorithm with the subordinate norm $\|H\|_{1 \shortto 2}$ rather than the spectral norm $\|H\|$ is in fact a stronger result than expected. In the worst-case, $\|H\|_{1 \shortto 2}\le\|H\|$, and in the best-case, $\|H\|_{1 \shortto 2}\ge \frac{\|H\|}{\sqrt{d}}$~\cite{Childs2010Limitation}. Indeed, any algorithm scaling like $\mathcal{O}(t\sqrt{d}\|H\|/\operatorname{polylog}{(td\|H\|)})$ would violate a quantum search lower bound~\cite{Berry2012}. Up to subpolynomial factors,~\cref{thm:HamSim_sparse} provides a strict improvement over all prior sparse Hamiltonian simulation algorithms~\cite{Low2016HamSim,Low2017USA}, as seen by substituting the inequalities $\sqrt{d}\|H\|_{1 \shortto 2}\le \sqrt{d\|H\|_\mathrm{max}\|H\|_1}\le d\|H\|_\mathrm{max}$. The linear scaling of gate complexity $\mathcal{O}(b)$ with respect to the bits of $H$, where $b$ is independent of $\tau/\epsilon$, is notable. Previous approaches scale with $\mathcal{O}(b^{5/2})$~\cite{Berry2015Hamiltonian} in addition to requiring scaling $b=\mathcal{O}(\log{(\tau/\epsilon)})$.
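The norm chain invoked here can be spot-checked numerically. The sketch below (an illustration under our own choice of a random symmetric matrix, using the dense case $d=N$) verifies both $\|H\|/\sqrt{d}\le\|H\|_{1 \shortto 2}\le\|H\|$ and $\sqrt{d}\|H\|_{1 \shortto 2}\le \sqrt{d\|H\|_\mathrm{max}\|H\|_1}\le d\|H\|_\mathrm{max}$.

```python
import numpy as np

# Numerical check of the norm inequalities used to compare with prior work.
rng = np.random.default_rng(3)
N, d = 8, 8                                # dense case: sparsity d = N
A = rng.standard_normal((N, N))
H = (A + A.T) / 2                          # a random real symmetric matrix

spec    = np.linalg.norm(H, 2)                        # spectral norm ||H||
one_two = np.sqrt((np.abs(H)**2).sum(axis=0)).max()   # ||H||_{1->2}: max column 2-norm
h_max   = np.abs(H).max()                             # max-norm ||H||_max
h_one   = np.abs(H).sum(axis=0).max()                 # induced one-norm ||H||_1

print(spec / np.sqrt(d) <= one_two <= spec)                             # True
print(np.sqrt(d) * one_two <= np.sqrt(d * h_max * h_one) <= d * h_max)  # True
```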
We also prove optimality through a lower bound. This lower bound is based on finding a Hamiltonian with a known query complexity, that also allows for the independent variation of these parameters as follows.
\begin{restatable}[Lower bound on sparse Hamiltonian simulation.]{theorem}{HamSimLowerBound}
\label{thm:Lower_bound}
For real numbers $d>1,s>1,t>0$, there exists a $d$-sparse Hamiltonian $H$ with $\|H\|_{1 \shortto 2}\le s$ such that the query complexity of simulating time-evolution $e^{-iHt}$ with bounded error is
\begin{align}
\Omega(t\sqrt{d}\|H\|_{1 \shortto 2}).
\end{align}
\end{restatable}
This simulation algorithm solves an open problem on the query complexity of black-box unitary implementation~\cite{Jordan2009,Berry2012}. Whereas Hamiltonian simulation is concerned with approximating a unitary $e^{-iHt}$ given a description of $H$, the black-box unitary problem is concerned with approximating a unitary $U$ where $U$ itself is directly described by black-box oracles. Similar to the simulation problem, the decision variant of black-box unitary implementation is $\mathsf{BQP}$-complete. Though the query complexity of previous algorithms for this problem is $\tilde{\mathcal{O}}(d^{2/3}/\epsilon^{1/3})$, it was conjectured by~\cite{Berry2012} that the optimal scaling is closer to $\mathcal{O}(\sqrt{d})$. This is motivated by an $\Omega(\sqrt{d})$ lower bound for the case where $U$ solves a search problem. Using a simple reduction by~\cite{Jordan2009} to the Hamiltonian simulation problem, we show that this lower bound is tight up to subpolynomial factors as follows.
\begin{restatable}[Query complexity of black-box unitary implementation]{corollary}{BlackBoxUnitary}
\label{thm:BlackBoxUnitary}
Let $U\in \mathbb{C}^{N\times N}$ be a $d$-sparse unitary satisfying all the following:
\begin{itemize}
\item There exist oracles of~\cref{def:Sparse_Oracle} for the positions and values of non-zero matrix elements.
\end{itemize}
Then $U$ may be approximated to error $\epsilon$ with query complexity
\begin{align}
C_\mathrm{queries}=\mathcal{O}\left(\sqrt{d}\lr{\log{\lr{d/\epsilon}}}^{\mathcal{O}(\sqrt{\log{d}})}\right)
=\mathcal{O}\left(\sqrt{d}\left(d/\epsilon\right)^{o(1)}\right).
\end{align}
\end{restatable}
Solving systems of linear equations is another application of this algorithm. The original black-box formulation of this problem~\cite{Harrow2009}, which also has a $\mathsf{BQP}$-complete decision variant, specifies a $d$-sparse matrix $A\in\mathbb{C}^{N\times N}$ with condition number $\kappa$ to be described by black-box oracles, and a state $\ket{b}\in\mathbb{C}^{N}$. The goal then is to approximate the state $A^{-1}\ket{b}/\|A^{-1}\ket{b}\|$ to error $\epsilon$. Though previous algorithms achieve this using $\mathcal{O}(\kappa d\operatorname{polylog}{(\kappa d/\epsilon)})$ queries~\cite{Childs2015LinearSystems}, this falls short of the lower bound $\Omega(\kappa \sqrt{d}\log{(1/\epsilon)})$~\cite{Harrow2018}. By invoking quantum linear system solvers based on the block-encoding framework~\cite{Chakraborty2018BlockEncoding}, we match this lower bound up to subpolynomial factors.
\begin{restatable}[Query complexity of solving sparse systems of linear equations]{corollary}{BlackBoxQLSP}
\label{thm:BlackBoxQLSP}
Let $A\in \mathbb{C}^{N\times N}$ be a $d$-sparse matrix satisfying all the following:
\begin{itemize}
\item There exist oracles of~\cref{def:Sparse_Oracle} for the positions and values of non-zero matrix elements.
\item The spectral norm $\|A\|= 1$ and the condition number $\|A^{-1}\|\le\kappa$.
\end{itemize}
Let $\ket{b}\in\mathbb{C}^{N}$ be prepared by a unitary oracle $O_{b}\ket{0}=\ket{b}$.
Then the query complexity to all oracles for preparing a state $\ket{\psi}$ satisfying $\left\|\ket{\psi}-\frac{A^{-1}\ket{b}}{\|A^{-1}\ket{b}\|}\right\|\le\epsilon$ is
\begin{align}
C_\mathrm{queries} =\mathcal{O}\left(\kappa\sqrt{d}\left(\kappa d/\epsilon\right)^{o(1)}\right).
\end{align}
\end{restatable}
A key result in obtaining~\cref{thm:HamSim_sparse} is a general-purpose simulation algorithm for Hamiltonians $H=\sum^m_{j=1}H_j$ described by a sum of Hermitian terms, and is of independent interest. Roughly speaking, let $\alpha_j\ge \|H_j\|$ bound the spectral norm of each term, and let $C_j$ be the cost of simulating each term $H_j$ alone for constant time $t\alpha_j =\mathcal{O}(1)$. Then previous algorithms~\cite{Berry2015Truncated,Low2016hamiltonian} for simulating the full $H$ have cost scaling like
$\mathcal{O}\left(t\|\vec{\alpha}\|_1\|\vec{C}\|_1\right)$, which combines the worst-cases of $\alpha_j$ and $C_j$. In contrast, we describe in~\cref{Thm:HamSimRecursion} a simulation algorithm that scales like $\mathcal{O}(t\langle\vec{\alpha},\vec{C}\rangle e^{\mathcal{O}(m)})$, but picks up an exponential prefactor. This algorithm, which extends work by~\cite{Low2018IntPicSim}, is advantageous when $m$ is held constant, and the cost of each term scales like $C_j=\mathcal{O}(1/\alpha_j)$. Though this condition appears artificial, it is situationally useful.
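The gap between the two cost expressions is easy to see numerically. The sketch below (with hypothetical values of $\alpha_j$, and log and $e^{\mathcal{O}(m)}$ factors suppressed) evaluates both leading-order costs in the favorable regime $C_j=\mathcal{O}(1/\alpha_j)$.

```python
import numpy as np

# Compare the two leading cost expressions when per-term costs scale like 1/alpha_j.
t = 1.0
alpha = np.array([1e4, 1e2, 1.0])   # hypothetical norm bounds, sorted descending
C = 1.0 / alpha                     # per-term costs in the favorable regime

worst_case_product = t * alpha.sum() * C.sum()   # ~ t * ||alpha||_1 * ||C||_1
inner_product      = t * np.dot(alpha, C)        # ~ t * <alpha, C>

print(worst_case_product, inner_product)
```

Here the inner-product cost is smaller by more than three orders of magnitude, since each term contributes only $\alpha_j C_j=\mathcal{O}(1)$.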
At the highest-level, our main simulation result~\cref{thm:HamSim_sparse} involves three steps. The first step, similar to~\cite{Berry2012}, splits a sparse Hamiltonian into a sum of $m$ terms $H_j$, where the $j^{\text{th}}$ term contains all matrix entries of $H$ with absolute value thresholded between $(\Lambda^{(j-1)}_\mathrm{max},\Lambda^{(j)}_\mathrm{max}]$. The second step uses a modification of the uniform spectral amplification technique by~\cite{Low2017USA} and an upper bound on the spectral norm $\|H_j\|\le\|H_j\|_1\le \alpha_j= \Lambda^2_{1 \shortto 2}/\Lambda^{(j-1)}_\mathrm{max}$ to simulate each term with cost $C_j=\mathcal{O}((d\Lambda^{(j)}_\mathrm{max}/\alpha_j)^{1/2})$ -- a different bound $\alpha_1$ is used for the $j=1$ term. Finally, we recombine these terms using the general-purpose simulation algorithm of~\cref{Thm:HamSimRecursion}. The stated result is obtained by a judicious choice of these thresholds $\Lambda^{(j)}_\mathrm{max}$ and optimizing for $m$ as a function of $d$.
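The first step above is purely a bookkeeping decomposition of the matrix. As a minimal sketch (random matrix and thresholds $\Lambda^{(j)}_\mathrm{max}$ are our own illustrative choices), one can split any Hermitian matrix into magnitude-thresholded terms and check that the split is exact and preserves Hermiticity.

```python
import numpy as np

# Split H into terms H_j collecting entries whose absolute values lie in
# (Lambda^{(j-1)}_max, Lambda^{(j)}_max]. Thresholds here are hypothetical.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                       # a small Hermitian (real symmetric) example
thresholds = [0.0, 0.3, 1.0, np.inf]    # Lambda^{(0)} < Lambda^{(1)} < ...

terms = []
for lo, hi in zip(thresholds[:-1], thresholds[1:]):
    mask = (np.abs(H) > lo) & (np.abs(H) <= hi)
    terms.append(np.where(mask, H, 0.0))

# Each nonzero entry falls in exactly one bin, so the terms sum back to H,
# and each term inherits Hermiticity because |H| is symmetric.
print(np.allclose(sum(terms), H))  # True
```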
In~\cref{sec:overview} we provide a more detailed overview of our algorithms.~\cref{sec:Ham_sim_recursion} derives the general-purpose simulation algorithm~\cref{Thm:HamSimRecursion}, which is obtained from a recursive application of Hamiltonian simulation in the interaction picture by~\cite{Low2018IntPicSim}, outlined in~\cref{sec:IntPicSim}.~\cref{sec:sparse_ham_sim} derives the sparse Hamiltonian simulation algorithm~\cref{thm:HamSim_sparse}, which applies uniform spectral amplification by~\cite{Low2017USA}, outlined in~\cref{sec:USA}, and modified in~\cref{sec:arithmetic-free} to obtain an improved precision scaling.~\cref{Sec:Lower_Bound} proves the lower bound~\cref{thm:Lower_bound} on sparse Hamiltonian simulation. The example applications are found in~\cref{sec:blackboxunitary}, which derives the result on black-box unitary implementation~\cref{thm:BlackBoxUnitary}, and~\cref{sec:blackboxQLSP}, for the result on solving sparse systems of linear equations~\cref{thm:BlackBoxQLSP}. We conclude in~\cref{sec:conclusion}.
\section{Overview of algorithms}
\label{sec:overview}
Recent simulation algorithms~\cite{Low2017USA,Low2018IntPicSim} have focused on simulating Hamiltonians described by so-called `standard-form' oracles~\cite{Low2016hamiltonian} (more descriptively called `block-encoding' in~\cite{Chakraborty2018BlockEncoding}). There, it is assumed that $H$, in some basis, is embedded in the top-left block of a unitary oracle as follows.
\begin{restatable}[Block-encoding framework]{definition}{Blockencoding}
\label{Def:Standard_Form}
A matrix ${H} \in \mathbb {C}^{N_s\times N_s}$ that acts on register $s$ is block encoded by any unitary $U$ where the top-left block of $U$, in a known computational basis state $\ket{0}_a\in\mathbb{C}^{N_a}$ on register $a$, is equal to $H/\alpha$ for some normalizing constant $\alpha \ge \|H\|$:
\begin{align}
U =
\left(\begin{matrix}
H/\alpha & \cdot \\
\cdot & \cdot
\end{matrix}\right),
\quad
(\bra{0}_a\otimes \ii_s)U(\ket{0}_a\otimes \ii_s) = \frac{H}{\alpha}.
\end{align}
\end{restatable}
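One concrete way to realize this definition for a Hermitian $H$, shown as a numerical sketch below (the single-ancilla-qubit construction $U=\begin{psmallmatrix}H/\alpha & S\\ S & -H/\alpha\end{psmallmatrix}$ with $S=\sqrt{I-(H/\alpha)^2}$ is a standard choice, not the specific oracle assumed in this paper), makes the top-left block condition explicit.

```python
import numpy as np
from scipy.linalg import sqrtm

# Minimal one-ancilla-qubit block encoding of a Hermitian H with alpha >= ||H||:
# U = [[H/a, S], [S, -H/a]] with S = sqrt(I - (H/a)^2) is unitary, and
# (<0|_a (x) I) U (|0>_a (x) I) equals H/alpha.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2
alpha = np.linalg.norm(H, 2) * 1.5       # any alpha >= spectral norm works
Ha = H / alpha
S = np.real(sqrtm(np.eye(4) - Ha @ Ha))  # commutes with Ha (a function of Ha)

U = np.block([[Ha, S], [S, -Ha]])
print(np.allclose(U @ U.T.conj(), np.eye(8)))  # True: U is unitary
print(np.allclose(U[:4, :4], Ha))              # True: top-left block is H/alpha
```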
Now given a Hamiltonian expressed as a sum of $m$ Hermitian terms
\begin{align}
H=\sum^m_{j=1}H_j,
\end{align}
where each term $H_j$ is assumed to be block-encoded by a unitary oracle $U_j$, the cost $C_\mathrm{total}[H,t,\epsilon]$ of Hamiltonian simulation to error $\epsilon$ is then expressed in terms of $\alpha_j$, the number of times each $U_j$ is applied, and any additional arbitrary two-qubit quantum gates. As each $U_j$ might differ in complexity, we assign them a cost $C_j$, and also assume that the cost of a controlled-$U_j$ operator is $\mathcal{O}(C_j)$.
Depending on context, which should be clear, $C_j$ could refer to a query or gate complexity. We also find it useful to distinguish between two types of costs as follows.
\begin{itemize}
\item $C_\mathrm{queries}[H,t,\epsilon]=\sum_{j=1}M_j C_j$, where $M_j$ is the number of queries made to $U_j$.
\item $C_\mathrm{gates}[H,t,\epsilon]$ is the number of any additional arbitrary two-qubit quantum gates required.
\end{itemize}
In other words, if $C_j$ is a measure of gate complexity, then the total cost of simulation is
\begin{align}
C_\mathrm{total}[H,t,\epsilon]=C_\mathrm{queries}[H,t,\epsilon]+C_\mathrm{gates}[H,t,\epsilon].
\end{align}
In general, these queries are expensive, and so cost is dominated by $C_\mathrm{queries}[H,t,\epsilon]$. In other cases, particularly in sparse Hamiltonian simulation, $C_j$ is the number of queries made to the more fundamental oracles described in~\cref{def:Sparse_Oracle}.
Error is commonly defined as follows~\cite{Berry2015Truncated,Childs2017Speedup}. Any quantum circuit $U\in\mathbb{C}^{N_sN_a\times N_sN_a}$ that approximates a unitary operator $A\in\mathbb{C}^{N_s\times N_s}$, say time-evolution $A= e^{-iHt}$, to error $\epsilon$ is a block-encoding of $A$ such that $\|(\bra{0}_a\otimes I_s)U(\ket{0}_a\otimes I_s)-A\|\le\epsilon$. The error of approximating a product of $A_1,\cdots,A_m$ where each is approximated by a unitary $U_j$ is then $\|(\bra{0}_a\otimes I_s)U_m(\ket{0}_a\otimes I_s)\cdots(\bra{0}_a\otimes I_s) U_1(\ket{0}_a\otimes I_s) - A_m\cdots A_1\|\le m\epsilon$, with failure probability $\mathcal{O}(m\epsilon)$. This is useful when obtaining longer time-evolution by concatenating shorter time-evolution. Alternatively, if we choose to not project onto the $\ket{0}_a$ state, this implies an error $\max_{\|\ket{\psi}\|=1}\|[U-I_a\otimes A]\ket{0}\ket{\psi}\|\le\epsilon+\sqrt{2\epsilon(1-\epsilon)}$ for a single operator, and $\max_{\|\ket{\psi}\|=1}\|[U_m\cdots U_1 - I\otimes (A_m\cdots A_1)]\ket{0}\ket{\psi}\|\le m(\epsilon+\sqrt{2\epsilon(1-\epsilon)})$ for a sequence. Our results are insensitive to either definition of error as both cases scale linearly with $m$, and all factors of $1/\epsilon$ later on occur in subpolynomial factors.
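The linear accumulation of error over a product of approximate unitaries can be checked directly. The sketch below (random unitaries and perturbation strength are our own illustrative choices) verifies numerically that $\|U_m\cdots U_1 - A_m\cdots A_1\|\le\sum_j\|U_j-A_j\|$.

```python
import numpy as np
from scipy.linalg import expm

# Check the linear error-accumulation bound for products of unitaries.
rng = np.random.default_rng(2)

def rand_herm(n):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

m = 5
A_list, U_list, errs = [], [], []
for _ in range(m):
    A = expm(1j * rand_herm(4))              # an exact target unitary
    U = A @ expm(1j * 1e-3 * rand_herm(4))   # its slightly perturbed implementation
    A_list.append(A)
    U_list.append(U)
    errs.append(np.linalg.norm(U - A, 2))    # spectral-norm error of each factor

prodA = np.linalg.multi_dot(A_list)
prodU = np.linalg.multi_dot(U_list)
total = np.linalg.norm(prodU - prodA, 2)
print(total <= sum(errs))  # True: product error is at most the sum of errors
```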
With existing algorithms~\cite{Berry2015Truncated,Low2016hamiltonian}, the cost of simulation is
\begin{align}
\label{eq:qubitization}
C_\mathrm{queries}[H,t,\epsilon]&=\mathcal{O}\left(\left(t\|\vec{\alpha}\|_1+\log{(1/\epsilon)}\right)\|\vec{C}\|_1\right),\quad \|\vec{\alpha}\|_1=\sum^m_{j=1}\alpha_j,\quad \|\vec{C}\|_1=\sum^m_{j=1}C_j,
\\\nonumber
C_\mathrm{gates}[H,t,\epsilon]&=\mathcal{O}\left(\left(t\|\vec{\alpha}\|_1+\log{(1/\epsilon)}\right)\log{(N_a)}\right),
\end{align}
which combines the worst-cases of $\alpha_j$ and $C_j$ separately. The cost of our algorithm~\cref{Thm:HamSimRecursion} instead scales with the sum $\sum^m_{j=1}\alpha_j C_j$, but picks up an exponential prefactor $e^{\mathcal{O}(m)}$ as follows.
\begin{restatable}[Hamiltonian simulation by recursion in the interaction picture]{theorem}{HamSimRecursion}
\label{Thm:HamSimRecursion}
For any Hamiltonian $H\in\mathbb{C}^{N\times N}$, let us assume that
\begin{itemize}
\item $H=\sum^m_{j=1}H_j$ is a sum of $m$ Hermitian terms.
\item Block-encoding $H_j/\alpha_j$ with ancilla dimension $N_a$ has cost $C_j$.
\item The normalizing constants are sorted like $\alpha_1 \ge \alpha_2 \ge\cdots \ge \alpha_m > 0$.
\end{itemize}
Then the time-evolution operator $e^{-iHt}$ may be approximated to error $\epsilon$ for any $t>0$ with cost
\begin{align}
\label{eq:costRecursion}
C_\mathrm{queries}[H,t,\epsilon]&=
\mathcal{O}\left(t\langle\vec{\alpha},\vec{C}\rangle\log^{2m-1}{\lr{\frac{t\alpha_{1}}{\epsilon}}} \right),\quad \langle\vec{\alpha},\vec{C}\rangle=\sum^m_{j=1}\alpha_j C_j,
\\\nonumber
C_\mathrm{gates}[H,t,\epsilon]&=
\mathcal{O}\left(t\|\vec{\alpha}\|_1\log^{2m-1}{\lr{\frac{t\alpha_{1}}{\epsilon}}} \log{(N_a)}\right).
\end{align}
\end{restatable}
This is achieved by repeatedly applying the interaction picture Hamiltonian simulation algorithm of~\cite{Low2018IntPicSim}, which we briefly outline. For simplicity, let us ignore error contributions, as they occur in polylogarithmic factors. Given a Hamiltonian $H=A+B$ with two terms, let us assume that $B/\alpha_B$ is block-encoded with cost $C_B$, and that $e^{-iAt}$ may be simulated with cost $\tilde{\mathcal{O}}(t\alpha_A C_A)$ for some cost $C_A$ and $\alpha_A\ge\|A\|$.~\cite{Low2018IntPicSim} then simulates $e^{-i(A+B)t}$ with cost $\tilde{\mathcal{O}}(t(\alpha_BC_B+\alpha_AC_A))$. Given a Hamiltonian described by a sum of $m$ block-encoded terms $H_j$, the proof of~\cref{Thm:HamSimRecursion} then follows by induction. For $k=1$, one simulates $e^{-iH_1t}$ using, say, an algorithm with the complexity of~\cref{eq:qubitization}. For $k>1$, one simulates $e^{-i(H_1+\cdots+H_k)t}$ by combining $e^{-i(H_1+\cdots+H_{k-1})t}$ with the block-encoding of $H_k$. One repeats until $k=m$, and each step contributes a multiplicative factor.
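The cost bookkeeping of this induction can be sketched as a small recursion. The code below (constants and log factors suppressed; the numbers for $\alpha_j$ and $C_j$ are hypothetical) mirrors the structure: each level pays $\mathcal{O}(t\alpha_k+1)$ rounds, and each round pays for the block-encoding of $H_k$ plus a duration-$1/\alpha_k$ simulation of the first $k-1$ terms.

```python
# Leading-order query bookkeeping behind the recursive simulation (a model
# of the cost recursion only, not the quantum algorithm itself).
def queries(k, t, alpha, C, log_factor=1.0):
    """Query count for simulating H_1 + ... + H_k for time t, constants suppressed."""
    if k == 1:
        # Base case: simulate H_1 alone, cost ~ (t*alpha_1 + 1) * C_1 * log
        return (t * alpha[0] + 1) * C[0] * log_factor
    # Recursive step: (t*alpha_k + 1) rounds, each paying C_k plus a
    # duration-1/alpha_k simulation of the first k-1 terms.
    inner = C[k - 1] + queries(k - 1, 1.0 / alpha[k - 1], alpha, C, log_factor)
    return (t * alpha[k - 1] + 1) * inner * log_factor**2

alpha = [100.0, 10.0, 1.0]   # hypothetical normalizing constants, sorted descending
C = [1.0, 2.0, 4.0]          # hypothetical block-encoding costs
print(queries(3, 5.0, alpha, C))
```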
Given a $d$-sparse Hamiltonian described by the black-box oracles of~\cref{def:Sparse_Oracle}, we split it into $m$ terms $H_j$ where the $j^{\text{th}}$ term only contains entries of $H$ with absolute value between $(\Lambda^{(j-1)}_\mathrm{max},\Lambda^{(j)}_\mathrm{max}]$. Subsequently, we block-encode each term in the format of~\cref{Def:Standard_Form}. This block-encoding requires some number $C_j$ of queries, and achieves some normalizing constant $\alpha_j$. The total cost of simulation given by~\cref{eq:costRecursion} depends crucially on the quality of this encoding -- clearly, we would like to minimize $C_j$ and $\alpha_j$ such that for any fixed $m$, all products $\alpha_jC_j$ scale identically with respect to $d$.
Block-encoding a sparse Hamiltonian is related to the Szegedy quantum walk defined for any Hamiltonian~\cite{Childs2010}. In the $m=1$ case, one defines a set of quantum states $\ket{\chi_k}$ and $\ket{\bar{\chi}_i}$, such that they have $\mathcal{O}(d)$ nonzero amplitudes at known positions and their mutual overlap is an amplitude $\braket{\bar{\chi}_i}{\chi_k}=H_{ik}/\alpha$ that reproduces matrix values of $H$, up to a normalizing constant. By doubling the Hilbert space of $H$ so that $N_a=N$ and using two additional qubits, all states within each set $\{\ket{\chi_k}\}$ and $\{\ket{\bar\chi_i}\}$ can be made mutually orthogonal. Block-encoding then reduces to controlled arbitrary quantum state preparation where on an input state $\ket{k}_s$, one prepares a quantum state $\ket{\chi_k}$, then unprepares with a similar procedure for $\ket{\bar{\chi}_i}$.
A key step in controlled-state preparation applied in~\cite{Berry2012,Berry2015Hamiltonian,Low2017USA} is converting a $b$-bit binary representation of a matrix element $H_{ik}$ in quantum state $\ket{H_{ik}}\ket{0}_a$ to an amplitude $\ket{H_{ik}}\left(\sqrt{\frac{H_{ik}}{\Lambda_\mathrm{max}}}\ket{0}_a+\cdots\ket{1}_a\right)$. Previous approaches require coherently computing a binary representation of $\ket{H_{ik}}\mapsto\ket{\sin^{-1}(\sqrt{H_{ik}/\Lambda_{\mathrm{max}}})}$. This is performed with quantum arithmetic and is extremely costly, both in the asymptotic limit and in constant prefactors. Moreover, the function can only be approximated, leading to the number of bits $b$ scaling with error. Using recent insight on arithmetic-free black-box quantum state preparation~\cite{Sanders2018}, this subroutine may be replaced with the easier problem of preparing the desired amplitude with a garbage state $\ket{u_{|H_{ik}|}}_b$, which depends only on $|H_{ik}|$, attached like $\ket{H_{ik}}\ket{0}_b\ket{0}_a\mapsto\ket{H_{ik}}\left(\sqrt{\frac{H_{ik}}{\Lambda_\mathrm{max}}}\ket{u_{|H_{ik}|}}_b\ket{0}_a+\cdots\right)$. This suffices to implement the walk, and as shown in the proof of~\cref{thm:sparse-Ham-block-encoding}, can be performed exactly with $O(1)$ reversible integer adders, hence the linear $\mathcal{O}(b)$ gate complexity scaling.
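The comparator-based idea admits a simple classical analogue: the probability that a uniformly random $b$-bit integer lies below a stored $b$-bit value $w$ is exactly $w/2^b$, so flagging that comparison prepares the amplitude $\sqrt{w/2^b}$ with no arcsine computation. The sketch below (with hypothetical values of $b$ and $w$) checks this identity.

```python
import numpy as np

# Classical analogue of arithmetic-free amplitude encoding: flag the branch
# where a uniform b-bit register is below the stored value w. The flagged
# amplitude is sqrt(w / 2^b), requiring only a comparison (reversible adder).
b = 10
w = 345                                    # hypothetical b-bit encoding of an entry
amps = np.full(2**b, 1 / np.sqrt(2**b))    # uniform superposition on b ancillas
flag = np.arange(2**b) < w                 # a single integer comparison
amp_good = np.linalg.norm(amps[flag])      # amplitude of the flagged branch
print(np.isclose(amp_good, np.sqrt(w / 2**b)))  # True
```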
As shown by~\cite{Berry2012}, one may then block-encode $H_j$ with $\alpha_j=d\Lambda^{(j)}_\mathrm{max}$ using $C_j=\mathcal{O}(1)$.
By enhancing quantum state preparation with a linearized high-precision variant of amplitude amplification,~\cite{Low2017USA} encodes $H$ to error $\epsilon$ with $\alpha_j=\Theta(\Lambda^{(j)}_1)$ using $C_j=\tilde{\mathcal{O}}((d\Lambda^{(j)}_\mathrm{max}/\alpha_j)^{1/2})$ queries. By bounding the induced one-norm $\|H_j\|_1\le \Lambda^{(j)}_1\le \Lambda^2_{1 \shortto 2}/\Lambda^{(j-1)}_\mathrm{max}$, the overall cost of simulation by~\cref{eq:costRecursion} scales with the sum of $\alpha_jC_j=\tilde{\mathcal{O}}((d\Lambda^{(j)}_\mathrm{max}/\Lambda^{(j-1)}_\mathrm{max})^{1/2}\Lambda_{1 \shortto 2})$ (the $j=1$ case uses a different bound $\Lambda^{(1)}_1 \le \sqrt{d}\Lambda_{1 \shortto 2}$). For any fixed $m$, we choose the ratio between absolute values of terms to scale like $\Lambda^{(j)}_\mathrm{max}/\Lambda^{(j-1)}_\mathrm{max}=d^{\mathcal{O}(1/m)}$. A straightforward optimization of the exponents leads to the stated complexity of~\cref{thm:HamSim_sparse} with $m=\mathcal{O}(\sqrt{\log{(d)}})$.
\section{Hamiltonian simulation by recursion}
\label{sec:Ham_sim_recursion}
In this section, we prove~\cref{Thm:HamSimRecursion} for simulating time-evolution by Hamiltonians $H$ expressed as a sum of $m$ Hermitian terms
\begin{align}
H=\sum^m_{j=1}H_j.
\end{align}
Typically, time-evolution $e^{-iH_jt}$ by each term alone is easy to implement -- the challenge is combining these parts to approximate time-evolution $e^{-iHt}$ by the whole. In our algorithm, each Hamiltonian $H_j$ is assumed to be block-encoded by a unitary oracle in the format of~\cref{Def:Standard_Form}.
\begin{proof}[Proof of~\cref{Thm:HamSimRecursion}.]
We combine two other simulation algorithms by~\cite{Berry2015Truncated,Low2016hamiltonian,Low2018IntPicSim} in an $m$-step recursive procedure. At the $k=1$ step, the first algorithm~\cref{thm:sim_block} approximates $e^{-iH_1t}$ to error $\epsilon$ for time $t>0$ using the block-encoding of $H_1$. At the $k>1$ step, the second algorithm~\cref{thm:int_pic_sim} approximates $e^{-i(H_1+\cdots+H_k)t}$ by combining $e^{-i(H_1+\cdots+H_{k-1})t}$ with the block-encoding of $H_k$. By repeating this step for $k=2,\cdots,m$, we obtain the time-evolution operator $e^{-iHt}$. We now state these simulation algorithms. As~\cref{thm:int_pic_sim} modifies the original presentation of the same result in~\cite{Low2018IntPicSim}, we provide a proof sketch in~\cref{sec:IntPicSim}.
\begin{restatable}[Hamiltonian simulation of a single term~\cite{Berry2015Truncated,Low2016hamiltonian}]{lemma}{HamSimBlock}
\label{thm:sim_block}
For any Hamiltonian $A\in\mathbb{C}^{N\times N}$, let us assume that
\begin{itemize}
\item Block-encoding $A/\alpha$ with ancilla dimension $N_a$ has cost $C_A$.
\end{itemize}
Then the time-evolution operator $e^{-iAt}$ may be approximated to error $\epsilon$ for any $t>0$ with cost
\begin{align}
C_\mathrm{queries}[A,t,\epsilon]=\mathcal{O}\left(\left(t\alpha+1\right)C_A\log{(t\alpha /\epsilon)}\right),
\;
C_\mathrm{gates}[A,t,\epsilon]=\mathcal{O}\left(\left(t\alpha+1\right)\log{(t\alpha/\epsilon)}\log{(N_a)}\right).
\end{align}
\end{restatable}
\begin{restatable}[Hamiltonian simulation in the interaction picture; adapted from~\cite{Low2018IntPicSim}]{lemma}{HamSimIntPic}
\label{thm:int_pic_sim}
For any Hamiltonian $A+B\in\mathbb{C}^{N\times N}$, where $A$ and $B$ are Hermitian, let us assume that
\begin{itemize}
\item Block-encoding $B/\alpha_B$ with ancilla dimension $N_a$ has cost $C_B$.
\item There exists some $\alpha_A\ge\alpha_B$, $\gamma>0$, such that approximating $e^{-iAt}$ to error $\epsilon$ for any time $t>0$ has cost
\begin{align}
C_\mathrm{queries}[A,t,\epsilon]&=\mathcal{O}\left((t\alpha_A+1)\log^{\gamma}\left(t \alpha_A/\epsilon\right)\right),
\\\nonumber
C_\mathrm{gates}[A,t,\epsilon]&=\mathcal{O}\left((t\alpha_A+1)\log^{\gamma}\left(t\alpha_A/\epsilon\right)\log{(N_a)}\right).
\end{align}
\end{itemize}
Then the time-evolution operator $e^{-i(A+B)t}$ may be approximated to error $\epsilon$ for any $t>0$ with cost
\begin{align}
\label{eq:int_pic_sim_cost}
C_\mathrm{queries}[B,A,t,\epsilon]&=\mathcal{O}\left((t\alpha_B+1)
\left(C_B+C_\mathrm{queries}\left[A,\frac{1}{\alpha_B},\frac{\epsilon}{t\alpha_B}\right]\right)\log^2{\lr{\frac{t\alpha_A}{\epsilon}}}
\right),
\\\nonumber
C_\mathrm{gates}[B,A,t,\epsilon]&=\mathcal{O}\left((t\alpha_B+1)
C_\mathrm{gates}\left[A,\frac{1}{\alpha_B},\frac{\epsilon}{t\alpha_B}\right]\log^2{\lr{\frac{t\alpha_A}{\epsilon}}}
\right).
\end{align}
\end{restatable}
Note the factor $\mathcal{O}(t\alpha+1)$, which provides the correct dominant scaling in the nonasymptotic limit where $t \alpha=\mathcal{O}(1)$. Let us apply~\cref{thm:int_pic_sim} recursively with $A=H_{< k}=\sum^{k-1}_{j=1}H_j$ and $B=H_k$ for $k>1$. The cost of the $k^{\text{th}}$ iteration depends on the cost of the $(k-1)^{\text{th}}$ iteration as follows:
\begin{align}
C_\mathrm{queries}[H_1,\cdot,t,\epsilon]&=\mathcal{O}\left((t\alpha_1+1)C_1\log\left(t\alpha_1/\epsilon\right)\right)
\displaybreak[0]\\\nonumber
C_\mathrm{queries}[H_2,H_{<2},t,\epsilon]&=\mathcal{O}\left((t\alpha_2+1)
\left(C_2+C_\mathrm{queries}\left[H_1,\cdot,\frac{1}{\alpha_2},\frac{\epsilon}{t\alpha_2}\right]\right)\log^2{\lr{\frac{t\alpha_1}{\epsilon}}}
\right)
\\\nonumber
&=\mathcal{O}\left((t\alpha_2+1)
\left(C_2+\lr{\frac{\alpha_1}{\alpha_2}+1}C_1\right)\log^3{\lr{\frac{t\alpha_1}{\epsilon}}}
\right)
\\\nonumber
&=\mathcal{O}\left((t\alpha_2+1)
\left(C_2+2\frac{\alpha_1}{\alpha_2}C_1\right)\log^3{\lr{\frac{t\alpha_1}{\epsilon}}}
\right)
\displaybreak[0]\\\nonumber
C_\mathrm{queries}[H_3,H_{<3},t,\epsilon]&=\mathcal{O}\left((t\alpha_3+1)
\left(C_3+C_\mathrm{queries}\left[H_2,\cdot,\frac{1}{\alpha_3},\frac{\epsilon}{t\alpha_3}\right]\right)\log^2{\lr{\frac{t\alpha_1}{\epsilon}}}
\right),
\\\nonumber
&=\mathcal{O}\left((t\alpha_3+1)
\left(C_3+2\frac{\alpha_2}{\alpha_3}
\left(C_2+2\frac{\alpha_1}{\alpha_2}C_1\right)\right)\log^5{\lr{\frac{t\alpha_1}{\epsilon}}}
\right),
\displaybreak[0]\\\nonumber
&\vdots
\displaybreak[0]\\\nonumber
C_\mathrm{queries}[H_k,H_{< k}, t,\epsilon]
&=\mathcal{O}\left((t\alpha_k+1)\left(\sum^k_{j=1}C_j\prod^{k}_{i=j+1}\frac{2\alpha_{i-1}}{\alpha_{i}}\right)\log^{2k-1}{\lr{\frac{t\alpha_1 }{\epsilon}}}\right)
\\\nonumber
&=\mathcal{O}\left(t\left(\sum^k_{j=1}\alpha_jC_j\right)\left(2\log{\lr{\frac{t\alpha_1 }{\epsilon}}}\right)^{2k-1}\right).
\\\nonumber
\end{align}
The gate complexity follows by an identical recursion:
\begin{align}
C_\mathrm{gates}[H_1,\cdot,t,\epsilon]&=\mathcal{O}\left((t\alpha_1+1)\log\left(t\alpha_1/\epsilon\right)\log{(N_a)}\right),
\\\nonumber
C_\mathrm{gates}[H_2,H_{<2},t,\epsilon]&=\mathcal{O}\left((t\alpha_2+1)
\left(C_\mathrm{gates}\left[H_1,\cdot,\frac{1}{\alpha_2},\frac{\epsilon}{t\alpha_2}\right]\right)\log^2{\lr{\frac{t\alpha_1}{\epsilon}}}
\right)
\\\nonumber
&=\mathcal{O}\left((t\alpha_2+1)
2\frac{\alpha_1}{\alpha_2}\log^3{\lr{\frac{t\alpha_1}{\epsilon}}}\log{(N_a)}\right),
\\\nonumber
&\vdots
\\\nonumber
C_\mathrm{gates}[H_k,H_{< k}, t,\epsilon]
&=\mathcal{O}\left((t\alpha_k+1)\left(\sum^k_{j=1}\prod^{k}_{i=j+1}\frac{2\alpha_{i-1}}{\alpha_{i}}\right)\log^{2k-1}{\lr{\frac{t\alpha_1 }{\epsilon}}}\log{(N_a)}\right)
\\\nonumber
&=\mathcal{O}\left(t\left(\sum^k_{j=1}\alpha_j\right)\left(2\log{\lr{\frac{t\alpha_1 }{\epsilon}}}\right)^{2k-1}\log{(N_a)}\right).
\end{align}
Note that we use $\frac{\alpha_{j-1}}{\alpha_j}\ge 1$ to simplify $\frac{\alpha_{j-1}}{\alpha_j}+1\le 2\frac{\alpha_{j-1}}{\alpha_j}$. Setting $k=m$ completes the proof.
\end{proof}
\section{Sparse Hamiltonian simulation with $t\sqrt{d}\|H\|_{1\rightarrow 2}$ scaling}
\label{sec:sparse_ham_sim}
We now simulate sparse Hamiltonians by applying the algorithm of~\cref{sec:Ham_sim_recursion}. In the standard definition, a Hamiltonian is $d$-sparse if it has at most $d$ non-zero entries in any row. Moreover, it is assumed that there exist black-box oracles that compute the positions and $b$-bit values of these entries. The cost of simulation is the number of queries made to these oracles, which are defined more precisely by~\cref{def:Sparse_Oracle}. The query complexity of block-encoding a sparse Hamiltonian $H$ is given by the following result, which was mostly proven by~\cite{Low2017USA}. Our contribution is improving the gate complexity scaling from $\mathcal{O}(b^{5/2})$ to $\mathcal{O}(b)$.
\begin{restatable}[Block encoding of sparse Hamiltonians by amplitude multiplication; modified from~\cite{Low2017USA}]{theorem}{USAblockencoding}
\label{thm:sparse-Ham-block-encoding}
Let $H\in \mathbb{C}^{N\times N}$ be a $d$-sparse Hamiltonian satisfying all the following:
\begin{itemize}
\item There exist oracles $O_F$ and $O_H$ of~\cref{def:Sparse_Oracle} that compute the positions and values of non-zero matrix elements to $b$ bits of precision.
\item An upper bound on the max-norm $\Lambda_{\mathrm{max}}\ge \|H\|_\mathrm{max} = \max_{ik}|H_{ik}|$ is known.
\item An upper bound on the spectral-norm $\Lambda\ge \|H\| = \max_{v\neq 0}\frac{\|H\cdot v\|}{\|v\|}$ is known.
\item An upper bound on the induced one-norm $\Lambda_{1}\ge \|H\|_{1} = \max_{k}\sum_i|H_{ik}|$ is known.
\end{itemize}
Then there exists a Hamiltonian $\tilde H \in \mathbb{C}^{N\times N}$ that approximates $H$ with error $\|\tilde H- H\|=\mathcal{O}(\Lambda\delta)$, and can be block-encoded with normalizing constant $\alpha=\Theta(\Lambda_{1})$ using
\begin{itemize}
\item Queries $O_F$ and $O_H$: $\mathcal{O}\left(\sqrt{\frac{d\Lambda_{\mathrm{max}}}{\Lambda_{1}}}\log{\lr{\frac{1}{\delta}}}\right)$.
\item Quantum gates: $\mathcal{O}\left(\sqrt{\frac{d\Lambda_{\mathrm{max}}}{\Lambda_{1}}}\log{\lr{\frac{1}{\delta}}}\left(\log{(N)}+b\right)\right)$.
\item Qubits: $\mathcal{O}(\log{(N)}+b)$.
\end{itemize}
\end{restatable}
\begin{proof}
Proof outline in~\cref{sec:USA}.
\end{proof}
Our algorithm for simulating sparse Hamiltonians splits $H=\sum^m_{j=1}H_j$ into $m$ Hermitian terms, where matrix entries of the $j^\text{th}$ term are
\begin{align}
(H_j)_{ik}=
\begin{cases}
H_{ik}, & \Lambda^{(j-1)}_{\mathrm{max}} < |H_{ik}| \le \Lambda^{(j)}_{\mathrm{max}}, \\
0, &\text{otherwise},
\end{cases}
\quad
0=\Lambda^{(0)}_{\mathrm{max}}<\Lambda^{(1)}_{\mathrm{max}}<\cdots<\Lambda^{(m)}_{\mathrm{max}}=\Lambda_\mathrm{max}.
\end{align}
Each term is block-encoded with normalization constant $\alpha_j$ by a procedure making $C_j$ queries to ${O}_{H}$ and ${O}_{F}$. We may then simulate time-evolution $e^{-iHt}$ by recombining these terms using~\cref{Thm:HamSimRecursion}. The query complexity of this procedure then depends strongly on the scaling of $\alpha_jC_j$. A judicious choice of cut-offs $\Lambda^{(j)}_{\mathrm{max}}$ combined with the block-encoding procedure~\cref{thm:sparse-Ham-block-encoding} allows us to prove our main result of~\cref{thm:HamSim_sparse} on sparse Hamiltonian simulation.
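As a sanity check of this decomposition (not part of the algorithm itself), the following Python sketch splits a random symmetric matrix into magnitude bands using illustrative, equally spaced cut-offs, and verifies that the bands respect the cut-offs, are Hermitian, and sum back to $H$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 32, 3
A = rng.standard_normal((N, N))
H = (A + A.T) / 2                          # Hermitian (real symmetric) test matrix
Lmax = np.abs(H).max()
cuts = np.linspace(0.0, Lmax, m + 1)       # 0 = Lambda^(0) < ... < Lambda^(m) = Lmax

bands = []
for j in range(1, m + 1):
    mask = (np.abs(H) > cuts[j - 1]) & (np.abs(H) <= cuts[j])
    bands.append(np.where(mask, H, 0.0))

assert np.allclose(sum(bands), H)          # the bands reconstruct H exactly
for j, Hj in enumerate(bands, start=1):
    assert np.allclose(Hj, Hj.T)           # each band is Hermitian
    nz = np.abs(Hj[Hj != 0])
    assert np.all((nz > cuts[j - 1]) & (nz <= cuts[j]))
```

Because band membership depends only on $|H_{ik}|$, which is symmetric, each $H_j$ inherits Hermiticity from $H$.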
\begin{proof}[Proof of~\cref{thm:HamSim_sparse}.]
The Hamiltonian $H_j$ in the decomposition $H=\sum^m_{j=1}H_j$ has matrix elements $\Lambda^{(j-1)}_{\mathrm{max}} < |H_{ik}| \le \Lambda^{(j)}_{\mathrm{max}}$, and may be block encoded using the procedure of~\cref{thm:sparse-Ham-block-encoding}. The only difference is replacing the oracle $O_H$ with an oracle
\begin{align}
O_{H_j}\ket{i}\ket{k}\ket{z}=\ket{i}\ket{k}\ket{z\oplus ({H}_j)_{ik}},
\end{align}
that outputs matrix entries of $H_j$. Using $\mathcal{O}(1)$ queries to the oracle $O_H$ and $\mathcal{O}(\op{poly}(b))$ quantum gates, $O_{H_j}$ is constructed by computing the absolute value $|{H}_{ik}|$ on $\mathcal{O}(b)$ bits, and performing a comparison with $\Lambda^{(j-1)}_{\mathrm{max}}$ and $\Lambda^{(j)}_{\mathrm{max}}$.
From~\cref{thm:sparse-Ham-block-encoding}, we may block-encode a Hamiltonian $\tilde{H}_j$ that approximates $H_j$ with error $\|\tilde{H}_j - H_j\| = \mathcal{O}(\Lambda^{(j)} \delta_j)$, with a normalization constant $\alpha_j=\Theta(\Lambda^{(j)}_1)\ge \|H_j\|_1$.
Thus the error from simulating time-evolution by $\tilde{H}=\sum_{j=1}^m \tilde{H}_j$ instead of $H$ is bounded by
\begin{align}
\|e^{-i\tilde{H}t}-e^{-iHt}\|
\le t\|\tilde{H}-H\|\le t\sum_{j=1}^m\|\tilde{H}_j-H_j\|
=\mathcal{O}\left(t\sum_{j=1}^m\Lambda^{(j)}\delta_j\right).
\end{align}
Using the upper bound $\Lambda^{(j)}\le \Lambda^{(j)}_1\le \sqrt{d}\Lambda^{(j)}_{1 \shortto 2} \le \sqrt{d}\Lambda_{1 \shortto 2}$, the overall contribution of this error may be bounded by $\mathcal{O}(\epsilon)$ with the choice $\delta_{j}=\frac{\epsilon}{m t \sqrt{d}\Lambda_{1 \shortto 2}}$. Thus the query complexity of block-encoding $H_j$ is $C_j=\mathcal{O}(\sqrt{d\Lambda^{(j)}_\mathrm{max}/\Lambda^{(j)}_1}\log{(mt\sqrt{d}\Lambda_{1 \shortto 2}/\epsilon)})$. Using these values $C_j$, and relabeling the $\alpha_j$ so that they are sorted like $\alpha_1 \ge \alpha_2 \ge\cdots \ge \alpha_m$, the query complexity of simulation by~\cref{Thm:HamSimRecursion} is
\begin{align}
\label{eq:sparse_cost_intermediate}
C_\mathrm{queries}[H,t,\epsilon]&=
\mathcal{O}\left(t\langle\vec{\alpha},\vec{C}\rangle\log^{2m-1}{\lr{\frac{t\alpha_1}{\epsilon}}} \right)
\\\nonumber
&=\mathcal{O}\left(t\left(
\sum^m_{j=1}\sqrt{d\Lambda^{(j)}_\mathrm{max}\Lambda^{(j)}_1}\right)(\log(t \sqrt{d}\Lambda_{1 \shortto 2}/\epsilon))^{\mathcal{O}(m)} \right).
\end{align}
In the last line, we simplify $\log{\lr{\frac{mt\sqrt{d}\Lambda_{1 \shortto 2}}{\epsilon}}}\log^{2m-1}{\lr{\frac{t\sqrt{d}\Lambda_{1 \shortto 2}}{\epsilon}}}=\log^{\mathcal{O}(m)}{\lr{\frac{t\sqrt{d}\Lambda_{1 \shortto 2}}{\epsilon}}}$.
We use the following upper bounds on the induced one-norm of $H_j$:
\begin{align}
\text{For } j=1,\; &\|H_1\|_1
\le \|H\|_1
= \max_j \|H\cdot e_{j}\|_1
\le \sqrt{d} \max_j\|H\cdot e_{j}\|
=\sqrt{d}\|H\|_{1 \shortto 2}
\le \sqrt{d}\Lambda_{1 \shortto 2} =\Lambda^{(1)}_1,
\\\nonumber
\forall j>1,\; &\|H_j\|_1
=\max_{k}\sum_{i}|(H_j)_{ik}|
< \max_{k}\sum_{i}\frac{|H_{ik}|^2}{\Lambda^{(j-1)}_{\mathrm{max}}}
= \frac{\|H\|^2_{1 \shortto 2}}{\Lambda^{(j-1)}_{\mathrm{max}}}
\le \frac{\Lambda^2_{1 \shortto 2}}{\Lambda^{(j-1)}_{\mathrm{max}}}
=\Lambda^{(j)}_1.
\end{align}
In the first line, $e_j$ is a unit vector with an entry $1$ in row $j$, and we apply the fact that $\|v\|_1\le\sqrt{d}\|v\|$ for all vectors $v$ with $d$ non-zero entries. In the second line, we apply the facts $|(H_j)_{ik}|/ \Lambda^{(j-1)}_{\mathrm{max}}\ge 1$ and $|H_{ik}|\ge|(H_j)_{ik}|$. By substituting into~\cref{eq:sparse_cost_intermediate}, we obtain
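These bounds are easy to probe numerically. The sketch below (illustrative sizes; $d_{\mathrm{act}}$ is the measured sparsity of a random test matrix) checks $\|H\|_1\le\sqrt{d}\|H\|_{1 \shortto 2}$ and $\|H_j\|_1\le \|H\|^2_{1 \shortto 2}/\Lambda^{(j-1)}_{\mathrm{max}}$ for two bands with $j>1$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 64, 8
H = np.zeros((N, N))
for i in range(N):                           # random sparse symmetric test matrix
    k = min(d // 2, N - i)
    cols = rng.choice(np.arange(i, N), size=k, replace=False)
    H[i, cols] = rng.standard_normal(k)
H = H + H.T

d_act = int((H != 0).sum(axis=0).max())      # measured sparsity
norm_1 = np.abs(H).sum(axis=0).max()         # induced one-norm ||H||_1
norm_12 = np.sqrt((H**2).sum(axis=0)).max()  # subordinate norm ||H||_{1->2}
assert norm_1 <= np.sqrt(d_act) * norm_12 + 1e-9

Lmax = np.abs(H).max()
cuts = [0.0, Lmax / 4, Lmax / 2, Lmax]       # illustrative cut-offs, m = 3
for j in (2, 3):                             # the bands with j > 1
    mask = (np.abs(H) > cuts[j - 1]) & (np.abs(H) <= cuts[j])
    Hj = np.where(mask, H, 0.0)
    norm1_j = np.abs(Hj).sum(axis=0).max()
    assert norm1_j <= norm_12**2 / cuts[j - 1] + 1e-9
```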
\begin{align}
\label{eq:sparse_cost_intermediateB}
C_\mathrm{queries}[H,t,\epsilon]
&=\mathcal{O}\left(t\left[
\sqrt{d^{3/2}\Lambda^{(1)}_\mathrm{max}\Lambda_{1 \shortto 2}}
+\sum^m_{j=2}\sqrt{d\frac{\Lambda^{(j)}_\mathrm{max}}{\Lambda^{(j-1)}_\mathrm{max}}}\Lambda_{1 \shortto 2}\right](\log(t \sqrt{d}\Lambda_{1 \shortto 2}/\epsilon))^{\mathcal{O}(m)} \right).
\end{align}
We now present an appropriate sequence of cut-offs $\Lambda^{(j)}_{\mathrm{max}}$, chosen so that all terms in the square brackets of~\cref{eq:sparse_cost_intermediateB} scale identically.
The largest may be chosen to be $\Lambda^{(m)}_{\mathrm{max}}=\Lambda_{1 \shortto 2}$, which follows from the inequality $\|H\|_{1 \shortto 2}=\max_{k}\sqrt{\sum_{i}|H_{ik}|^2}\ge \|H\|_\mathrm{max}$. The smallest may be chosen to be $\Lambda^{(1)}_{\mathrm{max}}=\Lambda_{1 \shortto 2}d^{-1/2+\gamma}$, for some $\gamma>0$. For any fixed value of $m$, let us interpolate between these extremes with a fixed ratio
\begin{align}
\label{eq:ratio_choice}
\frac{\Lambda^{(j)}_{\mathrm{max}}}{\Lambda^{(j-1)}_{\mathrm{max}}}
=d^{\gamma} >1
\quad
\Rightarrow
\quad
\gamma=\frac{1}{2m}
\quad\Rightarrow
\quad
\Lambda^{(j)}_{\mathrm{max}}=\Lambda_{1 \shortto 2}d^{\frac{1}{2}\frac{j}{m}-\frac{1}{2}}
\quad\Rightarrow
\quad
\Lambda^{(j)}_1 = \sqrt{d} \Lambda_{1 \shortto 2}d^{-\frac{j-1}{2m}}.
\end{align}
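The balancing effect of this choice can be confirmed with a few lines of arithmetic (illustrative values of $d$ and $m$): the $j=1$ term and every ratio term in the square bracket equal $\sqrt{d}\,\Lambda_{1 \shortto 2}\,d^{1/(4m)}$.

```python
import numpy as np

d, m, Lam12 = 10_000.0, 4, 1.0
# cut-offs Lambda^(j)_max = Lam12 * d^(j/(2m) - 1/2), j = 1..m
cuts = [None] + [Lam12 * d ** (0.5 * j / m - 0.5) for j in range(1, m + 1)]

term1 = np.sqrt(d ** 1.5 * cuts[1] * Lam12)                     # j = 1 term
ratio_terms = [np.sqrt(d * cuts[j] / cuts[j - 1]) * Lam12
               for j in range(2, m + 1)]                        # j > 1 terms
target = np.sqrt(d) * Lam12 * d ** (1.0 / (4 * m))
for term in [term1] + ratio_terms:
    assert np.isclose(term, target)
```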
Substituting this choice into~\cref{eq:sparse_cost_intermediateB}, the cost of simulation is
\begin{align}
\label{eq:sparse_cost_intermediate1}
C_\mathrm{queries}[H,t,\epsilon]&=\mathcal{O}\left(t
\sqrt{d}\Lambda_{1 \shortto 2}d^{\frac{1}{4m}}(\log(t\sqrt{d}\Lambda_{1 \shortto 2}/\epsilon))^{\mathcal{O}(m)} \right)
\\\nonumber
&=\mathcal{O}\left( t
\sqrt{d}\Lambda_{1 \shortto 2}e^{\mathcal{O}(m \log\log(t\sqrt{d}\Lambda_{1 \shortto 2}/\epsilon))+\frac{1}{4m}\log{(d)}}\right)
\\\nonumber
&=\mathcal{O}\left( t
\sqrt{d}\Lambda_{1 \shortto 2}e^{\mathcal{O}(\sqrt{\log{(d)}}\log\log(t\sqrt{d}\Lambda_{1 \shortto 2}/\epsilon))}\right)
\\\nonumber
&=\mathcal{O}\left(t
\sqrt{d}\Lambda_{1 \shortto 2}\left(\frac{t\sqrt{d}\Lambda_{1 \shortto 2}}{\epsilon}\right)^{o(1)}\right).
\end{align}
In the first line, we simplify using $m\log^{\mathcal{O}(m)}{\lr{\frac{t\sqrt{d}\Lambda_{1 \shortto 2}}{\epsilon}}}=\log^{\mathcal{O}(m)}{\lr{\frac{t\sqrt{d}\Lambda_{1 \shortto 2}}{\epsilon}}}$.
In the third line, we minimize cost with respect to $d$ by choosing
$m=\mathcal{O}\left(\sqrt{\log{(d)}}\right)$.
In the last line, the query complexity in~\cref{thm:HamSim_sparse} is proven by noting that the factor $e^{\mathcal{O}({\sqrt{\log{(d)}}\log{\log( t\sqrt{d}\Lambda_{1 \shortto 2}/\epsilon)}})}=\mathcal{O}\left(\left(\frac{t\sqrt{d}\Lambda_{1 \shortto 2}}{\epsilon}\right)^{o(1)}\right)$ has subpolynomial scaling with respect to $t,d,\Lambda_{1 \shortto 2}$ and $\epsilon$. The gate complexity is proven simply by multiplying the query complexity by the block-encoding overhead of $\mathcal{O}(\log{(N)}+\op{poly}(b))$ gates per query.
\end{proof}
Simple modifications allow for minor improvements in query complexity. For instance, we used an upper bound $\Lambda_\mathrm{max}=\Lambda_{1 \shortto 2}$ in~\cref{eq:ratio_choice}. This enabled expressing complexity in terms of just the matrix norm $\Lambda_{1 \shortto 2}$. However, repeating our proof with both $\Lambda_\mathrm{max}$ and $\Lambda_{1 \shortto 2}$ free parameters leads to $C_\mathrm{queries}=\mathcal{O}\left( t
\sqrt{d}\Lambda_{1 \shortto 2}e^{\mathcal{O}(\sqrt{\log{(\sqrt{d}\Lambda_{\max}/\Lambda_{1 \shortto 2})}}\log\log(t\sqrt{d}\Lambda_{1 \shortto 2}/\epsilon))}\right)$. This implies polylogarithmic, instead of subpolynomial, scaling with error in the special case $\sqrt{d}\Lambda_{\max}=\mathcal{O}(\Lambda_{1 \shortto 2})$.
\section{Sparse Hamiltonian simulation lower bound}
\label{Sec:Lower_Bound}
We now prove a lower bound demonstrating that the scaling of our simulation algorithm with $t\sqrt{d}\|H\|_{1 \shortto 2}$ is optimal up to sub-polynomial factors. The argument is identical to the lower bound by~\cite{Low2017USA}. The only difference is that we quantify cost with the subordinate norm $\|H\|_{1 \shortto 2}$ instead of the induced one-norm $\|H\|_1$. This leads to a lower bound of $\Omega\left(t \sqrt{d}\|H\|_{1 \shortto 2}\right)$, which is a more general result.
\HamSimLowerBound*
\begin{proof}
We construct sparse Hamiltonians whose dynamics compute suitable Boolean functions with known quantum query lower bounds. The first Hamiltonian ${H}_{\op{PARITY}}$ computes the parity of $n$ bits, and the second Hamiltonian ${H}_{\op{OR}}$ computes the disjunction of $m$ bits. By composing these Hamiltonians, one may obtain a third Hamiltonian ${H}_{\op{PARITY}\circ\op{OR}}$ that computes the parity of $n$ bits, where each bit is the disjunction of $m$ bits, as depicted in~\cref{Fig:ParityOr}. The stated lower bound is then obtained by combining the known quantum query complexity of $\Omega(n\sqrt{m})$~\cite{Reichardt2009span} for computing $\op{PARITY}\circ\op{OR}$ on $n\times m$ bits, with parameters that describe ${H}_{\op{PARITY}\circ\op{OR}}$.
\begin{figure} [t]
\centering
\tikz [font=\footnotesize,
level 1/.style={sibling distance=10em},
level 2/.style={sibling distance=2em}, level distance=1cm]
\node {$\op{PARITY}$}
child { node {$\op{OR}$}
child { node {$x_{1,1}$} }
child { node {$x_{1,2}$} }
child { node {$\cdots$} }
child { node {$x_{1,m}$} }
}
child { node {$\op{OR}$}
child { node {$x_{2,1}$} }
child { node {$x_{2,2}$} }
child { node {$\cdots$} }
child { node {$x_{2,m}$} }
}
child { node {$\cdots$}
}
child { node {$\op{OR}$}
child { node {$x_{n,1}$} }
child { node {$x_{n,2}$} }
child { node {$\cdots$} }
child { node {$x_{n,m}$} }
};
\caption{\label{Fig:ParityOr}Computation of $\op{PARITY}\circ\op{OR}$ on $n\times m$ bits $x_{i,j}\in\{0,1\}$.}
\end{figure}
The Hamiltonian ${H}_{\op{PARITY}}$ by~\cite{Berry2015Hamiltonian} is constructed from a simpler $(n+1)\times(n+1)$ Hamiltonian ${H}_{\mathrm{spin}}$ that acts on basis states $\{\ket{j}_s : j\in\{0,\cdots,n\}\}$ with non-zero matrix elements $\bra{j-1}_s{H}_{\mathrm{spin}}\ket{j}_s
=
\bra{j}_s{H}_{\mathrm{spin}}\ket{j-1}_s
=\sqrt{j(n-j+1)}/n$ for $j\in\{1,\cdots, n\}$. Thus
\begin{align}
{H}_{\mathrm{spin}}=\sum_{j\in\{1,\cdots, n\}}\frac{\sqrt{j(n-j+1)}}{n}\ket{j-1}\bra{j}_{s} + \op{h.c.}.
\end{align}
The transitions generated by this Hamiltonian may be represented by the graph in~\cref{Fig:H_parity_graph}a, where basis states are nodes, and non-zero matrix elements are edges between nodes. The Hamiltonian has the useful property that time-evolution by ${H}_{\mathrm{spin}}$ for time $\frac{n\pi}{2}$ transfers the state $\ket{0}_s$ to $\ket{n}_s=e^{-i{H}_{\mathrm{spin}}n\pi/2}\ket{0}_s$, after passing through all intermediate nodes.
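This perfect state transfer is easy to verify on small instances by dense linear algebra; the Python sketch below (illustrative $n$) checks that $|\bra{n}e^{-i{H}_{\mathrm{spin}}n\pi/2}\ket{0}_s|=1$:

```python
import numpy as np

n = 6
H = np.zeros((n + 1, n + 1))
for j in range(1, n + 1):                  # couplings sqrt(j(n-j+1))/n
    H[j - 1, j] = H[j, j - 1] = np.sqrt(j * (n - j + 1)) / n

w, V = np.linalg.eigh(H)                   # e^{-iHt} via eigendecomposition
U = V @ np.diag(np.exp(-1j * w * n * np.pi / 2)) @ V.conj().T

psi = U[:, 0]                              # evolve the state |0>_s
assert np.isclose(abs(psi[n]), 1.0)        # all amplitude has moved to |n>_s
```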
\begin{figure} [t]
\centering
\begin{tikzpicture}
\node [set=label] (m0) at (-2,4) {a)};
\foreach \i in {0,...,9} {
\node [set=label] (l\i) at (\i,3.7) {$\ket{\i}_s$};
\node [circle, draw=black, fill=black, inner sep=2pt] (x\i) at (\i,3) {};
}
\graph{
(x0) -- (x9);
};
\node [set=label] (m0) at (-2,2) {b)};
\foreach \i in {0,...,9} {
\node [set=label] (l\i) at (\i,1.7) {$\ket{\i}_s$};
\node [circle, draw=black, fill=black, inner sep=2pt] (x\i) at (\i,1) {};
\node [circle, draw=black, fill=black, inner sep=2pt] (y\i) at (\i,0) {};
}
\node [set=label] () at (-1,1) {$\ket{0}_{\mathrm{out}}$};
\node [set=label] () at (-1,0) {$\ket{1}_{\mathrm{out}}$};
\graph{
(y0) -- (y1) -- (x2) -- (x3) -- (y4) -- (y5) -- (y6) -- (x7) -- (y8) -- (y9);
(x0) -- (x1) -- (y2) -- (y3) -- (x4) -- (x5) -- (x6) -- (y7) -- (x8) -- (x9);
};
\node [set=label] (m0) at (-1,-0.6) {$x_j=$};
\foreach \i/\val in {0/0,1/1,2/0,3/1,4/0,5/0,6/1,7/1,8/0} {
\node [set=label] (m0) at (\i+0.5,-0.5) {$\val$};
}
\end{tikzpicture}
\caption{\label{Fig:H_parity_graph}a) Graph representation of non-zero matrix elements of the Hamiltonian ${H}_{\mathrm{spin}}$ with $n=9$. Evolution under ${H}_{\mathrm{spin}}$ for time $n\pi/2$ transfers state $\ket{0}_s$ to $\ket{9}_s$. b) Graph representation of non-zero matrix elements of the Hamiltonian ${H}_{\mathrm{PARITY}}$ with $n=9$, obtained by composing ${H}_{\mathrm{spin}}$ element-wise with ${H}_{\mathrm{NOT},j}$, which depends on the $j^{\text{th}}$ value of the bit-string $x\in\{0,1\}^n$. Evolution under ${H}_{\mathrm{PARITY}}$ for time $n\pi/2$ transfers state $\ket{0}_s\ket{z}_\mathrm{out}$ to $\ket{9}_s\ket{z\oplus\bigoplus_j x_j}_\mathrm{out}$.}
\end{figure}
We now modify ${H}_{\mathrm{spin}}$ based on a bit-string $x\in\{0,1\}^{n}$. For each bit $x_j$, consider the $2\times 2$ Hamiltonian $H_{\op{NOT},j}$ that acts on basis states $\{\ket{k}_{\mathrm{out}}:k\in\{0,1\}\}$, with matrix elements defined by
\begin{align}
{H}_{\mathrm{NOT},j}=\left(
\begin{matrix}
x_j \oplus 1 & x_j \\
x_j & x_j \oplus 1
\end{matrix}
\right).
\end{align}
Observe that ${H}_{\mathrm{NOT},j}\ket{k}_\mathrm{out}=\ket{k\oplus x_j}_\mathrm{out}$. Composing ${H}_{\mathrm{spin}}$ with ${H}_{\mathrm{NOT},j}$ defines the $(2n+2)\times (2n+2)$ Hamiltonian ${H}_{\mathrm{PARITY}}$ in the following manner
\begin{align}
\label{Eq:Ham_Parity}
{H}_{\mathrm{PARITY}}=
\sum_{j\in\{1,\cdots, n\}}\frac{\sqrt{j(n-j+1)}}{n}\ket{j-1}\bra{j}_{s}\otimes {H}_{\mathrm{NOT},j} + \op{h.c.}.
\end{align}
This Hamiltonian is represented by the graph in~\cref{Fig:H_parity_graph}b. Note the two disjoint paths connecting states $\ket{0}_s$ and $\ket{n}_s$. As the path taken by an initial state $\ket{0}_s\ket{0}_\mathrm{out}$ depends on the parity of $x$, time-evolution by ${H}_{\mathrm{PARITY}}$ for time $\frac{n\pi}{2}$ transfers the state $\ket{0}_s\ket{0}_\mathrm{out}$ to $\ket{n}_s\ket{\bigoplus_j x_j}_\mathrm{out}$. This computes the parity of $x$, and the answer is obtained by measuring the `$\op{out}$' register.
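A direct numerical check of this parity computation (illustrative $n$ and bit-string; dense matrices suffice at this size):

```python
import numpy as np

n = 5
x = np.array([1, 0, 1, 1, 0])              # illustrative bit-string, parity = 1
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])     # H_{NOT,j} is I if x_j = 0, else X

H = np.zeros((2 * (n + 1), 2 * (n + 1)))
for j in range(1, n + 1):
    c = np.sqrt(j * (n - j + 1)) / n
    hop = np.zeros((n + 1, n + 1))
    hop[j - 1, j] = 1.0
    blk = np.kron(hop, X if x[j - 1] else I2)
    H += c * (blk + blk.T)                 # |j-1><j| (x) H_{NOT,j} + h.c.

w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * n * np.pi / 2)) @ V.conj().T
psi = U[:, 0]                              # evolve |0>_s |0>_out
parity = int(x.sum() % 2)
assert np.isclose(abs(psi[2 * n + parity]), 1.0)
```

With the ordering $\ket{j}_s\ket{k}_\mathrm{out}$ mapped to index $2j+k$, the final amplitude concentrates on $\ket{n}_s\ket{\bigoplus_j x_j}_\mathrm{out}$ up to a global phase.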
The $2m\times 2m$ Hamiltonian ${H}_{\op{OR}}$ by~\cite{Low2017USA} computes the $\op{OR}$ of a bit-string $x\in\{0,1\}^m$, assuming that at most one bit is non-zero. In the basis $\{\ket{k}_{\mathrm{out}}\ket{l}_{o}:k\in\{0,1\},l\in\{1,\cdots,m\}\}$, its matrix elements are defined by
\begin{align}
\label{eq:Ham_OR}
{H}_{\mathrm{OR}}&=\left(
\begin{array}{c|c}
{C}_{ 1} & {C}_{0} \\
\hline
{C}^\dag_{0} & {C}_{1}
\end{array}
\right),\\\nonumber
{C}_{0}&=\left(
\begin{array}{cccc}
x_1 & x_2 & \cdots &x_{m} \\
x_{m} & x_{1} & \cdots & x_{m-1}\\
x_{m-1} & x_{m} & \cdots & x_{m-2}\\
\vdots & \vdots & \ddots & \vdots \\
x_{2} & x_{3} & \cdots & x_{1}
\end{array}
\right),
\quad
{C}_{1} = \frac{1}{m}\left(
\begin{array}{cccc}
1 & 1 & \cdots &1 \\
1 & 1 & \cdots &1 \\
\vdots & \vdots & \ddots &\vdots \\
1 & 1 & \cdots &1
\end{array}
\right)
-\frac{{C}_{0}+{C}_{0}^\dag}{2}.
\end{align}
It is easy to verify that if at most one bit in $x$ is non-zero, ${H}_{\mathrm{OR}}\ket{k}_\mathrm{out}\ket{u}_o=\ket{k\oplus \mathrm{OR}(x)}_\mathrm{out}\ket{u}_o$, where $\ket{u}_o=\frac{1}{\sqrt{m}}\sum_{l\in\{1,\cdots,m\}}\ket{l}_o$ is a uniform superposition state.
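The verification is a short computation; the sketch below builds $C_0$, $C_1$ and ${H}_{\mathrm{OR}}$ exactly as displayed and checks the claimed action for both the all-zero input and a one-hot input (illustrative $m$):

```python
import numpy as np

m = 5
u = np.ones(m) / np.sqrt(m)                # uniform superposition |u>_o
for hot in (None, 2):                      # OR(x) = 0 and OR(x) = 1 cases
    x = np.zeros(m)
    if hot is not None:
        x[hot] = 1.0
    # circulant C_0: row i is x cyclically shifted right by i
    C0 = np.array([np.roll(x, i) for i in range(m)])
    C1 = np.ones((m, m)) / m - (C0 + C0.T) / 2
    H = np.block([[C1, C0], [C0.T, C1]])

    for k in (0, 1):
        vin = np.kron(np.eye(2)[k], u)     # |k>_out |u>_o
        kout = k ^ int(x.any())            # k XOR OR(x)
        vout = np.kron(np.eye(2)[kout], u)
        assert np.allclose(H @ vin, vout)
```

The key facts are $C_0 u = (\sum_i x_i)\,u$ and $C_1 u = (1-\sum_i x_i)\,u$, so with at most one non-zero bit the two blocks act as a controlled bit-flip on the `out' register.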
As the function $\op{PARITY}\circ\op{OR}$ we consider acts on $n\times m$ bits, let ${H}_{\mathrm{OR},j}$ be the Hamiltonian ${H}_{\mathrm{OR}}$ of~\cref{eq:Ham_OR}, except that the input bit-string $x$ is replaced by the $j^{\text{th}}$ set of $m$ bits $x_j=(x_{j,1},\cdots,x_{j,m})\in\{0,1\}^m$. By replacing each ${H}_{\mathrm{NOT},j}$ with ${H}_{\mathrm{OR},j}$, we obtain the $2m(n+1)\times 2m(n+1)$ Hamiltonian
\begin{align}
\label{Eq:Ham_Parity_OR}
{H}_{\mathrm{PARITY}\circ \mathrm{OR}}=
\sum_{j\in\{1,\cdots, n\}}\frac{\sqrt{j(n-j+1)}}{n}\ket{j-1}\bra{j}_{s}\otimes {H}_{\mathrm{OR},j} + \op{h.c.}.
\end{align}
Time-evolution by ${H}_{\mathrm{PARITY}\circ \mathrm{OR}}$ for time $\frac{n\pi}{2}$ transforms the state
\begin{align}
e^{-i{H}_{\mathrm{PARITY}\circ \mathrm{OR}}n\pi/2} \ket{0}_s\ket{u}_o\ket{0}_\mathrm{out} = \ket{n}_s\ket{u}_o\ket{\oplus_j \mathrm{OR}(x_j)}_\mathrm{out},
\end{align}
thus measuring the `$\op{out}$' register returns the parity of $n$ bits, where each bit is the disjunction of $m$ bits.
As the desired lower bound requires us to independently vary three parameters, we introduce one final modification. Let ${H}_{\mathrm{complete}}$ be an $s\times s$ Hamiltonian whose matrix elements are all $1$ in the basis $\{\ket{i}_c:i\in\{1,\cdots,s\}\}$. We now take the tensor product of ${H}_{\mathrm{PARITY}\circ \mathrm{OR}}$ with ${H}_{\mathrm{complete}}$, yielding the $2sm(n+1)\times 2sm(n+1)$ Hamiltonian
\begin{align}
{H}={H}_{\mathrm{PARITY}\circ \mathrm{OR}}\otimes {H}_{\mathrm{complete}}.
\end{align}
As the uniform superposition state $\ket{u}_c=\frac{1}{\sqrt{s}}\sum_{i\in\{1,\cdots,s\}}\ket{i}_c$ is an eigenstate of ${H}_{\mathrm{complete}}$ with eigenvalue $s$, that is, ${H}_{\mathrm{complete}}\ket{u}_c=s\ket{u}_c$, time-evolution by ${H}$ with initial state $\ket{0}_s\ket{u}_o\ket{u}_c\ket{0}_\mathrm{out}$ performs the same computation as~\cref{Eq:Ham_Parity_OR}, but in the shorter time $\frac{n\pi}{2s}$.
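The time rescaling follows directly from $\ket{u}_c$ being an eigenstate with eigenvalue $s$; a small numeric check, with a stand-in Hermitian $A$ in place of ${H}_{\mathrm{PARITY}\circ \mathrm{OR}}$ (illustrative sizes):

```python
import numpy as np

def expmi(H, t):
    """e^{-iHt} for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(2)
s, n = 4, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # stand-in Hermitian matrix
Hc = np.ones((s, s))                                 # complete-graph Hamiltonian
u = np.ones(s) / np.sqrt(s)

assert np.allclose(Hc @ u, s * u)                    # |u>_c has eigenvalue s
psi = rng.standard_normal(n); psi /= np.linalg.norm(psi)
t = 0.37
lhs = expmi(np.kron(A, Hc), t / s) @ np.kron(psi, u) # evolve for time t/s
rhs = np.kron(expmi(A, t) @ psi, u)                  # equals evolving A for time t
assert np.allclose(lhs, rhs)
```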
By varying the problem size through the number of bits $m,n$, and the dimension $s$, we may express the query lower bound $\Omega(n\sqrt{m})$ in terms of the evolution time $t$, the sparsity $d$ of $H$, and the subordinate norm $\|H\|_{1 \shortto 2}$. We note the following facts:
\begin{itemize}
\item The max-norm $\|H\|_\mathrm{max}=\mathcal{O}(1)$.
\item The evolution time $t=\Theta\left(\frac{n}{s}\right)$.
\item The sparsity of $H$ is $d=\Theta\left(ms\right)$.
\item The subordinate norm of $H$ is $\|H\|_{1 \shortto 2}=\max_{j}\sqrt{\sum_{i}|H_{ij}|^2}
=\mathcal{O}\left(\sqrt{s}\right)
$.
\end{itemize}
Substituting these parameters into the lower bound, we obtain the stated quantum query complexity for sparse Hamiltonian simulation of
\begin{align}
\Omega(n\sqrt{m})=\Omega(ts\sqrt{m})=\Omega(t\sqrt{d}\sqrt{s})=\Omega(t\sqrt{d}\|H\|_{1 \shortto 2}).
\end{align}
\end{proof}
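The final substitution is elementary arithmetic and can be spot-checked: with $t=n/s$, $d=ms$, and $\|H\|_{1 \shortto 2}=\sqrt{s}$, one has $t\sqrt{d}\,\|H\|_{1 \shortto 2}=n\sqrt{m}$ identically.

```python
import math

# spot-check n*sqrt(m) == t*sqrt(d)*||H||_{1->2} with t = n/s, d = m*s, norm = sqrt(s)
for n, m, s in [(8, 4, 2), (100, 9, 5), (7, 16, 7)]:
    t, d, norm12 = n / s, m * s, math.sqrt(s)
    assert math.isclose(t * math.sqrt(d) * norm12, n * math.sqrt(m))
```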
\section{Application to black-box unitaries}
\label{sec:blackboxunitary}
Following~\cite{Jordan2009,Berry2012}, the black-box unitary problem reduces to an instance of Hamiltonian simulation, which we implement using the algorithm of~\cref{thm:HamSim_sparse}. The reduction is straightforward.
\begin{proof}[Proof of~\cref{thm:BlackBoxUnitary}.]
For any $N\times N$ unitary operator $U$ that acts on basis states $\{\ket{j}_u:j\in\{0,\cdots,N-1\}\}$, let us define a $2N\times 2N$ Hamiltonian $H$ that acts on basis states $\{\ket{k}_h\ket{j}_u:k\in\{0,1\},j\in\{0,\cdots,N-1\}\}$:
\begin{align}
\label{eq:BlackBoxUnitaryHamiltonian}
{H}&=\left(
\begin{array}{cc}
0 & U \\
U^\dag & 0
\end{array}
\right).
\end{align}
Using the fact $H^2 = \ii$, the time-evolution operator generated by $H$ has a simple form:
\begin{align}
e^{-iHt}=\cos{(t)}-i\sin{(t)}H
\quad\Rightarrow\quad
e^{-iH\pi/2}=-iH.
\end{align}
Thus we may apply $U$ to an arbitrary state $\ket{\psi}_u$, up to a global phase, by applying
\begin{align}
e^{-iH\pi/2}\ket{1}_h\ket{\psi}_u=-i \ket{0}_hU\ket{\psi}_u.
\end{align}
The $-i\ket{0}_h$ state may be converted to $\ket{1}_h$ by a single Pauli $Y$ gate. Thus the cost of applying $U$ reduces to the cost of simulating the Hamiltonian $H$ for constant time $t=\pi/2$.
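This reduction is straightforward to verify numerically; the sketch below draws a random unitary, checks $H^2=I$, and confirms the Pauli-$Y$ correction (illustrative dimension):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
# random unitary via QR decomposition of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
H = np.block([[np.zeros((N, N)), Q], [Q.conj().T, np.zeros((N, N))]])
assert np.allclose(H @ H, np.eye(2 * N))             # H^2 = I, so e^{-iHπ/2} = -iH

psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)
e1 = np.array([0.0, 1.0])                            # |1>_h
out = (-1j * H) @ np.kron(e1, psi)                   # e^{-iHπ/2} |1>_h |psi>_u
assert np.allclose(out, -1j * np.kron([1.0, 0.0], Q @ psi))

Y = np.array([[0, -1j], [1j, 0]])
corrected = np.kron(Y, np.eye(N)) @ out              # Pauli Y on the h register
assert np.allclose(corrected, np.kron(e1, Q @ psi))
```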
Given that matrix positions and values of a $d$-sparse $U$ are described by the sparse matrix oracles of~\cref{def:Sparse_Oracle}, $\mathcal{O}(1)$ queries suffice to synthesize black-box oracles that describe the $d$-sparse Hamiltonian $H$ of~\cref{eq:BlackBoxUnitaryHamiltonian}. As $U$ is unitary, this Hamiltonian has max-norm $\|H\|_\mathrm{max}=\|U\|_\mathrm{max}\le\Lambda_\mathrm{max}=1$, and subordinate norm $\|H\|_{1 \shortto 2}=\|U\|_{1 \shortto 2}\le\|U\|\le \Lambda_{1 \shortto 2}=1$. By substituting these parameters into~\cref{thm:HamSim_sparse}, we immediately obtain the query complexity of
\begin{align}
\mathcal{O}(\sqrt{d}(d/\epsilon)^{o(1)})
\end{align}
for approximating black-box unitaries.
\end{proof}
\section{Application to sparse systems of linear equations}
\label{sec:blackboxQLSP}
Following~\cite{Childs2015LinearSystems,Chakraborty2018BlockEncoding}, the cost of solving systems of linear equations depends primarily on the cost of Hamiltonian simulation, which we once again implement using the algorithm of~\cref{thm:HamSim_sparse}. As details of the reduction are quite involved, we only sketch the proof, and obtain~\cref{thm:BlackBoxQLSP} by invoking results in prior art.
A system of linear equations is described by a matrix $A\in\mathbb{C}^{N\times N}$ typically characterized by spectral norm $\|A\|=1$ and condition number $\|A^{-1}\|\le\kappa$, and an input vector $\ket{b}\in\mathbb{C}^{N}$. This is solved by preparing a state proportional to $A^{-1}\ket{b}$. Without loss of generality, we may assume that $A$ is Hermitian~\cite{Harrow2009}. Further assuming that $A/\alpha$ is block-encoded by a unitary $U_A$, as in~\cref{Def:Standard_Form}, one may use linear-combination-of-unitaries~\cite{Childs2015LinearSystems} or quantum signal processing~\cite{Haah2018product} to block-encode $A^{-1}/\kappa$ in a unitary $U_{A^{-1}}$. Generally, $U_{A^{-1}}$ may be approximated to error $\epsilon$, meaning that $\|A^{-1}-\kappa (\bra{0}\otimes I)U_{A^{-1}}(\ket{0}\otimes I)\|\le\epsilon$, using $\mathcal{O}(\alpha \kappa \operatorname{polylog}{(\alpha\kappa/\epsilon)})$ queries to $U_A$.
In the basic approach, applying $U_{A^{-1}}\ket{0}\ket{b}\approx\ket{0}\frac{A^{-1}}{\kappa}\ket{b}+\cdots$ approximates the desired state, but with a worst-case success probability of $\mathcal{O}(\kappa^{-2})$. Thus $\mathcal{O}(\kappa)$ rounds of amplitude amplification are required to obtain the desired state with $\mathcal{O}(1)$ success probability. By multiplying these factors, this leads to an overall query complexity of $\mathcal{O}(\alpha \kappa^2 \operatorname{polylog}{(\alpha\kappa/\epsilon)})$ to $U_A$ and the unitary oracle $O_b\ket{0}=\ket{b}$ that prepares the input state $\ket{b}$. However, this may be improved to $\mathcal{O}(\alpha \kappa \operatorname{polylog}{(\alpha\kappa/\epsilon)})$ using a more sophisticated approach based on variable-time amplitude amplification as follows.
\begin{restatable}[Variable-time quantum linear systems algorithm; adapted from Lemma 27 of~\cite{Chakraborty2018BlockEncoding}]{lemma}{VTAQLSP}
\label{thm:QLSP_Chakraborty}
Let $\ket{b}\in\mathbb{C}^N$, $\kappa=\Omega(1)$, and $A\in\mathbb{C}^{N\times N}$ be a Hermitian matrix such that $\|A\|=1$ and $\|A^{-1}\|\le\kappa$. Suppose that $A/\alpha$ is block-encoded by a unitary $U_A$ with error $o(\epsilon/\operatorname{poly}(\kappa,\log{(1/\epsilon)}))$, and that $\ket{b}$ is prepared by a unitary oracle $O_b\ket{0}=\ket{b}$. Then there exists a quantum algorithm that outputs a state $\ket{\psi}$ such that $\left\|\ket{\psi}-\frac{A^{-1}\ket{b}}{\|A^{-1}\ket{b}\|}\right\|\le\epsilon$. The query complexity of this algorithm to $U_A$ and $O_b$ is
\begin{align}
\mathcal{O}(\alpha \kappa \operatorname{polylog}{(\alpha\kappa/\epsilon)}).
\end{align}
\end{restatable}
\begin{proof}[Proof of~\cref{thm:BlackBoxQLSP}]
Now assuming that $A$ is $d$-sparse and described by the black-box oracles of~\cref{def:Sparse_Oracle}, our simulation algorithm~\cref{thm:HamSim_sparse} allows us to approximate the time-evolution operator $e^{-iA/2}$ with error $\delta$ using $\mathcal{O}(\sqrt{d}(d/\delta)^{o(1)})$ queries. Note that we have used the fact that $\|A\|=1$ to bound the subordinate norm input $\Lambda_{1 \shortto 2}=1$ required by the algorithm. By taking a matrix logarithm of this operator using Theorem 9 of~\cite{Low2017USA}, $A/\alpha$ may be block-encoded with $\alpha=\Theta(1)$ in a unitary $U_A$ without affecting the query complexity. Thus by invoking~\cref{thm:QLSP_Chakraborty}, which requires $\delta=o(\epsilon/\operatorname{poly}(\kappa,\log{(1/\epsilon)}))$, an $\epsilon$-approximation of the state $\frac{A^{-1}\ket{b}}{\|A^{-1}\ket{b}\|}$ may be prepared. The query complexity is obtained by multiplication, and is
\begin{align}
\mathcal{O}(\sqrt{d}(d/\delta)^{o(1)})\times\mathcal{O}(\kappa \operatorname{polylog}{(\kappa/\epsilon)})=\mathcal{O}(\kappa\sqrt{d}(\kappa d/\epsilon)^{o(1)}).
\end{align}
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
Our algorithm for sparse Hamiltonian simulation combines ideas from simulation in the interaction picture with uniform spectral amplification. In applications such as solving systems of linear equations or implementing black-box unitaries, one often simulates a sparse Hamiltonian where the time $t$, sparsity $d$, and subordinate norm $\|H\|_{1 \shortto 2}$ naturally describe the problem. Given these, our algorithm scales like $\mathcal{O}\left(t\sqrt{d}\|H\|_{1 \shortto 2}(t\sqrt{d}\|H\|_{1 \shortto 2}/\epsilon)^{o(1)}\right)$, which is optimal up to subpolynomial factors. Moreover, one is allowed to substitute these parameters for any weaker set of constraints, such as from a well-known sequence of tight norm inequalities for sparse Hamiltonians~\cite{Childs2010Limitation},
\begin{align}
\nonumber
\|H\|_\mathrm{max} \le \|H\|_{1 \shortto 2} \le \|H\| \le \|\op{abs}(H)\|\le\|H\|_1\le\sqrt{d}\|H\|_{1 \shortto 2}\le\sqrt{d\|H\|_\mathrm{max}\|H\|_1}\le d\|H\|_\mathrm{max}.
\end{align}
Thus our algorithm generalizes, and in some cases strictly improves, prior art scaling with parameters $(d,\|H\|_\mathrm{max})$~\cite{Berry2015Hamiltonian,Low2016HamSim} or $(d,\|H\|_\mathrm{max},\|H\|_1)$~\cite{Low2017USA} or $(d,\|H\|)$~\cite{Berry2012}.
This greatly narrows the interval containing ultimate bounds on the complexity of sparse Hamiltonian simulation in the black-box setting. Known lower bounds forbid algorithms scaling like $\mathcal{O}(t\op{poly}(\|H\|))$ or even $\mathcal{O}(t\sqrt{d}\|H\|/\operatorname{polylog}{(d)})$, so it would be interesting to pin this down, or improve the subpolynomial factors in our upper bound. Another useful direction would be to consider the simulation of structured Hamiltonians. For instance, some algorithms are highly successful at exploiting the geometric locality~\cite{Haah2018quantum} of certain Hamiltonians, or Hamiltonians with a large separation of energy scales~\cite{Low2018IntPicSim}. Within the black-box setting, the main challenge would be to identify parameters that are sufficiently structured so as to enable a speedup, yet sufficiently general so as to describe problems of interest.
\textbf{Acknowledgements} -- We thank Dominic Berry, Andrew Childs, Robin Kothari, and Yuan Su for insightful discussions. In particular, we thank Nathan Wiebe for the idea of interaction picture simulation~\cite{Low2018IntPicSim}.
\label{sec:con}
In this paper, we proposed a novel junction detector, ASJ, which exploits the anisotropy of junctions by estimating the endpoints (and hence the lengths) of the branches of isotropic-scale junctions in a more global manner, targeting indoor images that are dominated by junction structures. We then devised an affine-invariant dissimilarity measure to match these anisotropic-scale junctions across different images. We tested our method on a collected indoor image dataset and compared its performance with several current state-of-the-art methods. The results demonstrated that our approach establishes new state-of-the-art performance on this dataset.
\section{Experimental Analysis}
\label{sec:exp}
This section presents the results and analysis of the ASJ detection and matching routines, with comparisons to existing approaches for junction detection, junction matching, key-point matching, and line-segment correspondence. In our experiments, we first detect anisotropic-scale junctions following the procedures presented in Section \ref{sec:detection}, and then match junctions using the affine homography induced by these semi-local geometrical structures.
\vspace{-1mm}
\subsection{Stability and Control of the Number of False Detection}
The \emph{a-contrario} approaches detect meaningful events controlled by the threshold $\epsilon$: it bounds the average number of false detections in an image following the null hypothesis. In this subsection, we check the average number of false detections in Gaussian noise images and illustrate the results of detected ASJs for a fixed threshold $\epsilon$.
Experimentally, we generate $1000$ random images of $256\times 256$ pixels whose pixel values are drawn independently from a standard Gaussian distribution. For each pixel, we draw an orientation uniformly at random from the interval $[0,2\pi)$ and estimate the scale at this pixel along that orientation. Ideally, no meaningful line-segment structure appears in such random images; any structure that is nonetheless detected is counted as a false detection, averaged over the images. If the average number of false detections is indeed bounded by the \emph{NFA} threshold, the proposed \emph{a-contrario} model is validated.
\begin{figure*}[htb!]
\centering
\subfigure[original images]{
\includegraphics[height=0.2\linewidth]{figures/cmpimg1}
\includegraphics[height=0.2\linewidth]{figures/cmpimg2}
\includegraphics[height=0.2\linewidth]{figures/cmpimg3}
}
\subfigure[repeatability rate with respect to scale changes]{
\includegraphics[height=0.24\linewidth]{figures/cmpimg1-rep}
\includegraphics[height=0.24\linewidth]{figures/cmpimg2-rep}
\includegraphics[height=0.24\linewidth]{figures/cmpimg3-rep}
}
\caption{Repeatability rate with respect to scale change. Original images to generate image sequences are shown in the first row. In the second row, the repeatability is shown as a function of scale factors.}
\label{fig:rep-exp}
\end{figure*}
\begin{table}[htb!]
\centering
\caption{Average number of false detections in $1000$ images generated by Gaussian white noise}
\scriptsize
\begin{tabular}{c|cccccc}
\hline
$\epsilon$ & 0.01 & 0.1 & 1 & 10 & 100 & 200\\
\hline
\hline
Avg.
False & 0.002 & 0.006 & 0.198 & 5.923 & 66.472 & 132.676
\\
\hline
\end{tabular}
\label{tab:nfa-gaussian}
\end{table}
The average number of false detections over $1000$ Gaussian noise images is reported in Tab.~\ref{tab:nfa-gaussian}. The NFA threshold $\epsilon$ is varied in our experiments from $10^{-2}$ to $10^2$, and the corresponding average numbers of false detections are upper bounded by $\epsilon$.
\subsection{Comparison with ACJ}
Since our detector extends the \emph{a-contrario} model for scale estimation, it is necessary to compare the repeatability of our proposed ASJ with that of ACJ to discuss their differences.
Following the baseline experiments proposed in~\cite{XiaDG14}, the test images are first zoomed with different factors to form image sequences with scale change. Then, ASJ and ACJ are run on these sequences to detect junctions. The repeatability of ACJ is discussed in~\cite{XiaDG14}; however, their definition of corresponding junctions only considers the locations and branches of junctions while ignoring scale coherence. Therefore, we define corresponding ACJs and ASJs with scale information here. For the original image $I_0$ and the scaled image $I_s = s(I_0)$, corresponding ACJ junctions should have close locations, branch orientations and scales. Moreover, two junctions with different numbers of branches cannot be identified as a correspondence. More precisely, two ACJ junctions $\jmath_1 = \{\bm{p}_1,r_1,\{\theta_i\}_{i=1}^M\}$ and $\jmath_2 = \{\bm{p}_2,r_2,\{\theta_i^s\}_{i=1}^M\}$ detected in $I_0$ and $I_s$ are in correspondence if they satisfy
\begin{equation}
\left\|s\cdot \bm{p}_1-\bm{p}_2\right\|_2 < 3,
\label{eq:corresponding-loc}
\end{equation}
\begin{equation}
\left|s\cdot r_1-r_2\right|<3,
\label{eq:corresponding-iso-scale}
\end{equation}
\begin{equation}
\max_{\theta\in\{\theta_i\}_{i=1}^M}\min_{\theta'\in\{\theta_i^s\}_{i=1}^M} d_{2\pi}(\theta,\theta') < \frac{\pi}{20},
\label{eq:corresponding-ang}
\end{equation}
where the angular distance $d_{2\pi}(\theta,\theta') = \min(\left|\theta-\theta'\right|,2\pi-\left|\theta-\theta'\right|)$.
Similarly, two junctions $\jmath_1 = \{\bm{p}_1,\{r_i,\theta_i\}_{i=1}^M\}$ and $\jmath_2 = \{\bm{p}_2,\{r_i^s,\theta_i^s\}_{i=1}^M\}$ detected by ASJ are in correspondence if they satisfy the inequalities \eqref{eq:corresponding-loc} and \eqref{eq:corresponding-ang} as well as
\begin{equation}
\max_{r\in\{r_i\}_{i=1}^M}\min_{r'\in\{r_i^s\}_{i=1}^M} \left|s\cdot r-r'\right| < 3.
\end{equation}
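The correspondence criteria above translate directly into a predicate. The sketch below assumes a junction is stored as a tuple \texttt{(center, branches)} with \texttt{branches} a list of \texttt{(r, theta)} pairs; this representation is an assumption for illustration:

```python
import numpy as np

def d2pi(a, b):
    """Angular distance on the circle [0, 2*pi)."""
    d = abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def asj_correspond(j1, j2, s, loc_tol=3.0, scale_tol=3.0, ang_tol=np.pi / 20):
    """Check whether ASJ j1 (from I_0) and j2 (from I_s = s(I_0)) correspond.

    Each junction is (p, branches) with p an (x, y) pair and branches a
    list of (r, theta) pairs."""
    p1, b1 = j1
    p2, b2 = j2
    if len(b1) != len(b2):  # branch counts must agree
        return False
    if np.linalg.norm(s * np.asarray(p1) - np.asarray(p2)) >= loc_tol:
        return False
    # max-min criteria: every branch of j1 must find a close branch in j2,
    # both in orientation and in (rescaled) anisotropic scale.
    ang_ok = all(min(d2pi(t, t2) for _, t2 in b2) < ang_tol for _, t in b1)
    scl_ok = all(min(abs(s * r - r2) for r2, _ in b2) < scale_tol for r, _ in b1)
    return ang_ok and scl_ok
```

The repeatability rate for a scale factor $s$ is then the fraction of detected junctions that find a correspondent under this predicate.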
In this experiment, the set of scale factors is $\left\{1.0, 0.9, 0.8, \ldots, 0.3\right\}$ and the results are shown in Fig.~\ref{fig:rep-exp}. The repeatability curves show that our proposed ASJ performs better than ACJ. The repeatability rate reported in \cite{XiaDG14} is higher; however, it only accounts for the accuracy of junction locations and branch orientations, whereas our experiment also takes the scale difference into account.
As reported in \cite{XiaDG14}, the scale of an ACJ represents the length of its shortest branch and varies roughly linearly with the scale factor. Theoretically, if a detected ACJ has scale $r$ in the original image, the scale of its correspondence in the scaled image $s(I)$ should be close to $s\cdot r$. However, the ACJ algorithm requires an upper bound on the scale as input, which is recommended to be a constant in the range $[12,30]$~\cite{XiaDG14} for the sake of computational speed. As a matter of fact, junctions in indoor images usually have large-scale branches whose lengths cannot be bounded by a relatively small constant.
To demonstrate this fact, we compare the detected junctions in Fig.~\ref{fig:scale-inaccurate}. In this experiment, junctions are first detected by ACJ in the original image $I$ and in the image $s(I)$ scaled by the factor $s$. Then we find the corresponding ACJs in the image pair using the inequalities \eqref{eq:corresponding-ang} and \eqref{eq:corresponding-loc} while ignoring the inequality \eqref{eq:corresponding-iso-scale}. To compare the scales of junctions with respect to the factor $s$, all correspondences are shown with colored circles.
In Fig.~\ref{fig:scale-inaccurate}, for a junction with scale $r_1$ in the original image, its correspondence $\jmath_2 = \{\bm{p}_2,r_2,\{\theta_i^s\}_{i=1}^M\}$ in the image $s(I)$ is shown with a yellow circle of radius $r_1\cdot s$. The red circle and green line segments depict the junction $\jmath_2$ itself. We can observe several correspondences that do not have consistent scales. If a junction is formed by line segments whose lengths exceed $1/s$ times the maximal radius threshold of ACJ, the scale of the junction equals the threshold in the original image. When the image is zoomed by the factor $s$, this scale does not decrease since it is still larger than the threshold. This explains why the repeatability is lower when the inequality \eqref{eq:corresponding-iso-scale} is included in the computation.
At the end of this subsection, some example results of the ASJ detector on indoor images are shown in Fig.~\ref{fig:example-asj}.
The \emph{anisotropic-scale} junctions are shown in the middle column and the results of ACJ are listed in the right column.
Observing the results, we find that ASJ is able to detect more geometric structure than ACJ. The anisotropic-scale branches of a junction can depict the layout of indoor scenes, whereas the results of ACJ only capture very local information. For example, for the rectangles in Fig.~\ref{fig:example-asj}, our ASJ can recover the boundaries of the rectangles while ACJ only detects the corner points and the orientations around them.
\begin{figure*}[htb!]
\centering
\includegraphics[height=0.13\textheight]{figures/scale-inaccurate1-a}
\includegraphics[height=0.13\textheight]{figures/scale-inaccurate2-a}
\includegraphics[height=0.13\textheight]{figures/scale-inaccurate3-a}
\includegraphics[height=0.13\textheight]{figures/scale-inaccurate1-b}
\includegraphics[height=0.13\textheight]{figures/scale-inaccurate2-b}
\includegraphics[height=0.13\textheight]{figures/scale-inaccurate3-b}
\caption{The scale consistency between the original image and scaled image. The \emph{yellow circles} represent the scale estimated in the scaled image with scale factor $s$ while \emph{red circles} represent the scales detected in original images. The scale factors are $s = 0.6$ for the top row and $s=0.3$ for the bottom.}
\label{fig:scale-inaccurate}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.25\textwidth]{figures/cmpimg1}
\includegraphics[width=0.25\textwidth]{figures/cmpimg1-ASJ}
\includegraphics[width=0.25\textwidth]{figures/cmpimg1-ACJ}
\\
\vspace{1mm}
\includegraphics[width=0.25\textwidth]{figures/nontexture}
\includegraphics[width=0.25\textwidth]{figures/nontexture-asj}
\includegraphics[width=0.25\textwidth]{figures/nontexture-acj}
\\
\vspace{1mm}
\includegraphics[width=0.25\textwidth]{figures/indoorshow}
\includegraphics[width=0.25\textwidth]{figures/indoorshow-ASJ}
\includegraphics[width=0.25\textwidth]{figures/indoorshow-ACJ}
\caption{Some results of ASJ for the input images in the first column are shown in the middle column. The junctions detected by ACJ are shown in the right column for comparison.}
\label{fig:example-asj}
\end{figure*}
\subsection{ASJ Matching}
\begin{figure*}[htb!]
\centering
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/a_im1.jpg}
\\
\includegraphics[height=0.07\textheight]{figures/a_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/b_im1.jpg}
\\
\includegraphics[height=0.07\textheight]{figures/b_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/c_im1.jpg}
\\
\includegraphics[height=0.07\textheight]{figures/c_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/d_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/d_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/e_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/e_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/f_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/f_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/g_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/g_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/h_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/h_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/i_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/i_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/j_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/j_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/k_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/k_im2.jpg}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.13\textwidth}
\includegraphics[height=0.07\textheight]{figures/l_im1.jpg}\\
\includegraphics[height=0.07\textheight]{figures/l_im2.jpg}
\end{minipage}
}
\caption{Some examples of the collected indoor images used for the comparison of different matching approaches.}
\label{fig:imageshow}
\end{figure*}
In order to evaluate our approach, we collect more than 100 images on which to run ASJ. Some of the collected images are from indoor 3D reconstruction datasets~\cite{FurukawaCSS09,SrajerSPP14} while others are taken by ourselves. As shown in Fig.~\ref{fig:imageshow}, the collected images contain less texture than natural images. Some of them exhibit large viewpoint changes and indistinct repeated-texture regions, such as Fig.~\ref{fig:imageshow}(b), Fig.~\ref{fig:imageshow}(i) and Fig.~\ref{fig:imageshow}(l).
We define two junctions as matched only if their centers and branch orientations are in correspondence.
In this sense, our matching result goes somewhat beyond local features and can be compared with
existing approaches in different settings:
\begin{itemize}
\item[-] It is comparable to keypoint matching methods, if we regard junctions as specific corner points with two orientations;
\item[-] It is also comparable to line segment matching ones, if we take junctions as several intersecting line segments.
\end{itemize}
For key-point matching, we compare the matched junctions with SIFT~\cite{Lowe04}, Affine-SIFT~\cite{MorelY09,YuM11}, Hessian-Affine~\cite{PerdochCM09}, and EBR and IBR from~\cite{TuytelaarsG04}.
Meanwhile, for matched line segments, we compare the matching accuracy with the existing approaches LPI~\cite{FanWH12B} and LJL~\cite{LiYLLZ16}, where a match is counted as correct only if the endpoints of the corresponding line segments agree. This rule is stricter for assessing line segment matching results. Interestingly, although LPI~\cite{FanWH12B} and LJL~\cite{LiYLLZ16} use outlier-free epipolar geometry to assist their line segment matchers, our proposed method achieves better accuracy without any epipolar geometry.
The implementations of Affine-SIFT~\cite{MorelY09,YuM11}, Hessian-Affine~\cite{PerdochCM09}, LPI~\cite{FanWH12B} and LJL~\cite{LiYLLZ16} are obtained from the authors' homepages. EBR and IBR~\cite{TuytelaarsG04} are obtained from VGG's website\footnote{\url{http://www.robots.ox.ac.uk/~vgg/research/affine/descriptors.html#binaries}}. The SIFT detector is the version provided by VLFeat\footnote{\url{http://www.vlfeat.org}}. The descriptor used in our experiments is SIFT, and mismatches are filtered by the ratio test with threshold $1.5$ on the $\ell_2$-distance for ASJ, SIFT~\cite{Lowe04}, Hessian-Affine~\cite{PerdochCM09}, EBR~\cite{TuytelaarsG04} and IBR~\cite{TuytelaarsG04}, which is the default threshold for computing matches from descriptors in VLFeat. Remarkably, the implementation of Affine-SIFT~\cite{MorelY09,YuM11} provided by its authors uses the threshold $1.33$ since it computes distances with the $\ell_1$ norm, and we keep this unchanged. Since the released code for Affine-SIFT~\cite{MorelY09,YuM11} filters outliers from the matched results, we remove this procedure for fairness, which makes the results in our experiments different from those of the released executable. All parameters of the compared approaches are set to the default values provided by their authors.
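The ratio-test filtering used here can be sketched as follows. This is a simplified re-implementation for illustration (using the convention of VLFeat's \texttt{vl\_ubcmatch}: a match is accepted when the second-nearest distance exceeds the threshold times the nearest one), not the VLFeat code itself:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, threshold=1.5):
    """Match rows of desc1 to rows of desc2 by nearest l2-distance, keeping
    a match only when the second-nearest distance exceeds `threshold` times
    the nearest one.  desc2 must contain at least two descriptors."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] * threshold < dists[second]:
            matches.append((i, int(best)))
    return matches
```

Replacing the $\ell_2$ norm with $\ell_1$ and the threshold with $1.33$ gives the variant used by the Affine-SIFT implementation.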
\begin{table*}[htb!]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\caption{Comparison of different matching methods. The number of correct matches, the number of total matches and the matching accuracy of our method are reported in the first row. The results of the key-point matching approaches SIFT~\cite{Lowe04}, Affine-SIFT~\cite{MorelY09,YuM11}, Hessian-Affine~\cite{PerdochCM09}, EBR~\cite{TuytelaarsG04} and IBR~\cite{TuytelaarsG04} are listed in the following rows. The average matching accuracy over all collected images is reported in the last column.}
\resizebox{\textwidth}{!}{
\begin{tabular}{{c|c|cccccccccccc|c}}
\hline
\multicolumn{2}{r|}{\multirow{2}*{\diagbox{Methods}{Image pairs}}} &
\multirow{2}*{(a)} &
\multirow{2}*{(b)} &
\multirow{2}*{(c)} &
\multirow{2}*{(d)} &
\multirow{2}*{(e)} &
\multirow{2}*{(f)} &
\multirow{2}*{(g)} &
\multirow{2}*{(h)} &
\multirow{2}*{(i)} &
\multirow{2}*{(j)} &
\multirow{2}*{(k)} &
\multirow{2}*{(l)} & \multirow{2}*{\tabincell{c}{Average\\accuracy}}\\
\multicolumn{2}{r|}{} & & & & & & & & & & & & \\\hline
\multirow{3}*{\tabincell{c}{Ours\\(Junctions)}} &
\#correct &
12 & 26 & 12 & 16 &50 &14 &197&65 &119 & 37 &17 &33
& \multirow{3}*{\bf{85.17}\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
12 & 29 & 13 & 20 & 60& 15& 214&69 & 121& 45&19 &40 & \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
\textbf{100.00} & \textbf{89.66} & \textbf{92.31} & \textbf{80.00} & \textbf{83.33} & 93.33 & 92.06 & 94.20 & \textbf{98.35} & \textbf{82.22} & 89.47 & \textbf{82.50} & \\\hline
\multirow{3}*{\tabincell{c}{SIFT~\cite{Lowe04}}} &
\#correct &
128 & 476 & 559 & 115 & 435 & 74 & 708 & 199 & 147 & 200 & 103 & 65 & \multirow{3}*{62.69\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
206 & 700 & 839 & 222 & 652 & 287 & 770 & 261 & 330 & 299 & 191 & 161 & \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
62.14 & 68.00 & 66.63 & 51.80 & 66.72 & 25.78 & 91.95 & 76.25 & 44.55 & 66.89 & 53.93 & 40.37 & \\\hline
\multirow{3}*{\tabincell{c}{Affine-SIFT~\cite{YuM11}}} &
\#correct &
135 & 183 & 433 & 364 & 430 & 119 & 4141 & 1271 & 136 & 172 & 196 & 163
& \multirow{3}*{82.85\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
141 & 240 & 519 & 480 & 549 & 133 & 4205 & 1326 & 240 & 263 & 224 & 264
& \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
95.74 & 76.25 & 83.43 & 75.83 & 78.32 & 89.47 & \textbf{98.48} & \textbf{95.85} & 56.67 & 65.40 & 87.50 & 61.74
& \\\hline
\multirow{3}*{\tabincell{c}{Hessian-Affine~\cite{PerdochCM09}}} &
\#correct &
24 & 13 & 96 & 82 & 66 & 17 & 640 & 226 & 32 & 114 & 38 & 29
& \multirow{3}*{79.68\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
26 & 34 & 132 & 105 & 109 & 18 & 671 & 248 & 63 & 144 & 41 & 49
& \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
92.31 & 38.24 & 72.73 & 78.10 & 60.55 & \textbf{94.44} & 95.38 & 91.13 & 50.79 & 79.17 & \textbf{92.68} & 59.18
& \\\hline
\multirow{3}*{\tabincell{c}{EBR~\cite{TuytelaarsG04}}} &
\#correct &
0 & 0 & 10 & 0 & 0 & 0 & 64 & 20 & 28 & 14 & 0 & 0
& \multirow{3}*{32.56\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
1 & 1 & 16 & 15 & 10 & 2 & 75 & 21 & 46 & 34 & 2 & 4
& \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
0.00 & 0.00 & 62.50 & 0.00 & 0.00 & 0.00 & 85.33 & 95.24 & 60.87 & 41.18 & 0.00 & 0.00
& \\\hline
\multirow{3}*{\tabincell{c}{IBR~\cite{TuytelaarsG04}}} &
\#correct &
0 & 0 & 28 & 11 & 14 & 0 & 46 & 0 & 0 & 10 & 0 & 0
& \multirow{3}*{31.84\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
4 & 9 & 39 & 16 & 25 & 9 & 63 & 12 & 10 & 21 & 8 & 5
& \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
0.00 & 0.00 & 71.79 & 68.75 & 56.00 & 0.00 & 73.02 & 0.00 & 0.00 & 47.62 & 0.00 & 0.00
& \\\hline
\end{tabular}
}
\label{tab:matchkeypoint_results}
\end{table*}
\subsubsection{Matching results for key-points matching}
As shown in Tab.~\ref{tab:matchkeypoint_results}, our proposed ASJ feature is compared with the most widely used feature detectors. For key-point matching, we regard an ASJ as a key-point with two specific orientations. The matching accuracy of ASJ is better than that of the other key-point matchers in most cases. Representatively, in Fig.~\ref{fig:imageshow}(i), the indistinct repeated regions of the chessboard are matched very well with an accuracy of $98.35\%$, since ASJs endow corner points with more global information than other approaches, representing their relative positions together with meaningful orientations in the images.
Compared with the most related approaches, EBR and IBR~\cite{TuytelaarsG04}, our proposed ASJ handles straight edges in a better way, producing more key-points and more correct correspondences. In many cases, as shown in Tab.~\ref{tab:matchkeypoint_results}, the results of EBR and IBR illustrate their limitations on indoor images dominated by straight edges.
\begin{figure*}[t!]
\centering
\subfigure[]
{
\begin{minipage}[b]{0.31\linewidth}
\includegraphics[width=0.48\linewidth]{figures/a-structure-ASJ-1-out}
\includegraphics[width=0.48\linewidth]{figures/a-structure-ASJ-2-out}\\
\includegraphics[width=0.48\linewidth]{figures/a-structure-KPT-1-out}
\includegraphics[width=0.48\linewidth]{figures/a-structure-KPT-2-out}
\end{minipage}
}
\subfigure[]
{
\begin{minipage}[b]{0.31\linewidth}
\includegraphics[width=0.48\linewidth]{figures/e-structure-ASJ-1-out}
\includegraphics[width=0.48\linewidth]{figures/e-structure-ASJ-2-out}\\
\includegraphics[width=0.48\linewidth]{figures/e-structure-KPT-1-out}
\includegraphics[width=0.48\linewidth]{figures/e-structure-KPT-2-out}
\end{minipage}
}
\subfigure[]
{
\begin{minipage}[b]{0.31\linewidth}
\includegraphics[width=0.48\linewidth]{figures/h-structure-ASJ-1-out}
\includegraphics[width=0.48\linewidth]{figures/h-structure-ASJ-2-out}\\
\includegraphics[width=0.48\linewidth]{figures/h-structure-KPT-1-out}
\includegraphics[width=0.48\linewidth]{figures/h-structure-KPT-2-out}
\end{minipage}
}
\caption{Top row: correctly matched ASJs plotted for the image pairs Fig.~\ref{fig:imageshow}(a), Fig.~\ref{fig:imageshow}(e) and Fig.~\ref{fig:imageshow}(h).
Bottom row: correctly matched key-points plotted for Affine-SIFT~\cite{MorelY09,YuM11}. Although Affine-SIFT yields more correct matches than ASJ, the ASJs represent the structural information of the input images, while the plotted key-points are confusing without the input images for reference.}
\label{fig:plot-ASJ-KPT}
\end{figure*}
In terms of the absolute number of correct matches, ASJ yields significantly fewer matches than the other approaches; Affine-SIFT and SIFT yield the most. Since the junctions detected in indoor images represent the meaningful junctions of the scene, it is not surprising that their absolute number is smaller than that of SIFT key-points. Nevertheless, ASJ represents the structural information of scenes more compactly than key-points. To illustrate this, we plot the correctly matched key-points and ASJs on a clean background: the structure of the scene can be represented by ASJs with their branches, while the plotted key-points are hard to understand without their input images. As shown in Fig.~\ref{fig:plot-ASJ-KPT}, the matched ASJs represent the geometric information with a small number of junctions (12 for Fig.~\ref{fig:plot-ASJ-KPT}(a), 50 for Fig.~\ref{fig:plot-ASJ-KPT}(b) and 65 for Fig.~\ref{fig:plot-ASJ-KPT}(c)), while the matched Affine-SIFT key-points appear confusing even though they are far more numerous. Some example matching results are shown in Fig.~\ref{fig:match_ASJ}.
\begin{figure*}[htb!]
\centering
\subfigure[(\#correct matches, \#total matches) = (16, 20)]
{
\includegraphics[width=0.8\linewidth]{figures/d-Match-ASJ}
}
\subfigure[(\#correct matches, \#total matches) = (119, 121)]
{
\includegraphics[width=0.8\linewidth]{figures/i-Match-ASJ}
}
\caption{Matched ASJs for image pairs Fig.~\ref{fig:imageshow} (d) and Fig.~\ref{fig:imageshow} (i) are shown in the sub-figures (a) and (b) respectively. The false matches are connected as yellow lines while correct matches are connected by cyan lines.}
\label{fig:match_ASJ}
\end{figure*}
\subsubsection{Matching results for line-segments matching}
\begin{table*}[htb!]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Comparison of different matching methods for line segment matching. A match is counted as correct only if the endpoints of the corresponding line segments agree. We compare ASJ with the state-of-the-art approaches LPI~\cite{FanWH12B} and LJL~\cite{LiYLLZ16}, and report the number of correct matches, the number of total matches and the matching accuracy in this table. The average matching accuracy is also compared in the last column.
}
\resizebox{\textwidth}{!}{
\begin{tabular}{{c|c|cccccccccccc|c}}
\hline
\multicolumn{2}{r|}{\multirow{2}*{\diagbox{Methods}{Image pairs}}} &
\multirow{2}*{(a)} &
\multirow{2}*{(b)} &
\multirow{2}*{(c)} &
\multirow{2}*{(d)} &
\multirow{2}*{(e)} &
\multirow{2}*{(f)} &
\multirow{2}*{(g)} &
\multirow{2}*{(h)} &
\multirow{2}*{(i)} &
\multirow{2}*{(j)} &
\multirow{2}*{(k)} &
\multirow{2}*{(l)} & \multirow{2}*{\tabincell{c}{Average\\accuracy}}\\
\multicolumn{2}{r|}{} & & & & & & & & & & & & \\\hline
\multirow{3}*{\tabincell{c}{Ours\\(Line segments)}} &
\#correct &
15 &30 &14 &26 &85 &21 &349 &121 &232 &49 &27 &48
& \multirow{3}*{\bf{71.55}\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
24 &58 &26 &40 &120 &30 &428 &138 &242 &90 &38 &80
& \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
\textbf{62.50} & \textbf{51.72} & 53.85 & 65.00 & \textbf{70.83} & \textbf{70.00} & \textbf{81.54} & \textbf{87.68} & \textbf{95.87} & \textbf{54.44} & 71.05 & \textbf{60.00}
& \\\hline
\multirow{3}*{\tabincell{c}{LPI~\cite{FanWH12B}}} &
\#correct &
5 &0 &15 &19 &53 &3 &123 &60 &33 &17 &11 &16
& \multirow{3}*{48.83\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
9 &0 &18 &29 &90 &9 &193 &102 &59 &38 &15 &40
& \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
55.56 & 0.00 & \textbf{83.33} & \textbf{65.52} & 58.89 & 33.33 & 63.73 & 58.82 & 55.93 & 44.74 & \textbf{73.33} & 40.00
& \\\hline
\multirow{3}*{\tabincell{c}{LJL~\cite{LiYLLZ16}}} &
\#correct &
8 & 24 & 26 & 37 & 148 & 4 &221 &113 &129 &50 &26 &22
& \multirow{3}*{52.95\%}\\ \cline{2-14}
\multirow{3}*{} &
\#total &
30 &79 &32 &64 &251 &17 &376 &186 &138 &131 &50 &102
& \\\cline{2-14}
\multirow{3}*{} &
accuracy (\%) &
26.67 & 30.38 & 81.25 & 57.81 & 58.96 & 23.53 & 58.78 & 60.75 & 93.48 & 38.17 & 52.00 & 21.57
& \\\hline
\end{tabular}
}
\label{tab: matchlsg_results}
\end{table*}
We evaluate the matched line segments against the state-of-the-art approaches LPI~\cite{FanWH12B} and LJL~\cite{LiYLLZ16} under a stricter rule that compares the endpoints of corresponding line segments instead of their line equations. For the example image pairs shown in Fig.~\ref{fig:imageshow}, our proposed method outperforms the existing methods by a considerable margin in most cases. Some matched line-segment results are shown in Fig.~\ref{fig:plot-ASJ-LSG-f} and Fig.~\ref{fig:matched-lsg-b}. The number of correctly matched line segments is also comparable with the other approaches.
Besides the matching accuracy, the result of our method shown in
Fig.~\ref{fig:matched-lsg-b} covers the scene more completely.
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.9\linewidth]{figures/f-LSG-ASJ}
\caption{Matched line-segments for image pair Fig.~\ref{fig:imageshow} (f). (\#correct matches, \#total matches) = (21, 30). Midpoints of matched line-segments are connected by cyan lines (correct matches) or yellow lines (mismatches).}
\label{fig:plot-ASJ-LSG-f}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.24\linewidth]{figures/b-complete-ASJ-1-out}
\includegraphics[width=0.24\linewidth]{figures/b-complete-ASJ-2-out}
\includegraphics[width=0.24\linewidth]{figures/b-complete-LJL-1-out}
\includegraphics[width=0.24\linewidth]{figures/b-complete-LJL-2-out}
\caption{Matched line segments for image pair Fig.~\ref{fig:imageshow} (b). Left and mid-left: correctly matched line-segments for ASJ; right and mid-right: correctly matched line-segments for LJL~\cite{LiYLLZ16}. The result of ASJ covers the scene more completely, benefiting from the anisotropic scales of the junction branches.}
\label{fig:matched-lsg-b}
\end{figure*}
Different from LPI~\cite{FanWH12B} and LJL~\cite{LiYLLZ16}, our approach performs better without using any pre-estimated geometric information. As shown in Tab.~\ref{tab: matchlsg_results}, key-point-driven approaches to line segment matching can fail because of erroneously estimated geometric relationships. Observing the failure case reported in Tab.~\ref{tab: matchlsg_results}, the image pair in Fig.~\ref{fig:matched-lsg-b} is dominated by repeated texture and severe viewpoint change, which are challenging for key-point matching. In such a scenario, the induced epipolar geometry may be unreliable and therefore produce poor line segment matching results. On the other hand, because our approach performs well in junction matching, the junction correspondences can also be used to refine the line segment matching results.
\section{Introduction}\label{sec:introduction}
Image correspondence is a key problem for many computer vision tasks, such as structure-from-motion~\cite{Wu13,CrandallOSH13,FuhrmannLG14,openMVG}, object recognition~\cite{WangBWLT10,ChiaRLR12} and many others~\cite{YanWZYC15,ShenLYXWW15}.
The past decades have witnessed great successes on this problem achieved by detecting and matching local visual features~\cite{MikolajczykS02,Lowe04,MatasCUP02,TuytelaarsG04,YuM11}.
Although most existing image matching algorithms relying on such local visual features perform well for images containing rich photometric information, e.g. outdoor images, they usually lose their effectiveness on images that are photometrically poor and dominated by geometrical structures, such as the indoor images displayed in Fig.~\ref{fig:example-indoor-images}.
In indoor scenarios, images are often dominated by low-texture parts and exhibit severe viewpoint changes, in which case it is reported to be more effective to match geometrical structures~\cite{FanWH12B,LiYLLZ16} such as line segments~\cite{GioiJMR10,GioiJMR12} and junctions~\cite{ShenP00}.
The line segment matching problem has been studied in recent years since line segments can represent more structural information than key-points.
Many algorithms match line segments using either photometric descriptors of individual line segments~\cite{WangWH09} or an initial geometric relation~\cite{FanWH12B,LiYLLZ16} to assist the matching.
The approaches using pre-estimated epipolar geometry usually perform better than those using photometric descriptors~\cite{WangWH09}, but the epipolar geometry estimation still requires key-point correspondences in many situations. In indoor scenes, since descriptors of low-textured regions are not distinctive enough, the estimated epipolar geometry used to infer the line segment matching is very likely to be unstable.
It is thus of great interest to develop elegant ways to establish correspondences between the geometrical structures of images while avoiding the errors raised by key-point correspondences, finally achieving better matching of indoor images.
Alternatively, junctions, as a kind of basic structural visual feature, have been studied as being of primary importance for perception and scene understanding in recent years~\cite{Marr82,Adelson00,GuoZW07}.
Being a combination of points and ray segments, junctions contain richer information than line segments, {\em i.e.} a location and at least two ray segments (known as {\em branches}).
Ideally, the information contained in a pair of junctions enables us to recover the correspondences between images up to affine transformations.
However, due to the difficulty of estimating the endpoints of junction branches, most junction detection algorithms~\cite{WuXZ07,MaireAFM08,Sinzinger08,PuspokiU15,PuspokiUVU16,XiaDG14} concentrate on identifying the locations and branch orientations while ignoring the branch lengths.
This actually reduces junctions to key-points and does not fully exploit their capabilities for image correspondence.
To better characterize the structure of junctions, the ACJ detector~\cite{XiaDG14} estimates scale-invariant junctions, each represented isotropically as a circular region with two or more dominant orientations. Every orientation represents a branch of the junction, and the radius of the circle equals the length of the shortest branch.
Although branch orientations are invariant with respect to viewpoint, they are not sufficient for estimating an affine transformation. Fortunately, if we can estimate the length of every branch, the affine transformation is determined by a single pair of corresponding junctions.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/indoor1A.jpg}
\includegraphics[width=0.4\textwidth]{figures/indoor1B.jpg}
\caption{A pair of indoor images. It can be seen that these images are dominated by
geometrical structures, e.g. the edges of the door and the low-textured wall.}
\label{fig:example-indoor-images}
\end{figure}
Motivated by this, we study how to exploit the invariance of junctions by estimating the scale (length) of their branches.
For indoor images, the inherent scale of a junction is usually the length of some (straight) boundary of a salient object in the image, which contains rich structural information beyond local features. More precisely, we propose an \emph{a-contrario} approach that models the endpoint of a ray segment starting at a given location with an initial orientation, checking whether a candidate point should be part of the ray segment according to the \emph{number of false alarms} (NFA). The inherent scale of the ray segment is determined by following the points belonging to the segment until their continuity is broken. Since in practice the initial orientations are noisy, we also optimize them with respect to the junction-ness based on the \emph{a-contrario} theory.
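The stopping rule for the endpoint can be illustrated with a toy sketch: walk outward pixel by pixel along the branch orientation and stop once the run of supporting pixels is broken. In the real detector the per-pixel support comes from the NFA test; the boolean-array interface below is an assumption for illustration only:

```python
def branch_length(aligned, max_gap=1):
    """Toy sketch of the endpoint rule.  `aligned[k]` tells whether the k-th
    pixel along the ray supports the branch orientation; the branch length
    is the last supported position before continuity is broken."""
    length = 0
    gap = 0
    for k, ok in enumerate(aligned):
        if ok:
            length = k + 1  # extend the branch to this pixel
            gap = 0
        else:
            gap += 1
            if gap > max_gap:  # continuity broken: stop scanning
                break
    return length
```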
Once the anisotropic scale of each branch (ray segment) is estimated, a local homography can be computed from any pair of junctions extracted from two images. A correct correspondence produces a reasonable local affine homography, whereas incorrect correspondences generate arbitrary ones. Since junction locations are reliable, the regions around them can be mapped by the induced homography: a correct homography maps one image to the other with minimal patch distortion. Comparing the regions warped by the induced affine homography therefore allows us to verify whether a pair of junctions is a true correspondence. Moreover, once corresponding junctions are identified, the matches carry richer structural information than point matches. Our contributions in this paper are
\begin{itemize}
\item We extend the junction detector in~\cite{XiaDG14} to anisotropic-scale geometric structures, which better depict the geometric content of indoor images.
\item We develop an efficient scheme for matching \emph{anisotropic-scale junctions}. More precisely, since a detected \emph{anisotropic-scale junction} provides at least three points, each pair of junctions across two images induces an affine homography. We then present a strategy that uses the induced homographies to generate accurate and reliable correspondences for the locations and anisotropic branches of junctions simultaneously.
\item We evaluate our method on challenging indoor image pairs, including images from the indoor datasets used in \cite{SrajerSPP14,FurukawaCSS09}; the results demonstrate that it achieves state-of-the-art performance on matching indoor images.
\end{itemize}
The rest of this paper is organized as follows. Sec.~\ref{sec:related-work} reviews existing research related to our work. Sec.~\ref{sec:problems} discusses the problem of detecting and matching junctions in indoor scenes. Sec.~\ref{sec:detection} describes an \emph{a-contrario} approach for detecting \emph{anisotropic-scale junctions}. For junction matching, Sec.~\ref{sec:matching} designs a dissimilarity measure to find correspondences. Experimental results and analysis are given in Sec.~\ref{sec:exp}, and we conclude in Sec.~\ref{sec:con}.
\section{ASJ Matching for Indoor Images}
\label{sec:matching}
Since ASJs contain rich geometric structure information encoded in their anisotropic scales, we now develop a matching method that takes full advantage of it. For a pair of junctions $\jmath^P$ and $\jmath^Q$ detected in images $I^P$ and $I^Q$, a homography can be estimated from the point set consisting of their locations and branch endpoints, and then used to compare the junctions and identify correct correspondences. Since an image contains $L$-junctions, $Y$-junctions and $X$-junctions, and the type of a junction may differ across images because of occlusion, the homography estimated from a pair of junctions might be invalid. Fortunately, whatever the type of a junction is, its location is the intersection of any two non-parallel branches, so a junction with more than two branches can be decomposed into several $L$-junctions. Naturally, $L$-junctions whose two branch orientations $\theta^1$ and $\theta^2$ are equal up to $\pi$ are filtered out. After decomposition and filtering, all junctions detected in an image are $L$-junctions.
Perspective effects are typically small on a local patch~\cite{MikolajczykS05} and can be approximated by an affine homography. We use a pair of $L$-junctions to estimate such homographies. Suppose there are $N^P$ and $N^Q$ decomposed $L$-junctions in images $I^P$ and $I^Q$, denoted by $\{\jmath_i^P\}_{i=1}^{N^P}$ and $\{\jmath_i^Q\}_{i=1}^{N^Q}$ respectively. If a pair of junctions $(\jmath_n^{P},\jmath_m^Q)$ is matched, an affine homography is induced once the order of the branches is fixed. To derive a unique affine homography, we define a partial order on the two branches $(r^1,\theta^1)$ and $(r^2,\theta^2)$ of an $L$-junction as
\begin{equation}
\begin{cases}
(r^1,\theta^1) < (r^2,\theta^2), & \mathit{if}~ \langle \theta^1,\theta^2\rangle <\pi,\\
(r^1,\theta^1) > (r^2,\theta^2), & \mathit{if}~ \langle \theta^1,\theta^2\rangle >\pi.
\end{cases}
\end{equation}
The branches of every junction are sorted by the order defined above. The affine homography for a pair of junctions $\jmath_n^P$ and $\jmath_m^Q$ is then estimated by the DLT (Direct Linear Transform) from their locations and branch endpoints. More precisely, we solve the equations
\begin{equation}
\bm{q}_i = H\bm{p}_i, ~ i=0,1,2,
\end{equation}
where $\bm{p}_i$ and $\bm{q}_i$ are the homogeneous coordinates of the locations and branch endpoints of $\jmath_n^P$ and $\jmath_m^Q$ respectively. The matrix
$$
H = \begin{bmatrix}
h_1 & h_2 &h_3\\h_4 & h_5 & h_6\\0&0&1
\end{bmatrix}
$$
represents the affine transformation induced by the pairs $(\bm{p}_i, \bm{q}_i)$, $i=0,1,2$.
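Because the three correspondences (the junction location plus the two branch endpoints) determine the six affine parameters exactly, the DLT here reduces to a $6\times6$ linear solve. A minimal Python sketch (the function name and array layout are our own illustrative choices):

```python
import numpy as np

def affine_from_junction_pair(p, q):
    """Solve q_i = H p_i for i = 0, 1, 2, where H is affine.

    p, q: (3, 2) arrays holding a junction's location and its two
    branch endpoints in each image.  Three point pairs in general
    position determine the six unknowns h1..h6 exactly.
    """
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (u, v)) in enumerate(zip(p, q)):
        A[2 * i] = [x, y, 1, 0, 0, 0]      # row for the u-coordinate
        A[2 * i + 1] = [0, 0, 0, x, y, 1]  # row for the v-coordinate
        b[2 * i], b[2 * i + 1] = u, v
    h = np.linalg.solve(A, b)
    # Assemble the 3x3 affine matrix with last row [0, 0, 1].
    return np.vstack([h.reshape(2, 3), [0.0, 0.0, 1.0]])
```

The solve fails only when the three points are collinear, which cannot happen for a valid $L$-junction with non-parallel branches.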
From the image pair $(I^P,I^Q)$, up to $N^P\times N^Q$ affine homographies can be induced; $H_{n,m}$ denotes the one mapping the $n$-th junction in $I^P$ to the $m$-th junction in $I^Q$. For a correct correspondence, $H_{n,m}$ maps $I^P$ to $I^Q$ accurately around the junction locations, while a mismatch maps $I^P$ correctly only at the endpoints and locations and erroneously elsewhere. To save computation, we only map a patch $\mathcal{P}(\jmath_n^P)$ around $\jmath_n^P$ to $\mathcal{P}(\jmath_n^Q)$ in $I^Q$, and a patch $\mathcal{P}(\jmath_m^Q)$ to $\mathcal{P}(\jmath_m^P)$ in $I^P$, using the matrix $H_{n,m}$ and its inverse $H_{n,m}^{-1}$. The distance between the two features $\jmath_n^P$ and $\jmath_m^Q$ is then measured by
\begin{equation}
\mathcal{D}(\jmath_n^P, \jmath_m^Q)
=
\tilde{D}(\mathcal{P}(\jmath_n^P),
\mathcal{P}(\jmath_n^Q))
+
\tilde{D}(\mathcal{P}(\jmath_m^Q),
\mathcal{P}(\jmath_m^P)),
\end{equation}
where $\tilde{D}$ is a distance between two patches, computed from raw pixel values, SIFT descriptors, or other descriptors.
Thanks to the homographies induced by ASJs, the distance between original and mapped patches is usually very small for correct correspondences and larger for incorrect ones, so the ratio test proposed in \cite{Lowe04} can be used to filter out incorrect correspondences.
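Assuming the pairwise distances $\mathcal{D}(\jmath_n^P, \jmath_m^Q)$ have been collected in a matrix, the filtering step can be sketched as follows (the ratio threshold is an illustrative value, not one prescribed here):

```python
import numpy as np

def ratio_test_matches(dist, ratio=0.8):
    """Keep (n, m) only when the best distance in row n beats the
    second best by the given ratio, as in Lowe's ratio test."""
    matches = []
    for n, row in enumerate(dist):
        order = np.argsort(row)
        best, second = order[0], order[1]
        if row[best] < ratio * row[second]:
            matches.append((n, int(best)))
    return matches
```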
\section{An a-contrario model for anisotropic-scale junction detection}
\label{sec:detection}
To solve the problems raised in Sec.~\ref{sec:problems}, we derive a differential junction-ness model that captures the scale for a given location and orientation. Since the scales $r_i$ of the branches of a junction $\jmath$ are independent of each other, we model the endpoint of each branch independently.
\subsection{Differential Junction-ness Model}
Suppose an isotropic junction has been detected at a small scale $r_0$; the inherent scales of its branches are then greater than $r_0$. If we increase the scale from $r_0$ to a larger $r_1$, the junction-ness still grows, but the increment $\omega_{err} = \omega_{\bm{p}}(r_1,\theta) - \omega_{\bm{p}}(r_0,\theta)$ does not increase significantly once $r_1$ exceeds the inherent scale.
A reasonable way to detect this insignificant variation is to study how $\omega_{\bm{p}}(r,\theta)$ varies as $r$ increases.
We first rewrite the junction-ness of a branch \eqref{eq:junction-ness-branch} in continuous form. The junction-ness at position $\bm{p}$, scale $r$ and orientation $\theta$ is
\begin{equation}
\label{eq:strength-continous}
\omega_{\bm{p}}(r,\theta)
= \int_0^r ds\int_{\theta-\delta(r)}^{\theta+\delta(r)}
\gamma_{\bm{p}}\left(\bm{p}+s\left(\cos\psi,\sin\psi\right)\right)d\psi,
\end{equation}
where $\delta(r)$ is the angular half-width for a given scale; here we choose $\delta(r) = \frac{\tau}{r}$. The discrete partial derivative $\frac{\partial \omega_{\bm{p}}(r,\theta)}{\partial r}$ is given by
\begin{equation}
\begin{split}
\frac{\partial \omega_{\bm{p}}(r,\theta)}{\partial r}
= \sum_{i} \gamma_{\bm{p}}\left(\bm{p}+r\begin{bmatrix}
\cos\psi_i\\\sin\psi_i
\end{bmatrix}\right)\\
+ \delta'(r)\sum_{i} \gamma_{\bm{p}}\left(\bm{p}+r_i\begin{bmatrix}
\cos(\theta-\delta(r))\\
\sin(\theta-\delta(r))
\end{bmatrix}\right)\\
+ \delta'(r)\sum_{i} \gamma_{\bm{p}}\left(\bm{p}+r_i\begin{bmatrix}
\cos(\theta+\delta(r))\\
\sin(\theta+\delta(r))
\end{bmatrix}\right),
\end{split}
\end{equation}
where $\psi_i$ is the $i$-th sampled angle in the range $\left[\theta-\delta(r),\theta+\delta(r)\right]$ and $r_i$ is the $i$-th sampled radius in the range $[0,r]$.
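The discrete derivative can be evaluated directly by sampling the field $\gamma_{\bm{p}}$; a sketch in Python, where the sampling densities `n_psi` and `n_r` are our own choices:

```python
import numpy as np

def domega_dr(gamma, p, r, theta, tau=1.0, n_psi=16, n_r=16):
    """Discrete partial derivative of the junction-ness w.r.t. r.

    gamma(x, y): pairwise junction-ness field; delta(r) = tau / r is
    the angular half-width, hence delta'(r) = -tau / r**2.
    """
    delta, ddelta = tau / r, -tau / r ** 2
    # First term: gamma sampled along the arc of radius r.
    psis = np.linspace(theta - delta, theta + delta, n_psi)
    arc = sum(gamma(p[0] + r * np.cos(a), p[1] + r * np.sin(a)) for a in psis)
    # Boundary terms: gamma sampled along the two sector edges.
    radii = np.linspace(1e-3, r, n_r)
    edges = 0.0
    for sign in (-1.0, 1.0):
        ang = theta + sign * delta
        edges += sum(gamma(p[0] + s * np.cos(ang), p[1] + s * np.sin(ang))
                     for s in radii)
    return arc + ddelta * edges
```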
\subsection{Null Hypothesis and Distribution}
\label{sec:Null-Hypothesis-1}
With the differential junction-ness model in place, we need a robust way to decide whether the value of ${\partial\omega_{\bm{p}}(r,\theta)}/{\partial r}$ at a given $r$ is significant. One way to achieve this is an \emph{a-contrario} approach that controls the threshold automatically. Since our work extends ACJ~\cite{XiaDG14}, we keep the same null hypothesis: the variables $\left\|\nabla\tilde{I}(\bm{q})\right\|$ and $\phi(\bm{q})$ follow the null hypothesis $\mathcal{H}_0$ if
\begin{enumerate}
\item $\forall\bm{q}\in\Omega$, $\left\|\nabla\tilde{I}(\bm{q})\right\|$ follows a Rayleigh distribution with parameter 1;
\item $\forall\bm{q}\in\Omega$, $\phi(\bm{q})$ follows a uniform distribution over $[0,2\pi]$;
\item All of the random variables $\{\left\|\nabla\tilde{I}(\bm{q})\right\|, \phi(\bm{q})\}_{\bm{q}\in\Omega}$ are mutually independent.
\end{enumerate}
According to the discussion in \cite{XiaDG14}, every $\gamma_{\bm{p}}(\bm{q})$ independently follows the distribution \eqref{eq:acj-distribution}. The random variable $\frac{\partial \omega_{\bm{p}}(r,\theta)}{\partial r}$ then follows the distribution of the random variable
\begin{equation}
S_r = \sum_{i=1}^{m} X_i
+ \delta'(r)\sum_{i=1}^{k} (Y_i + Z_i),
\end{equation}
where the random variables $X_i,Y_i,Z_i$ follow the distribution in \eqref{eq:acj-distribution}, $m$ is the number of sampled angles $\psi_i$ and $k$ is the number of sampled radii $r_i$. Since the parameter $\tau$ is small, $\delta'(r)$ is very small for any reasonable $r$ (for example, $r\geq4$ implies $|\delta'(r)| = \frac{\tau}{r^2} \leq \frac{\tau}{16}$). Hence the random variable can be approximated by
$S_r \approx \sum_{i=1}^{m} X_i$
for computational simplicity.
In practice, $k$ is larger than 10, so the PDF of $\sum_{i=1}^{k} (Y_i+Z_i)$ can be approximated accurately, by the Central Limit Theorem, as
\begin{equation}
f(t) = \frac{1}{\sqrt{4k\sigma^2\pi}}
\exp\left(-\frac{(t-2k\mu)^2}{4k\sigma^2}\right)
\end{equation}
where $\mu$ and $\sigma^2$ are the expectation and variance of \eqref{eq:acj-distribution}. The PDF of $\delta'(R)\left(\sum_{i=1}^{k}(Y_i+Z_i)\right)$ is
\begin{equation}
\label{eq:distribution-approximate}
\tilde{f}(t)
= \frac{R^2}{\tau}f\left(-\frac{R^2}{\tau}t\right)
= \frac{1}{\sqrt{2\pi\cdot 2k\sigma^2\frac{\tau^2}{R^4}}}
\exp\left(-\frac{\left(t+\frac{2\tau}{R^2}k\mu\right)^2}{2\cdot2k\sigma^2\frac{\tau^2}{R^4}}\right),
\end{equation}
which is a Gaussian distribution with mean $-\frac{2\tau}{R^2}k\mu$ and variance $2k\sigma^2\tau^2/R^4$. Meanwhile, the random variable $\sum_{i=1}^{m} X_i$ follows $\mathcal{N}(m\mu,m\sigma^2)$. Therefore, the random variable $S_r$ approximately follows $\mathcal{N}(m\mu - 2k\mu\frac{\tau}{R^2},\, m\sigma^2 + 2k\sigma^2\tau^2/R^4)$.
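Under this Gaussian approximation, the upper-tail probability used below has a closed form; a small sketch (the function and parameter names are ours):

```python
import math

def tail_prob(x, r, m, k, mu, sigma2, tau=1.0):
    """P(S_r >= x) when S_r is approximated by
    N(m*mu - 2*k*mu*tau/r**2, m*sigma2 + 2*k*sigma2*tau**2/r**4)."""
    mean = m * mu - 2.0 * k * mu * tau / r ** 2
    var = m * sigma2 + 2.0 * k * sigma2 * tau ** 2 / r ** 4
    z = (x - mean) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # Gaussian upper tail
```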
The probability of observing, under $\mathcal{H}_0$, a value at least as large as $\frac{\partial \omega_{\bm{p}}(r,\theta)}{\partial r}$ for given $r$ and $\theta$ is
\begin{equation}
\mathbb{P}_{\bm{p}}(r,\theta):=\mathbb{P} \left(S_r \geq \frac{\partial \omega_{\bm{p}}}{\partial r}\right) = \int_{{\partial \omega_{\bm{p}}}/{\partial r}}^{\infty} d\left(\mathop{\star}_{i=1}^m p\right),
\end{equation}
which quantifies the evidence that the scale cannot be increased by a sufficiently small increment at $r$ along orientation $\theta$ under the hypothesis $\mathcal{H}_0$. The smaller $\mathbb{P}_{\bm{p}}(r,\theta)$ is, the more confident we are that $r$ is a reasonable scale: a small $\mathbb{P}_{\bm{p}}(r,\theta)$ means that the point $\bm{p} + r\cdot[\cos\theta,\sin\theta]^T$ belongs to the branch with high probability. Ideally, an existing branch produces a series of small probabilities over an interval $[r_0,r_1]$, and the (maximum) scale of the branch is then $r_1$. We therefore use $\mathbb{P}_{\bm{p}}(r,\theta)$ to check whether the point $\bm{p}_{r}^{\theta} = \bm{p} + r\cdot[\cos\theta,\sin\theta]^T$ belongs to the branch.
\subsection{Number of Test and Number of False Alarms}
\label{sec:model1}
In the last subsection, we concluded that a \emph{sufficiently} small probability $\mathbb{P}_{\bm{p}}(r,\theta)$ indicates that the point at direction $\theta$ and radius $r$ is likely to belong to the branch. What counts as \emph{sufficiently} small must be made precise. Following the Helmholtz principle, we declare the probability sufficiently small when the expected number of occurrences of this event under the \emph{a-contrario} random assumption~\cite{DesolneuxMM07} is less than $\varepsilon$:
$$
{\rm NFA}(r,\bm{p},\theta) = N_s\cdot \mathbb{P}_{\bm{p}}(r,\theta) \leq \varepsilon,
$$
where $N_s$ denotes the number of tests performed along the given location and orientation. Since the location and orientation of the branch are known, the number of tests is bounded by $\sqrt{NM}$, where $N$ and $M$ are the numbers of rows and columns of the image. When the points $\bm{p}_r^{\theta}$ reject the hypothesis $\mathcal{H}_0$ for all $r\in (0,r_1]$, the scale of the branch is $r_1$. The scale $R$ is called the maximum (meaningful) scale of the branch if it is the largest scale satisfying the inequality
$$
{\rm NFA}(r,\bm{p},\theta) = \sqrt{NM}\cdot \mathbb{P}_{\bm{p}}(r,\theta) \leq \varepsilon, \forall r\in (0,R].
$$
Usually, $\varepsilon$ is set to $1$, which means that the expected number of false alarms is at most 1.
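In practice, the maximum meaningful scale can be found by scanning radii until the NFA criterion fails; a minimal sketch (names are ours), where `prob(r)` plays the role of $\mathbb{P}_{\bm{p}}(r,\theta)$:

```python
import math

def max_meaningful_scale(prob, n_rows, n_cols, r_max, eps=1.0):
    """Largest R with sqrt(N*M) * prob(r) <= eps for all r in (0, R]."""
    n_tests = math.sqrt(n_rows * n_cols)
    best = 0
    for r in range(1, r_max + 1):
        if n_tests * prob(r) > eps:
            break  # continuity broken: stop at the previous radius
        best = r
    return best
```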
\subsection{Scale Ambiguity for Branches}
Junctions are located at the intersections of line segments. Suppose there exist two junctions $$\jmath_1 = \{\bm{p}_1,\{r_1^1,\theta_1^1\},\{r_1^2,\theta_1^2\}\},$$ $$\jmath_2 = \{\bm{p}_2,\{r_2^1,\theta_2^1\},\{r_2^2,\theta_2^2\}\},$$ where the two-tuple $\{r_i^j,\theta_i^j\}$ denotes the scale and orientation of the $j$-th branch of the $i$-th junction and $\bm{p}_i$ is the location of the $i$-th junction. If the junction $\jmath_2$ is located at $\bm{p}_2 = \bm{p}_1 + r_1^1\left[\cos\theta_1^1, \sin\theta_1^1 \right]^T$ and $\theta_1^1 = \theta_2^1$, a scale ambiguity occurs, since the line segment $\bm{p}_1\bm{p}_2$ and the branch $\{r_2^1,\theta_2^1\}$ are co-linear: the scale of the first branch of $\jmath_1$ can be regarded as either $r_1^1$ or $r_1^1+r_2^1$. For example, in Fig.~\ref{fig:scale-ambiguity} two junctions $\jmath_1$ and $\jmath_2$ are located at $\bm{p}_1$ and $\bm{p}_2$ respectively. The branches of $\jmath_1$ and $\jmath_2$ along the direction ${\bm{p}_1\bm{p}_2}$ are co-linear with the line segment marked in red. The possible scales of this branch of $\jmath_1$ are $\left\|\bm{p}_2-\bm{p}_1\right\|$, $\left\|\bm{p}_3-\bm{p}_1\right\|$ and $\left\|\bm{p}_4-\bm{p}_1\right\|$, while those of the branch of $\jmath_2$ are $\left\|\bm{p}_3-\bm{p}_2\right\|$ and $\left\|\bm{p}_4-\bm{p}_2\right\|$. To eliminate the ambiguity, we define the scale of a branch as follows.
\begin{definition}[Scale of a branch]
\label{def:scale}
Suppose there exists a branch starting at point $\bm{p}$ in the direction $\theta$ with possible salient scales $r_1,r_2,\ldots,r_m$. We define the scale $r^*$ of this branch as
$$
r^* = \max_{i} r_i.
$$
\end{definition}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8]
\node[anchor=south west,inner sep=0] at (0,0)
{\includegraphics[width=0.26\textwidth]{figures/chessboard.jpg}};
\coordinate (p0) at (0,1.33);
\coordinate (p1) at (1.37,1.33);
\coordinate (p2) at (2.75,1.33);
\coordinate (p3) at (4.13,1.33);
\coordinate (p4) at (5.47,1.33);
\coordinate (q0) at (1.37,0.00);
\coordinate (q1) at (p1);
\coordinate (q2) at (1.37,2.68);
\coordinate (q3) at (1.37,4.06);
\coordinate (q4) at (1.37,5.43);
\coordinate (o0) at (2.75,0.00);
\coordinate (o1) at (p2);
\coordinate (o2) at (2.75,2.68);
\coordinate (o3) at (2.75,4.06);
\coordinate (o4) at (2.75,5.43);
\draw[red,thick] (p0) -- (p1) -- (p2) -- (p3) -- (p4);
\draw[green,thick] (q0) -- (q1) -- (q2) -- (q3) -- (q4);
\draw[orange,thick] (o0) -- (o1) -- (o2) -- (o3) -- (o4);
\draw[blue] (p1) circle [radius = 0.6];
\draw[cyan] (p2) circle [radius = 0.6];
\filldraw[teal] (p0) circle [radius = 0.05];
\filldraw[teal] (p1) circle [radius = 0.05];
\filldraw[teal] (p2) circle [radius = 0.05];
\filldraw[teal] (p3) circle [radius = 0.05];
\filldraw[teal] (p4) circle [radius = 0.05];
\filldraw[teal] (q0) circle [radius = 0.05];
\filldraw[teal] (q2) circle [radius = 0.05];
\filldraw[teal] (q3) circle [radius = 0.05];
\filldraw[teal] (q4) circle [radius = 0.05];
\filldraw[teal] (o0) circle [radius = 0.05];
\filldraw[teal] (o2) circle [radius = 0.05];
\filldraw[teal] (o3) circle [radius = 0.05];
\filldraw[teal] (o4) circle [radius = 0.05];
\node[below left] at (p0) {$\bm{p}_0$};
\node[below left] at (p1) {$\bm{p}_1$};
\node[below right] at (p2) {$\bm{p}_2$};
\node[below left] at (p3) {$\bm{p}_3$};
\node[below right] at (p4) {$\bm{p}_4$};
\node[below left] at (q0) {$\bm{q}_0$};
\node[below right] at (q2) {$\bm{q}_2$};
\node[below left] at (q3) {$\bm{q}_3$};
\node[below right] at (q4) {$\bm{q}_4$};
\node[below right] at (o0) {$\bm{o}_0$};
\node[below left] at (o2) {$\bm{o}_2$};
\node[below right] at (o3) {$\bm{o}_3$};
\node[below left] at (o4) {$\bm{o}_4$};
\end{tikzpicture}
\centering
\caption{Scale ambiguity for branches. The junctions $\jmath_1$ and $\jmath_2$, located at $\bm{p}_1$ and $\bm{p}_2$, each have more than one possible scale.}
\label{fig:scale-ambiguity}
\end{figure}
A branch with such a scale is more stable and more global than other local features. However, estimating such scales from images poses several challenges. Most existing approaches, including the model proposed in Sec.~\ref{sec:model1}, estimate line segments or branches from the orientations of level-lines extracted from the image gradient~\cite{GioiJMR10}.
The line segment detected in the image of Fig.~\ref{fig:scale-ambiguity} could be either $\bm{p}_1\bm{p}_2$ or $\bm{p}_1\bm{p}_3$, since the level-lines around $\bm{p}_2$ may align with the orientation of the vector $\bm{o}_2-\bm{p}_2$, which would extend the segment co-linear with the branch of $\jmath_1$ across $\bm{p}_2$ up to $\bm{p}_3$ or $\bm{p}_4$. When the viewpoint, illumination or noise level changes, the orientations of the level-lines around $\bm{p}_2$, $\bm{p}_3$ and $\bm{p}_4$ change unpredictably, so the scale cannot be estimated robustly across imaging conditions.
Fortunately, the location of a junction is stable regardless of the imaging conditions. Although the orientations of level-lines around junction locations vary with some uncertainty, most of them remain aligned to one of the lines intersecting at the junction. Motivated by this, we use very local isotropic-scale junctions in a small neighborhood (e.g., a $5\times 5$ or $7\times7$ window) instead of the gradient field and level-lines. For each pixel in an image, we compute the junction-ness for different orientations in a small neighborhood according to the ACJ~\cite{XiaDG14} algorithm as
\begin{equation}
\label{eq:junction-ness-branch-fix-scale}
\omega_{\bm{p}}(\theta) = \frac{1}{\sigma}\left(\sum_{\bm{q}\in S_{\bm{p}}(\theta)}\frac{\gamma_{\bm{p}}(\bm{q})}{\sqrt{n}}
-\sqrt{n}\mu\right)
,
\end{equation}
where $S_{\bm{p}}(\theta)$ is defined in \eqref{eq:sector-def} with a fixed radius (e.g., $r=3,5,7$) and $n$ is the cardinality of the set $S_{\bm{p}}(\theta)$; $\mu$ and $\sigma^2$ are the mean and variance defined in \eqref{eq:distribution-approximate}. We then apply non-maximal suppression (NMS)~\cite{NeubeckG06} to obtain the very local junctions and filter out branches with non-meaningful NFA values according to \eqref{eq:NFA-xia}. These very local junctions are denoted by $\jmath_{\bm{p}} = \{\theta_{\bm{p}}^i,\omega_{\bm{p}}^i,{\rm NFA}_{\bm{p}}^i \}_{i=1}^K$, where $\omega_{\bm{p}}^i$ and ${\rm NFA}_{\bm{p}}^i$ are the strength and NFA value of the branch with orientation $\theta_{\bm{p}}^i$. If the pixel $\bm{p}$ lies on (or near) an edge, two of the $\theta_{\bm{p}}^i$ align with the orientation of that edge up to $\pm\pi$; if $\bm{p}$ is near another junction, multiple orientations align with the branches of that junction. We then plug the strength $\omega_{\bm{p}}^i$, instead of the norm of the normalized gradient, into the \emph{a-contrario} model proposed in Sec.~\ref{sec:model1} with a modified probability distribution.
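Eq.~\eqref{eq:junction-ness-branch-fix-scale} is a simple normalized sum; a direct transcription (names are ours):

```python
import math

def local_junctionness(gammas, mu, sigma):
    """omega_p(theta): the normalised sum of pairwise junction-ness
    values over the fixed-radius sector S_p(theta).

    gammas: the gamma_p(q) samples for q in S_p(theta);
    mu, sigma: mean and standard deviation of a single sample.
    """
    n = len(gammas)
    return (sum(gammas) / math.sqrt(n) - math.sqrt(n) * mu) / sigma
```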
\subsection{Modified Probabilistic Distribution}
To estimate the scale of a branch according to Definition~\ref{def:scale}, the functions $\gamma_{\bm{p}}$ and $\omega_{\bm{p}}(r,\theta)$ measuring the junction-ness are changed to
\begin{equation}
\label{eq:junction-ness-pairwise-modified}
\begin{split}
\tilde{\gamma}_{\bm{p}}(\bm{q}) = \omega_{\bm{q}}^i
\cdot \max(\left|\cos(\theta_{\bm{q}}^i-\alpha(\vec{\bm{pq}}))\right|-
\\ \left|\sin(\theta_{\bm{q}}^i-\alpha(\vec{\bm{pq}}))\right|,0)
\end{split}
\end{equation}
and
\begin{equation}
\label{eq:junction-ness-modified}
\tilde{\omega}_{\bm{p}}(r,\theta) = \sum_{\bm{q}\in S_{\bm{p}}(r,\theta)}
\tilde{\gamma}_{\bm{p}}(\bm{q}),
\end{equation}
where the index $i$ in Eq.~\eqref{eq:junction-ness-pairwise-modified} selects the orientation $\theta_{\bm{q}}^i$ closest to $\theta$.
Since, by the Central Limit Theorem (CLT), the random variable $\omega_{\bm{q}}^i$ follows a Gaussian distribution with mean $0$ and variance $1$, the distribution of $\tilde{\gamma}_{\bm{p}}(\bm{q})$ is
\begin{equation}
\label{eq:distribution-updated}
\tilde{p}(z) = \frac{1}{2}\delta_0(z) + \mathbf{1}_{z\neq 0}\frac{\sqrt{2}}{\sqrt{\pi^3}}\int_{0}^{1}\frac{1}{y}e^{-\frac{z^2}{2y^2}}\frac{1}{\sqrt{2-y^2}}dy.
\end{equation}
Then, the probability used in the null-hypothesis test of Sec.~\ref{sec:Null-Hypothesis-1} is updated to
\begin{equation}
\label{eq:probability-updated}
\mathbb{P}_{\bm{p}}(r,\theta):=\mathbb{P} \left(S_r \geq \frac{\partial \tilde{\omega}_{\bm{p}}}{\partial r}\right) = \int_{{\partial \tilde{\omega}_{\bm{p}}}/{\partial r}}^{\infty} d\left(\mathop{\star}_{i=1}^M \tilde{p}\right).
\end{equation}
\subsection{Junction Detection}
So far we have derived the \emph{a-contrario} approach for anisotropic scale estimation. For an input image, isotropic junctions and local junctions at each pixel are first detected by ACJ~\cite{XiaDG14} as initialization. The detected junctions are denoted by $\{\jmath_i\}_{i=1}^N$, and the local junctions at a fixed small scale (usually $r=3,5,7$) for every pixel by $\jmath_{\bm{p}} = \{\theta_{\bm{p}}^i,\omega_{\bm{p}}^i,{\rm NFA}_{\bm{p}}^i \}_{i=1}^K$, where $\bm{p}$ is the pixel coordinate.
We estimate the scale $r_i^j$ of the branch with orientation $\theta_i^j$ according to the number of false alarms
$$
{\rm NFA}(r_i^j,\bm{p}_i,\theta_i^j) = \sqrt{NM}\cdot\mathbb{P}_{\bm{p}}(r_i^j,\theta_i^j)\leq \varepsilon,
$$
where the probability $\mathbb{P}_{\bm{p}}(r,\theta)$ is the updated version in Eq.~\eqref{eq:probability-updated}. The scale $r_i^j$ is searched starting from $r_i$ and increased until the NFA exceeds $\varepsilon$.
The accuracy of the branch orientations detected by ACJ~\cite{XiaDG14} depends on the scale $r_i$, which is bounded by a predefined parameter and hence noisy. The scales of ASJs are sensitive to this noise, so the orientations should be refined. Since the branch with the most accurate orientation $\theta$ should have the maximum junction-ness at its scale $r_{\theta}$, we optimize the objective
\begin{equation}
\hat{\theta} =\arg\max_{\theta} \sum_{\bm{q}\in S_{\bm{p}}(r_{\theta},\theta)}\gamma_{\bm{p}}(\bm{q}),
\end{equation}
to refine the orientation, and then check whether the branch with orientation $\hat{\theta}$ and scale $r_{\hat{\theta}}$ is an $\varepsilon$-meaningful branch.
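The refinement is a one-dimensional search around the initial estimate; a sketch, where the search window and sampling density are our own choices:

```python
import numpy as np

def refine_orientation(strength, theta0, window=0.1, n=41):
    """Refine a branch orientation by maximising the accumulated
    junction-ness strength(theta) over a small window around the
    initial estimate theta0."""
    thetas = np.linspace(theta0 - window, theta0 + window, n)
    return float(thetas[np.argmax([strength(t) for t in thetas])])
```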
\section{Problem Statement}
\label{sec:problems}
\subsection{Junction Model}
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{figures/aacj_template}
\caption{Template of isotropic-scale junction (left) defined in ACJ~\cite{XiaDG14} and anisotropic-scale junction (ASJ) (right) proposed in our work.}
\label{fig:junction-model}
\end{figure}
Early research on junction detection usually focuses on the locations and branch orientations while ignoring the length, or scale, of each branch.
Even though junction locations and branch orientations are important for depicting the geometric structure of images, the lack of branch scales limits their usefulness for image matching. Motivated by this, we propose a new junction model that characterizes junctions better by considering the endpoint of each branch. Since the branches of a junction may have different lengths, we call our model the \emph{anisotropic-scale junction}; as a special case, a junction model with equal-length branches is called an \emph{isotropic-scale junction}.
\begin{definition}[Anisotropic-scale junction]
\label{def:aniso-junction}
An anisotropic-scale junction with $M$ branches starting at the same location $\bm{p}$ is denoted by
\begin{equation}
\jmath = \left\{\bm{p}, \left\{r_i,\theta_i \right\}_{i=1}^M\right\}
\end{equation}
where $r_i$ and $\theta_i$ are the scale and orientation of the $i$-th branch and $M$ is the number of branches.
\end{definition}
Fig.~\ref{fig:junction-model} illustrates the difference between isotropic-scale (left) and anisotropic-scale (right) junctions. The \emph{isotropic-scale junction} is in fact a special case of the anisotropic model in which all branches have the same length.
\subsection{Detecting Junction Locations and Isotropic Branches}
Since a junction is formed by several intersecting line segments, early work focused on the easier problem of localizing the intersection and identifying the normal angles of these line segments. Once the isotropic junction model is defined, this becomes a template matching problem. Based on this idea, Xia~\emph{et al.} measure the junction-ness of branches at a given scale $r$ and orientations $\theta_i$, and derive an \emph{a-contrario} approach to determine meaningful junctions in input images~\cite{XiaDG14}. The junction-ness at a given scale $r$ and orientation $\theta_i$ aggregates the normalized gradient over a neighborhood. Unlike for points, the neighborhood for a given scale $r$ and orientation $\theta_i$ is a sector; as shown on the left of Fig.~\ref{fig:junction-model}, the dark area at $\theta_1$ represents the sector neighborhood of the branch. The sector neighborhood for a location $\bm{p}$, scale $r$ and orientation $\theta$ is
\begin{equation}
\label{eq:sector-def}
\begin{split}
S_{\bm{p}}(r,\theta):= \left\{
\bm{q}\in\Omega; \right.
\bm{q}\neq\bm{p},
\left\|\vec{\bm{pq}}\right\|\leq r,
\\
\left.
d_{2\pi}(\alpha(\vec{\bm{pq}}),\theta)\leq \Delta(r)
\right\},
\end{split}
\end{equation}
where $\Delta(r) = \frac{\tau}{r}$ for a predefined parameter $\tau$, $\Omega$ is the image domain, $d_{2\pi}$ is the distance along the unit circle, defined as $d_{2\pi}(\alpha,\beta) = \min\left(|\alpha-\beta|,2\pi-|\alpha-\beta|\right)$, and $\alpha(\vec{\bm{pq}})\in[0,2\pi]$ is the angle of the vector $\vec{\bm{pq}}$.
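The membership test for $S_{\bm{p}}(r,\theta)$ transcribes directly into code (all names are ours):

```python
import math

def in_sector(p, q, r, theta, tau=1.0):
    """True iff q lies in the sector neighbourhood S_p(r, theta)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    if dx == 0 and dy == 0:               # q must differ from p
        return False
    if math.hypot(dx, dy) > r:            # within radius r of p
        return False
    alpha = math.atan2(dy, dx) % (2.0 * math.pi)
    d = abs(alpha - theta)
    d_2pi = min(d, 2.0 * math.pi - d)     # distance on the unit circle
    return d_2pi <= tau / r               # Delta(r) = tau / r
```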
Since a junction is formed by edges and corner points, the normal angles of the gradient should be consistent with the branch orientations. Following this idea, if most points $\bm{q}\in S_{\bm{p}}(r,\theta)$ have normal angles close to the orientation $\theta$, the corresponding scale $r$ and orientation $\theta$ are likely to form a meaningful branch of the junction. For a given sector $S_{\bm{p}}(r,\theta)$, the junction-ness is measured by
\begin{equation}
\label{eq:junction-ness-branch}
\omega_{\bm{p}}(r,\theta) = \sum_{\bm{q}\in S_{\bm{p}}(r,\theta)}\gamma_{\bm{p}}(\bm{q}),
\end{equation}
and $\gamma_{\bm{p}}(\bm{q})$ is the pairwise junction-ness with
\begin{equation}
\begin{split}
\label{eq:junction-ness-pairwise}
\gamma_{\bm{p}}(\bm{q}) = \left\|\nabla\tilde{I}(\bm{q})\right\|
\cdot \max(\left|\cos(\phi(\bm{q})-\alpha(\vec{\bm{pq}}))\right|
\\ -\left|\sin(\phi({\bm{q}})-\alpha(\vec{\bm{pq}}))\right|,0),
\end{split}
\end{equation}
where $\left\|\nabla\tilde{I}(\bm{q})\right\|$ is the norm of the normalized gradient at point $\bm{q}$, $\phi(\bm{q})$ is defined for pixel $\bm{q}$ as $\phi(\bm{q}) = (\arctan\frac{I_y(\bm{q})}{I_x(\bm{q})}+\pi/2)~modulo~(2\pi)$, and $I_x,I_y$ are the partial derivatives of the input image in the $x$ and $y$ directions.
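The pairwise term can be computed per pixel as follows; `grad_norm` and `phi` stand for the precomputed $\left\|\nabla\tilde{I}(\bm{q})\right\|$ and $\phi(\bm{q})$ (the function name is ours):

```python
import math

def pairwise_junctionness(grad_norm, phi, p, q):
    """gamma_p(q): gradient magnitude weighted by the alignment of the
    level-line angle phi with the direction of the vector pq; the
    max(..., 0) clips anti-aligned pixels to zero."""
    alpha = math.atan2(q[1] - p[1], q[0] - p[0]) % (2.0 * math.pi)
    d = phi - alpha
    return grad_norm * max(abs(math.cos(d)) - abs(math.sin(d)), 0.0)
```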
For an isotropic-scale junction with two or more branches, the junction-ness of the entire junction is the minimal junction-ness over its branches, as in \eqref{eq:junction-ness-entire}:
\begin{equation}
\label{eq:junction-ness-entire}
t(\jmath):= \min_{m=1,\ldots,M}\omega_{\bm{p}}(r,\theta_m),
\end{equation}
where $M$ is the total number of branches of the junction $\jmath$ and $m$ indexes the branches.
\subsection{Analysis for Estimating Anisotropic-scale Branches}
Although equation~\eqref{eq:junction-ness-entire} measures the junction-ness of a given junction, it does not involve any anisotropic branch scale. This definition of junction-ness only guarantees that each branch's scale $r_i$ is larger than $r$, and it cannot handle more sophisticated transformations such as affine or projective transforms. To overcome this problem, we define anisotropic-scale junctions with independent scales in Def.~\ref{def:aniso-junction}; the difference between the isotropic-scale and anisotropic-scale junction models can be observed in Fig.~\ref{fig:junction-model}.
Clearly, the junction-ness of the entire junction defined in Eq.~\eqref{eq:junction-ness-entire} cannot be used to recover the independent scales $r_i$. Fortunately, the isotropic-scale junctions detected by ACJ~\cite{XiaDG14} are meaningful, so the problem of estimating both the scale $r_i$ and orientation $\theta_i$ simplifies to estimating only the scale $r_i$ given the location $\bm{p}$ and orientation $\theta_i$. In other words, for each detected isotropic junction we need a robust method to estimate the length of the corresponding ray segment with orientation $\theta$.
One plausible way to model the unknown scale for a given location $\bm{p}$ and orientation $\theta$ is to simply rewrite the junction-ness of Eq.~\eqref{eq:junction-ness-branch} as $\omega_{\bm{p}}(r)$ with $\theta$ fixed. The \emph{a-contrario} approach of~\cite{XiaDG14} then seems applicable to check whether a scale $r$ is $\varepsilon$-meaningful.
The corresponding cumulative distribution function (CDF) used to obtain an $\varepsilon$-meaningful scale $r$ can be formulated as
\begin{equation}
\label{eq:acj-distribution-conv}
F(t;J(r,\theta)) = \mathbb{P}\{\omega_{\bm{p}}\geq t\} = \int_{t}^{+\infty}
d\left(\mathop{\star}_{j=1}^{J(r,\theta)} p\right)
\end{equation}
where $p$ is the distribution of the pairwise junction-ness $\gamma_{\bm{p}}(\bm{q})$, given by
\begin{equation}
\label{eq:acj-distribution}
p(z) = \frac{1}{2}(\delta_0(z) + \frac{2}{\sqrt{\pi}}e^{-\frac{z^2}{4}} {\rm erfc}(\frac{z}{2}))\mathbf{1}_{z\geq 0}.
\end{equation}
$J(r,\theta)$ is the number of pixels in the corresponding sector neighborhood, and the operator $\mathop{\star}_{j=1}^{J(r,\theta)}$ convolves the probability density function (PDF) $p$ with itself $J(r,\theta)$ times, which yields the null distribution of $\omega_{\bm{p}}(r,\theta)$.
The $\varepsilon$-meaningful scale for a given orientation and location can be determined by the inequality
\begin{equation}
\label{eq:NFA-xia}
{\rm NFA}(\jmath) := \#\mathcal{J}(1)\cdot F(t;J(r,\theta))\leq \varepsilon,
\end{equation}
where $\#\mathcal{J}(1)$ is the number of tests for junctions with $1$ branch.
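As a sanity check on Eqs.~\eqref{eq:acj-distribution-conv}--\eqref{eq:NFA-xia}, the tail probability $F(t;J(r,\theta))$ can be estimated numerically. The sketch below is our own illustrative, stdlib-only Python (the function names are ours, not from~\cite{XiaDG14}); it exploits the fact that the continuous part of $p(z)$ has the closed-form CDF $1-{\rm erfc}(z/2)^2$, so the null variable can be sampled by inverting ${\rm erfc}$ with bisection and summing $J(r,\theta)$ i.i.d. draws.

```python
import math
import random

def sample_null_junctionness(rng):
    """Draw one per-pixel junction-ness value under the null model.

    Its PDF is p(z) = 1/2 * delta_0(z)
                    + 1/2 * (2/sqrt(pi)) * exp(-z^2/4) * erfc(z/2),
    and the continuous part has the closed-form CDF 1 - erfc(z/2)^2,
    which we invert by bisection (math.erfc is monotonically decreasing).
    """
    if rng.random() < 0.5:                    # point mass at zero
        return 0.0
    target = math.sqrt(1.0 - rng.random())    # solve erfc(z/2) = target
    lo, hi = 0.0, 60.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / 2.0) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def tail_prob(t, n_pixels, n_samples=3000, seed=7):
    """Monte-Carlo estimate of F(t; J) = P{omega >= t}, where omega is the
    sum of n_pixels i.i.d. null junction-ness values (the J-fold convolution)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        omega = sum(sample_null_junctionness(rng) for _ in range(n_pixels))
        if omega >= t:
            hits += 1
    return hits / n_samples

def nfa(t, n_pixels, n_tests):
    """NFA(t) = #tests * F(t; J); a scale is epsilon-meaningful if NFA <= eps."""
    return n_tests * tail_prob(t, n_pixels)
```

For $J=20$, the estimated $F$ falls from $1$ at $t=0$ to essentially $0$ well before $t=J$, in line with the degeneracy discussed below.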
However, the NFA defined in Eq.~\eqref{eq:NFA-xia} has to cope with the fact that indoor images contain junctions whose branches have extremely large scales, which renders the above inequality ineffective. To illustrate this problem, we studied the relationship between the number of convolutions $J(r,\theta)$ and the minimal junction-ness for which the probability $F(t;J(r,\theta))$ vanishes. As shown in Fig.~\ref{fig:plot-min-zero}, once the value of $\omega_{\bm{p}}(r,\theta)$ exceeds $\frac{J(r,\theta)}{2}$, the probability $F(t;J(r,\theta))$ is constantly equal to $0$, which degenerates the inequality to $0\leq \varepsilon$ and makes it hold trivially. In fact, the pairwise junction-ness defined in Eq.~\eqref{eq:junction-ness-pairwise} can reach $1$, in which case $\omega_{\bm{p}}(r,\theta)$ equals $J(r,\theta)$. Therefore, the junction-ness of~\cite{XiaDG14} is not suitable for modelling the unknown scale.
\begin{figure}
\centering
\includegraphics[width = 0.4\linewidth]{figures/figure-plot-min-zero}
\caption{The relationship between the number of convolutions $J(r,\theta)$ and the minimal junction-ness value for which $F(t;J(r,\theta)) = 0$.}
\label{fig:plot-min-zero}
\end{figure}
\section{Related works}
\label{sec:related-work}
In this section, we briefly review the existing approaches for junction detection and matching as well as geometrical structure matching for indoor images.
\subsection{Junction detection}
Detecting junction structures in images has been studied for decades~\cite{Forstner86,HarrisS88,MikolajczykS04,MaireAFM08,ForstnerDS09,PuspokiU15,PuspokiUVU16}.
In the early stages, junctions were studied as corner points~\cite{Forstner86,HarrisS88}.
For the sake of recognition, the scales of junctions and other key-points have also been studied~\cite{Lowe04,MikolajczykS04,XiaDG14}. These approaches estimate the scale around junction locations by using scale-space theories~\cite{AlvarezM97,Lowe04,MikolajczykS02,MikolajczykS04} to handle viewpoint changes across different images.
Since these approaches determine the scale of interest points in a very local area, their precision and discriminability degrade quickly. Besides, these methods mainly focus on the localization and scale of corner points while ignoring the differences between different types of junctions.
To overcome these shortcomings, the ACJ detector~\cite{XiaDG14} was proposed to detect and characterize junctions in a non-linear scale space. In that work, an \emph{a-contrario} approach determines the location and branches of junctions with interpretable isotropic scales, characterizing junction locations and the ray segments forming the branches explicitly.
The scale of a detected junction corresponds to the optimal size at which one can observe the junction in the image.
Similar in spirit to junction detection, the elegant edge-based region (EBR) detector~\cite{TuytelaarsG04} finds affine-invariant regions by estimating the relative speed of two points that move away from a corner in both directions along curved edges. It can be regarded as a kind of junction detector for curve-dominated images, but the straight edges that are common in indoor scenes cannot be handled in this way.
Although the above-mentioned approaches can extract junctions, they do not exploit the geometric representation sufficiently. The scales estimated by these methods are local and insufficient for characterizing indoor scenes.
\subsection{Junction matching}
Junction matching has received attention since the early years and has shown promising matching accuracy~\cite{ShenP00,VincentL04}.
In \cite{ShenP00}, a model for estimating the endpoints of junction branches is proposed, which is close to our work of estimating anisotropic scales for each branch.
Differently, their approach~\cite{ShenP00} requires a roughly estimated fundamental matrix, while our proposed method estimates the anisotropic scales of each branch directly, without a fundamental matrix. When the fundamental matrix is known, the local homography between a pair of junctions can be estimated to produce more accurate results while simultaneously refining the epipolar geometry~\cite{VincentL04}. These results are closely related to the recent hierarchical line-segment matching approach LJL~\cite{LiYLLZ16}. In that work, detected line segments are first used to generate junctions at virtual intersections. These junctions are then matched initially as key-points. Finally, the epipolar geometry induced by the initial matches is used to estimate line-segment correspondences with local junctions. The matching accuracy in fact relies on the descriptors of the virtual intersections.
Although these matching results are promising, the epipolar geometry still has to be estimated by other means.
\subsection{Indoor image matching with geometric structure}
Most indoor scenes can be described using simple geometric elements such as points and line segments. As a combination of points and lines, junctions are also a useful geometric structure for indoor scenes.
Many approaches, such as the Canny edge detector~\cite{Ruff87} and the line segment detector (LSD)~\cite{GioiJMR10,GioiJMR12}, have been proposed to extract line segments. LSD, which produces more complete line segments than Canny edges without any parameter tuning, has been applied to many tasks such as line-segment matching~\cite{LiYLLZ16} and 3D reconstruction~\cite{RamalingamASLP15}. Compared with key-points, line segments produce a more complete result that contains the primal sketch of the scene.
Most algorithms for line-segment matching rely on key-point correspondences. More precisely, key-points in an input image pair are first detected using SIFT~\cite{Lowe04} or other detectors, and the epipolar geometry between the image pair is estimated using RANSAC~\cite{FischlerB81} or its variants. Based on the fundamental matrix $F$ induced by key-point matching, approaches such as line-point-invariant (LPI)~\cite{FanWH12B} and line-junction-line (LJL)~\cite{LiYLLZ16} can match line segments correctly. LPI exploits an invariant relation between line segments and matched key-points under viewpoint changes. The LJL method~\cite{LiYLLZ16} matches image pairs in multiple stages: first, detected line segments are intersected with an appropriate threshold to produce junctions, and these intersections are matched in the same way as key-points; then, local homographies are estimated for these junctions using the fundamental matrices obtained from the key-point matches.
Although these approaches perform well in many cases, their results favor matching lines rather than line segments: lines are matched while the endpoints of the line segments are not matched very well. Apart from the fact that the estimated epipolar geometry is sometimes erroneous, an important reason for the failure of line-segment matching is that line segment detectors cannot guarantee that the detected segments are consistent across varying imaging conditions. In many situations, a line segment $l_A$ detected in image $I_A$ might be decomposed into two or more collinear line segments $l_B^1,\ldots,l_B^k$ in another image $I_B$. In this case, the result of line matching can be regarded as correct if the line supporting $l_A$ corresponds to $l_B^1,\ldots,l_B^k$. From the viewpoint of line-segment matching, however, there exists no correct corresponding segment for $l_A$ in image $I_B$. On the other hand, existing line-segment matchers rely on the results of key-point matching; once the key-point matching fails or is inaccurate, the induced line-segment matching is affected to some extent.
\section{Introduction}
Direct collapse black holes (DCBH) have gathered much attention recently \citep{Dijkstra2014a,Ferrara14a,Agarwal12,Agarwal14,Habouzit16a} as a plausible solution to the problem of forming billion solar mass black holes very early in cosmic history as is required to explain the existence of very luminous quasars at redshifts $z>6$.
Pristine gas in an atomic cooling halo exposed to a critical level of Lyman--Werner (LW) radiation can rid itself of molecular hydrogen (cooling threshold $\sim 200$ K), thereby collapsing isothermally in the presence of atomic hydrogen (cooling threshold $\sim 8000$ K). This leads to a Jeans mass threshold of $10^6 ~\mathrm{M}_{\odot}$ at $n\sim 10^3 \rm \ cm^{-3}$, thereby allowing the entire gas mass in the halo \footnote{An atomic cooling halo, i.e. $\rm T_{vir}=10^4 \ K$ corresponds to a $\rm M_{DM} \approx 10^7 ~\mathrm{M}_{\odot}$ at $z\approx10$. If we assume that the baryon fraction in this halo is the same as the cosmological mean value, i.e. $f_b \approx 0.16$, then the baryonic mass of such a halo will be at least $10^6 ~\mathrm{M}_{\odot}$} to undergo runaway collapse eventually forming a $10^{4-5} ~\mathrm{M}_{\odot}$ black hole in one go \citep{Omukai:2001p128}. The collapse must withstand fragmentation into Population III (Pop III) stars, which requires the gas to get rid of its angular momentum via bars--within--bars instabilities \citep{Begelman:2006p3700}, low--spin disks \citep[e.g.][]{Bromm:2003p22,Koushiappas:2004p871,Regan08,Lodato:2006p375} or high inflow rates in turbulent medium \citep{Volonteri:2005p793,Latif:2013p3629,Schleicher:2013p3661,Borm13}.
In order for this mechanism to work, initially there must be a LW radiation field strong enough to delay Pop III star formation in a minihalo, $\rm 2000<T_{vir}\le10^4\ K$, till it reaches the atomic cooling limit of $\rm T_{vir} \ge 10^4\ K$ \citep{Machacek:2001p150,OShea:2008p41} . At this point, the flux of LW radiation illuminating the halo from nearby external stellar source(s) must be higher than a critical value $\rm{J}_{\rm crit}$ (conventionally written in units of $10^{-21} \rm\ erg/s/cm^2/sr/Hz$) to facilitate isothermal collapse of the pristine gas at 8000 K into a DCBH {\cite[e.g. recent simulations by ][]{2014ApJ...795..137R,2014MNRAS.445L.109I,2015MNRAS.446.2380B}}. Many previous studies of DCBH formation have adopted highly simplified prescriptions for the spectrum of this external radiation field, approximating the spectrum of a source dominated by Pop III stars as a $\rm T = 10^5$~K black body, and of a source dominated by Population II (Pop II) stars as a $\rm T=10^4$ K black body \citep{Omukai:2001p128,Shang:2010p33,WolcottGreen:2012p3854}. However, recent studies have emphasised the need for using more realistic spectral energy distributions (SED) for these sources as the value of $\rm{J}_{\rm crit}$ depends on the shape of the irradiating source's SED \citep[, A16 hereafter]{Sugimura:2014p3946, Agarwal15a, Agarwal15b}. These studies employed single stellar populations to represent the SEDs of Pop II stars, generating them using publicly available single stellar synthesis codes such as {\sc{Starburst99}} \citep{Leitherer:1999p112}, {\sc{Yggdrasil}} \citep{Zackrisson11} and \citet{Bruzual:2003p3256} model.
However, in reality it is likely that a significant number of the stars will be part of binary systems. Stellar populations with significant binary fractions have higher hydrogen ionising photon yields than single stellar populations \citep[e.g.][]{2016MNRAS.456..485S,2016MNRAS.459.3614M}, and so it is plausible that accounting for their existence will lead to significant differences in the value of J$_{\rm crit}$ that we derive.
\section{Methodology}
\begin{figure}
\includegraphics[width=0.7\columnwidth,angle=90,trim={1.75cm 2cm 2.25cm 2cm},clip]{plot1.eps}
\caption{The solid red curve is the criterion for direct collapse derived in A16, given by Eq.~\ref{eq.ratecurve}. The grey shaded region shows the range of $\rm k_{de}$ and $\rm k_{di}$ derived from SB99 stellar populations, while the blue region shows the range derived from BPASS for the stellar populations described in Tab.~\ref{tab.stellarmodels}.}
\label{fig.ratecurve}
\end{figure}
We apply the framework described in A16 to SEDs generated with the stellar population synthesis code `Binary Population and Spectral Synthesis' \citep{2016MNRAS.456..485S, 2016arXiv160203790E} in its second version, BPASSv2. This is done to assess the impact of binaries on the critical LW radiation field strength required to suppress H$_{2}$ formation and enable direct collapse black hole formation.
The unique feature of the BPASSv2 models is the inclusion of massive binary star evolution which, in the context of this work, has the effect of boosting the LW photon flux at older stellar ages (see Section 3).
We have been motivated to consider the effects of binary star evolution by observations of local \mbox{H\,{\sc ii}} \ regions which have indicated that $\gtrsim 70 \%$ of massive stars undergo a binary interaction in their lifetimes \citep[e.g.][]{Sana12}.
Furthermore, it has been reported recently that the BPASSv2 models are better able to account for (i) the observed shape of the FUV continuum and (ii) UV + optical emission line ratios of star forming galaxies at $z \simeq 2 - 3$ \citep{2016ApJ...826..159S, 2016arXiv160802587S}, as well as the properties of massive star clusters in local galaxies \citep{2016MNRAS.457.4296W} and Pop III stars \citep{clark11sc,Greif:2012p2733,2013MNRAS.433.1094S}.
{Given this context, it is useful to know how the presence of massive binary stars in a stellar population will affect direct collapse black hole formation.
Briefly, in the BPASSv2 models, the main consequence of close binary interactions is the removal of the hydrogen envelope in primary stars, part of which accretes onto the companion secondary star resulting in its rejuvenation \citep[e.g.][]{deMink13,Podsiadlowski92}. The resulting effect on a stellar population containing a significant binary fraction is more hot-helium and Wolf-Rayet stars in the primary population, and an effective increase in the main sequence lifetimes of secondary stars. The mass transfer is also accompanied by angular momentum transfer, which causes stars to spin-up and results in a rotational mixing of layers allowing hydrogen to burn more efficiently; this effect, known as quasi-homogeneous evolution (QHE), is particularly strong at low metallicities \citep[see][]{2016arXiv160203790E,2016MNRAS.456..485S}.
The most relevant consequence of these differences on the DCBH formation scenario is that compared to single star models, the BPASSv2 binary models extend the time period over which a stellar population can emit UV photons in the LW band.}
The SED grid explored in this study is described in Tab.~\ref{tab.stellarmodels}. It is compared to the SB99 case, which we have discussed in detail in the Appendix of A16.
For the BPASSv2 models we have assumed the instantaneous burst models with ages ranging from 10$^{6-9}$ yr, a metallicity of $0.05 Z_{\odot}$ {and a 70\% binary fraction}.
In order to understand the effect of these SEDs on DCBH formation, we make the following assumptions:
\begin{table}
\caption{Summary of the stellar populations considered in this study, BPASSv2 and SB99.}
\begin{threeparttable}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}l | cccc}
\hline
Instantaneous & Stellar Mass & Age & Metallicity & IMF $^{\rm [b]}$\\
Burst & \tiny{($~\mathrm{M}_{\odot}$)} & \tiny{(yr)} & \tiny{($~\mathrm{Z}_{\odot}$)} \\
\\ [-1.5ex] \hline \\ [-1.5ex]
BPASSv2 &$10^{5-10}$ &$10^{6-9}$~yr &0.05 & Kroupa\\
SB99 &$10^{5-10}$ &$10^{6-9}$~yr &0.02 & Kroupa\\ [1.5ex] \hline
\end{tabular*}
\label{tab.stellarmodels}
\begin{tablenotes}
\item[b] {IMF of the form $\Psi(M_*) = M_*^{-\alpha} $ where $\alpha \sim 1.3$ for $0.1\leq M_* < 0.5$ and $\alpha \sim 2.35$ for $0.5\leq M_*\leq 100\ ~\mathrm{M}_{\odot}$}
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{enumerate}
\item The SEDs represent a galaxy of a certain age and stellar mass in a halo.
\item The DCBH formation region (in a pristine atomic cooling halo) is external to the galaxy, at an assumed separation of $5, 12, 20$ physical kpc \citep{Agarwal14}.
\item We parametrise the critical LW radiation requirement for DCBH formation in terms of the rate of photodissociation of molecular hydrogen, $\rm k_{di}\ \rm (s^{-1})$, and the rate of photodetachment of H$^-$, $\rm k_{de}\ \rm (s^{-1})$, where
\begin{eqnarray}
\rm k_{de} = \kappa_{de}\alpha J_{LW} \\
\rm k_{di} = \kappa_{di}\beta J_{LW}
\label{eq.destroy}
\end{eqnarray}
Here $\alpha$ and $\beta$ are rate parameters that depend on the shape of the SED (\citealt{Omukai:2001p128, Agarwal15a}; A16), {$\kappa_{\rm de} = 10^{-10} \: {\rm s^{-1}}$ and $\kappa_{\rm di} = 10^{-12} \: {\rm s^{-1}}$ are normalisation constants \citep{Agarwal15a}}, and J$_{\rm LW}$ is the mean specific intensity of the Lyman-Werner radiation field at 13.6~eV. {The latter depends on the choice of stellar population and the assumed separation between the galaxy and the atomic cooling halo.}\\
\item In A16 we showed that in our simple one-zone model of the thermal evolution of gas in the atomic cooling halo, DCBH formation occurs when the H$_{2}$ photodissociation rate exceeds a value given approximately by
\begin{equation}
\rm k_{di} \geq 10^{A\exp\left(-\frac{z^2}{2}\right) + D}\ (\rm s^{-1}),
\label{eq.ratecurve}
\end{equation}
where $z=\frac{\log_{10}(\rm k_{de}) - B}{C}$ and $A = -3.864,\ B = -4.763,\ C = 0.773$, and $D = -8.154$, for $\rm k_{de}< 10^{-5}\rm \ s^{-1}$.
\end{enumerate}
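Putting assumptions (iii) and (iv) together, the direct-collapse test and the resulting $\rm{J}_{\rm crit}$ can be sketched in a few lines of Python. This is our own illustrative implementation, not the code used for the figures; the constants are the fit values quoted above, and the rate parameters $\alpha$, $\beta$ passed in the example are purely illustrative.

```python
import math

# Fit constants of the critical curve (Eq. above), valid for k_de < 1e-5 s^-1,
# plus the normalisation constants kappa_de and kappa_di (s^-1).
A, B, C, D = -3.864, -4.763, 0.773, -8.154
KAPPA_DE, KAPPA_DI = 1e-10, 1e-12

def kdi_threshold(k_de):
    """Minimum H2 photodissociation rate (s^-1) that permits direct collapse."""
    if k_de >= 1e-5:
        raise ValueError("fit is only valid for k_de < 1e-5 s^-1")
    z = (math.log10(k_de) - B) / C
    return 10.0 ** (A * math.exp(-0.5 * z * z) + D)

def allows_dcbh(alpha, beta, j_lw):
    """Apply the critical-curve test to an SED with rate parameters (alpha, beta)."""
    k_de = KAPPA_DE * alpha * j_lw
    k_di = KAPPA_DI * beta * j_lw
    return k_di >= kdi_threshold(k_de)

def j_crit(alpha, beta, j_lo=1e-3, j_hi=1e5):
    """Bisect (in log J) for the critical LW intensity of a given SED shape.

    This works because k_di grows linearly with J while the threshold on the
    right-hand side decreases as k_de grows, so the test flips exactly once.
    """
    if allows_dcbh(alpha, beta, j_lo) or not allows_dcbh(alpha, beta, j_hi):
        raise ValueError("J_crit is not bracketed by [j_lo, j_hi]")
    for _ in range(200):
        mid = math.sqrt(j_lo * j_hi)   # geometric mean = bisection in log J
        if allows_dcbh(alpha, beta, mid):
            j_hi = mid
        else:
            j_lo = mid
    return j_hi
```

For purely illustrative rate parameters, e.g. $\alpha=0.5$ and $\beta=1.0$, the bisection returns a $\rm{J}_{\rm crit}$ of a few thousand, of the same order as the values discussed in Sec.~3.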
{Recently, \cite{Wolcott2017} have also advocated the usage of such a critical curve, albeit with minor differences from A16 arising from their computational setup.}
{By computing $k_{\rm di}$ and $k_{\rm de}$ for each different SED in our two grids of models and each different separation, we can therefore determine which combinations result in DCBH formation in the target atomic cooling halo and which do not. As an example, we show in Figure~\ref{fig.ratecurve} the full range of values of k$_{\rm de}$ and k$_{\rm di}$ we obtain with the SB99 SED grid (gray shaded region) and the BPASS SED grid (blue shaded region) for a halo-galaxy separation of 5~kpc. We see that for many combinations of stellar mass and stellar age, the LW flux reaching the atomic cooling halo is insufficient to enable DCBH formation, but that there are combinations of parameters that do yield a sufficiently large k$_{\rm di}$ (Eq.~\ref{eq.ratecurve}).}
{The inclusion of binaries could have important consequences on the LW escape fraction as a recent study by \citet{2017arXiv170107031S} demonstrates its sensitivity to SED dependent quantities such as ionising radiation, and subsequently, self--shielding of H$_2$ by H. The assumption of a non-zero binary fraction also has an important consequence for the X-ray
SED of the considered stellar populations. If the binaries in these systems have a similar
distribution of separations to those in local star-forming galaxies, then at least some will
eventually become high-mass X-ray binaries. High-mass X-ray binaries are the primary source
of X-rays in star-forming systems that lack an active galactic nucleus, and there is a
well-established correlation between the star formation rate (SFR) and the X-ray luminosity ($L_{\rm X}$)
of these systems \citep[see e.g.][]{gb03,mineo14}. If the high-redshift galaxies responsible for producing
the LW photons share this same correlation, then this can have important implications for the
resulting value of $J_{\rm crit}$ \citep{io11,it15,Latif15,regan16,gl16}. We note however
that the question of whether or not high redshift galaxies do show the same correlation between
SFR and $L_{\rm X}$ remains unanswered, and the size of their impact on $J_{\rm crit}$ remains
uncertain. In view of this, we do not account for the presence of X-rays in our current calculations.
In Sec. 3 we briefly discuss the possible impact of X-rays on our results.}
\section{Results}
We first plot the LW {output} and the rate parameters from the BPASSv2 and SB99 models in Fig.~\ref{fig.ratescompare}. As expected, the LW output of BPASSv2 (top panel) is higher than that of SB99 at ages $> 10 \rm \ Myr$. Considering this fact alone, one would expect the $\rm{J}_{\rm crit}$ from binary populations to be lower than that from single stellar populations. However, the rate parameters $\beta$ (middle panel) and $\alpha$ (bottom panel) for BPASSv2 are consistently lower than the ones produced by SB99 at all ages. This hints at a more complicated interplay between the rates and the LW output, calling for a more in-depth analysis of $\rm{J}_{\rm crit}$. {In order to facilitate comparison with previous studies, we note here that a BPASSv2 galaxy with $\rm M_{*}=10^6 ~\mathrm{M}_{\odot}$ and age $= 10$ Myr has $\alpha / \beta \sim 0.5$ (see Fig.~\ref{fig.ratescompare}), which corresponds to a black body temperature of $3\times10^4\rm \ K$ \citep{Sugimura:2014p3946}.}
{In Fig.~\ref{fig.bpass}, we compare the results of our analysis for the SB99 SEDs (left) and the BPASSv2 SEDs (right). We show the region of the M$_{\star}$--age parameter space in which DCBH formation is permitted (grey), where the labelled contours indicate different values of $\rm{J}_{\rm LW}$. The figure is split into top, middle and bottom panels corresponding to separations of 5, 12 and 20 kpc.}
We find that the BPASS models produce systematically higher values of $\rm{J}_{\rm LW}$ for any given combination of M$_*$ and age, particularly when $\rm M_{*}$ and the age are both large. For example, a galaxy with an age t$_{*} = 10^{7.5}$~yr, a stellar mass M$_{\star}\sim 10^{9.5}~\mathrm{M}_{\odot}$ and a separation of 5~kpc from the atomic cooling halo of interest produces $\rm{J}_{\rm LW} \sim 700$ with the BPASSv2 model, but only $\rm{J}_{\rm LW} \sim 100$ with the SB99 model. This is because binary stellar populations yield more LW flux per stellar baryon, especially at ages $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10$ Myr (Fig.~\ref{fig.ratescompare}). {Therefore, particularly at late times, one would expect them to be more effective in producing a higher $\rm{J}_{\rm LW}$ value at a given distance than single stellar populations. From this one would naturally infer that DCBHs can form more easily in the vicinity of binary populations than in the vicinity of single stellar populations. However, we find that the $\rm{J}_{\rm crit}$ required for DCBH formation is higher for binaries than when we assume that all stars are single.} This result is actually just a reflection of the fact that the value of $\rm{J}_{\rm crit}$ required for DCBH formation depends on the whole of the SED. Although BPASSv2 has a higher LW output, SB99 SEDs produce more lower-energy photons and are thus much more effective at destroying H$^{-}$, as can be seen in Fig.~\ref{fig.ratescompare}, where the values of $\alpha$ and $\beta$ plateau for BPASSv2 but steadily rise for SB99 at stellar ages $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}$ 10 Myr. Consequently, with the SB99 SEDs, we require fewer LW photons in order to successfully suppress H$_{2}$ formation, and hence obtain a smaller $J_{\rm crit}$.
\begin{figure}
\includegraphics[width=0.75\columnwidth,angle=90,trim={-1cm 0cm 0cm -1.5cm},clip]{compare_J_rates.eps}
\caption{The J$_{\rm LW}$ computed at 1 kpc for $\rm M_{*}= 10^6 ~\mathrm{M}_{\odot}$ (top), and the rate parameter for
H$_2$ photodissociation $\beta$ (middle) and H$^-$ photodetachment $\alpha$ (bottom) as a function of time using BPASSv2 and SB99.}
\label{fig.ratescompare}
\end{figure}
\begin{figure*}
\includegraphics[width=0.8\columnwidth,angle=90,trim={0.25cm 1cm 1cm 1cm},clip]{ba_P5_grid_smass_contour_secondmodel.eps}
\includegraphics[width=0.8\columnwidth,angle=90,trim={0.25cm 1cm 1cm 1cm},clip]{ba_P5_grid_smass_contour_binaries.eps}
\caption{Stellar populations that allow for DCBH formation, from SB99 shown on the left (taken from the Appendix of A16) and BPASSv2 on the right. Grey regions bound the $\rm M_*-\rm Age$ parameter space for which the stellar populations produce an H$_2$ photodissociation rate at the location of the atomic cooling halo that satisfies Eq.~\ref{eq.ratecurve}. The top, middle and bottom panels are computed for an assumed separation of 5, 12 and 20 kpc between the atomic cooling halo and the irradiating source. The contours of $\rm{J}_{\rm LW}$ at the respective distances are over-plotted in each of the panels.}
\label{fig.bpass}
\end{figure*}
{Further confirmation of this finding comes from comparing the distributions of $\rm{J}_{\rm crit}$ (for all three separations) in Fig.~\ref{fig.jcrit_compare}. For the BPASSv2 SEDs, we find values in the range $\rm 100\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} J_{crit} \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 3000$, depending on the age of the stellar population, whereas for SB99, the same IMF yields a much wider distribution with $\rm 0.1\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} J_{crit} \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 3000$.
Although the curves are similar at ages $<$ 10 Myr, at later times the $\rm{J}_{\rm crit}$ from binary populations is higher than the one required from single stellar populations. For example, at an age of 50 Myr, $\rm{J}_{\rm crit} \sim 100$ for the BPASSv2 SEDs, while it is only $\sim 10$ when derived using the SB99 SEDs. In fact, we see from Fig.~\ref{fig.bpass} that a galaxy with $\rm M_* \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^9~\mathrm{M}_{\odot}$ and the same age (50 Myr) can easily have $\rm{J}_{\rm LW}> \rm{J}_{\rm crit}$ when it is described by an SB99 SED, while for BPASSv2 SEDs $\rm{J}_{\rm LW}<\rm{J}_{\rm crit}$ at this age for all masses.}
These findings lead us to conclude
\begin{enumerate}
\item {\rm J$_{crit}$} does not solely depend on the LW photon yield, but on the 0.76 eV photon yield as well
\item The {\textit{distribution}} of {$\rm{J}_{\rm crit}$} depends on whether binaries are included in a galaxy's SED.
For a stellar population of a given age and mass, the $\rm{J}_{\rm crit}$ is higher when binaries are considered.
\item The {\textit{distribution}} of {$\rm{J}_{\rm crit}$} is critically altered by the inclusion of older stellar populations. Our analysis shows that $\rm{J}_{\rm crit}$ originating from older single stellar populations ($> 10$ Myr) is much lower than the one from similarly aged binary stellar populations
\item Formation of DCBHs must be understood in terms of a critical region in the k$_{\rm de}$--k$_{\rm di}$ parameter space (Eq.~\ref{eq.ratecurve})
\end{enumerate}
{We note that point (i) is not a new result: it was already remarked upon by \citet{Sugimura:2014p3946} and in A16. However, our results here do help to} emphasize the dependence of J$_{\rm crit}$ on the shape of the SED, which {in turn} depends on physical parameters such as the inclusion of binaries and older stellar populations.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=90,trim={0cm 0cm 1cm 1cm},clip]{jcrit_compare.eps}
\caption{Comparison of $\rm{J}_{\rm crit}$ for BPASSv2 (solid) and SB99 (dotted), for all separations. These are the age distributions of $\rm{J}_{\rm crit}$, over the entire separation range, shown in Fig.~\ref{fig.bpass}.}
\label{fig.jcrit_compare}
\end{figure}
High-redshift star-forming galaxies may also be bright at
X-ray wavelengths leading to X-ray photoionization of the gas that produces additional free electrons.
This could lead to enhanced H$_{2}$ formation and hence X-rays can partially
counteract the negative feedback due to LW photons and the softer optical and near-IR photons
that destroy H$^{-}$ \citep[see e.g.][]{hrl96,gb03}. The impact of X-rays on $J_{\rm crit}$ has
been studied by several different authors \citep{io11,it15,Latif15,gl16}, but disagreements
remain in the size of the overall effect. \citet{it15} find that if the incident LW spectrum is
approximately described by a $T = 3 \times 10^{4}$~K black-body spectrum, then the value of
$J_{\rm crit}$ in the presence of X-rays is given by
\begin{equation}
J_{\rm crit} = J_{\rm crit, 0} \left(1 + \frac{J_{\rm X, 21}}{2.2 \times 10^{-3}} \right)^{0.56},
\end{equation}
where $J_{\rm crit, 0}$ is the value of $J_{\rm crit}$ in the absence of X-rays and $J_{\rm X, 21}$
is the strength of the X-ray background in units of $10^{-21} \: {\rm erg \, s^{-1} \, cm^{-2} \,
Hz^{-1} \, sr^{-1}}$, measured at an energy of 1~keV. They also argue that $J_{\rm X, 21} \simeq
4.4 \times 10^{-6} J_{\rm LW}$. Looking at the distribution of
$J_{\rm crit}$ in our models shown in Fig. 4, we see that the largest values obtained are around $J_{\rm crit}
\simeq 4000$. If this value is actually boosted by X-rays according to the \citet{it15}
prescription, this would change the value to $J_{\rm crit, X} \simeq 14000$.
However, we note that other recent studies report a smaller effect. For example, \citet{gl16} finds an enhancement in $J_{\rm crit}$ that is
roughly a factor of two smaller than in \citet{it15} at any given $J_{\rm X, 21}$, due to differences in the assumption
made regarding the effectiveness of X-ray shielding
in the target halo. In that case, accounting for X-rays would increase our largest values of
$J_{\rm crit}$ by less than a factor of two, and hence would not significantly change the
qualitative results of our study. \citet{Latif15} find an even smaller effect, even with very
large values of $J_{\rm X, 21}$ barely affecting the values of $J_{\rm crit}$ in their three--dimensional runs. In view of this uncertainty in the overall impact of X-rays, we neglect
them in our current study, although we hope to return to this point in future work.
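The back-of-the-envelope boost quoted above is easy to reproduce. The helper below (our own naming) implements only the \citet{it15} scaling, together with the $J_{\rm X, 21} \simeq 4.4 \times 10^{-6} J_{\rm LW}$ tie-in, taking $J_{\rm LW} = J_{\rm crit,0}$ as in the text:

```python
def j_crit_with_xrays(j_crit0, j_x21=None):
    """Boost J_crit by an X-ray background using the scaling quoted above.

    If j_x21 is not supplied, it is tied to the LW field through
    J_X,21 ~= 4.4e-6 * J_LW, taking J_LW = j_crit0 as in the text.
    """
    if j_x21 is None:
        j_x21 = 4.4e-6 * j_crit0
    return j_crit0 * (1.0 + j_x21 / 2.2e-3) ** 0.56

# The largest value in our models, J_crit ~ 4000, is boosted to ~1.4e4,
# i.e. the J_crit,X ~ 14000 quoted above.
```

Halving the sensitivity to X-rays, as suggested by the \citet{gl16} results, would correspondingly shrink the bracketed term and leave the boost below a factor of two.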
{Recently, \citet{Chon16a} studied the effects of tidal disruption of the DCBH host by the neighbouring galaxy responsible for the LW radiation field. They found that unless the DCBH host halo assembles via major mergers, it is prone to tidal disruption by the neighbouring galaxy. Thus, if one interprets a higher J$_{\rm crit}$ from binaries as an indication of a high stellar mass, then it is likely that tidal disruption events could render the neighbouring atomic cooling halo unsuitable for DCBH formation.}
\section{Summary}
We study the LW flux requirement for DCBH formation from galaxies whose stellar populations include a significant binary fraction. We show that despite their high LW output, binary populations are in fact less efficient at triggering DCBH formation in their vicinity than single stellar populations, contrary to what one would naively expect. This can be attributed to the SEDs of binary populations being systematically bluer than those of populations composed only of single stars, meaning that their light is much less effective at causing H$^{-}$ photodetachment. The lower H$^{-}$ photodetachment rates mean that higher H$_{2}$ photodissociation rates are needed in order to bring about DCBH formation, and so the required values of $\rm{J}_{\rm crit}$ are larger.
Consistent with A16, we find a distribution in the values of $\rm{J}_{\rm crit}$ produced by binary populations, albeit a narrower one ($\rm{J}_{\rm crit} \sim 300-3000$) than that produced by single stellar populations ($\rm{J}_{\rm crit} \sim 0.1- 3000$). Furthermore, the importance of older single stellar populations becomes clear, as they produce the lowest values of $\rm{J}_{\rm crit}$ in both cases, due to a higher k$_{\rm de}$.
This pushes the idea further that the formation of DCBHs must be understood in terms of the k$_{\rm de}$--k$_{\rm di}$ parameter space (Eq.~\ref{eq.ratecurve}), and not in terms of a single flux value.
\section*{Acknowledgements}
BA would like to thank Laura Morselli, Eric Pellegrini and Claes-Erik Rydberg for useful discussions. BA, RSK, SCGO would like to acknowledge the funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) via the ERC Advanced Grant STARLIGHT (project number 339177).
Financial support for this work was provided by the Deutsche Forschungsgemeinschaft via SFB 881, "The Milky Way System" (sub-projects B1, B2 and B8) and SPP 1573, "Physics of the Interstellar Medium" (grant number GL 668/2-1).
\bibliographystyle{mn2e}
\section{Introduction}
\label{intro}
The study of dust growth in the discs of young stellar objects is motivated
by observations of dust in the interstellar medium (ISM):
\cite{kemper2004} showed that the crystalline fraction of the ISM dust is
smaller than 2\%, and the size of the amorphous silicates is smaller than 0.1
micron (see Fig.~\ref{f_sizes}, left panel). It is this dust that eventually
will constitute the proto-planetary disc, so that, {\it initially}, the dust
grains in circumstellar (CS) discs must have small sizes too. However, as
planets are expected to form in these discs, the grains will have to grow to
larger sizes, forming planetesimals, and finally planets. This process,
however, is not yet well understood, especially not the timescale over which
it happens, or which physical properties of the star or the environment would
impede or hasten the growth of dust particles. It is those first steps of
grain growth, from submicron sizes to sizes of a few microns, that I will
discuss in this paper.
First, we will give a short introduction to the mineralogy of
astronomical dust, and show how we can retrieve information about
the dust properties from infrared spectroscopy. Next, we
will discuss the disc and dust properties of the bright Herbig Ae/Be
stars, and then we will apply the same technique for the solar-mass
T Tauri stars and the substellar mass brown dwarfs.
\section{Mineralogy in a nutshell}
\label{s_minera}
Astronomical dust is either oxygen or carbon rich. Oxygen-rich dust particles
are mainly silicates, which can be either iron or magnesium-rich, have a
crystalline or amorphous structure, and a wide variety of shapes and sizes. In
the 10 micron observational window, the most important dust species such as
pyroxenes, olivines and silica show features that can be used to derive their
size, composition and structure (see Fig.~\ref{f_sizes} for an example of
pyroxene and olivine features). For this purpose, optical constants that are
measured in the lab are of great importance (e.g. \cite{tamanai2006}).
\begin{figure}
\centerline{\hbox{
\psfig{figure=galcenter.ps,width=5cm,height=5cm}
\psfig{figure=different_sizes.ps,width=7cm}}}
\caption{Left: The optical depth towards the galactic center is clearly
caused by small, amorphous silicate grains, as indicated by the shape
observed \citep{kemper2004}.
Right: Absorption coefficients of amorphous silicates with a pyroxene and
olivine stoichiometry, in 3 different sizes that are relevant in the 10
micron window \citep{dorschner1995}. Solid
line: 0.1 micron, dashed line: 1.5 micron and dashed-dotted line:
6.0 micron. With increasing size, the feature shifts towards longer
wavelengths with an increasingly important red shoulder, and flattens.}
\label{f_sizes}
\end{figure}
\section{Herbig Ae/Be Stars}
\label{s_haebes}
\begin{figure}
\centerline{\psfig{figure=silfeatures.ps,width=13cm}}
\caption{The 10 micron feature for a sample of Herbig Ae/Be stars. They show
a wide variety in shape and strength, pointing to the presence of both small
and large grains in their discs, with a varying amount of crystallinity
\citep{meeus2001}.}
\label{f_haebefeatures}
\end{figure}
Infrared spectroscopy took a large step forward with the launch of the
Infrared Space Observatory (ISO). For the first time, the dust in
circumstellar discs could be studied in detail. In particular, the discs of
the bright Herbig Ae/Be stars, a class of pre-main sequence stars with
spectral type A or B that are marked by the presence of the H$\alpha$ emission
line and an IR excess due to dust, were studied extensively between 2 to 45
micron. The dust particles that emit in this wavelength region are warm (a few
100 K), and are located in the optically thin disc atmosphere. The bulk of
the disc material, however, is cold (below 100 K) and resides in the midplane
of the disc, which is optically thick, hence invisible to us.
ISO unveiled a wide variety of dust features in Herbig Ae/Be stars: different
shapes and strengths were observed, pointing to a wide range in grain size
and crystallinity \citep{meeus2001,bouwman2001}. In Fig.~\ref{f_haebefeatures}, we
give a few examples of the emission features observed in the 10 micron window.
\cite{boekel2003} further related the 10 micron feature strength and shape
(triangular versus a more flattened shape) to the grain size of
the silicates, with the aid of laboratory spectra, and found clear evidence
for grain growth in HAEBEs.
\begin{figure}
\centerline{\psfig{figure=4sed.ps,width=11cm,angle=90}}
\caption{The spectral energy distributions of selected
Herbig Ae/Be stars. The slope in the millimetre region
is much steeper for AB Aur than for HD163296, pointing
to larger cold grains for the latter star \citep{meeus2001}.}
\label{f_haebesed}
\end{figure}
Radiative transfer models of the disc structure and evolution predict that
when small grains, located in the disc atmosphere, grow to larger particles,
they will gradually settle towards the disc midplane. This will be
accompanied by a flattening of the (initially) flared disc
\citep{dullemond2004}. This process has been confirmed through observations:
objects which have a smaller mid-IR excess (pointing to a less flaring disc)
on average have a shallower slope at millimetre wavelengths, indicating larger
sizes for the cold grains \citep{acke2004}.
\section{T Tauri Stars}
\label{s_tts}
T Tauri stars are the lower-mass counterparts of the Herbig Ae/Be
stars, with spectral types between G and M, thus temperatures below
$\sim$ 6000 K. Recent {\it Spitzer} observations allowed for a
characterisation of a large sample of T Tauri stars, thanks to its
good sensitivity. It soon became clear that T Tauri stars have similar
dust properties to the Herbig Ae/Be stars, although they have much
smaller luminosities: some objects show nearly unprocessed dust, while
others show larger grains and a substantial amount of crystalline silicates
\citep{meeus2003,kessler2006}.
\begin{figure}
\psfig{figure=mbm12_4sili.ps,width=12.5cm,height=4.5cm,angle=90}
\caption{The 10 micron feature for four T Tauri stars in MBM12. Some
features show evidence of small, amorphous grains, while others are
obviously caused by larger and more crystalline dust.
We also overplot a fit of the feature by the TLTD method.}
\label{f_mbm12_sili}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=fnorm_peakco12.ps,width=7cm}}
\caption{The shape of the 10 micron feature, determined by the flux ratio
at 11.3 and 9.8 micron, as a function of the feature strength, determined
by the peak over continuum ratio for T Tauri stars.}
\label{f_peakco}
\end{figure}
In an unbiased sample of 12 TTS in the star forming region MBM12, with an age
of 2~Myr, dust processing was also observed in different stages
\citep{meeus2009}. In Fig.~\ref{f_peakco}, we show the relation between the
shape and the strength of the 10 micron feature, indicating grain growth in
these TTS discs. We derived the properties of the dust causing the 10 micron
feature by modelling the feature, using the two layer temperature distribution
method (TLTD) by \cite{juhasz2008} and including the following dust species in
the fit: amorphous silicates, forsterite, enstatite and silica, in sizes of
0.1, 1.5 and 6.0 micron. We found that those objects that have the latest
spectral types (lowest temperatures), tend to have the largest grains (see
Fig.~\ref{f_amsize}), a relation that was first noted by
\cite{kessler2006}. This is merely because, when observing at 10 micron -
tracing dust grains with temperatures of a few hundred Kelvin - we trace a
region that is much closer in for those objects which have a lower luminosity
than for those with a higher luminosity. For instance, for T Tauri stars, this
is of the order of 0.5 AU, while for Herbig Ae stars, radial distances of a few
AU are reached. When one, furthermore, considers that the density in the disc
decreases with radial distance from the central star, and that grain growth
increases with density, then it is natural to expect that the lower luminosity
sources show more grain growth, when observing the 10 micron region
\citep{kessler2007}.
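Decompositions of this kind ultimately amount to fitting the observed feature as a non-negative linear combination of laboratory opacity templates for each species and grain size. The following is a toy stand-in for such a fit, not the actual TLTD code: a simple projected-gradient non-negative least squares, with all function names and inputs invented for illustration.

```python
import numpy as np

def fit_dust_templates(spectrum, templates, n_iter=2000):
    # Fit spectrum ~ sum_j w_j * template_j with all weights w_j >= 0,
    # via projected gradient descent on the least-squares objective.
    A = np.vstack(templates).T                 # shape (n_wavelengths, n_species)
    w = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)    # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ w - spectrum)
        w = np.maximum(0.0, w - step * grad)   # gradient step, then project onto w >= 0
    return w
```

The recovered weights then trace the relative contributions of, e.g., the 0.1, 1.5 and 6.0 micron grain populations; the real TLTD method additionally models the temperature distribution of the emitting layers.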
In MBM12, the degree of flaring, as derived from the flux ratio at 24 and 8
micron, is found to relate to the grain size, as derived from the 10 micron
feature \citep{meeus2009}. Furthermore, those sources that are most
turbulent and accreting, as indicated by the equivalent width of the H$\alpha$
feature, are found to host the largest grains in their disc atmosphere, while
the lowest-accreting sources have more ISM-like silicates (see
Fig.~\ref{f_amsize}), as was already noted by \cite{sicilia2007} in another
sample of T Tauri stars.
\begin{figure}
\centerline{\hbox{
\psfig{figure=amsize_sptype.ps,height=5cm}
\psfig{figure=amsize_halpha.ps,height=4.9cm}}}
\caption{The size of the amorphous silicates, as determined
by the TLTD fit, as a function of the spectral type (left) and the H$\alpha$
equivalent width (right). Although the sample is small, we do see a
trend between both variables: the later the spectral type, the larger the
grains observed, and the stronger the H$\alpha$ line, the larger the
grain sizes.}
\label{f_amsize}
\end{figure}
\section{Brown Dwarfs}
\label{s_bd}
Even for the very faint brown dwarfs, {\it Spitzer} provided spectra, but only
for small samples. In Chamaeleon I, with an age of 2 Myr, 6 out of 8 brown
dwarfs show an IR excess, but the amount of flaring is quite low for most of
them. Both grain growth and crystallisation are observed to occur routinely in
this sample \citep{apai2005}, which might be attributed to their low
luminosity: in the innermost regions of their discs, where the 10 micron
radiation originates for brown dwarfs, it is likely that most of the dust has
grown to larger sizes. However, the large crystallinity observed could also
(partly) be a contrast effect, as larger grains cause weaker spectral
features, so that the crystalline features become clearer. Only a few Myr
later, the discs and dust around brown dwarfs appear to have
evolved drastically: in the star forming region Upper Scorpius, with an age of
5 Myr, most discs are flat, pointing to a large degree of dust settling, while
the 10 micron feature is mainly absent or, in the rare case it is present,
very weak \citep{scholz2007}. At an age of 10 Myr, the brown dwarfs do not
even show the feature anymore \citep{morrow2008}. These observations show that
dust and disc evolution around brown dwarfs occurs on a much faster timescale
than in T Tauri stars or Herbig Ae/Be stars.
\section{Conclusions}
\label{s_conc}
When discussing disc and dust evolution, parameters that come to mind as
likely important players are age and effective temperature. Infrared spectral
observations, however, have shown that some of these are probably less
important than expected, and that the whole picture is more complex.
We presented observations of young objects with a wide range in masses (or
temperatures). In all groups, the discs are observed to show a variety in
degree of flaring, and also the dust features observed are both from
unprocessed and processed grains, as witnessed by their derived sizes and
crystallinity.
The lower the mass, the faster both the disc and dust evolution seem to
happen: for brown dwarfs most discs are flat already at an age of 5 Myr, while
the dust grains have already grown beyond a few microns at that age. Is this
because discs evolve from the inside out, and in brown dwarfs we probe the
more inward regions?
For the T Tauri stars, there is a clear relation between the strength of the
10 micron feature (a measure of the size of the emitting grains) and the
spectral type: later spectral types, on average, harbour larger grains. Also
the amount of accretion, as determined by the H$\alpha$ feature, is an
indicator of the dust properties: those sources that accrete most, hence have
more turbulent discs, show the largest grain sizes in their disc atmospheres.
For the Herbig Ae/Be stars, we did not find a relation between their age and
the amount of dust processing. This could be attributed to the difficulty of
determining their ages, as they are mostly isolated.
\section{Future directions}
\label{s_future}
Obvious future steps will include a comparison of (1) the 'average' dust
properties between clusters with different ages, densities and even
metallicities, and (2) in more detail, between young objects located within
the same cluster, hence with the same initial conditions. It is only when a
meaningful statistical sample is obtained, that one can pinpoint those
properties that influence dust processing, by keeping certain variables,
e.g. environment and spectral type, as a constant. The large database of
spectra that {\it Spitzer} has provided, will certainly help to solve (pieces
of) the puzzle.
Longer-wavelength studies are also important for deriving the composition and
temperature of the dust, in particular for olivines at 69 micron; it is in this
context that {\it Herschel} will play an important role in the coming years.
Finally, higher spatial resolution will allow a search for the radial
dependence of dust properties within a disc; it is here that interferometers
are important. And last but not least, adaptive optics can help to resolve
binaries, so that the influence of close companions on dust and disc evolution
can be properly studied.
\acknowledgements
Part of this work was supported by the {\it Deutsche Forschungsgemeinschaft}
(DFG) under project number ME2061/3-2.
\section{Introduction}
\label{sec:intro}
The EPOXI Mission of Opportunity is a re-purposing of the Deep Impact flyby spacecraft, and comprises the Extrasolar Planet Observation and Characterization (EPOCh) investigation and the Deep Impact eXtended Investigation (DIXI). The primary goal of EPOCh was to scrutinize a small set of known transiting extrasolar planets. From 2008 January to 2008 August, we used the high resolution imaging (HRI) instrument \citep{Hampton05} and a broadband visible filter to construct high precision, high phase coverage and high cadence light curves for seven targets. We observed each target nearly continuously for several weeks at a time. The main science goals of EPOCh were to refine the system parameters of the known planets, to search for additional planets both directly (via transits of the additional body) and indirectly (via induced changes in the transits of the known planet), and to constrain the reflected light from the known planet at secondary eclipse. It is also useful to provide updated periods and times of epoch for these systems in order to reduce uncertainties on predicted transit and eclipse times, and therefore maximize the return of follow-up observations. In previous EPOCh papers we have presented the search for additional planets in the GJ~436 system \citep{Ballard10} and the secondary eclipse constraints for HAT-P-7 \citep{Christiansen10}. In this paper we present the updated system parameters, including constraints on the transit timing and changes in the transit parameters, and secondary eclipse constraints for a further four targets: HAT-P-4, TrES-3, TrES-2 and WASP-3, introduced below. The search for additional planets in these systems will be presented in a separate paper (Ballard et al. in prep).
The exoplanet HAT-P-4b \citep{Kovacs07}, orbits a slightly evolved metal-rich late F star. With a mass of 0.68 $M_{\rm Jup}$ and a radius of 1.27 $R_{\rm Jup}$, it joined the ranks of inflated planets that have continued to challenge models of the physical structure of hot Jupiters.
TrES-3 \citep{ODonovan07} is notable for its very short orbital period of 1.30619 days. This proximity to the star makes TrES-3 a promising target for observations of reflected light at visible wavelengths; the planet-to-star flux ratio as measured in reflected light during the secondary eclipse is given by $A_g(R_p/a)^2$, where $A_g$ is the geometric albedo, $R_p$ is the planetary radius and $a$ is the semi-major axis of the planetary orbit. \citet{Winn08}, \citet{deMooij09} and \citet{Fressin10} have observed secondary eclipses of TrES-3 at visible and near-infrared wavelengths, and the emerging picture of the planetary atmosphere is one with efficient day-night heat recirculation and no temperature inversion in the upper atmosphere. This is in contrast to predictions of a temperature inversion based on the high level of irradiation \citep{Fortney08}. \citet{Sozzetti09} studied the transit timing variations of TrES-3 and noted significant outliers from a constant period. \citet{Gibson09} monitored further transit times of TrES-3 and ruled out sub-Earth mass planets in the exterior and interior 2:1 resonances for circular orbits.
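To give a sense of the scale of this signal, take representative values of $R_p \approx 1.3\,R_{\rm Jup}$ and $a \approx 0.023$\,AU for TrES-3 (illustrative numbers, not the fits derived in this paper):

```latex
\frac{F_p}{F_\ast} = A_g \left(\frac{R_p}{a}\right)^2
  \approx A_g \left( \frac{1.3 \times 7.15\times10^{7}\,\mathrm{m}}
                          {0.023 \times 1.50\times10^{11}\,\mathrm{m}} \right)^2
  \approx 7\times10^{-4}\, A_g ,
```

so even a geometric albedo approaching unity corresponds to an eclipse depth of only a few parts in $10^{4}$, which is why short-period planets such as TrES-3 are the most favourable targets for such measurements.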
TrES-2 \citep{ODonovan06} was the first transiting planet found in the field of view of the NASA {\it Kepler} mission \citep{Borucki09}. \citet{Holman07} noted that the high impact parameter ($b \approx 0.85$) of TrES-2 made transit parameters such as inclination and duration sensitive to changes due to orbital precession. \citet{Mislis09} and \citet{Mislis10} claimed a significantly shorter duration for TrES-2 transits two years after the measurements of \citet{Holman07}. They proposed that this was caused by a change in orbit inclination due to precession, and that the duration would continue to decrease. However, \citet{Scuderi09} measured a duration consistent with \citet{ODonovan06} and \citet{Holman07} and did not see the predicted trend of decreasing transit duration. Secondary eclipses of TrES-2 have been observed in the near-infrared \citep{ODonovan10}, and the results favor a thermal inversion in the upper atmosphere, supporting the hypothesis that highly irradiated planetary atmospheres have inversions. The transit timing variations of TrES-2 have been studied by \citet{Raetz09} and \citet{Rabus09}, who find no statistically significant variations.
WASP-3b \citep{Pollacco08}, with a short period (1.84634 days) and a hot host star (F7-F8V, $T_{\rm eff}=6400$K), is one of the hottest transiting planets known, and another very good target for observing reflected light at secondary eclipse.
The paper is organized as follows. The observations and generation of the light curves are described in Section \ref{sec:data}, the transit analysis is presented in Section \ref{sec:res}, the secondary eclipse analysis is presented in Section \ref{sec:eclipses} and the results are discussed in Section \ref{sec:disc}.
\section{Observations and analysis}
\label{sec:data}
The EPOCh observations were made using the high resolution imager (HRI), which has a 30-cm aperture and a 1024$\times$1024 pixel CCD. For our observations we used a clear visible filter, covering 350--1000 nm, in order to maximize the throughput of photons. The integration time for the science observations was 50 seconds, which for near-continuous observations results in roughly 1500 images per day. Since the on-board spacecraft memory is only 300 Mb, we initially chose to read out only a $128\times128$ pixel sub-array of the full CCD, to ensure full phase coverage between data downlinks from the spacecraft. The CCD comprises four quadrants that are read out independently, and the sub-array is centered on the CCD where the four quadrants meet. The pixel scale is 0.4 arcsec pixel$^{-1}$, resulting in a sub-array field of view of 0.72 square arcminutes. The images are significantly defocused, resulting in a stellar point-spread function (PSF) with a full-width half maximum of 4 arcseconds. Typically this meant that the target star was the only star in the field of view, and we were unable to employ relative photometry techniques for removing correlated noise in the light curves.
Table \ref{tab:obs} summarizes the observing schedules for each of the four targets. HAT-P-4 and TrES-3, along with GJ436 and XO-2, were observed during the initial observing block from 2008 January to 2008 May. The project was awarded an additional contingent observing block from 2008 June to 2008 August, during which time HAT-P-4 was re-observed, and TrES-2 and WASP-3 were also observed. During the contingent observations we began observing in a larger $256\times256$ pixel sub-array mode, to reduce losses from pointing drifts that occasionally resulted in the target star lying outside of the $128\times128$ pixel sub-array field of view. The number of images that could be obtained with the larger sub-array mode between data downlinks from the spacecraft was constrained by the data storage capacity on board the spacecraft. Therefore, in order to maximize the phase coverage we chose to restrict observations in the $256\times256$ pixel sub-array mode to the times of particular interest---during the transits and secondary eclipses. One event per data downlink could be observed in the larger sub-array mode without reducing the temporal coverage. Table \ref{tab:obs} shows the total number of transits and eclipses observed for each target, with the number observed in the $256\times256$ pixel sub-array mode given in parentheses. As discussed in Section \ref{sec:intro}, TrES-2 was claimed to show changes in the transit inclination with time. Therefore, we used the larger sub-array mode to observe the transits of TrES-2 where possible. WASP-3 was a promising target for secondary eclipse observations, and therefore we observed the secondary eclipses of WASP-3 in the larger mode where possible. For HAT-P-4 we observed two of the three transits and two of the three eclipses obtained in the contingent observations in the $256\times256$ pixel sub-array mode. TrES-3 was observed in the initial observing block and no observations were obtained in the larger mode.
\begin{deluxetable}{lclcc}
\tabletypesize{\scriptsize}
\tablecaption{EPOCh observations}
\tablewidth{0pt}
\tablehead{\colhead{Target} & \colhead{{\it V} Mag} & \colhead{UT Dates observed (2008)} & \colhead{No. of Transits\tablenotemark{a}} & \colhead{No. of Eclipses\tablenotemark{a}}}
\startdata
HAT-P-4 & 11.22 & 01/22--02/12, 06/29--07/07 & 10 (2) & 9 (2) \\
TrES-3 & 11.18 & 03/06--03/18 & 7 (0) & 6 (0) \\
TrES-2 & 11.41 & 06/27--06/28, 07/19--07/29 & 9 (7) & 8 (2) \\
WASP-3 & 10.64 & 07/17--07/18, 07/30--08/07, 08/10--08/15 & 8 (0) & 9 (8) \\
\enddata
\tablenotetext{a}{Including partial events. The number in brackets is the subset of events observed in $256\times256$ pixel sub-array mode.}
\label{tab:obs}
\end{deluxetable}
\subsection{Image Calibration and Time Series Extraction}
\label{sec:timeseries}
We receive calibrated FITS images from the extant Deep Impact data reduction pipeline \citep{Klaasen05}. These data have been bias- and dark-subtracted and flat-fielded, using calibration images obtained on the ground before launch. Due to the very high precision required in the light curves, we perform several additional calibration steps to account for changes in the CCD since launch. The spacecraft pointing drifts considerably with time, resulting in significant coverage of the CCD by the stellar PSF and placing paramount importance on the flat-fielding. The procedure is described in \citet{Ballard10} and summarized here.
For each target, we use a PSF constructed from the images to locate the position of the star to a hundredth of a pixel. At this stage we reject images with $10\sigma$ outliers from the PSF fit, assuming the stellar PSF to be contaminated by an energetic particle hit. We subtract a time-dependent bias calculated for each quadrant from the corresponding overscan region. We reduce the pixels in the central columns and rows of the CCD (forming the internal boundaries between the quadrants) by roughly 15\% and 1\% respectively, to correct an artifact produced by the CCD readout electronics. For data obtained in the $256\times256$ pixel mode, we scale the images by a constant (typically differing from unity by one part in a thousand) to correct an observed flux offset between the two sub-array modes.
In order to track time-dependent changes in the flat-field since launch, there is a small green LED stimulation lamp that can be switched on to illuminate the CCD. We obtained blocks of 200 calibration frames using this lamp, which were taken every few days throughout the observations, alternating between blocks in the smaller and larger sub-array modes in the contingent observations. We correct each science frame by the flat-field generated from lamp images taken in the same sub-array mode. We assume any remaining flat-field errors to be color-dependent and therefore unable to be addressed by the monochromatic lamp.
We perform aperture photometry, using a circular aperture of radius 10 pixels. The resulting light curves exhibit significant correlated noise on the order of 1\%, which is associated with the drift in the spacecraft pointing. In order to correct for this, we use the data itself to generate a sensitivity map of the CCD. We assume the out-of-transit and out-of-eclipse data to be of uniform brightness, with two caveats. First, the star may have intrinsic variations in stellar brightness due to spots. Only one of the four targets displayed long-period variability (Figure \ref{fig:tres3_fulllc_noise}), and this was removed by fitting and removing a polynomial in time before producing the CCD sensitivity map. Second, transits of additional planets may be present, which will be suppressed with this treatment \citep{Ballard10}. We randomly draw several thousand of the out-of-transit and out-of-eclipse points and find a robust average flux of the 30 spatially nearest neighbors. We use this set of averages to generate a two-dimensional surface spline to the flux distribution across the CCD. Each point in the light curve is then corrected by interpolating onto this surface. The entire procedure is iterated several times to converge on the positions and scaling factors that result in the lowest scatter in the out-of-transit and out-of-eclipse data in the final light curve.
The robustness of the surface spline for each target depends on the coverage of the CCD by that target. If the coverage is small and the corresponding density of photometry apertures high, then there is a high probability that the same pixel will be returned to multiple times over the observations. Having flux measurements separated in time reduces the influence of stellar activity on our calibration of the sensitivity of each pixel. Figure \ref{fig:positions_compare} shows the complete CCD coverage for two targets. TrES-2 is well confined on the CCD and the density of photometry apertures leads to a more robust surface spline. The TrES-2 light curve before and after the application of the surface spline is shown in Figure \ref{fig:tres2_fulllc_noise}. On the other hand, the photometry apertures for WASP-3 sample a much larger area of the CCD, and in addition many of the observations obtained in the 256$\times$256 pixel sub-array mode do not overlap the central 128$\times$128 pixel sub-array. The resulting surface spline is therefore more sensitive to noise introduced by stellar activity or systematics that are not an artifact of the pointing jitter. The WASP-3 light curve before and after the application of the surface spline is shown in Figure \ref{fig:wasp3_fulllc_noise}. The lower panel of Figure \ref{fig:wasp3_fulllc_noise} shows how the noise in the final calibrated WASP-3 light curve bins down compared with the expectation for Gaussian noise, and the poor quality of the data is due to the low density of the CCD coverage for WASP-3.
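The flavour of this data-driven correction can be illustrated with a toy sketch. This is not the EPOCh pipeline itself: a robust nearest-neighbour median stands in for the two-dimensional surface spline, and the function name, neighbour count and iteration count are invented for the example.

```python
import numpy as np

def sensitivity_correct(x, y, flux, n_neighbors=30, n_iter=3):
    # Divide each photometric point by a robust local estimate of the CCD
    # sensitivity at its (x, y) position, built from the fluxes of its
    # spatially nearest neighbours, and iterate until the map settles.
    corrected = flux.copy()
    for _ in range(n_iter):
        local = np.empty_like(corrected)
        for i in range(corrected.size):
            d2 = (x - x[i]) ** 2 + (y - y[i]) ** 2
            nearest = np.argsort(d2)[1:n_neighbors + 1]  # exclude the point itself
            local[i] = np.median(corrected[nearest])
        corrected = corrected / local
    return corrected
```

In the real pipeline the sensitivity map is built only from the out-of-transit and out-of-eclipse points and interpolated as a smooth surface, so that genuine transit and eclipse signals are preserved rather than flattened.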
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3in, angle=90]{newfig0.eps}
\caption{The CCD positions of the photometry apertures for two targets. {\it Left}: TrES-2 is confined to the center of the CCD and therefore the same pixels are sampled well for creating a robust surface spline. {\it Right}: WASP-3 is spread over a much larger fraction of the CCD, including large excursions out of the central 128$\times$128 pixel sub-array when observations were obtained in the 256$\times$256 pixel sub-array. This reduces the quality of the surface spline and results in a larger component of correlated noise in the WASP-3 light curve.}
\label{fig:positions_compare}
\end{center}
\end{figure}
\subsection{Details for each target}
\label{sec:details}
The final HAT-P-4 light curve is shown in Figure \ref{fig:hatp4_fulllc_noise}. HAT-P-4 was the first EPOCh target observed, initially for 22 days from 2008 January 22 to 2008 February 12, during the original EPOCh target schedule, and again for 8 days from 2008 June 29 to 2008 July 7 during the contingent observations. Of the 45,320 images obtained of HAT-P-4, 5434 were discarded due to the star being either out of the field of view or too close to the edge of the CCD to measure accurate photometry, 1305 were discarded due to energetic particle hits, and 76 were discarded due to readout smear, for a final total of 38,505 acceptable images. All of the data obtained in the initial run are in the $128\times128$ pixel sub-array mode. Of the contingent data, two of the three transits and two of the three eclipses are in the larger $256\times256$ pixel sub-array mode, and the remaining data are in the smaller mode. The bottom panel of Figure \ref{fig:hatp4_fulllc_noise} shows how the scatter in the final light curve scales down with increasing bin size---for Gaussian noise the expectation is the scatter will decrease as $1/\sqrt{N}$, where $N$ is the number of points in the bin.
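The binning diagnostic in the lower panel is generic and can be sketched in a few lines; the function name and bin choices below are illustrative, not taken from the EPOCh pipeline.

```python
import numpy as np

def binned_scatter(flux, bin_sizes):
    # Scatter of the light curve after averaging in bins of N points,
    # paired with the white-noise expectation sigma_1 / sqrt(N).
    sigma1 = np.std(flux)
    out = {}
    for n in bin_sizes:
        m = (flux.size // n) * n                  # drop the ragged tail
        binned = flux[:m].reshape(-1, n).mean(axis=1)
        out[n] = (np.std(binned), sigma1 / np.sqrt(n))
    return out
```

Measured scatter sitting above the $1/\sqrt{N}$ curve at large bin sizes is the signature of residual correlated (red) noise in the light curve.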
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{newfig1.eps}
\caption{{\it Upper panel}: The full HAT-P-4 EPOXI light curve. The left panel shows the original run of seven consecutive transits. The right panel shows the three transits observed five months later during the EPOCh contingent observations. In each panel the lower curve is before the first application of the surface spline and the upper curve is the final calibrated light curve. The red data points were obtained in the larger $256\times256$ pixel sub-array mode. {\it Lower panel}: The scatter in the out-of-transit data with increasing bin size (diamonds) and comparing to the expectation for Gaussian noise ($1/\sqrt{N}$, where $N$ is the number of points in the bin, shown as the solid line normalized to the unbinned value of the scatter). The points do not follow the line, indicating residual correlated noise in the light curve.}
\label{fig:hatp4_fulllc_noise}
\end{center}
\end{figure}
The final TrES-3 light curve is shown in Figure \ref{fig:tres3_fulllc_noise}. TrES-3 was the second EPOCh target observed, for 12 days from 2008 March 6 to 2008 March 18. The gap in the light curve from 2--5 days is due to a `pre-look' for the subsequent EPOCh target, XO-2, which was performed in order to refine the pointing for that target. We obtained a total of 14,195 images of TrES-3, of which we discarded 1165 due to the star being out of or too close to the edge of the field of view, 1632 due to energetic particle hits and 127 due to readout smear, leaving 11,271 images. We obtained all of the TrES-3 data in the $128\times128$ pixel sub-array mode. After the initial application of the two-dimensional surface spline a long timescale, low amplitude variability was evident in the light curve. This can be seen in the lower light curve in Figure \ref{fig:tres3_fulllc_noise}. In order to remove this variability we bin the out-of-transit data by two hours and fit with a time-dependent fifth-order polynomial for the data occurring later than 4.0 days. We divide out this feature before iterating over the previous steps to produce the final light curve. The polynomial is plotted on the lower light curve, and the final light curve is shown as the upper curve in Figure \ref{fig:tres3_fulllc_noise}. As with HAT-P-4, the bottom panel of Figure \ref{fig:tres3_fulllc_noise} shows the noise properties of the data.
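A minimal sketch of this kind of long-timescale detrending follows; the binning cadence, function name and synthetic setup are illustrative rather than the actual EPOCh implementation.

```python
import numpy as np

def detrend_long_term(time, flux, oot_mask, degree=5, bin_hours=2.0):
    # Bin the out-of-transit data in time, fit a low-order polynomial
    # to the binned points, and divide the trend out of the full series.
    t = time - time.min()                               # time in days
    idx = np.floor(t[oot_mask] / (bin_hours / 24.0))    # bin index per point
    tb, fb = [], []
    for b in np.unique(idx):
        sel = idx == b
        tb.append(t[oot_mask][sel].mean())
        fb.append(np.median(flux[oot_mask][sel]))       # robust bin average
    coeffs = np.polyfit(tb, fb, degree)
    return flux / np.polyval(coeffs, t)
```

Because the fit uses only the (binned) out-of-transit points, the transits themselves are left untouched while the slow variability is divided out.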
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{newfig2.eps}
\caption{{\it Upper panel}: The TrES-3 EPOXI light curve. The gap from 2.5--5 days is during the pre-look for a subsequent target. Seven transits of TrES-3 were observed in total. The lowest light curve is prior to the first application of the surface spline, the middle light curve is after the application of the spline but prior to the removal of the time-dependent polynomial, and the upper light curve is the final calibrated data set. {\it Lower panel}: See Figure \ref{fig:hatp4_fulllc_noise} for explanation. In the case of TrES-3, where all data were obtained in the smaller sub-array mode and the total time span is relatively short, the scatter bins down close to the expectation for Gaussian noise.}
\label{fig:tres3_fulllc_noise}
\end{center}
\end{figure}
The final TrES-2 light curve is shown in Figure \ref{fig:tres2_fulllc_noise}. We observed TrES-2 during the contingent EPOCh observations, from 2008 July 7 to 2008 July 30, in addition to a pre-look for pointing on 2008 June 28 and 29. In total, we obtained 31,210 images of TrES-2, with 1979 discarded due to the star lying out of or too close to the edge of the field of view, 1427 discarded due to energetic particle hits and 80 discarded due to readout smear, for a total of 27,724 acceptable images. We observed nine transits in total, including seven in the $256\times256$ pixel sub-array mode. The lower panel of Figure \ref{fig:tres2_fulllc_noise} shows that correlated noise remains in the final light curve.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{newfig3.eps}
\caption{{\it Upper panel}: The TrES-2 EPOXI light curve. The data obtained from days 1--3 are the pre-look, for refinement of the spacecraft pointing. From days 3--11 the spacecraft was observing a different target before returning to TrES-2 with updated pointing parameters. The gap from days 21--23 spans the pre-look for the subsequent target. Nine transits of TrES-2 were observed in total. The lower curve is prior to the first application of the surface spline and the upper curve is the final calibrated light curve. The red data points were obtained in the larger $256\times256$ pixel sub-array mode. {\it Lower panel}: See Figure \ref{fig:hatp4_fulllc_noise} for explanation.}
\label{fig:tres2_fulllc_noise}
\end{center}
\end{figure}
The final EPOCh light curve for WASP-3 is shown in Figure \ref{fig:wasp3_fulllc_noise}. We observed WASP-3 during the contingent observations, from 2008 July 29 to 2008 August 16, with a pre-look from 2008 July 17 to 2008 July 19. We obtained 24,015 images of WASP-3, of which we discarded 4182 due to the star being out of or too close to the edge of the field of view, 403 due to energetic particle hits, and 808 due to readout smear, leaving 18,622 acceptable images. For WASP-3, none of the eight transits were observed in the $256\times256$ pixel sub-array mode; however, eight of the nine secondary eclipses were. The two-dimensional surface spline relies on multiple visits to the same part of the CCD to characterize robustly the interpixel variations. This is particularly true for the data that occur during the transits and eclipses, since they cannot be assumed to be of uniform flux and are therefore excluded from the creation of the surface. In order to flat-field the data taken during transit and eclipse effectively, these observations must fall at the same spatial positions as data obtained at other times. In the case of WASP-3, four of the eight secondary eclipses occurred at locations that were poorly sampled. No out-of-transit or out-of-eclipse observations fell on these pixels, and therefore we cannot estimate the true sensitivity of these pixels in order to produce an effective flat-field. These eclipses occur at 1.0, 17.6, 19.3 and 26.6 days, and can be seen in the light curve as increases in flux. These four eclipses are discarded for the final analysis. Besides these events, a significant fraction of the WASP-3 data are distributed in poorly-sampled areas of the CCD, degrading the robustness of the two-dimensional surface spline.
The bottom panel of Figure \ref{fig:wasp3_fulllc_noise} demonstrates the adverse effect this has on the noise properties of the final light curve, as the data do not bin down as expected for Gaussian noise.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{newfig4.eps}
\caption{{\it Upper panel}: The WASP-3 EPOXI light curve. The first two days of data are the pre-look to refine the pointing. The gap between 22 and 24 days is due to the pre-look for the subsequent target. The significant positive deviations seen at 1.0, 17.6, 19.3 and 26.6 days are instrumental in nature; see the text for details. The lower curve is prior to the first application of the spatial spline and the upper curve is the final calibrated light curve. The red data points were obtained in the larger $256\times256$ pixel sub-array mode. {\it Lower panel}: See Figure \ref{fig:hatp4_fulllc_noise} for explanation.}
\label{fig:wasp3_fulllc_noise}
\end{center}
\end{figure}
\section{Transit analysis}
\label{sec:res}
For the transit analysis, we perform several additional calibration steps. The two-dimensional surface spline uses only a fraction of the data to generate the surface, in order to preserve as much of the information in the light curve as possible, and to minimize the suppression of transits of putative additional planets. However, for the transit analysis, we use all of the available data to calibrate each event. For each transit, we define a window approximately three times the duration of the transit, centered on the predicted transit time. We take each point in this window and divide the flux by a robust average of the 30 spatially nearest points that do not fall in any of the transit windows. This is essentially a point-by-point application of the full two-dimensional surface spline. We then fit a slope, linear with time, to the out-of-transit data across each transit and divide it out, to remove any residual long-timescale trends.
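The point-by-point correction described above can be sketched as follows. This is an illustrative Python reimplementation, not the EPOCh pipeline: the use of the median as the robust average, the brute-force neighbour search, and the array layout are all assumptions for the sketch.

```python
import numpy as np

def calibrate_window(xy, flux, in_transit, n_neighbors=30):
    """Divide each point by a robust average (here, the median) of the flux
    at its spatially nearest out-of-transit neighbours -- a point-by-point
    application of the flat-field surface."""
    ref_xy, ref_flux = xy[~in_transit], flux[~in_transit]
    corrected = np.empty_like(flux)
    for i in range(len(flux)):
        d2 = ((ref_xy - xy[i])**2).sum(axis=1)          # squared CCD distances
        nearest = np.argsort(d2)[:n_neighbors]          # closest reference points
        corrected[i] = flux[i] / np.median(ref_flux[nearest])
    return corrected

def remove_slope(t, flux, in_transit):
    """Fit a slope, linear with time, to the out-of-transit flux and divide
    it out to remove residual long-timescale trends."""
    coeffs = np.polyfit(t[~in_transit], flux[~in_transit], 1)
    return flux / np.polyval(coeffs, t)
```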
For TrES-3, TrES-2 and WASP-3, we generate non-linear limb-darkening coefficients of the form given by \citet{Claret00}, $I_{\mu}/I_1 = 1 - \sum_{n=1}^4 c_n(1-\mu^{n/2})$, where $I_1$ is the specific intensity at the center of the disk and $\mu=\cos\gamma$, with $\gamma$ the angle between the emergent intensity and the line of sight. We use photon-weighted stellar atmosphere models of \citet{Kurucz94,Kurucz05} that bracket the published values of stellar $T_{\rm eff}$ and log $g$, and convolve these with the total EPOXI response function, including filter, optics and CCD response. We fit for the four coefficients of the non-linear form of the limb-darkening using 17 positions across the stellar limb, at 2~nm intervals along the 350--1000~nm bandpass. We calculate the final set of coefficients as the weighted average when integrated over the bandpass, and bi-linearly interpolate across $T_{\rm eff}$ and log~$g$ for each target. The final set of coefficients for each target is given in Table \ref{tab:tres3} for TrES-3, Table \ref{tab:tres2} for TrES-2 and Table \ref{tab:wasp3} for WASP-3.
The quality of the EPOCh light curves is nearly sufficient to fit for the limb-darkening coefficients rather than assuming theoretical values. Ultimately, the degeneracies between the geometric parameters of the transiting system and the limb-darkening coefficients prevent us from placing meaningful constraints on the coefficients. In the case of HAT-P-4, however, the system is very close to edge-on ($i=89.9^{+0.1}_{-2.2}$ degrees), which reduces the parameter space considerably. Therefore, for HAT-P-4 we instead use a quadratic law for the limb darkening, $I_{\mu}/I_1 = 1 - a(1-\mu) - b(1-\mu)^2$, and allow two linear combinations of the coefficients, $c_1 = 2a + b$ and $c_2 = a-2b$, to be free parameters in the transit analysis; this produces a better fit to the data, as quantified by the goodness-of-fit estimator defined below.
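For reference, the two limb-darkening laws used in this work, together with the mapping between the quadratic coefficients and the fitted linear combinations, can be written compactly. This is an illustrative Python sketch; the TrES-3 coefficients from Table \ref{tab:tres3} are used purely as example inputs.

```python
import numpy as np

def claret_nonlinear(mu, c):
    """Claret (2000) four-coefficient law:
    I(mu)/I(1) = 1 - sum_n c_n * (1 - mu^(n/2)), n = 1..4."""
    return 1.0 - sum(cn * (1.0 - mu**((n + 1) / 2.0)) for n, cn in enumerate(c))

def quadratic(mu, a, b):
    """Quadratic law: I(mu)/I(1) = 1 - a(1-mu) - b(1-mu)^2."""
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu)**2

# HAT-P-4 is fit with the linear combinations c1 = 2a + b and c2 = a - 2b;
# inverting gives a = (2*c1 + c2) / 5 and b = (c1 - 2*c2) / 5.

# Both laws are normalized to unity at disk center (mu = 1).
c_tres3 = [0.5169, -0.6008, 1.4646, -0.5743]
print(claret_nonlinear(1.0, c_tres3))  # 1.0 at disk center
print(quadratic(1.0, 0.314, 0.366))    # 1.0 at disk center
```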
When fitting the transits, we use the analytic equations of \citet{Mandel02} to generate a model transit, and use $\chi^2$ as a goodness-of-fit estimator. We use the Levenberg-Marquardt algorithm to fit three dimensionless geometric parameters of the system: $R_p/R_{\star}$, $R_{\star}/a$ and $\cos i$, where $R_p$ is the planetary radius, $R_{\star}$ is the stellar radius, $a$ is the semi-major axis of the planetary orbit and $i$ is the inclination of the orbit. We fix the period to the published value, but allow the time of center of transit to vary independently for each of the transits. We then use the published mass values for each of the systems to convert the transit parameters to physical properties, drawing values from \citet{Kovacs07} for HAT-P-4, \citet{Sozzetti09} for TrES-3, \citet{Sozzetti07} for TrES-2 and \citet{Pollacco08} for WASP-3. The final results of these fits are given in Table \ref{tab:hatp4} for HAT-P-4, Table \ref{tab:tres3} for TrES-3, Table \ref{tab:tres2} for TrES-2 and Table \ref{tab:wasp3} for WASP-3. We also give the transit duration from first to fourth contact for each best-fit model. For WASP-3, we discard the final transit (which was significantly offset in flux due to correlated noise), and also a partial transit (which included only the ingress), for a total of six transits. The phase-folded and binned transits for each target are shown in Figure \ref{fig:hatp4_phased} for HAT-P-4, Figure \ref{fig:tres3_phased} for TrES-3, Figure \ref{fig:tres2_phased} for TrES-2 and Figure \ref{fig:wasp3_phased} for WASP-3.
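As an illustration of the model being fit, the uniform-source (no limb darkening) special case of the \citet{Mandel02} light curve is compact enough to sketch directly. The full analysis uses the limb-darkened form and Levenberg-Marquardt minimization; the sketch below only evaluates the model and its $\chi^2$.

```python
import numpy as np

def uniform_source_flux(z, p):
    """Mandel & Agol (2002) light curve for a uniform stellar disk.
    z: projected star-planet separation in stellar radii; p = Rp/Rstar."""
    z = np.asarray(z, dtype=float)
    f = np.ones_like(z)                                  # out of transit
    full = z <= 1.0 - p                                  # planet fully on the disk
    f[full] = 1.0 - p**2
    part = (np.abs(1.0 - p) < z) & (z < 1.0 + p)         # ingress/egress overlap
    zp = z[part]
    k0 = np.arccos((p**2 + zp**2 - 1.0) / (2.0 * p * zp))
    k1 = np.arccos((1.0 - p**2 + zp**2) / (2.0 * zp))
    area = np.sqrt((4.0 * zp**2 - (1.0 + zp**2 - p**2)**2) / 4.0)
    f[part] = 1.0 - (p**2 * k0 + k1 - area) / np.pi
    return f

def chi2(flux, err, model):
    """Goodness-of-fit statistic minimized in the transit analysis."""
    return np.sum(((flux - model) / err)**2)
```

At mid-transit the flux deficit is $p^2 = (R_p/R_\star)^2$, and at $z=1$ (planet center on the limb) roughly half the planet overlaps the disk.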
The errors on the parameters are calculated using the residual permutation ``rosary bead'' method \citep{Winn08}. For each target, we find the residuals to the best-fit model. We shift these residuals forward collectively to the next time stamp and add the best fit models back to the new residuals, generating a new realization of the light curve which retains the correlated noise signals in the original light curve. We repeat this process 8000 times (covering approximately six days) and each time we fit for and record the geometric parameters, times of center of transit, and limb-darkening coefficients where appropriate. For each parameter we construct a histogram of the 8000 measurements, to which we fit a Gaussian. We then define the error on that parameter by the half-width half-maximum value of the best fit Gaussian. We find that increasing the number of iterations beyond 4000 does not significantly change the calculated errors.
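The residual-permutation procedure can be sketched as follows. This is an illustrative Python version: `fit_func` is a placeholder for the full Levenberg-Marquardt transit fit, and using the sample standard deviation in place of a Gaussian fitted to the histogram is a simplifying assumption.

```python
import numpy as np

def rosary_bead_errors(t, flux, model_flux, fit_func, n_shifts=1000):
    """Residual-permutation ('rosary bead') error estimate: cyclically shift
    the residuals, refit each synthetic light curve, and convert the spread
    of the refit parameters into an uncertainty."""
    residuals = flux - model_flux
    params = []
    for s in range(1, n_shifts + 1):
        # Shift residuals to the next time stamps, preserving correlated noise.
        synthetic = model_flux + np.roll(residuals, s)
        params.append(fit_func(t, synthetic))
    params = np.array(params)
    # HWHM of a Gaussian is sqrt(2 ln 2) times its sigma.
    return np.sqrt(2.0 * np.log(2.0)) * params.std(axis=0)
```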
To find the errors in the transit times, we perform a second rosary bead analysis, holding the geometric and limb-darkening values fixed and allowing only the times of center of transit to vary. We find that 4000 iterations are sufficient to sample the range of correlated noise signals, and calculate the errors in the same fashion as the geometric parameters. For each target we calculate a new orbital period and epoch by performing a weighted linear fit to the EPOCh transit times and any published transit times.
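The weighted linear fit for the ephemeris, $T_c(E) = T_0 + PE$, has a standard closed form. The sketch below is illustrative Python, not the pipeline code; it returns the epoch, period and their formal errors.

```python
import numpy as np

def fit_ephemeris(epochs, times, errors):
    """Weighted least-squares fit of T_c(E) = T0 + P * E to transit times."""
    w = 1.0 / np.asarray(errors, dtype=float)**2
    E = np.asarray(epochs, dtype=float)
    T = np.asarray(times, dtype=float)
    S, Sx, Sy = w.sum(), (w * E).sum(), (w * T).sum()
    Sxx, Sxy = (w * E * E).sum(), (w * E * T).sum()
    delta = S * Sxx - Sx**2
    P = (S * Sxy - Sx * Sy) / delta          # best-fit period
    T0 = (Sxx * Sy - Sx * Sxy) / delta       # best-fit epoch
    P_err = np.sqrt(S / delta)               # formal error on the period
    T0_err = np.sqrt(Sxx / delta)            # formal error on the epoch
    return T0, P, T0_err, P_err
```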
\begin{deluxetable}{lc}
\tabletypesize{\scriptsize}
\tablecaption{HAT-P-4 system parameters}
\tablewidth{0pt}
\tablehead{\colhead{Parameter} & \colhead{Value}}
\startdata
Adopted values\tablenotemark{a} & \\
$M_{\star}$ $(M_{\odot})$ & $1.26\pm0.14$ \\
$M_p$ $(M_{\rm Jup})$ & $0.68\pm0.04$ \\
\\
Transit fit values & \\
$R_p/R_{\star}$ & $0.0855\pm0.0078$\\
$R_{\star}/a$ & $0.1672\pm0.0078$\\
$i$ (deg) & $89.67\pm0.30$ \\
\\
Derived values & \\
$P$ (days) & $3.0565114\pm0.0000028$ \\
$T_c$ (BJD) & $2,454,502.56227\pm0.00021$ \\
$R_{\star}$ ($R_{\odot}$) & $1.602\pm0.061$ \\
$R_p$ ($R_{\rm Jup}$) & $1.332\pm0.052$ \\
$\tau$ (mins) & $255.6\pm1.9$ \\
\\
Limb-darkening coefficients & \\
$a$ & 0.314 \\
$b$ & 0.366 \\
\\
Transit times (BJD) & $ 2,454,490.33445\pm 0.00072$\\
& $ 2,454,493.39232\pm 0.00061$\\
& $ 2,454,496.44984\pm 0.00056$\\
& $ 2,454,499.50426\pm 0.00070$\\
& $ 2,454,502.56156\pm 0.00056$\\
& $ 2,454,505.62006\pm 0.00082$\\
& $ 2,454,508.67569\pm 0.00056$\\
& $ 2,454,649.27624\pm 0.00064$\\
& $ 2,454,652.33053\pm 0.00065$\\
& $ 2,454,655.38842\pm 0.00065$\\
\enddata
\tablenotetext{a}{Masses are from \cite{Kovacs07}.}
\label{tab:hatp4}
\end{deluxetable}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig5.eps}
\caption{{\it Upper panel}: The seven HAT-P-4 transits from the original observing schedule, phase-folded and binned in five minute intervals. The solid line is the best-fit transit model. {\it Lower panel}: The residuals when the best-fit model is subtracted from the data.}
\label{fig:hatp4_phased}
\end{center}
\end{figure}
\begin{deluxetable}{lc}
\tabletypesize{\scriptsize}
\tablecaption{TrES-3 system parameters}
\tablewidth{0pt}
\tablehead{\colhead{Parameter} & \colhead{Value}}
\startdata
Adopted values\tablenotemark{a} & \\
$M_{\star}$ $(M_{\odot})$ & $0.928^{+0.028}_{-0.048}$ \\
$M_p$ $(M_{\rm Jup})$ & $1.910^{+0.075}_{-0.080}$ \\
\\
Transit fit values & \\
$R_p/R_{\star}$ & $0.1661\pm0.0343$\\
$R_{\star}/a$ & $0.1664\pm0.0204$\\
$i$ (deg) & $81.99\pm0.30$ \\
\\
Derived values & \\
$P$ (days) & $1.30618608\pm0.00000038$ \\
$T_c$ (BJD) & $2,454,538.58069\pm0.00021$ \\
$R_{\star}$ ($R_{\odot}$) & $0.817\pm0.022$ \\
$R_p$ ($R_{\rm Jup}$) & $1.320\pm0.057$ \\
$\tau$ (mins) & $81.9\pm1.1$ \\
\\
Limb-darkening coefficients & \\
$c_1$ & 0.5169\\
$c_2$ & -0.6008\\
$c_3$ & 1.4646\\
$c_4$ & -0.5743\\
\\
Transit times (BJD) & $2,454,532.04939 \pm 0.00033$\\
& $2,454,533.35515 \pm 0.00035$\\
& $2,454,537.27463 \pm 0.00038$\\
& $2,454,538.58126 \pm 0.00035$\\
& $2,454,539.88703 \pm 0.00040$\\
& $2,454,541.19261 \pm 0.00035$\\
& $2,454,542.49930 \pm 0.00041$\\
\enddata
\tablenotetext{a}{Masses are from \cite{Sozzetti09}.}
\label{tab:tres3}
\end{deluxetable}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig6.eps}
\caption{{\it Upper panel}: The seven TrES-3 transits, phase-folded and binned in two minute intervals. The solid line is the best-fit transit model. {\it Lower panel}: The residuals when the best-fit model is subtracted from the data.}
\label{fig:tres3_phased}
\end{center}
\end{figure}
\begin{deluxetable}{lc}
\tabletypesize{\scriptsize}
\tablecaption{TrES-2 system parameters}
\tablewidth{0pt}
\tablehead{\colhead{Parameter} & \colhead{Value}}
\startdata
Adopted values\tablenotemark{a} & \\
$M_{\star}$ $(M_{\odot})$ & $0.98\pm0.062$ \\
$M_p$ $(M_{\rm Jup})$ & $1.198\pm0.053$ \\
\\
Transit fit values & \\
$R_p/R_{\star}$ & $0.1278\pm0.0094$\\
$R_{\star}/a$ & $0.1230\pm0.0179$\\
$i$ (deg) & $84.15\pm0.16$ \\
\\
Derived values & \\
$P$ (days) & $2.47061344\pm0.0000075$ \\
$T_c$ (BJD) & $2,454,664.23039\pm0.00018$ \\
$R_{\star}$ ($R_{\odot}$) & $0.940\pm0.026$ \\
$R_p$ ($R_{\rm Jup}$) & $1.169\pm0.034$ \\
$\tau$ (mins) & $107.3\pm1.1$ \\
\\
Limb-darkening coefficients & \\
$c_1$ & 0.3899\\
$c_2$ & -0.1391\\
$c_3$ & 0.9662\\
$c_4$ & -0.0329\\
\\
Transit times (BJD) & $2,454,646.93735 \pm 0.00032$\\
& $2,454,656.81879 \pm 0.00034$\\
& $2,454,659.28871 \pm 0.00042$\\
& $2,454,661.76005 \pm 0.00044$\\
& $2,454,664.23072 \pm 0.00050$\\
& $2,454,669.17156 \pm 0.00028$\\
& $2,454,671.64117 \pm 0.00028$\\
& $2,454,674.11318 \pm 0.00033$\\
& $2,454,676.58257 \pm 0.00051$\\
\enddata
\tablenotetext{a}{Masses are from \cite{Sozzetti07}.}
\label{tab:tres2}
\end{deluxetable}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig7.eps}
\caption{{\it Upper panel}: The nine TrES-2 transits, phase-folded and binned in two minute intervals. The solid line is the best-fit transit model. {\it Lower panel}: The residuals when the best-fit model is subtracted from the data.}
\label{fig:tres2_phased}
\end{center}
\end{figure}
\begin{deluxetable}{lc}
\tabletypesize{\scriptsize}
\tablecaption{WASP-3 system parameters}
\tablewidth{0pt}
\tablehead{\colhead{Parameter} & \colhead{Value}}
\startdata
Adopted values\tablenotemark{a} & \\
$M_{\star}$ $(M_{\odot})$ & $1.24^{+0.06}_{-0.11}$ \\
$M_p$ $(M_{\rm Jup})$ & $1.76^{+0.08}_{-0.14}$ \\
\\
Transit fit values & \\
$R_p/R_{\star}$ & $0.1051\pm0.0124$\\
$R_{\star}/a$ & $0.1989\pm0.0287$\\
$i$ (deg) & $84.22\pm0.81$ \\
\\
Derived values & \\
$P$ (days) & $1.8468373\pm0.0000014$ \\
$T_c$ (BJD) & $2,454,686.82069\pm0.00039$ \\
$R_{\star}$ ($R_{\odot}$) & $1.354\pm0.056$ \\
$R_p$ ($R_{\rm Jup}$) & $1.385\pm0.060$ \\
$i$ (deg) & $84.22\pm0.81$ \\
$\tau$ (mins) & $167.3\pm1.3$ \\
\\
Limb-darkening coefficients & \\
$c_1$ & 0.2185\\
$c_2$ & 0.6183\\
$c_3$ & -0.1040\\
$c_4$ & -0.0426\\
\\
Transit times (BJD) & $2,454,679.43264 \pm 0.00050$\\
& $2,454,681.27911 \pm 0.00040$\\
& $2,454,683.12740 \pm 0.00035$\\
& $2,454,684.97486 \pm 0.00027$\\
& $2,454,686.82053 \pm 0.00059$\\
& $2,454,690.51381 \pm 0.00055$\\
& $2,454,692.36117 \pm 0.00043$\\
& $2,454,694.20711 \pm 0.00042$\\
\enddata
\tablenotetext{a}{Masses are from \cite{Pollacco08}.}
\label{tab:wasp3}
\end{deluxetable}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig8.eps}
\caption{{\it Upper panel}: The eight WASP-3 transits, phase-folded and binned in two minute intervals. The solid line is the best-fit transit model. {\it Lower panel}: The residuals when the best-fit model is subtracted from the data. The significant in-transit deviation from the model is discussed in Section \ref{sec:disc}.}
\label{fig:wasp3_phased}
\end{center}
\end{figure}
\section{Secondary eclipse constraints}
\label{sec:eclipses}
Our constraints on the secondary eclipse depths are limited by the correlated noise in the {\it EPOXI} data. Ideally, for each target we would combine our multiple observations of the secondary eclipses to increase the signal to noise. However, the fluctuations due to the correlated noise preclude this. For example, Figure \ref{fig:tres3_eclipses} shows six of the TrES-3 secondary eclipses, where in some cases correlated noise results in an increase in flux at the time of secondary eclipse, instead of the expected decrement. If we assume that the secondary eclipse in the EPOCh bandpass, with a central wavelength of 650~nm, is due exclusively to the reflected light of the planet, then the eclipse depths we would anticipate, for a geometric albedo of 1, would range from 0.02\% for HAT-P-4 to 0.08\% for TrES-3.\footnote{In fact, the CCD is found to be quite efficient at the redder wavelengths, and it is therefore feasible that for the hottest planets there may be a contribution from the thermal emission of the planet, resulting in deeper secondary eclipses.}
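The anticipated depths quoted above follow from ${\rm depth} = A_g (R_p/a)^2$, with $R_p/a$ obtained as the product of the transit-fit ratios in the tables. A quick numerical check (illustrative Python; a geometric albedo of 1 and purely reflected light are the stated assumptions):

```python
# Expected reflected-light eclipse depth: depth = A_g * (Rp / a)^2,
# with Rp/a = (Rp/Rstar) * (Rstar/a) from the transit fits.
def reflected_depth(rp_over_rstar, rstar_over_a, geometric_albedo=1.0):
    return geometric_albedo * (rp_over_rstar * rstar_over_a)**2

print(100 * reflected_depth(0.0855, 0.1672))  # HAT-P-4: ~0.02 per cent
print(100 * reflected_depth(0.1661, 0.1664))  # TrES-3:  ~0.08 per cent
```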
Since the fluctuations from correlated noise in the measured eclipse depths are sometimes larger than the signal we expect to measure, we choose not to combine the multiple observations and instead analyze each eclipse independently. Our intent is to use the scatter of the individual eclipse measurements to constrain the amplitude of the correlated noise. As for the transits, for each eclipse in the data we apply a point-by-point correction to the data in and adjacent to the eclipse. For the targets presented here we assume that $e=0$ and therefore that the secondary eclipse occurs at a phase of 0.5. For TrES-3 and TrES-2 this assumption is strongly supported by previous secondary eclipse measurements with the {\it Spitzer} IRAC instrument, which showed no evidence of non-zero eccentricity (Fressin et al. 2010 and O'Donovan et al. 2010 respectively). For HAT-P-4 and WASP-3, the extant radial velocity data are consistent with circular orbits (Kovacs et al. 2007 and Pollacco et al. 2008 respectively). In addition to the point-by-point correction, we fit a linear time-dependent slope to the adjacent out-of-eclipse data to remove any remaining long-timescale trends. Finally, we separate the data into 10 minute bins and remove 3$\sigma$ flux outliers from each bin. For TrES-2, we discard the first observed eclipse, which was obtained during the pre-look for this target, since the pre-look data are not well calibrated by the surface spline generated for the remaining data. We assume this is due to changes in the CCD between the pre-look and the full set of observations. We also discard eclipses where less than half the event is observed: one each for TrES-2, WASP-3 and HAT-P-4. As discussed in Section \ref{sec:details}, we finally discard four of the nine WASP-3 secondary eclipses that fall on regions of the CCD we cannot calibrate.
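The binning and outlier-rejection step can be sketched as follows (illustrative Python; the 10 minute bin width and 3$\sigma$ threshold from the text enter as parameters):

```python
import numpy as np

def bin_and_clip(t, flux, bin_width, nsigma=3.0):
    """Separate the data into fixed-width time bins and flag points deviating
    by more than nsigma from their bin mean; returns a boolean keep-mask."""
    keep = np.zeros(len(t), dtype=bool)
    for lo in np.arange(t.min(), t.max() + bin_width, bin_width):
        sel = (t >= lo) & (t < lo + bin_width)
        if sel.sum() < 2:
            keep |= sel                       # too few points to clip
            continue
        mu, sigma = flux[sel].mean(), flux[sel].std()
        keep |= sel & (np.abs(flux - mu) <= nsigma * sigma)
    return keep
```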
We fit the eclipses using a transit model with the best-fit parameters from the transit analysis and no limb-darkening. We then scale the depth of this model to fit the data, finding the depth that minimizes the $\chi^2$ value. For each target we then find the mean ($\bar{x}$) and standard deviation ($\sigma_x$) of the individual best-fit depths, and define the 95\% confidence upper limit on the eclipse depth as $\bar{x}+2\sigma_x$. The secondary eclipses of HAT-P-4, TrES-3, TrES-2 and WASP-3 are shown in Figures 9--12. The upper limits are given in Table \ref{tab:eclipses}. We note that we achieve a useful constraint only in the case of TrES-3.
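Because the depth enters the eclipse model linearly, the $\chi^2$-minimizing depth has a closed form, and the upper limit follows directly from the scatter of the individual measurements. A sketch (illustrative Python, with an idealized box-shaped eclipse in place of the scaled transit model):

```python
import numpy as np

def best_fit_depth(flux, err, shape):
    """Closed-form least-squares depth for a model flux = 1 - d * shape,
    where shape is 1 in full eclipse and 0 outside (ingress/egress between)."""
    w = 1.0 / err**2
    return np.sum(w * shape * (1.0 - flux)) / np.sum(w * shape**2)

def upper_limit(depths):
    """95% confidence upper limit defined as mean + 2 sigma of the
    individual best-fit depths."""
    d = np.asarray(depths, dtype=float)
    return d.mean() + 2.0 * d.std(ddof=1)
```

Setting the derivative of $\chi^2 = \sum w\,[f - (1 - d\,m)]^2$ with respect to $d$ to zero gives $d = \sum w\,m\,(1-f) / \sum w\,m^2$, which is what `best_fit_depth` evaluates.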
\begin{deluxetable}{lccc}
\tabletypesize{\scriptsize}
\tablecaption{EPOCh secondary eclipse measurements}
\tablewidth{0pt}
\tablehead{\colhead{Target} & \colhead{Eclipse Depth} & \colhead{Upper limit} & \colhead{Implied $A_g$}}
\startdata
HAT-P-4 & $-0.0069\pm0.0397$\% & 0.073\% & 3.5 \\
TrES-3 & $-0.020\pm0.041$\% & 0.062\% & 0.81 \\
TrES-2 & $0.023\pm0.071$\% & 0.16\% & 6.6 \\
WASP-3 & $0.023\pm0.044$\% & 0.11\% & 2.5 \\
\enddata
\label{tab:eclipses}
\end{deluxetable}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in]{fig9.eps}
\caption{Eight EPOCh secondary eclipse observations of HAT-P-4, offset in relative flux for clarity and binned in five minute intervals. The error on each point is $\sigma/\sqrt{N}$, where $\sigma$ is the scatter in the bin and $N$ the number of points. The solid lines are the best fit eclipse model in each case. The bottom three eclipses were obtained in the contingent block of observations.}
\label{fig:hatp4_eclipses}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in]{fig10.eps}
\caption{Six EPOCh secondary eclipse observations of TrES-3, offset in relative flux for clarity. The error on each point is $\sigma/\sqrt{N}$, where $\sigma$ is the scatter in the bin and $N$ the number of points. The solid lines are the best fit eclipse model in each case.}
\label{fig:tres3_eclipses}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig11.eps}
\caption{Six EPOCh secondary eclipse observations of TrES-2, offset in relative flux for clarity. The error on each point is $\sigma/\sqrt{N}$, where $\sigma$ is the scatter in the bin and $N$ the number of points. The solid lines are the best fit eclipse model in each case.}
\label{fig:tres2_eclipses}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig12.eps}
\caption{Four EPOCh secondary eclipse observations of WASP-3, offset in relative flux for clarity. The error on each point is $\sigma/\sqrt{N}$, where $\sigma$ is the scatter in the bin and $N$ the number of points. The solid lines are the best fit eclipse model in each case.}
\label{fig:wasp3_eclipses}
\end{center}
\end{figure}
\section{Discussion}
\label{sec:disc}
\subsection{HAT-P-4}
\label{sec:hatp4}
For HAT-P-4, our estimates of the system parameters are consistent with (and in the case of inclination, more precise than) those published by \citet{Kovacs07} and \citet{Torres08}. We calculate $R_p=1.332\pm0.052$ $R_{\rm Jup}$, $R_{\star}=1.602\pm0.061$ $R_{\odot}$, $i=89.67\pm0.30$ degrees and $\tau=255.6\pm1.9$ minutes, where $\tau$ is the transit duration from first to fourth contact. We use the discovery epoch and the ten EPOCh transit times presented in this paper to produce a new refined ephemeris of $T_c({\rm BJD}) = 2454245.81531\pm0.00021 + (3.0565114\pm0.0000028)E$. Figure \ref{fig:hatp4_ttvs} shows the residuals to the new ephemeris. We see no evidence for transit timing variations in the residuals, which have a scatter of roughly 2 minutes.
We use eight of the nine observed secondary eclipses to constrain the depth of the eclipse in the EPOCh bandpass, discarding the ninth due to poor coverage of the event. The eclipses are shown in Figure \ref{fig:hatp4_eclipses}. We set a 95\% confidence upper limit on the eclipse depth of 0.073\%, which, if it were produced entirely by reflected light, would correspond to a planetary geometric albedo of $A_g=3.5$, a physically impossible value. In the future, full phase curves of HAT-P-4 are scheduled to be observed in the near-infrared 3.6 and 4.5 micron IRAC bands, as part of the Warm {\it Spitzer} census of exoplanet atmospheres, at which point we may begin to study the atmosphere in more detail.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig13.eps}
\caption{The transit times of HAT-P-4. The open diamonds are the EPOCh transit times from this paper; the asterisk is the discovery epoch \citep{Kovacs07}. {\it Lower panel}: An expanded view of the EPOCh transit times, with $1\sigma$ errors of 48-71 seconds.}
\label{fig:hatp4_ttvs}
\end{center}
\end{figure}
\subsection{TrES-3}
\label{sec:tres3}
For TrES-3, we find system parameters consistent with those published by \citet{ODonovan07}, \citet{Sozzetti09} and \citet{Gibson09}, with $R_p=1.320\pm0.057$ $R_{\rm Jup}$, $R_{\star}=0.817\pm0.022$ $R_{\odot}$, $i=81.99\pm0.30$ degrees and $\tau=81.9\pm1.1$ minutes. In the upper panel of Figure \ref{fig:tres3_trends} we plot the published values of inclination with time and note a weak trend towards decreasing inclination; however, it is present at only the 1.5$\sigma$ level, and hence not significant (and largely dependent on the most recent value from Sozzetti et al. 2009). A more model-independent way of constraining changes in the transit parameters with time is to measure the transit duration. Where available, we use the quoted transit duration and error; otherwise we calculate the transit duration from the published parameters, using equation (4) from \citet{Charbonneau06}. Following the analytic approximation of \citet{Carter08}, we set the error on these calculated transit durations to twice the error in the measured transit times for each source. Although this error was originally derived for the transit duration from mid-ingress to mid-egress, rather than the duration from first to fourth contact, we find that for the {\it EPOXI} data the errors calculated using this approximation and the errors measured from the data themselves are nearly identical (1.1 minutes and 1.0 minutes respectively). We plot the derived values in the lower panel of Figure \ref{fig:tres3_trends}, and we see no evidence of a change in the transit duration with time.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig14.eps}
\caption{{\it Upper panel}: The estimates of the inclination for TrES-3 as a function of time. {\it Lower panel}: The estimates of the TrES-3 transit durations.}
\label{fig:tres3_trends}
\end{center}
\end{figure}
In Section \ref{sec:details} we noted that in the process of calibrating the light curve, a long-term variability was evident. This variability is consistent with stellar variability due to spots. Using a $v \sin i$ of $<$2 km s$^{-1}$ \citep{ODonovan07}, the rotational period of TrES-3 must be $>$21 days, considerably longer than our observation span of 12 days. We therefore cannot place any additional constraints on the rotational period of TrES-3; however, we note that if the variability is due to spots on the stellar surface rotating in and out of view, then additional monitoring of TrES-3 may reveal the rotational period.
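The quoted lower limit on the rotation period follows from $P_{\rm rot} = 2\pi R_{\star}/(v \sin i)$. A quick check (illustrative Python; the stellar radius is the TrES-3 value from Table \ref{tab:tres3}, and the solar radius in km is an assumed constant), giving approximately 21 days:

```python
import math

R_SUN_KM = 6.957e5  # assumed nominal solar radius in km

def min_rotation_period_days(rstar_rsun, vsini_kms):
    """Lower limit on the rotation period: P = 2*pi*Rstar / (v sin i),
    using the upper limit on v sin i."""
    circumference_km = 2.0 * math.pi * rstar_rsun * R_SUN_KM
    return circumference_km / vsini_kms / 86400.0

print(round(min_rotation_period_days(0.817, 2.0), 1))  # ~21 days for TrES-3
```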
For TrES-3, we calculate a new ephemeris of $T_c({\rm BJD})=2454538.58069\pm0.00021+(1.30618606\pm0.00000038)E$ using the published transit times and the seven EPOCh transits presented in this paper. Figure \ref{fig:tres3_ttvs} shows the residuals to the new ephemeris. We see no evidence of the period changing with time, nor of transit timing variations larger than 1 minute.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig15.eps}
\caption{The times of transit of TrES-3. O'D07: \citet{ODonovan07}; S09: \citet{Sozzetti09}; G09: \citet{Gibson09}; EPOXI: this paper. {\it Lower panel}: An expanded view of the EPOCh transit times, with $1\sigma$ errors of 28--35 seconds.}
\label{fig:tres3_ttvs}
\end{center}
\end{figure}
Using the six EPOCh secondary eclipse observations of TrES-3, shown in Figure \ref{fig:tres3_eclipses}, we set a 95\% confidence upper limit on the eclipse depth of 0.062\%. This indicates the planetary geometric albedo must be $A_g<0.81$ in the EPOCh bandpass. \citet{Winn08} observed secondary eclipses of TrES-3 in the $i$, $z$ and $R$ bands, and were able to put 99\% confidence upper limits on the eclipse depths of 0.024\%, 0.050\% and 0.086\% respectively. The EPOCh upper limit at 0.65 $\micron$ is consistent with the $R$ band upper limit. \citet{deMooij09} observed the secondary eclipse in the $K$ band and found a depth of 0.241$\pm$0.043\%. \citet{Fressin10} observed secondary eclipses of TrES-3 with the {\it Spitzer} IRAC instrument, measuring depths of 0.356$\pm$0.036\%, 0.372$\pm$0.054\%, 0.449$\pm$0.097\% and 0.475$\pm$0.046\% in the 3.6, 4.5, 5.8 and 8.0 micron bands respectively. The secondary eclipse measurements are shown in Figure \ref{fig:tres3_spectrum} as a function of wavelength.
Given the high levels of stellar irradiation, the atmosphere of TrES-3 was anticipated to host a thermal inversion \citep{Fortney08,deMooij09}. Using all data sets, however, \citet{Fressin10} found the observations to be best fit with a dayside atmosphere model without a thermal inversion.
Our model spectra are computed using the exoplanet atmosphere model developed in \citet{Madhusudhan09}. The model consists of a line-by-line radiative transfer code with the constraints of hydrostatic equilibrium and global energy balance, coupled to a parametric pressure-temperature (P-T) structure and parametric molecular abundances (parametrized as deviations from thermochemical equilibrium and solar abundances). This approach allows us to compute large ensembles of models and to explore efficiently the parameter space of molecular compositions and temperature structure.
We confirm previous findings that existing detections of day-side observations can be explained to within the 1$\sigma$ uncertainties by models without thermal inversions. The black curve in Figure \ref{fig:tres3_spectrum} shows one such model spectrum, which has a chemical composition at thermochemical equilibrium and solar abundances for the elements. The model is also consistent with the EPOCh upper limit at 0.65 microns, and with the upper limits from \citet{Winn08}. The dark green dashed curve shows a 1600K blackbody spectrum of the planet, indicating that the data cannot be explained by a pure blackbody. The model reported here has a day-night energy redistribution fraction of 0.4, indicating very efficient redistribution. Therefore, based on previous studies and our current finding, existing data do not require the presence of a thermal inversion in TrES-3. However, a detailed exploration of the model parameter space would be needed to rule out thermal inversions with a given statistical significance \citep{Madhusudhan10}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6in]{fig16.eps}
\caption{The optical and near-infrared secondary eclipse measurements of TrES-3. The EPOCh upper limit of 0.062\% is shown in blue at 0.65 microns. The remaining upper limits in the optical are from \citet{Winn08}; the measurement at 2.2 microns is from \citet{deMooij09}; and the four measurements from 3.6 to 8.0 microns are from \citet{Fressin10}. The solid black line is a representative model from the set of models that fit the data to within 1$\sigma$, and the dashed line shows a blackbody spectrum for a temperature of 1600K. The green circles represent the model integrated to the {\it Spitzer} bandpasses. The inset is the temperature-pressure profile for the model shown.}
\label{fig:tres3_spectrum}
\end{center}
\end{figure}
\subsection{TrES-2}
\label{sec:tres2}
For TrES-2, we derive system parameters that are consistent at the 1.5$\sigma$ level with estimates published by \citet{ODonovan06}, \citet{Sozzetti07}, and \citet{Holman07}, finding $R_p=1.169\pm0.034$ $R_{\rm Jup}$, $R_{\star}=0.940\pm0.026$ $R_{\odot}$, $i=84.15\pm0.16$ degrees and $\tau=107.3\pm1.1$ minutes.
As discussed in Section \ref{sec:intro}, there is currently a debate as to whether the inclination of the planetary orbit and duration of the TrES-2 transit are decreasing with time due to orbital precession. In the upper panel of Figure \ref{fig:tres2_trends} we plot the estimates for the inclination as a function of time. For the inclination, the error bars of \citet{Mislis09}, \citet{Mislis10} and \citet{Scuderi09} were calculated by fixing the stellar and planetary radii and allowing only the inclination and time of center of transit to vary. The remainder of the inclination error bars were calculated allowing all of the geometric parameters to vary simultaneously, which explains why they are considerably larger than the later results. Since the errors skew any weighted linear fit towards an unrealistically large {\it increase} in the inclination with time, we instead plot an unweighted linear fit to guide the eye. We note that TrES-2 is in the {\it Kepler} field and that any change in inclination with time will soon be measured with exquisite precision.
The inclination measured from a particular transit light curve will necessarily depend on the geometric parameters and to some extent the choice of limb-darkening treatment. However, the transit duration is directly measurable from the light curve and should not depend on the limb darkening. The lower panel of Figure \ref{fig:tres2_trends} shows the published transit durations as a function of time. Where they were not given, we calculated the durations and errors as described for TrES-3. In this case, we perform a weighted linear fit and do see a formally significant decrease in the transit duration with time. However, this conclusion is heavily dependent on one point, in this case the duration calculated from \citet{Holman07}. If this point is excluded from the fit, then $d\tau/dt=-0.0015\pm0.0015$, consistent with no change in the transit duration with time and therefore we do not claim to have detected a change in the transit duration with time. Again, we expect {\it Kepler} to provide a clear answer to this question.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig17.eps}
\caption{{\it Upper panel}: The estimates of the inclination of TrES-2 as a function of time. The dotted line is an unweighted linear fit. O'D07: \citep{ODonovan07}; H07: \citep{Holman07}; R09: \citep{Rabus09}; M09/10: \citep{Mislis09,Mislis10}; S09: \citep{Scuderi09}; EPOXI: this paper. {\it Lower panel}: The TrES-2 transit durations. In this case the dotted line is a weighted linear fit.}
\label{fig:tres2_trends}
\end{center}
\end{figure}
Using the published transit times of TrES-2 and the nine transits observed by EPOCh presented in this paper, we find a new weighted ephemeris of $T_c({\rm BJD})=2454664.23039\pm0.00018+2.47061344\pm0.00000075E$. The residuals to this ephemeris are shown in Figure \ref{fig:tres2_ttvs}. In the EPOCh residuals, we see no variations in the transit times above the level of 2 minutes; excluding the amateur data from the Exoplanet Transit Database due to the large error bars, the scatter in the full set of residuals is less than 5 minutes. We see no evidence for long term drifts in the period.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig18.eps}
\caption{{\it Upper panel}: The transit times of TrES-2. O'D06: \citet{ODonovan06}; H07: \citet{Holman07}; R09: \citet{Raetz09}; R09(ETD): \citet{Raetz09} (from the Exoplanet Transit Database, http://var.astro.cz/ETD); EPOXI: this paper. {\it Lower panel}: An expanded view of the EPOCh transit times, with $1\sigma$ errors of 24--44 seconds.}
\label{fig:tres2_ttvs}
\end{center}
\end{figure}
We used six of the eight EPOCh secondary eclipses of TrES-2 to place a 95\% confidence upper limit on the eclipse depth of 0.16\%. This corresponds to a planetary geometric albedo of $A_g = 6.6$. As for HAT-P-4, this is not a physically plausible value.
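As a back-of-the-envelope illustration (not part of the analysis above), the geometric albedo implied by an eclipse-depth upper limit follows from $A_g = \delta\,(a/R_p)^2$. The sketch below approximately reproduces the TrES-2 number; note that the semi-major axis used is an assumed literature-style value, since it is not quoted in this section.

```python
# Geometric albedo implied by a secondary-eclipse depth upper limit:
#   A_g = depth * (a / R_p)**2
# The semi-major axis a is an ASSUMED value (~0.0356 AU for TrES-2),
# not quoted in this section; depth and R_p are taken from the text.
AU_KM = 1.496e8              # kilometres per AU
R_JUP_KM = 71492.0           # Jupiter equatorial radius in km

depth = 0.0016               # 95% upper limit on the eclipse depth (0.16%)
a_km = 0.0356 * AU_KM        # assumed semi-major axis of TrES-2b
r_p_km = 1.169 * R_JUP_KM    # planetary radius derived in this work

a_g = depth * (a_km / r_p_km) ** 2
print(a_g)
```

Any albedo above unity simply signals that the eclipse-depth limit is not a physically meaningful constraint, as noted in the text.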
\subsection{WASP-3}
\label{sec:wasp3}
For WASP-3, we measure system parameters that are consistent with, and an improvement upon, previously published parameters from \citet{Pollacco08} and \citet{Gibson08}, finding $R_p=1.385\pm0.060$ $R_{\rm Jup}$, $R_{\star}=1.354\pm0.056$ $R_{\odot}$, $i=84.22\pm0.81$ degrees and $\tau=167.3\pm1.3$ minutes. We generate a new refined ephemeris from the published transit times and the eight EPOCh transits in this paper, finding $T_c({\rm BJD})=2454686.82069\pm0.00039+1.8468373\pm0.0000014E$. The residuals to this ephemeris are shown in Figure \ref{fig:wasp3_ttvs}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig19.eps}
\caption{{\it Upper panel}: The transit times of WASP-3. P08: \citet{Pollacco08}; G08: \citet{Gibson08}; EPOXI: this paper. {\it Lower panel}: An expanded view of the EPOXI transit times, with $1\sigma$ errors of 23--51 seconds.}
\label{fig:wasp3_ttvs}
\end{center}
\end{figure}
The phase-folded light curve of WASP-3 (Figure \ref{fig:wasp3_phased}) shows correlated residuals in the latter half of transit. Since the noise in transit exceeds the noise out of transit, one possible explanation is spot activity on the surface of the star being eclipsed during transit. However, if we examine the transits individually, we observe that the correlated noise in the full light curve is not typically larger in transit than out of transit. The six transits used in the analysis are shown in Figure \ref{fig:wasp3_transits}. In the transits numbered 2, 3, 4, and 6, large deviations can be seen in the second half of the transit, which leads to residuals in the phased light curve. If star spots were producing correlated residuals in the transits, we would not necessarily expect them to occur at the same phase for each transit. The $v \sin i$ of WASP-3 has been measured by \citet{Simpson09} to be $15.7^{+1.4}_{-1.3}$ km s$^{-1}$, which corresponds to a stellar rotation period of 4.2 days. Transits of WASP-3 are spaced by 1.85 days, so it is improbable for spot activity to appear at the same phase in successive transits. Given these constraints, we conclude that the alignment of signals with phase in the EPOCh transits of WASP-3 is coincidental and due to instrumental artifacts.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in, angle=90]{fig20.eps}
\caption{The six EPOCh transits of WASP-3. The best fit model for the combined set of transits is plotted in each case. The scatter around the model is not typically larger in transit than out of transit, indicating that the in-transit residuals cannot be attributed to star spots.}
\label{fig:wasp3_transits}
\end{center}
\end{figure}
We use four of the nine EPOCh secondary eclipse observations of WASP-3 to set a 95\% confidence upper limit on the eclipse depth of 0.11\%. This corresponds to a planetary geometric albedo of $A_g=2.5$ in the EPOCh bandpass, which is not a useful constraint for the planetary atmosphere.
\section{Conclusion}
\label{sec:conc}
We have presented time series photometry from the NASA {\it EPOXI} Mission of Opportunity for four known transiting planet systems: HAT-P-4, TrES-3, TrES-2 and WASP-3. For each system we provided an updated set of system parameters and orbital period, and placed upper limits on the secondary eclipse depth. For TrES-3, we see evidence of stellar variability over long timescales. We combined the {\it EPOCh} secondary eclipse upper limit for TrES-3 with previously published measurements and confirm that the data are best fit using an atmosphere model with no temperature inversion. For TrES-2, the EPOCh data weaken the claimed trends of decreasing inclination and transit duration \citep{Mislis09,Mislis10}. We have also performed a search for additional transiting planets in the EPOCh photometry for these systems, which we will present in a forthcoming paper.
\acknowledgments
We are extremely grateful to the {\it EPOXI}~Flight and Spacecraft Teams that made these difficult observations possible. At the Jet Propulsion Laboratory, the Flight Team has included M. Abrahamson, B. Abu-Ata, A.-R. Behrozi, S. Bhaskaran, W. Blume, M. Carmichael, S. Collins, J. Diehl, T. Duxbury, K. Ellers, J. Fleener, K. Fong, A. Hewitt, D. Isla, J. Jai, B. Kennedy, K. Klassen, G. LaBorde, T. Larson, Y. Lee, T. Lungu, N. Mainland, E. Martinez, L. Montanez, P. Morgan, R. Mukai, A. Nakata, J. Neelon, W. Owen, J. Pinner, G. Razo Jr., R. Rieber, K. Rockwell, A. Romero, B. Semenov, R. Sharrow, B. Smith, R. Smith, L. Su, P. Tay, J. Taylor, R. Torres, B. Toyoshima, H. Uffelman, G. Vernon, T. Wahl, V. Wang, S. Waydo, R. Wing, S. Wissler, G. Yang, K. Yetter, and S. Zadourian. At Ball Aerospace, the Spacecraft Systems Team has included L. Andreozzi, T. Bank, T. Golden, H. Hallowell, M. Huisjen, R. Lapthorne, T. Quigley, T. Ryan, C. Schira, E. Sholes, J. Valdez, and A. Walsh.
Support for this work was provided by the {\it EPOXI} Project of the National Aeronautics and Space Administration's Discovery Program via funding to the Goddard Space Flight Center, and to Harvard University via Co-operative Agreement NNX08AB64A, and to the Smithsonian Astrophysical Observatory via Co-operative Agreement NNX08AD05A. The authors acknowledge and are grateful for the use of publicly available transit modeling routines by Eric Agol and Kaisey Mandel, and also the Levenberg-Marquardt least-squares minimization routine MPFITFUN by Craig Markwardt. This work has used data obtained by various observers collected in the Exoplanet Transit Database, http://var.astro.cz/ETD.
\newpage
\section{Introduction}%
\vspace{0.3cm}
\par The problem of finding the number of solutions over a finite field of a polynomial equation has been of interest to mathematicians for many
years. A typical result in this direction is the \emph{Hasse-Weil bound}, which states that a smooth projective curve
of genus $g$ defined over a finite field with $q$ elements has between $q+1-2g\sqrt{q}$ and $q+1+2g\sqrt{q}$ points. A natural question to ask is whether there are simple formulas for counting points in terms of interesting mathematical objects.
\vspace{0.3cm}
Classical hypergeometric functions and their relations to counting points on curves over finite fields
have been investigated by mathematicians since the beginning of the twentieth century. Recall that for $a_1, \dots , a_r, b_1, \dots , b_s$, $x \in \mathbb{C}$, the classical hypergeometric series is defined by
\vspace{0.1cm}
\begin{equation}
_{r}F_{s} \left(
\begin{matrix}
a_1, & a_2, & \dots, & a_r \\
b_1, & b_2, & \dots, & b_s \\
\end{matrix}
\Bigg{\vert} x
\right)
:= \sum_{k=0}^{\infty}\frac{(a_1)_k (a_2)_k \cdots (a_r)_k}{(b_1)_k (b_2)_k \cdots (b_s)_k }\frac{x^k}{k!}
\end{equation}
where $(a)_k:=a(a+1)\cdots (a+k-1)$ is the Pochhammer symbol.
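As a quick numerical illustration (not part of the original text), the truncated series can be evaluated with a simple term recursion and checked against a closed form; the identity $_2F_1(1,1;2|x)=-\log(1-x)/x$ used below is standard.

```python
import math

def hyp2f1(a, b, c, x, terms=200):
    """Truncated classical 2F1 series, using the term recursion
    term_{k+1} = term_k * (a+k)(b+k) / ((c+k)(k+1)) * x,
    which follows directly from the Pochhammer symbols (a)_k."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
    return total

# Sanity check against the closed form 2F1(1,1;2|x) = -log(1-x)/x, |x| < 1
x = 0.5
print(hyp2f1(1, 1, 2, x), -math.log(1 - x) / x)
```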
\vspace{0.3cm}
Many connections between classical hypergeometric series, elliptic curves and modular forms have been discovered. For example, if we consider the Legendre family of elliptic curves given by $y^2=x(x-1)(x-t)$, $t\neq 0,1$, and denote
$$_2F_1[a,b;c|t]:={}_{2}F_{1} \left(
\begin{matrix}
a, & b \\
& c \\
\end{matrix}
\Bigg{\vert} t
\right),$$ the specialization
$_2F_1[\frac{1}{2},\frac{1}{2};1|t]$ is a multiple of an elliptic integral which represents a period of the lattice
associated to the previous family, as Kummer showed.
As another example, Beukers \cite{Be93} related a
period of $y^2=x^3-x-t$ to the values $_{2}F_{1}[\frac{1}{12},\frac{5}{12};\frac{1}{2}|\frac{27}{4}t^2]$.
\vspace{0.3cm}
\par In the 1980's, J. Greene \cite{Gr84,Gr87} initiated a study of hypergeometric series over finite fields.
Let $q$ be a power of a prime, and let $\widehat{\mathbb{F}_{q}^{\times}}$ denote the group of multiplicative characters $\chi$ on $\ensuremath{\mathbb{F}}_{q}^{\times}$, extended to all of $\ensuremath{\mathbb{F}}_q$ by setting $\chi(0)=0$. For $A, B \in \widehat{\mathbb{F}_{q}^{\times}}$ we define the binomial coefficient in terms of a Jacobi sum. Specifically,
$$\binom{A}{B}:=\frac{B(-1)}{q}J(A,\overline{B})=\frac{B(-1)}{q} \sum_{x \in \mathbb{F}_{q}} A(x)\overline{B}(1-x).$$
In this notation, we recall Greene's definition of hypergeometric functions over $\ensuremath{\mathbb{F}}_q$.
If $A_0,A_1,\dots,A_n,$ and $B_1,B_2,\dots,B_n$ are characters of $\widehat{\mathbb{F}_{q}^{\times}}$ and $x\in\ensuremath{\mathbb{F}}_q$, then the \emph{Gaussian hypergeometric function over $\ensuremath{\mathbb{F}}_{q}$} is defined by
\begin{equation}\label{hypergeometric function}
_{n+1}F_{n} \left(
\begin{matrix}
A_0, & A_1, & \dots, & A_n \\
& B_1, & \dots, & B_n \\
\end{matrix}
\Bigg{\vert} x
\right)
:= \frac{q}{q-1} \sum_{\chi\in\widehat{\mathbb{F}_{q}^{\times}}} \binom{A_0\chi}{\chi} \binom{A_1\chi}{B_1\chi} \dots \binom{A_n\chi}{B_n\chi} \chi(x)
\end{equation}
where $n$ is a positive integer.
\par Greene explored the properties of these functions and found that they satisfy many summation and transformation formulas analogous to
those satisfied by the classical functions. These similarities generated interest in finding connections that hypergeometric functions over finite fields may have with other objects, for example elliptic curves. In recent years, many results have been proved in this direction and as expected, certain families of elliptic curves are closely related to particular hypergeometric functions over finite fields. Motivated by these types of results, we have explored more relations between Gaussian hypergeometric functions and counting points on varieties over finite fields.
\vspace{0.3cm}
\par Throughout, let $\ensuremath{\mathbb{F}}_q$ denote the finite field with $q$ elements, where $q$ is some prime power. For $z \in \ensuremath{\mathbb{F}}_{q}$ let $\mathcal{C}_{z}$ be the smooth projective curve with affine equation \begin{equation}\label{curve_ms}
\mathcal{C}_{z}: y^l=t^m(1-t)^s(1-zt)^m
\end{equation}
where $l \in \ensuremath{\mathbb{N}}$ and $1\leq m,s<l$ such that $m+s=l$.
Our first result provides an explicit relation between the number of points on certain family of curves over finite fields and values of particular hypergeometric functions.
\begin{teo} \label{thmformula}
Let $a=m/n$ and $b=s/r$ be rational numbers such that $0<a,b<1$, and let $z \in \ensuremath{\mathbb{F}}_{q}$, $z \neq 0,1$. Consider the smooth projective algebraic curve with affine equation given by
$$\mathcal{C}_{z}^{(a,b)}: y^l = t^{l(1-b)}(1-t)^{lb}(1-zt)^{la}$$
where $l:=\text{lcm}(n,r)$.
If $q \equiv 1 \pmod{l}$ then:
\begin{equation}\label{formula}
\# \mathcal{C}_{z}^{(a,b)}(\ensuremath{\mathbb{F}}_{q})= q+1+q\sum_{i=1}^{l-1} \eta_{q}^{ilb}(-1) \,{}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q} ^{il(1-a)}, & \eta_{q} ^{il(1-b)} \\
& \varepsilon \\
\end{array} \: z \right)
\end{equation}
where $\eta_{q} \in \widehat{\ensuremath{\mathbb{F}}_{q}^{\times}}$ is a character of order $l$, and $\# \mathcal{C}_{z}^{(a,b)}(\ensuremath{\mathbb{F}}_{q})$ denotes the number of points that the curve $\mathcal{C}_{z}^{(a,b)}$ has over $\ensuremath{\mathbb{F}}_q$.
\end{teo}
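As an illustrative sketch (not part of the paper), Theorem \ref{thmformula} can be checked numerically for a small case by mirroring the proof: the left-hand side is an affine point count plus a single point at infinity, and the finite-field $_2F_1$ is evaluated through Greene's single character sum, which is recalled in the preliminaries. The prime, primitive root, and evaluation point below are arbitrary small choices.

```python
import cmath

# Illustrative check for q = 13, l = 3, a = b = 2/3, so the curve is
# y^3 = t (1-t)^2 (1-zt)^2.  g = 2 is a primitive root mod 13.
p, l, g = 13, 3, 2
dlog = {pow(g, k, p): k for k in range(p - 1)}   # discrete logarithms

def eta(j, x):
    """eta_q^j, where eta_q has order l; characters vanish at 0."""
    if x % p == 0:
        return 0
    return cmath.exp(2j * cmath.pi * j * dlog[x % p] / l)

def gauss_2f1(jA, jB, z):
    """Greene's 2F1(eta^jA, eta^jB; epsilon | z), via the single
    character sum recalled in the preliminaries (epsilon(z) = 1, z != 0)."""
    s = sum(eta(jB, y) * eta(-jB, 1 - y) * eta(-jA, 1 - z * y)
            for y in range(p))
    return eta(jB, -1) * s / p

z = 3
# Left-hand side: affine points of y^3 = t(1-t)^2(1-zt)^2 plus infinity
lhs = 1 + sum(1 for t in range(p) for y in range(p)
              if pow(y, l, p) == (t * (1 - t) ** 2 * (1 - z * t) ** 2) % p)
# Right-hand side of the theorem: here il(1-a) = il(1-b) = i and ilb = 2i
rhs = p + 1 + p * sum(eta(2 * i, -1) * gauss_2f1(i, i, z)
                      for i in range(1, l))
print(lhs, rhs.real)
```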
\vspace{0.4cm}
\par After recalling, in section \ref{recent history}, a few recent results that relate counting points on varieties over $\ensuremath{\mathbb{F}}_p$ to hypergeometric functions, we set up the necessary preliminaries in section \ref{preliminaries} and give the proof of Theorem \ref{thmformula} and some consequences of it in section \ref{hgf and ac}. Our next interest has been to find a closed formula for hypergeometric functions over finite fields, and more specifically, we have been interested in relating each particular term that appears in the right hand side of sum (\ref {formula}) to the curve $\mathcal{C}_{z}^{(a,b)}$. Explicitly, in section \ref{mainconj} we state a conjecture that relates the values of the hypergeometric functions appearing in (\ref {formula}) to counting points on the curves $\mathcal{C}_{z}^{(a,b)}$ over $\ensuremath{\mathbb{F}}_q$, and in sections \ref{proof l=3} and \ref{proof l=5} we prove this conjecture for some particular cases. These results give a closed formula for the values of hypergeometric functions over finite fields in terms of the traces of Frobenius of certain curves. Finally, in section \ref{generalconj} we show advances toward the proof of the conjecture in its full generality.
\vspace{0.3cm}
\section{Recent History}\label{recent history}
\vspace{0.3cm}
\par As mentioned at the beginning of this paper, following Greene's introduction of hypergeometric functions over $\ensuremath{\mathbb{F}}_q$ in the 1980s,
results emerged linking their values to counting points on varieties over $\ensuremath{\mathbb{F}}_q$.
\vspace{0.3cm}
Consider the two families of elliptic curves over $\ensuremath{\mathbb{F}}_p$ defined by
\begin{align*}
E_1(t)&:y^2=x(x-1)(x-t),\,\,\,\, t\neq 0,1\\
E_2(t)&:y^2=(x-1)(x^2+t),\,\,\,\, t \neq 0,-1.
\end{align*}
Then, for $p$ an odd prime, define the traces of Frobenius on the above families by
\begin{align}\label{traces Frobenius}
a_1(p,t)&=p+1- \#E_1(t)(\ensuremath{\mathbb{F}}_p) \nonumber\\
a_2(p,t)&=p+1- \#E_2(t)(\ensuremath{\mathbb{F}}_p)
\end{align}
where, for $i=1,2$,
$$\#E_{i}(t)(\ensuremath{\mathbb{F}}_{p}):=\#\{(x,y)\in E_{i}(t): x,y\in \ensuremath{\mathbb{F}}_{p}\}\cup \{P\}$$
denotes the number of points the curve $E_{i}(t)$ has over the finite field $\ensuremath{\mathbb{F}}_{p}$, with $P=[0:1:0]$ being the point at infinity.
Denote by $\phi$ and $\varepsilon$ the quadratic and trivial characters on $\ensuremath{\mathbb{F}}_{p}^{\times}$ respectively.
Then, the families of elliptic curves defined above are closely related to particular hypergeometric functions over $\ensuremath{\mathbb{F}}_p$. For example, $_2F_1[\phi,\phi;\varepsilon|t]$ arises in the formula for Fourier coefficients of a modular form associated to $E_1(t)$ \cite{Ko92,On98}. Further, Koike and Ono, respectively, gave the following explicit relationships:
\begin{teo}[(1) Koike \cite{Ko92}, (2) Ono \cite{On98}] \label{koike_ono} Let $p$ be an odd prime. Then
\begin{enumerate}
\item for $t \neq 0,1$:
$$p \, _{2}F_{1}\left(
\begin{matrix}
\phi,&\phi\\
&\varepsilon
\end{matrix}
\Bigg{\vert} t
\right) =-\phi(-1) a_1(p,t)$$\\
\item for $t \neq 0,-1$:
$$p^2 \,
_{3}F_{2} \left(
\begin{matrix}
\phi, & \phi, & \phi \\
& \varepsilon, & \varepsilon \\
\end{matrix}
\Bigg{\vert} 1+\frac{1}{t}
\right)
=
\phi(-t)(a_2(p,t)^2-p).$$
\end{enumerate}
\end{teo}
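As a sanity check (not part of the original), part (1) of the theorem can be verified directly for a small prime by counting points on the Legendre curve and writing out the $_2F_1$ as the character sum that defines Greene's function (recalled in the preliminaries below). The choices $p=13$ and $t=2$ are illustrative.

```python
# Numerical check of Koike's identity for an illustrative case, p = 13, t = 2.
p, t = 13, 2

def phi(x):
    """Quadratic character on F_p (Legendre symbol), with phi(0) = 0."""
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

# Trace of Frobenius of E_1(t): affine solutions plus the point at infinity
n_points = 1 + sum(1 for x in range(p) for y in range(p)
                   if (y * y) % p == (x * (x - 1) * (x - t)) % p)
a1 = p + 1 - n_points

# p * 2F1(phi, phi; epsilon | t), written out as Greene's character sum
lhs = phi(-1) * sum(phi(y) * phi(1 - y) * phi(1 - t * y) for y in range(p))
rhs = -phi(-1) * a1
print(lhs, rhs)
```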
\vspace{0.3cm}
In addition, Frechette, Ono, and Papanikolas \cite{FOP04} gave explicit relations between the number of points over $\ensuremath{\mathbb{F}}_{p}$ in the more general varieties defined by
\begin{align*}
\mathcal{U}_{k}&: y^2=\prod_{i=1}^{k-2}(x_{i}-1)(x_{i}^2+t),\\
\mathcal{V}_{k} &: y^2=\prod_{i=1}^{k-2}x_{i}(x_{i}-1)(x_{i}-t),\\
\mathcal{W}_{k} &: y^2=\prod_{i=1}^{k-2}x_{i}(x_{i}-1)(x_{i}-t^2)
\end{align*}
and the traces of Frobenius defined in (\ref{traces Frobenius}). (A different approach and other applications of these hypergeometric functions can be found in \cite{Ka90}.)
\section{{Preliminaries on multiplicative characters, hypergeometric functions and the Zeta function of a variety}\label{preliminaries}}
\vspace{0.3cm}
\par In this section we fix some notation and recall a few facts regarding multiplicative characters, hypergeometric functions and the Zeta function of a variety that will be needed in later sections.
\par Let $p$ be a prime and let $\ensuremath{\mathbb{F}}_{q}$ be a finite field with $q$ elements, with $q=p^r$ for some positive integer $r$. We will denote by $\ensuremath{\mathbb{F}}_{q}^{\times}$ the multiplicative group of $\ensuremath{\mathbb{F}}_{q}$, i.e., $\ensuremath{\mathbb{F}}_{q}^{\times}=\ensuremath{\mathbb{F}}_{q}-\{0\}$. Now we state the \emph{orthogonality relations} for multiplicative characters, of which we will make use in section \ref{hgf and ac}.
\begin{lemma}
Let $\chi$ be a multiplicative character on $\fqm$. Then
$(a) \quad \displaystyle{\sum_{x\in\fq}\chi(x)=
\begin{cases}
q-1 & \textnormal{if} \,\, \chi=\varepsilon\\
0 & \textnormal{if} \,\, \chi\neq\varepsilon
\end{cases}}$
$(b) \quad \displaystyle{\sum_{\chi \in \fqmc}\chi(x)=
\begin{cases}
q-1 & \textnormal{if} \,\, x=1\\
0 & \textnormal{if} \,\, x \neq 1.
\end{cases}}$
\end{lemma}
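Both orthogonality relations are easy to verify numerically; the following sketch (not part of the original text) does so over $\mathbb{F}_7$, building all multiplicative characters from a primitive root.

```python
import cmath

# Verify both orthogonality relations over F_7; g = 3 is a primitive root.
p, g = 7, 3
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(j, x):
    """chi_j(g^k) = e^{2 pi i j k / (p-1)}; chi_0 is epsilon, chi_j(0) = 0."""
    if x % p == 0:
        return 0
    return cmath.exp(2j * cmath.pi * j * dlog[x % p] / (p - 1))

# (a): sum of a fixed character over F_p^x, for each character chi_j
sums_a = [sum(chi(j, x) for x in range(1, p)) for j in range(p - 1)]
# (b): sum over all characters at a fixed element x
sums_b = {x: sum(chi(j, x) for j in range(p - 1)) for x in range(1, p)}
print([round(s.real) for s in sums_a])
print({x: round(s.real) for x, s in sums_b.items()})
```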
We also require a few properties of hypergeometric functions over $\ensuremath{\mathbb{F}}_p$ that Greene proved in \cite{Gr87}.
Greene defined the \emph{Gaussian hypergeometric functions over $\fq$} as the following character sum:
\begin{df}[\cite{Gr87} Defn. 3.5] \label{gauss_hpgf}
For characters $A, B, C \in \widehat{\mathbb{F}_{q}^{\times}}$ and $x \in \fq$
\vspace{0.1cm}
\begin{equation}
_{2}F_{1} \left(
\begin{matrix}
A, & B \\
& C \\
\end{matrix}
\Bigg{\vert} x
\right)
:= \varepsilon(x)\frac{BC(-1)}{q} \sum_{y \in \fq}B(y)\overline{B}C(1-y)\overline{A}(1-xy).
\end{equation}
\end{df}
\vspace{0.3cm}
More generally, Greene proved the following theorem which connects these functions to Jacobi sums, and
extended the previous definition to a higher number of multiplicative characters (see formula (\ref{hypergeometric function})).
\begin{teo}[\cite{Gr87} Theorem 3.6]
For characters $A, B, C \in \widehat{\mathbb{F}_{q}^{\times}} $ and $x \in \fq$,
\vspace{0.1cm}
$$_{2}F_{1} \left(
\begin{matrix}
A, & B \\
& C \\
\end{matrix}
\Bigg{\vert} x
\right)
=\frac{q}{q-1}\sum_{\chi \in \fqmc}\binom{A\chi}{\chi}\binom{B\chi}{C\chi}\chi(x).$$
\end{teo}
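As a sanity check (not in the original), Definition \ref{gauss_hpgf} and the character-sum formula of Theorem 3.6 can both be evaluated numerically and compared. The prime, primitive root, and argument below are illustrative choices.

```python
import cmath

# Compare Definition 3.5 with Theorem 3.6 for p = 13, A = B = phi, C = epsilon.
p, g = 13, 2                    # g = 2 is a primitive root mod 13
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(j, x):
    if x % p == 0:
        return 0
    return cmath.exp(2j * cmath.pi * j * dlog[x % p] / (p - 1))

def binom(jA, jB):
    """Greene's binomial coefficient B(-1)/q * J(A, conj(B))."""
    jac = sum(chi(jA, u) * chi(-jB, 1 - u) for u in range(p))
    return chi(jB, -1) * jac / p

jA = jB = (p - 1) // 2          # the quadratic character phi
x = 5                           # any x != 0, 1
# Definition 3.5: a single sum over F_p (here C = epsilon, so BC(-1) = B(-1))
f_def = chi(jB, -1) / p * sum(chi(jB, y) * chi(-jB, 1 - y) * chi(-jA, 1 - x * y)
                              for y in range(p))
# Theorem 3.6: a sum over all multiplicative characters chi_j
f_thm = p / (p - 1) * sum(binom(jA + j, j) * binom(jB + j, j) * chi(j, x)
                          for j in range(p - 1))
print(f_def.real, f_thm.real)
```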
A comprehensive introduction to these functions can be found in Greene's paper \cite{Gr87}, where he presented many properties and transformation identities they satisfy. One transformation that is of interest to us is presented in the next theorem, and it allows one to replace the arguments $A, B \in \fqmc$ by $\overline{A}, \overline{B}$ respectively.
\begin{teo}[\cite{Gr87} Theorem 4.4]
If $A,B,C \in \fqmc$ and $x \in \fq$, then
\begin{align}\label{conjugate_hyperg_function}
_{2}F_{1} \left(
\begin{matrix}
A, & B \\
& C \\
\end{matrix}
\Bigg{\vert} x
\right)
&= C(-1)C\,\overline{AB}(1-x)\,_{2}F_{1} \left(
\begin{matrix}
C\overline{A}, & C\overline{B} \\
& C \\
\end{matrix}
\Bigg{\vert} x
\right) \nonumber\\
& \qquad + A(-1)\binom{B}{\overline{A}C}\delta(1-x)
\end{align}
where $\delta(x)=
\begin{cases}
1 & \textnormal{if} \,\, x=0\\
0 & \textnormal{if} \,\, x \neq 0.
\end{cases}
$
\end{teo}
In particular, when $A$ and $B$ are inverses of each other and $C=\varepsilon$ we get the following result.
\begin{cor}\label{conjugate_hpgf}
Let $A \in \fqmc$ and $x \in \fq\backslash \{1\}$. Then
$$_{2}F_{1} \left(
\begin{matrix}
A, & \overline{A} \\
& \varepsilon \\
\end{matrix}
\Bigg{\vert} x
\right)
={}_{2}F_{1} \left(
\begin{matrix}
\overline{A}, & A \\
& \varepsilon \\
\end{matrix}
\Bigg{\vert} x
\right) $$
\begin{proof}
Just notice that, since $x \neq 1$, the last term on the right-hand side of (\ref{conjugate_hyperg_function}) vanishes, and $A\overline{A}(1-x)=1$.
\end{proof}
\end{cor}
Now, recall that the \emph{Zeta function of a projective variety} is a generating function for the number of solutions of a set of polynomial equations defined over a finite field $\ensuremath{\mathbb{F}}_{q}$, in finite extension fields $\ensuremath{\mathbb{F}}_{q^n}$ of $\ensuremath{\mathbb{F}}_{q}$. In this way, we collect all the information about counting points into a single object.
\vspace{0.3cm}
\begin{df}
Let $\mathcal{V}$ be a projective variety. The zeta function of $\mathcal{V}/\fq$ is the power series
$$Z(\mathcal{V}/\fq; T):= \text{exp} \left( \sum_{n=1}^{\infty} \# \mathcal{V}(\mathbb{F}_{q^n})\frac{T^n}{n}
\right) \in \mathbb{Q}[[T]].$$
\end{df}
\vspace{0.3cm}
In 1949, Andr\'e Weil \cite {We49} made a series of conjectures concerning the number of points on
varieties defined over finite fields which were proved by Weil, Dwork \cite {Dw60} and Deligne \cite{De74} in later years.
Applying these conjectures to a smooth projective curve $\mathcal{V}$ of genus $g$ defined over $\fq$, we obtain that
\begin{equation}\label{zetafunction}
Z(\mathcal{V}/\fq;T)=\frac{(1-\alpha_{1}T)(1-\overline{\alpha_{1}}T)\cdots(1-\alpha_{g}T)
(1-\overline{\alpha_{g}}T)}{(1-T)(1-qT)}
\end{equation}
\vspace{0.2cm}
where $\alpha_{i} \in \ensuremath{\mathbb{C}}, \,\, |\alpha_{i}|=\sqrt{q}$ for all $i=1,\dots, g$.
In this case we have a beautiful formula for counting points on $\mathcal{V}$ over $\mathbb{F}_{q^n}$, namely
\begin{equation}\label{formulapuntos}
\#\mathcal{V}(\mathbb{F}_{q^n})=q^n+1-\sum_{i=1}^{g}(\alpha_{i}^n+\overline{\alpha_{i}}^n)
\end{equation}
\noindent (For details see \cite{IR90}.) We will make extensive use of formulas (\ref{zetafunction}) and (\ref{formulapuntos}) applied to particular families of curves to prove the results in the following sections.\\
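As a small numerical illustration (not part of the paper), one can count points on an elliptic curve over $\mathbb{F}_5$ and over $\mathbb{F}_{25}$ and check that the two counts are tied together by the Frobenius eigenvalues: with $a_1=\alpha+\overline{\alpha}$ and $\alpha\overline{\alpha}=q$, one has $\alpha^2+\overline{\alpha}^2=a_1^2-2q$. The curve and the model of $\mathbb{F}_{25}$ below are arbitrary choices.

```python
# The genus-1 curve y^2 = x^3 + x + 1 over F_5 and its quadratic extension;
# F_25 is modelled as F_5[w]/(w^2 - 3), 3 being a non-square mod 5.
q = 5

def mul(u, v):
    """(a + b w)(c + d w) with w^2 = 3, components reduced mod 5."""
    a, b = u
    c, d = v
    return ((a * c + 3 * b * d) % q, (a * d + b * c) % q)

def add(u, v):
    return ((u[0] + v[0]) % q, (u[1] + v[1]) % q)

elements = [(a, b) for a in range(q) for b in range(q)]

def f(x):
    """x^3 + x + 1 evaluated in F_25."""
    return add(add(mul(mul(x, x), x), x), (1, 0))

# Point counts: affine solutions plus one point at infinity
n1 = 1 + sum(1 for x in range(q) for y in range(q)
             if (y * y) % q == (x ** 3 + x + 1) % q)
n2 = 1 + sum(1 for x in elements for y in elements if mul(y, y) == f(x))

a1 = q + 1 - n1                 # alpha + conj(alpha)
print(n2, q ** 2 + 1 - (a1 * a1 - 2 * q))
```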
\vspace{0.3cm}
\section{Counting Points on Families of Curves over Finite Fields}\label{hgf and ac}
\vspace{0.4cm}
\par
We consider the problem of connecting the number of points that certain families of curves have over finite fields to values of particular hypergeometric functions over finite fields. Throughout, let $\fq$ denote the finite field with $q$ elements, where $q$ is some prime power. We start with a result that allows us to count the number of solutions of a particular equation by using multiplicative characters on $\fq$.
\begin{lemma}\label{countingwithsum}
Let $q$ be a prime and $a \in \fq\backslash \{0\}$. If $n | (q-1)$ then
$$\#\{x \in \ensuremath{\mathbb{F}}_{q} : x^n = a \} = \sum_{\chi ^n =\varepsilon} \chi(a)$$
where the sum runs over all characters $\chi \in \fqmc$ of order dividing $n$.
\begin{proof}
We start by seeing that there are exactly $n$ characters of order dividing $n$. Let $\chi : \fqm \to \ensuremath{\mathbb{C}}^{\times}$ be a character such that $\chi^{n}=\varepsilon$ and let $g \in \fqm$ be a generator. Since $\chi ^n = \varepsilon$, the value of $\chi(g)$ must be an $n$th root of unity, hence there are at most $n$ such characters. Consider $\chi \in \fqmc$ defined by $\chi(g)=e^{2\pi i/n}$ (i.e. $\chi(g^k)=e^{2\pi ik/n}$). It is easy to see that $\chi$ is a character and $\varepsilon, \chi, \chi^2, \cdots , \chi^{n-1}$ are $n$ distinct characters of order dividing $n$. Therefore, there are exactly $n$ characters of order dividing $n$.
\vspace{0.1in}
Now let $a \neq 0$ and suppose that $x^n=a$ is solvable; i.e., there is an element $b \in \ensuremath{\mathbb{F}}_{q}$ such that $b^n=a$. Since $\chi^n=\varepsilon$ we have that $\chi(a)=\chi(b^n)=\chi(b)^n=1$. Thus $$\sum_{\chi^n=\varepsilon} \chi(a)=\sum_{\chi^n=\varepsilon} 1 =n$$
Also notice that in this case $\#\{x \in \ensuremath{\mathbb{F}}_{q} : x^n = a \}=n$, because if $x^n \equiv a \pmod{q}$ is solvable then there are exactly gcd$(n,\varphi(q))$ solutions, where $\varphi$ denotes the Euler function. Since $\varphi(q)=q-1$ and $n|(q-1)$, it follows that gcd$(n,q-1)=n$ (for a proof of this result see \cite{IR90} Proposition 4.2.1).
\vspace{0.1in}
To finish the proof we need to consider the case when $x^n=a$ is not solvable, in which case $\#\{x \in \ensuremath{\mathbb{F}}_{q} : x^n = a \}=0$.
Call $T:=\sum_{\chi^n=\varepsilon} \chi(a)$. Since $x^n =a$ is not solvable, there exists a character $\rho$ such that $\rho ^n=\varepsilon$ and $\rho (a)\neq 1$ (take $\rho (g)=e^{2\pi i/n}$ where $\langle g\rangle=\fqm$). Since the characters of order dividing $n$ form a group, it follows that $\rho (a)T=T$. Then $(\rho (a)-1)T=0$, which implies that $T=0$ since $\rho (a)\neq 1$.
\end{proof}
\end{lemma}
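The lemma is easy to test numerically; the sketch below (not part of the original) checks it for every $a\in\mathbb{F}_{13}^{\times}$ with $n=4$, which divides $q-1=12$.

```python
import cmath

# Check the counting lemma over F_13 with n = 4 (4 divides 12 = q - 1).
p, n, g = 13, 4, 2              # g = 2 is a primitive root mod 13
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(j, x):
    if x % p == 0:
        return 0
    return cmath.exp(2j * cmath.pi * j * dlog[x % p] / (p - 1))

# Characters of order dividing n: the chi_j with chi_j^n = epsilon
order_n = [j for j in range(p - 1) if (j * n) % (p - 1) == 0]

ok = True
for a in range(1, p):
    direct = sum(1 for x in range(p) if pow(x, n, p) == a)
    char_sum = sum(chi(j, a) for j in order_n)
    ok = ok and abs(direct - char_sum) < 1e-9
print(len(order_n), ok)
```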
\vspace{0.1cm}
We are now ready to prove Theorem \ref{thmformula}.
\vspace{0.3cm}
\begin{proof}[Proof of Theorem \ref{thmformula}]
To simplify the notation, we will denote the curve $\mathcal{C}_{z}^{(a,b)}=\mathcal{C}_{z}$. Since $\fqmc$ is a cyclic group of order $q-1$ and $l | (q-1)$ there exists a character $\eta_{q} \in \fqmc$ of order $l$.
Recall that $\mathcal{C}_{z}$ is a projective curve, so adding the point at infinity we have
$$\# \mathcal{C}_{z}(\ensuremath{\mathbb{F}}_{q}) = 1+ \sum_{t \in \ensuremath{\mathbb{F}}_{q}}\# \{ y \in \ensuremath{\mathbb{F}}_{q} : y^l = t^{l(1-b)}(1-t)^{lb}(1-zt)^{la} \}$$
\noindent Breaking the sum and applying Lemma \ref{countingwithsum} we see that:
\allowdisplaybreaks{
\begin{align}
\# \mathcal{C}_{z}(\ensuremath{\mathbb{F}}_{q}) & = 1+ \sum_{\substack {t \in \ensuremath{\mathbb{F}}_{q} \\ t^{l(1-b)}(1-t)^{lb}(1-zt)^{la}\neq 0}} \# \{ y \in \ensuremath{\mathbb{F}}_{q} : y^l = t^{l(1-b)}(1-t)^{lb}(1-zt)^{la} \}\nonumber\\
&\qquad + \# \{ t \in \ensuremath{\mathbb{F}}_{q} : t^{l(1-b)}(1-t)^{lb}(1-zt)^{la}=0 \}\nonumber\\
& = 1+ \sum_{t \in \ensuremath{\mathbb{F}}_{q}} \sum_{i=0}^{l-1} \eta_{q}^i (t^{l(1-b)}(1-t)^{lb}(1-zt)^{la}) \hspace{1.5in} \text{(Lemma \ref{countingwithsum})}\nonumber\\
&\qquad + \# \{ t \in \ensuremath{\mathbb{F}}_{q} : t^{l(1-b)}(1-t)^{lb}(1-zt)^{la}=0 \}.\nonumber
\end{align}
}
\noindent Now, by separating the sum according to whether $i=0$, and collecting the second and last terms into a single one, we have
\begin{align}
\#\mathcal{C}_{z}(\ensuremath{\mathbb{F}}_{q}) & = 1+ \sum_{t \in \ensuremath{\mathbb{F}}_{q}} \varepsilon(t^{l(1-b)}(1-t)^{lb}(1-zt)^{la}) + \sum_{t \in \ensuremath{\mathbb{F}}_{q}} \sum_{i=1}^{l-1} \eta_{q}^i (t^{l(1-b)}(1-t)^{lb}(1-zt)^{la})\nonumber\\
&\qquad + \# \{ t \in \ensuremath{\mathbb{F}}_{q} : t^{l(1-b)}(1-t)^{lb}(1-zt)^{la}=0 \}\nonumber\\
& = 1+ q + \sum_{t \in \ensuremath{\mathbb{F}}_{q}} \sum_{i=1}^{l-1} \eta_{q}^i (t^{l(1-b)}(1-t)^{lb}(1-zt)^{la})\nonumber \\
& = 1+ q + \sum_{i=1}^{l-1} \sum_{t \in \fq}\eta_{q}^{il(1-b)}(t)\,\eta_{q}^{ilb}(1-t)\,\eta_{q}^{ila}(1-zt). \label{suma}
\end{align}
The last equality follows from the multiplicativity of $\eta_{q}$ and switching the order of summation.
\vspace{0.3cm}
\noindent On the other hand, by Definition \ref{gauss_hpgf} in section \ref{preliminaries}, we have
\allowdisplaybreaks{
\begin{align}\label{hypergeometric}
q\,{}_{2}F_{1} \left(
\begin{array}{cc|}
\eta_{q}^{il(1-a)}, & \eta_{q}^{il(1-b)} \\
 & \varepsilon \\
\end{array} \: z \right)& =
\varepsilon(z) \eta_{q}^{il(1-b)}(-1)\sum_{t\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}^{il(1-b)}(t)\,\overline{\eta_{q}^{il(1-b)}}(1-t)\,\overline{\eta_{q}^{il(1-a)}}(1-zt)\nonumber \\
& = \varepsilon(z) \eta_{q}^{il(1-b)}(-1)\sum_{t\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}^{il(1-b)}(t)\,\eta_{q}^{ilb}(1-t)\,\eta_{q}^{ila}(1-zt).
\end{align}
}
\noindent Since $z \neq 0$ we have $\varepsilon(z)=1$, and since $\eta_{q}^{ilb}(-1)=\pm 1$ we have $\eta_{q}^{il(1-b)}(-1)=\eta_{q}^{-ilb}(-1)=\eta_{q}^{ilb}(-1)$. Combining (\ref{suma}) and (\ref{hypergeometric}) we get the desired result.
\end{proof}
\vspace{0.1cm}
In the proof of Theorem \ref{thmformula} we applied Lemma \ref{countingwithsum}, which requires $q$ to be a prime in a particular congruence class modulo $l$. However, Theorem \ref{thmformula} is valid over any finite field extension $\fqk$ of $\fq$, as we see in the next Corollary.
\begin{cor}\label{generalthm}
With same notation as in Theorem \ref{thmformula}, we have that
$$\# \mathcal{C}_{z}^{(a,b)}(\ensuremath{\mathbb{F}}_{q^k})= q^k+1+q^k\sum_{i=1}^{l-1} \eta_{q^k}^{ilb}(-1)\,{}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q^k} ^{il(1-a)}, & \eta_{q^k} ^{il(1-b)} \\
& \varepsilon \\
\end{array} \: z \right)$$
where $\eta_{q^k} \in \fqkmc$ is a character of order $l$.
\begin{proof}
Again, denote the curve by $\mathcal{C}_{z}$. First notice that $\fqkmc$ is a cyclic group of order $q^k-1$. Then, if $l|(q-1)$ it also divides $q^k-1$, hence there exists $\eta_{q^k} \in \fqkmc$ of order $l$.
Next, we show that Lemma \ref{countingwithsum} is also true over $\fqk$ for any positive integer $k$. The proof is almost identical. We only need to check that if $a \in \fqkm$ and $x^n=a$ is solvable, then $\#\{x \in \fqk : x^n=a\}=n$.
For this recall the following two statements, one of which was already used in the proof of Lemma \ref{countingwithsum} (for proofs of them see \cite{IR90} Propositions 4.2.1 and 4.2.3):
\begin{enumerate}
\item If $(a,q)=1$, then $x^n\equiv a \pmod{q}$ is solvable $\iff$ $a^{\varphi(q)/d}\equiv 1 \pmod{q}$, where $d:=\text{gcd}(n,\varphi(q))$. Moreover, if a solution exists then there are exactly $d$ solutions.
\item Let $q$ be an odd prime such that $q \nmid a$ and $q \nmid n$. If $x^n \equiv a \pmod{q}$ is solvable, then $x^n\equiv a \pmod{q^k}$ is also solvable for all $k \geq 1$. Moreover, all these congruences have the same number of solutions.
\end{enumerate}
\noindent Then, for $q$ prime and in the case $x^n =a$ is solvable we have $$\#\{x \in \ensuremath{\mathbb{F}}_{q^k} : x^n=a\}=\# \{x \in \ensuremath{\mathbb{F}}_{q} : x^n=a\}=\text{gcd}(n,\varphi(q))=\text{gcd}(n,q-1)=n$$ since $n|(q-1)$.
Hence, Lemma \ref{countingwithsum} generalizes over $\ensuremath{\mathbb{F}}_{q^k}$. The proof of the Corollary now follows analogously to the proof of Theorem \ref{thmformula}.
\end{proof}
\end{cor}
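The counting fact at the heart of the proof above -- for $n\mid(q-1)$, the equation $x^n=a$ has either no solutions or exactly $n$ of them in $\ensuremath{\mathbb{F}}_{q}$, and exactly $(q-1)/n$ nonzero residues are $n$-th powers -- is easy to spot-check by brute force. A minimal sketch (the choices $q=13$, $n=3$ are mine, for illustration):

```python
# For n | (q-1): x^n = a has 0 or exactly n solutions in F_q (a != 0),
# and exactly (q-1)/n nonzero residues are n-th powers.
q, n = 13, 3  # illustrative: q prime, n | (q - 1)
assert (q - 1) % n == 0

for a in range(1, q):
    count = sum(1 for x in range(1, q) if pow(x, n, q) == a)
    assert count in (0, n)  # solvable => exactly n = gcd(n, q-1) solutions

nth_powers = {pow(x, n, q) for x in range(1, q)}
assert len(nth_powers) == (q - 1) // n
print(sorted(nth_powers))  # -> [1, 5, 8, 12]
```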
\vspace{0.2cm}
As a consequence of Corollary \ref{generalthm} we get the following result that relates the number of points of certain curves over finite extensions of $\fq$.
\begin{cor}\label{curves_have_same_number_points}
Let $l$ be a prime, $m,m',s,s'$ be integers satisfying $1\leq m,m',s,s'< l$ and $m+s=m'+s'=l$, and consider the curves with affine equations given by $\mathcal{C}_{z}^{(m,s)}: y^l=t^m(1-t)^s(1-zt)^m$ and
$\mathcal{C}_{z}^{(m',s')}:y^l=t^{m'} (1-t)^{s'} (1-zt)^{m'}$ with $z \neq 0,1$. Then, for a prime $q$ such that $q \equiv 1 \pmod{l}$ we have
$$\#\mathcal{C}_{z}^{(m,s)}(\mathbb{F}_{q^k})=\#\mathcal{C}_{z}^{(m',s')}(\mathbb{F}_{q^k})$$
for all $k \in \mathbb{N}$.
\begin{proof}
Again, we drop the dependency of the curves on the integers $m,m',s,s'$ and denote $\mathcal{C}_{z}^{(m,s)}=\mathcal{C}_{z}$ and $\mathcal{C}_{z}^{(m',s')}=\mathcal{C}'_{z}$.
Let $\eta_{q^k} \in \fqkmc$ be a character of order $l$. If $l=2$ then $\mathcal{C}_{z}=\mathcal{C}'_{z}$ since $(m,s)$ and $(m',s')$ are both $(1,1)$. Therefore, there is nothing to prove in this case.
\vspace{0.3cm}
Suppose now that $l$ is an odd prime. Then, the order of $\eta_{q^k}$ is odd and so $\eta_{q^k}(-1)=1$. Next, consider $a:=m/l,\,\, b:=s/l$ and $a':=m'/l,\,\, b':=s'/l$ in Theorem \ref{thmformula}. The curves defined by these values are exactly $\mathcal{C}_{z}$ and $\mathcal{C'}_{z}$, hence by Corollary \ref{generalthm} and taking into account that $m+s=l$ and $m'+s'=l$, we have
\begin{equation}\label{pointsCz} \#\mathcal{C}_{z}(\ensuremath{\mathbb{F}}_{q^k})-(q^k+1)= q^k \sum_{i=1}^{l-1} {}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q^k}^{i(l-m)}, & \eta_{q^k}^{im} \\
& \varepsilon \\
\end{array} \: z \right) \end{equation}
\begin{equation} \label{pointsC'z} \#\mathcal{C}'_{z}(\ensuremath{\mathbb{F}}_{q^k})-(q^k+1)= q^k \sum_{i=1}^{l-1} {}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q^k}^{i(l-m')}, & \eta_{q^k}^{im'} \\
& \varepsilon \\
\end{array} \: z \right)\end{equation}
As we can see, the exponents of the characters appearing in the hypergeometric functions in (\ref{pointsCz}) and (\ref{pointsC'z}) add up to $0 \pmod{l}$.
Also notice that
\begin{itemize}
\item $\# \{(r,t): 1\leq r,t \leq l-1, r+t=l\}=l-1.$
\item $i(l-m)\equiv j(l-m) \pmod{l} \iff im\equiv jm \pmod{l} \iff l| m(i-j).$
Since $l$ is prime and $0<m<l$, $l$ must divide $i-j$. But $1\leq i,j \leq l-1$, hence $i(l-m)\equiv j(l-m) \pmod{l} \iff i=j$.
\end{itemize}
\noindent By these two observations, we see that the terms appearing in the RHS of (\ref{pointsCz}) are the same ones appearing in the RHS of (\ref{pointsC'z}), therefore we conclude that $$\#\mathcal{C}_{z}(\ensuremath{\mathbb{F}}_{q^k})=\#\mathcal{C}'_{z}(\ensuremath{\mathbb{F}}_{q^k})$$
\end{proof}
\end{cor}
It is not hard to see that the previous result generalizes to the case when $l$ is an odd integer and $(l,m)=(l,m')=1$, by the same argument as above. However, the result is not true in general if we only require $m+s=m'+s'$, as the following example with $l=5$ and $m+s=4$ shows:
\begin{itemize}
\item If $(m,s)=(1,3)$ then $Z(\mathcal{C}_{2}|\ensuremath{\mathbb{F}}_{11},T)=\frac{(11T^2+3T+1)^4}{(1-T)(1-11T)}$ hence $$|\#\mathcal{C}_{2}(\ensuremath{\mathbb{F}}_{11})-(11+1)|=12$$
\item If $(m',s')=(2,2)$ then $Z(\mathcal{C'}_{2}|\ensuremath{\mathbb{F}}_{11},T)=\frac{(11T^2-2T+1)^4}{(1-T)(1-11T)}$ hence $$|\#\mathcal{C}'_{2}(\ensuremath{\mathbb{F}}_{11})-(11+1)|=8$$
\end{itemize}
\vspace{0.3cm}
\section{The genus of $\mathcal{C}_{z}$}\label{genus of the curve}
\vspace{0.4cm}
\par
In this section, we start by recalling the Riemann-Hurwitz genus formula, which is extremely useful when trying to compute the genus of an algebraic curve.
\vspace{0.3cm}
\begin{teo}[Riemann-Hurwitz genus formula]\label{RHgenus}
Let $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ be two smooth curves defined over a perfect field $K$ of genus $g_{1}$ and $g_{2}$ respectively. Let $\psi:\mathcal{C}_{1}\to \mathcal{C}_{2}$ be a non-constant and separable map. Then
$$2g_{1}-2\geq deg(\psi)(2g_{2}-2)+\sum_{P\in \mathcal{C}_{1}} (e_{\psi}(P)-1)$$
where $e_{\psi}(P)$ is the ramification index of $\psi$ at $P$. Moreover, there is equality if and only if either char$(K)=0$ or char$(K)=p$ and $p$ does not divide $e_{\psi}(P)$ for all $P\in\mathcal{C}_{1}$.
\end{teo}
Next, we apply the Riemann-Hurwitz formula to compute the genus of the smooth projective curve $\mathcal{C}_{z}$ with affine equation \begin{equation}\label{curve_ms}
\mathcal{C}_{z}: y^l=t^m(1-t)^s(1-zt)^m
\end{equation}
where $l$ is prime and $1\leq m,s<l$ are such that $m+s=l$. For that, we consider the map $$\psi: \mathcal{C}_{z}\to \mathbb{P}^{1}, \,\,\,\,\,\,\,\, [t:y:u] \mapsto [t:u]$$
(we write $[t:y:u]$ for the projective coordinates to avoid confusion with the parameter $z$) and notice that $[0:1:0]\mapsto [1:0]$. Generically, every point of $\mathbb{P}^{1}$ has $l$ preimages, so the degree of this map is $l$. Now, the genus of $\mathbb{P}^{1}$ is $0$, and $\psi$ is ramified exactly at the four points $P_{1}=[0:0:1]$, $P_{2}=[1:0:1]$, $P_{3}=[z^{-1}:0:1]$ and $P_{4}=[0:1:0]$, the point at infinity, with ramification indices $e_{\psi}(P_{i})=l$ for all $i=1, \dots,4$. Since $l\mid(q-1)$ we have $l<q$, so the characteristic does not divide any $e_{\psi}(P_{i})$ and equality holds in Theorem \ref{RHgenus}. Denoting $g:=\text{genus}(\mathcal{C}_{z})$, we obtain $2g-2= -2l+4(l-1)=2l-4$, hence $g=l-1$.
\vspace{0.3cm}
\begin{remark}
The fact that the curve $\mathcal{C}_{z}$ has genus $l-1$ can also be seen by noticing that $\mathcal{C}_{z}$ is a hyperelliptic curve and has model $Y^2=F(X)$ with deg$(F(X))=2l$ (see section \ref{generalconj} Theorem \ref{birational_curve_general_case}). Hence, $2l=2\,\text{genus}(\mathcal{C}_{z})+2$, therefore, genus$(\mathcal{C}_{z})=l-1$.
\end{remark}
\vspace{0.3cm}
Now, applying Theorem \ref{thmformula} to the curve (\ref{curve_ms}), we see that the upper limit in the sum is the genus of the curve. Also, as we mentioned in the previous section, since $l$ is an odd prime and $\eta_{q}\in \fqmc$ is a character of order $l$, we have that $\eta_{q}(-1)=1$. Then,
\begin{align}\label{puntoscurva}
\# \mathcal{C}_{z}(\ensuremath{\mathbb{F}}_{q}) &= q+1 + q\sum_{i=1}^{g}{}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q} ^{is}, & \eta_{q} ^{im} \\
& \varepsilon \\
\end{array} \: z \right)\nonumber\\
& = q+1+F_{1,q}(z)+F_{2,q}(z)+ \cdots +F_{g,q}(z)
\end{align}
where $F_{i,q}(z)= q \,{}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q} ^{is}, & \eta_{q} ^{im} \\
& \varepsilon \\
\end{array} \: z \right)$.
\vspace{0.3cm}
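Formula (\ref{puntoscurva}) can be tested numerically for small parameters. The sketch below takes $l=3$, $(m,s)=(1,2)$ and $q=7$ (all choices mine), computes $F_{1,q}(z)+F_{2,q}(z)=2F_{1,q}(z)$ via the character sum coming from Definition \ref{gauss_hpgf}, namely $F_{1,q}(z)=\sum_{t}\eta(t)\,\eta^{2}(1-t)\,\eta(1-zt)$ with $\eta$ a cubic character, and compares $q+1+2F_{1,q}(z)$ with a direct point count of the smooth model of $y^{3}=t(1-t)^{2}(1-zt)$. The bookkeeping that the smooth count equals the naive affine count plus one rational point at infinity is my own (justified by $\gcd(3,\deg f)=1$ here), not the paper's:

```python
import cmath

q = 7  # illustrative small case: q prime, q = 1 (mod 3)
# Cubic character eta on F_q* via a primitive root g: eta(g^k) = w^(k mod 3).
g = next(a for a in range(2, q)
         if len({pow(a, k, q) for k in range(q - 1)}) == q - 1)
w = cmath.exp(2j * cmath.pi / 3)
eta = {0: 0}
for k in range(q - 1):
    eta[pow(g, k, q)] = w ** (k % 3)

cubes = {pow(y, 3, q) for y in range(q)}
for z in range(2, q - 1):          # z != 0, 1
    # F_{1,q}(z) = sum_t eta(t) eta^2(1-t) eta(1-zt); here F_{1,q} = F_{2,q}.
    F1 = sum(eta[t] * eta[(1 - t) % q] ** 2 * eta[(1 - z * t) % q]
             for t in range(q))
    # Naive affine count of y^3 = t(1-t)^2(1-zt), plus one point at infinity.
    npts = 1
    for t in range(q):
        f = (t * (1 - t) ** 2 * (1 - z * t)) % q
        npts += 1 if f == 0 else (3 if f in cubes else 0)
    assert abs(F1.imag) < 1e-9
    assert npts == q + 1 + 2 * round(F1.real)
print("formula (puntoscurva) checked for q = 7")
```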
Notice the resemblance between formulas (\ref{formulapuntos}) and (\ref{puntoscurva}). With this similarity in mind, we are now interested in finding relations between the terms in these formulas, i.e., relations between the $F_{i,q}(z)$ in formula (\ref{puntoscurva}) and the $\alpha_{i,q}(z)+\overline{\alpha_{i,q}}(z)$ in formula (\ref{formulapuntos}).
\vspace{0.3cm}
\section{The Main Conjecture}\label{mainconj}
\vspace{0.4cm}
In this section we state our main conjecture, which proposes an equality between values of ${}_{2}F_{1}$-hypergeometric functions and reciprocal roots of the zeta function of
$\mathcal{C}_{z}$; in the following sections we prove the conjecture in some particular cases. Denote $a_{i,q}(z):=\alpha_{i,q}(z)+\overline{\alpha_{i,q}}(z)$.
\begin{conj}\label{conject}
Let $l$ and $q$ be odd primes such that
$q\equiv 1 \pmod{l}$ and let
$z \in \ensuremath{\mathbb{F}}_{q}$, $z\neq 0,1$.
Consider the smooth projective curve with affine equation given by
$$\mathcal{C}_{z}^{(m,s)}: y^l=t^m(1-t)^s(1-zt)^m$$
where $1\leq m,s< l$ are integers such that $m+s=l$. Then, using the notation from the previous section and after rearranging terms if necessary,
$$F_{i,q}(z)=-a_{i,q}(z) \,\,\,\, \text{for all}\,\,\, 1\leq i\leq g.$$
\end{conj}
The previous conjecture gives a closed formula for the values of some hypergeometric functions over finite fields in terms of the traces of Frobenius of certain curves.
\vspace{0.3cm}
\section{Proof of Conjecture for $l=3$}\label{proof l=3}
\vspace{0.3cm}
\par Throughout this section fix $l=3$ and let $q$ be a prime such that $q \equiv 1 \pmod{3}$. Let $z \in \ensuremath{\mathbb{F}}_{q}$, $z\neq 0,1$, and consider the smooth projective curve with affine equation given by
\begin{equation}\label{curve_l=3}
\mathcal{C}_{z}^{(1,2)}: y^3=t(1-t)^{2}(1-zt)
\end{equation}
\noindent Denote $\mathcal{C}_{z}^{(1,2)}=\mathcal{C}_{z}$. The idea will be to show that the $L$-polynomial of the curve $\mathcal{C}_{z}$ is a perfect square, and from that and formulas (\ref{formulapuntos}) and (\ref{puntoscurva}) conclude that the values of the traces of Frobenius must agree with the values of the hypergeometric functions, up to a sign.
\vspace{0.2cm}
\par Recall that, by the Riemann-Hurwitz formula, $\mathcal{C}_{z}$ has genus 2.
Now, every curve of genus 2 defined over $\ensuremath{\mathbb{F}}_{q}$ is birationally equivalent over $\ensuremath{\mathbb{F}}_{q}$ to a curve of the form
\begin{equation}\label{canonical_form_genus_2}
\mathcal{C}: Y^2=F(X)
\end{equation}
where $$F(X)=f_{0}+f_{1}X+f_{2}X^2+ \dots + f_{6}X^6 \in \ensuremath{\mathbb{F}}_{q}[X]$$ is of degree 6 and has no multiple factors (see \cite{Cass96}). This identification is unique up to a fractional linear transformation of $X$, and associated transformation of $Y$,
\begin{equation}\label{frac_linear_transf}
X\to \frac{aX+b}{cX+d}, \,\,\,\,\, Y\to \frac{eY}{(cX+d)^3}
\end{equation}
where
$$a,b,c,d \in \ensuremath{\mathbb{F}}_{q}, \,\,\,\, ad-bc\neq 0, \,\,\,\, e\in \ensuremath{\mathbb{F}}_{q}^{\times}.$$
In our particular case we have
\begin{lemma}\label{curve_biratl}
The curve $\mathcal{C}_{z}: y^3=t(1-t)^{2}(1-zt)$ is birationally equivalent to
\begin{equation}\label{curve_bir_equiv}
\mathcal{C}: Y^2=X^6+2(1-2z)X^3+1.
\end{equation}
\begin{proof}
We begin by translating $t\to 1-t$, so the double point is now at the origin. We get:
\begin{align*}\mathcal{C}_{(1)}: y^3& = (1-t)t^2(1-z(1-t))\\
& =(1-z)t^2+(2z-1)t^3-zt^4.
\end{align*}
\noindent Since $z\neq 0$, multiply both sides by $z^{-1}$ and define
$$G_{2}(t,y):= (1-z^{-1})t^2$$
$$G_{3}(t,y):=z^{-1}y^3-(2-z^{-1})t^3$$
$$G_{4}(t,y):=t^4.$$ Then, each $G_{i}$ is a homogeneous polynomial of degree $i$ in $\ensuremath{\mathbb{F}}_{q}[t,y]$ and $\mathcal{C}_{z}$ is birationally equivalent to $$\mathcal{C}_{(1)}: G_{2}(t,y)+G_{3}(t,y)+G_{4}(t,y)=0.$$
\noindent Next, put $y=tX$ and complete the square to get:
\allowdisplaybreaks{
\begin{align*}
\mathcal{C}_{(2)} & : 0=t^4+(z^{-1}X^3+z^{-1}-2)t^3+(1-z^{-1})t^2 \\
&\qquad = \left(t^2+\frac{1}{2}(z^{-1}X^3+z^{-1}-2)t\right)^2-\frac{(z^{-1}X^3+z^{-1}-2)^2}{4}t^2+(1-z^{-1})t^2
\end{align*}
}
\noindent Multiply by 4 (char$(\ensuremath{\mathbb{F}}_{q})\neq 2$) and divide by $t^2$ to get that $\mathcal{C}_{z}$ is birationally equivalent to
$$\mathcal{C}: Y^2=F(X)$$
where
$$Y=2G_{4}(1,X)t+G_{3}(1,X)$$
$$F(X)=G_{3}(1,X)^2-4G_{2}(1,X)G_{4}(1,X)$$
\noindent By substituting $G_{2}, G_{3}$ and $G_{4}$ in $F(X)$, and rescaling $Y\to z^{-1}Y$ we get the desired result, i.e., $\mathcal{C}_{z}$ is birationally equivalent to
$$\mathcal{C}: Y^2=X^6+2(1-2z)X^3+1.$$
\end{proof}
\end{lemma}
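The last algebraic step of the proof -- clearing the powers of $z^{-1}$ and rescaling -- amounts to the polynomial identity $z^{2}F(X)=(X^{3}+1-2z)^{2}-4(z^{2}-z)=X^{6}+2(1-2z)X^{3}+1$. Since both sides have degree at most $6$ in $X$ and $2$ in $z$, agreement on a large enough integer grid proves the identity; a quick sketch:

```python
# z^2 * [ (z^{-1}X^3 + z^{-1} - 2)^2 - 4(1 - z^{-1}) ] == X^6 + 2(1-2z)X^3 + 1,
# written without denominators.  Degrees are <= 6 in X and <= 2 in z, so
# agreement on an 8 x 4 integer grid already proves the identity.
for z in range(1, 5):
    for X in range(-4, 4):
        lhs = (X**3 + 1 - 2*z)**2 - 4*(z**2 - z)
        rhs = X**6 + 2*(1 - 2*z)*X**3 + 1
        assert lhs == rhs
print("birational model verified")
```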
In order to show that the L-polynomial of the curve $\mathcal{C}_{z}$ over $\fq$ is a perfect square, we will start by showing that the Jacobian of $\mathcal{C}_{z}$, Jac$(\mathcal{C}_{z})$, is isogenous to the product of two elliptic curves, i.e., that the Jac($\mathcal{C}_{z}$) is \emph{reducible}. To do that, it is convenient to find a slightly different model for our curve as we can see in the next criterion. First, we need to introduce the concept of equivalent curves.
\begin{df}
We say that two curves of the form $Y^2=F(X)$ are equivalent if they are taken into one another by a fractional linear transformation of $X$ and the associated transformation of $Y$ given by (\ref{frac_linear_transf}).
\end{df}
\noindent
\begin{teo}[\cite{Cass96} Theorem 14.1.1]\label{criterion_genus_2}
The following properties of a curve $\mathcal{C}$ of genus 2 are equivalent:
\begin{enumerate}
\item It is equivalent to a curve
\begin{equation}\label{criterion_condition1}
Y^2=c_{3}X^6+c_2X^4+c_1X^2+c_0
\end{equation}
with no terms of odd degree in $X$.
\item It is equivalent to a curve
\begin{equation}
Y^2=G_{1}(X)G_{2}(X)G_{3}(X)
\end{equation}
where the quadratics $G_{j}(X)$ are linearly dependent.
\item It is equivalent to
\begin{equation}
Y^2=X(X-1)(X-a)(X-b)(X-ab)
\end{equation}
for some $a,b$.
\end{enumerate}
If one (and so all) of the previous conditions is satisfied, the Jacobian of $\mathcal{C}$ is reducible.
\end{teo}
There are two maps of (\ref{criterion_condition1}) into elliptic curves
\begin{equation}
\mathcal{E}_{1}: Y^2=c_3Z^3+c_2Z^2+c_1Z+c_0
\end{equation}
with $Z=X^2$ and
\begin{equation}
\mathcal{E}_{2}: V^2=c_0U^3+c_1U^2+c_2U+c_3
\end{equation}
with $U=X^{-2}, V=YX^{-3}$. These maps extend to maps of the Jacobian, which is therefore reducible (see \cite{Cass96}).
Hence, to apply Theorem \ref{criterion_genus_2}, we find a different model for $\mathcal{C}_{z}$; in particular, we will put our curve in the form (\ref{criterion_condition1}).
\begin{lemma}\label{curve_no_odd_terms}
The curve (\ref{curve_bir_equiv}) is equivalent to the curve
\begin{equation}\label{curve_equiv}
Y^2= (1-z)X^6+3(2+z)X^4+3(3-z)X^2+z.
\end{equation}
\begin{proof}
Consider the fractional linear transformation given by
$$X\to \frac{X+1}{X-1}$$
$$Y \to \frac{2Y}{(X-1)^3}$$
A direct computation, after clearing the denominator $(X-1)^6$, transforms (\ref{curve_bir_equiv}) into (\ref{curve_equiv}).
\end{proof}
\end{lemma}
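Writing out the substitution, Lemma \ref{curve_no_odd_terms} reduces to the identity $(X+1)^{6}+2(1-2z)(X+1)^{3}(X-1)^{3}+(X-1)^{6}=4\big[(1-z)X^{6}+3(2+z)X^{4}+3(3-z)X^{2}+z\big]$, which can again be proved by evaluation on an integer grid exceeding the degrees ($6$ in $X$, $1$ in $z$); a sketch:

```python
# After X -> (X+1)/(X-1), Y -> 2Y/(X-1)^3, clearing (X-1)^6 gives the identity:
for z in range(-2, 3):
    for X in range(-4, 5):
        lhs = (X + 1)**6 + 2*(1 - 2*z)*(X + 1)**3*(X - 1)**3 + (X - 1)**6
        rhs = 4*((1 - z)*X**6 + 3*(2 + z)*X**4 + 3*(3 - z)*X**2 + z)
        assert lhs == rhs
print("no odd-degree terms: verified")
```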
\par Combining Lemma \ref{curve_no_odd_terms} and the observation at the end of Theorem \ref{criterion_genus_2}, we find two maps from (\ref{curve_equiv}) to the elliptic curves
\begin{equation}
\mathcal{E}_{1,z}: Y^2= (1-z)Z^3+3(2+z)Z^2+3(3-z)Z+z
\end{equation}
\begin{equation}
\mathcal{E}_{2,z}: V^2=zU^3+3(3-z)U^2+3(2+z)U+(1-z)
\end{equation}
Notice that $\mathcal{E}_{1,z}$ and $\mathcal{E}_{2,z}$ have discriminant $6912z(1-z)$, which is non-zero since $z\neq 0,1$.
Also, after rescaling, we can write
\begin{equation}\label{elliptic_curve_E1}
\mathcal{E}_{1,z}: Y^2= Z^3+3(2+z)Z^2+3(3-z)(1-z)Z+z(1-z)^2
\end{equation}
and
\begin{equation}\label{elliptic_curve_E2}
\mathcal{E}_{2,z}: V^2=U^3+3(3-z)U^2+3(2+z)zU+(1-z)z^2
\end{equation}
\vspace{0.3cm}
As we mentioned above, the existence of these two maps implies that Jac$(\mathcal{C}_{z})$ is isogenous to
$\mathcal{E}_{1,z}\times \mathcal{E}_{2,z}$. Next, we see that these elliptic curves are not totally independent of each other. In fact, one is isogenous to a twist of the other as we see in our next result.
\begin{prop}\label{elliptic_curves_isogenous}
The curve $\mathcal{E}_{1,z}$ is isogenous to the twisted curve $(\mathcal{E}_{2,z})_{-3}$.
\begin{proof}
Consider the equation for the twisted curve $(\mathcal{E}_{2,z})_{-3}$:
\begin{equation}
(\mathcal{E}_{2,z})_{-3} : V^2=U^3-9(3-z)U^2+27(2+z)zU-27 (1-z)z^2
\end{equation}
Define $\varphi : \mathcal{E}_{1,z}\to (\mathcal{E}_{2,z})_{-3}$ such that $\varphi[0:1:0]=[0:1:0]$ and
\begin{equation}\label{isogeny_genus2}
\varphi[x:y:1] = \left[\frac{x^3+Ax^2+Bx+C}{(x+(z-1))^2}:\frac{(x^3+Dx^2+Ex+F)y}{(x+(z-1))^3}:1\right]
\end{equation}
where
$\begin{cases} A=9 \\ B=3(1-z)(z+9)\\ C=(27-2z)(z-1)^2 \\ D=3(z-1)\\ E=3(z+15)(z-1)\\ F=(z-81)(z-1)^2.\end{cases}$\\
\noindent One can check by hand or with Maple for example, that the map $\varphi$ is well defined and gives an isogeny between the two curves.
\end{proof}
\end{prop}
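The verification that $\varphi$ maps $\mathcal{E}_{1,z}$ into $(\mathcal{E}_{2,z})_{-3}$ boils down, after clearing denominators and replacing $y^{2}$ by the cubic defining $\mathcal{E}_{1,z}$, to a polynomial identity of degree at most $9$ in each of $x$ and $z$; instead of Maple, one can therefore check it on an integer grid. A sketch (notation as in (\ref{isogeny_genus2})):

```python
# With f the cubic of E_{1,z}, N and M the numerators of the two coordinates
# of phi, and d = x + (z - 1), well-definedness of phi amounts to
#   M^2 f == N^3 - 9(3-z) N^2 d^2 + 27(2+z)z N d^4 - 27(1-z) z^2 d^6.
# Degrees are <= 9 in x and in z, so a 13 x 11 grid check proves it.
def sides(x, z):
    f = x**3 + 3*(2 + z)*x**2 + 3*(3 - z)*(1 - z)*x + z*(1 - z)**2
    N = x**3 + 9*x**2 + 3*(1 - z)*(z + 9)*x + (27 - 2*z)*(z - 1)**2
    M = x**3 + 3*(z - 1)*x**2 + 3*(z + 15)*(z - 1)*x + (z - 81)*(z - 1)**2
    d = x + (z - 1)
    lhs = M**2 * f
    rhs = (N**3 - 9*(3 - z)*N**2*d**2
           + 27*(2 + z)*z*N*d**4 - 27*(1 - z)*z**2*d**6)
    return lhs, rhs

assert all(l == r for x in range(-6, 7) for z in range(-5, 6)
           for l, r in [sides(x, z)])
print("isogeny identity verified")
```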
\vspace{0.3cm}
Denote by $L(\mathcal{C}_{z}/\fq,T)$ the $L$-polynomial of $\mathcal{C}_{z}$ over $\fq$. Recall that we want to show that, for $q \equiv 1 \pmod{3}$, we have $L(\mathcal{C}_{z}/\fq,T)=(1+aT+qT^2)^2$ for some $a \in \ensuremath{\mathbb{Z}}$. So far, we have seen that
$$L(\mathcal{C}_{z}/\fq,T)=(1+a_{1,q}(z)T+qT^2)(1+a_{2,q}(z)T+qT^2)$$
where $a_{1,q}(z)$ and $a_{2,q}(z)$ are the traces of Frobenius on the curves $\mathcal{E}_{1,z}$ and $\mathcal{E}_{2,z}$ respectively.
Therefore, we need to show that $a_{1,q}(z)=a_{2,q}(z)$, or equivalently, that $\# \mathcal{E}_{1,z}(\fq)=\# \mathcal{E}_{2,z}(\fq)$ for $q \equiv 1 \pmod{3}$. This is the statement of our next result.
\begin{cor}\label{same_points_ell_curves}
Let $q$ be a prime such that $q \equiv 1 \pmod{3}$. Then
$$\# \mathcal{E}_{1,z}(\ensuremath{\mathbb{F}}_{q})=\# \mathcal{E}_{2,z}(\ensuremath{\mathbb{F}}_{q}).$$
\begin{proof}
Fix $q$ in the conditions of the corollary. Let $a_{1,q}(z)$ and $a_{2,q}(z)$ be the traces of Frobenius on the elliptic curves $\mathcal{E}_{1,z}$ and $\mathcal{E}_{2,z}$ respectively, i.e.
$$\# \mathcal{E}_{1,z}(\ensuremath{\mathbb{F}}_{q})=q+1-a_{1,q}(z)$$
$$\# \mathcal{E}_{2,z}(\ensuremath{\mathbb{F}}_{q})=q+1-a_{2,q}(z)$$
\noindent Since $(\mathcal{E}_{2,z})_{-3}$ is a twist of $\mathcal{E}_{2,z}$ we have
$$\# (\mathcal{E}_{2,z})_{-3}(\ensuremath{\mathbb{F}}_{q})= 1+q-\left(\frac{-3}{q}\right)a_{2,q}(z)$$
where $\left(\frac{\cdot}{q}\right)$ is the Legendre symbol.
\noindent Now, by Proposition \ref{elliptic_curves_isogenous} we know that
$$\# (\mathcal{E}_{1,z})(\ensuremath{\mathbb{F}}_{q})=\# (\mathcal{E}_{2,z})_{-3}(\ensuremath{\mathbb{F}}_{q})$$
hence
$$a_{2,q}(z)=\left(\frac{-3}{q}\right)a_{1,q}(z).$$
\noindent To finish the proof, it only remains to see that $\left(\frac{-3}{q}\right)=1$ for all primes $q \equiv 1 \pmod{3}$.
\noindent Since the Legendre symbol is completely multiplicative on its top argument, we can decompose $\left(\frac{-3}{q}\right)=\left(\frac{-1}{q}\right)\left(\frac{3}{q}\right)$. Also
\begin{equation}\label{computation(-1/q)} \left(\frac{-1}{q}\right)=(-1)^{(q-1)/2}=\begin{cases} 1 & \textnormal{if} \,\,\, q\equiv 1 \pmod{4}\\
-1 &\textnormal{if}\,\,\, q\equiv 3 \pmod{4}. \end{cases}
\end{equation}
and
\begin{equation}\label{computation(3/q)} \left(\frac{3}{q}\right)=(-1)^{\lfloor(q+1)/6\rfloor}=\begin{cases} 1 & \textnormal{if} \,\,\, q\equiv 1,11 \pmod{12}\\
-1 &\textnormal{if}\,\,\, q\equiv 5,7 \pmod{12}. \end{cases}\end{equation}
We divide the analysis into cases. First notice that, since $q \equiv 1 \pmod{3}$, $q$ must be congruent to either $1$ or $7 \pmod{12}$.
\begin{itemize}
\item Suppose $q \equiv 1 \pmod{12}$ and therefore $\left(\frac{3}{q}\right)=1$ by (\ref{computation(3/q)}).
Also, since $q \equiv 1 \pmod{12}$, we have that $q \equiv 1 \pmod{4}$, hence $\left(\frac{-1}{q}\right)=1$ by (\ref{computation(-1/q)}).
Then $\left(\frac{-3}{q}\right)=1$ as desired.
\item Suppose $q \equiv 7 \pmod{12}$, then $\left(\frac{3}{q}\right)=-1$. Also, in this case $q \equiv 3 \pmod{4}$, and so $\left(\frac{-1}{q}\right)=-1$, giving that $\left(\frac{-3}{q}\right)=1$ as desired.
\end{itemize}
Hence
$$\# \mathcal{E}_{1,z}(\ensuremath{\mathbb{F}}_{q})=\# \mathcal{E}_{2,z}(\ensuremath{\mathbb{F}}_{q}) \,\,\,\, \text{for all}\,\,\, q \equiv 1 \pmod{3}.$$
\end{proof}
\end{cor}
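The case analysis above can also be confirmed directly with Euler's criterion, $\left(\frac{a}{q}\right)\equiv a^{(q-1)/2} \pmod q$; a quick sketch over the primes below $500$:

```python
# (-3/q) = 1 for every prime q = 1 (mod 3), by Euler's criterion.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for q in range(5, 500):
    if is_prime(q) and q % 3 == 1:
        assert pow(-3 % q, (q - 1) // 2, q) == 1
print("(-3/q) = 1 for all primes q = 1 (mod 3) below 500")
```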
\vspace{0.2cm}
We have now all the necessary tools to complete the proof of Conjecture \ref{conject} for the case when $l=3$.
\begin{teo}\label{conjecture l=3}
Conjecture \ref{conject} is true for $l=3$.
\begin{proof}
First notice that when $l=3$ we have two different cases to consider, namely the curves with $(m,s)=(1,2)$ and $(m,s)=(2,1)$. However, by Corollary \ref{curves_have_same_number_points} in section \ref{hgf and ac}, these two curves have the same number of points over every finite field extension of $\ensuremath{\mathbb{F}}_{q}$, and therefore the same zeta function over $\ensuremath{\mathbb{F}}_{q}$. Also, the hypergeometric functions that appear on the right-hand side of equation (\ref{formula}) are the same for both curves. Because of this, it is enough to prove the conjecture for one of these curves, say $\mathcal{C}_{z}: y^3=t(1-t)^2(1-zt)$.
As above, write the zeta function of $\mathcal{C}_{z}$ as
\begin{align*}
Z(\mathcal{C}_{z}/\ensuremath{\mathbb{F}}_{q};T)&=\frac{(1-\alpha_{1,q}(z)T)(1-\overline{\alpha_{1,q}(z)}T)(1-\alpha_{2,q}(z)T)(1-\overline{\alpha_{2,q}(z)}T)}{(1-T)(1-qT)}\\
& =\frac{(1-a_{1,q}(z)T+qT^2)(1-a_{2,q}(z)T+qT^2)}{(1-T)(1-qT)}
\end{align*}
where $a_{i,q}(z)=\alpha_{i,q}(z)+\overline{\alpha_{i,q}}(z)$.
Using the same notation as in equation (\ref{puntoscurva}), we have that
\begin{align}\label{formula1}
F_{1,q}(z)+ F_{2,q}(z)
& =-(a_{1,q}(z)+a_{2,q}(z))
\end{align}
\noindent Recall that $F_{1,q}(z)=\,_2F_1[\eta_{q}^{2},\eta_{q};\varepsilon|z]$ and $F_{2,q}(z)=\, _{2}F_{1}[\eta_{q},\eta_{q}^{2};\varepsilon|z]$, therefore, Corollary \ref{conjugate_hpgf} in section \ref{preliminaries} implies that $F_{1,q}(z)=F_{2,q}(z)$.
Also, as we have seen in Corollary \ref{same_points_ell_curves}, $a_{1,q}(z)=a_{2,q}(z)$.
Hence, (\ref{formula1}) becomes
$$2F_{1,q}(z)=-2a_{1,q}(z)$$
so $a_{1,q}(z)=-F_{1,q}(z)$ and $a_{2,q}(z)=-F_{2,q}(z)$, proving the conjecture for $l=3$.
\end{proof}
\end{teo}
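Theorem \ref{conjecture l=3} can also be confronted with numbers: for small $q\equiv 1 \pmod 3$ one can compute $F_{1,q}(z)=\sum_{t}\eta(t)\,\eta^{2}(1-t)\,\eta(1-zt)$ directly and compare it with minus the trace of Frobenius of $\mathcal{E}_{1,z}$ obtained by brute-force point counting. A sketch (the choices $q\in\{7,13\}$ and the naive counting routine are mine):

```python
import cmath

def cubic_character(q):
    """eta(g^k) = w^(k mod 3) for a primitive root g of F_q*, w = e^{2 pi i/3}."""
    g = next(a for a in range(2, q)
             if len({pow(a, k, q) for k in range(q - 1)}) == q - 1)
    w = cmath.exp(2j * cmath.pi / 3)
    return {0: 0, **{pow(g, k, q): w ** (k % 3) for k in range(q - 1)}}

def trace_E1(q, z):
    # a_{1,q}(z) = q + 1 - #E_{1,z}(F_q), with
    # E_{1,z}: Y^2 = Z^3 + 3(2+z)Z^2 + 3(3-z)(1-z)Z + z(1-z)^2.
    npts = 1  # point at infinity
    for Z in range(q):
        r = (Z**3 + 3*(2 + z)*Z**2 + 3*(3 - z)*(1 - z)*Z + z*(1 - z)**2) % q
        npts += 1 if r == 0 else (2 if pow(r, (q - 1)//2, q) == 1 else 0)
    return q + 1 - npts

for q in (7, 13):                      # primes = 1 (mod 3)
    eta = cubic_character(q)
    for z in range(2, q - 1):          # z != 0, 1
        F1 = sum(eta[t] * eta[(1 - t) % q]**2 * eta[(1 - z*t) % q]
                 for t in range(q))
        assert abs(F1.imag) < 1e-9
        assert round(F1.real) == -trace_E1(q, z)
print("F_{1,q}(z) = -a_{1,q}(z) checked for q = 7, 13")
```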
\vspace{0.3cm}
\section{Proof of Conjecture for $l=5$}\label{proof l=5}
\vspace{0.3cm}
Our next objective is to prove that Conjecture \ref{conject} also holds when $l=5$.
The proof has some ingredients in common with the previous case; however, it is not completely analogous and requires some different techniques, as we will see.
Consider the smooth projective curve with affine model
\begin{equation}\label{curve_l=5}
\mathcal{C}_{z}: y^5=t(1-t)^4(1-zt)
\end{equation}
over a finite field $\ensuremath{\mathbb{F}}_{q}$ with $q$ prime, $q \equiv 1 \pmod{5}$ and $z\in\ensuremath{\mathbb{F}}_{q}\backslash\{0,1\}$.
Notice that, by performing the same transformations done in Lemma \ref{curve_biratl} and the fractional linear transformation
$$X\to \frac{X+1}{X-1}$$
$$Y \to \frac{2Y}{(X-1)^5}$$
on the curve (\ref{curve_l=5}) we get the following result.
\begin{lemma}
The curve $\mathcal{C}_{z}: y^5=t(1-t)^4(1-zt)$ is equivalent to the curve
\begin{equation}\label{curve_equiv_l=5}
\mathcal{C}:Y^2= (1-z)X^{10}+(20+5z)X^8+(110-10z)X^6+(100+10z)X^4+(25-5z)X^2+z.
\end{equation}
\end{lemma}
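Following the steps of Lemma \ref{curve_biratl} verbatim (now with $y=tX$ producing degree $10$), one first lands on the intermediate model $Y^{2}=X^{10}+2(1-2z)X^{5}+1$; the fractional linear transformation then reduces the lemma to a polynomial identity checkable on a grid, as before. A sketch (the intermediate model is my own bookkeeping of those steps, not stated in the text):

```python
# (X+1)^10 + 2(1-2z)(X+1)^5 (X-1)^5 + (X-1)^10
#  == 4[(1-z)X^10 + (20+5z)X^8 + (110-10z)X^6 + (100+10z)X^4 + (25-5z)X^2 + z].
# Degrees are <= 10 in X and <= 1 in z, so a 12 x 2 grid suffices (we use more).
for z in range(-3, 5):
    for X in range(-6, 7):
        lhs = (X + 1)**10 + 2*(1 - 2*z)*(X + 1)**5*(X - 1)**5 + (X - 1)**10
        rhs = 4*((1 - z)*X**10 + (20 + 5*z)*X**8 + (110 - 10*z)*X**6
                 + (100 + 10*z)*X**4 + (25 - 5*z)*X**2 + z)
        assert lhs == rhs
print("l = 5 even model verified")
```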
\noindent Define the curves $\mathcal{H}_{1,z}: y^2=f(x)$ and $\mathcal{H}_{2,z}:y^2=g(x)$ where
\begin{equation}\label{H1}
f(x) = (1-z)x^{5}+(20+5z)x^4+(110-10z)x^3+(100+10z)x^2+(25-5z)x+z
\end{equation}
and
\begin{equation}\label{H2}
g(x)=zx^5+(25-5z)x^4+(100+10z)x^3+(110-10z)x^2+(20+5z)x+(1-z).
\end{equation}
\noindent Then, by the same argument as in the previous section, we can find two maps from $\mathcal{C}$ to $\mathcal{H}_{1,z}$ and $\mathcal{H}_{2,z}$, and extending these maps to the Jacobians of the curves, we conclude that Jac($\mathcal{C}$) is isogenous to Jac($\mathcal{H}_{1,z}$)$\times$ Jac($\mathcal{H}_{2,z}$). We start by showing that the $L$-polynomial of $\mathcal{C}_{z}$ over $\ensuremath{\mathbb{F}}_{q}$ with $q \equiv 1 \pmod{5}$ is a perfect square. First, we recall some results about abelian varieties.
\vspace{0.3cm}
Let $k$ be a perfect field, which will eventually be finite. Recall that an \emph{abelian variety over $k$} is a subset of some projective $n$-space over $k$ which
\begin{enumerate}
\item is defined by polynomial equations on the coordinates (with coefficients in $k$),
\item is connected, and
\item has a group law which is algebraic (i.e., the coordinates of the sum of two points are rational functions of the coordinates of the factors).
\end{enumerate}
We say that an abelian variety over $k$ is \emph{simple} if it has no nontrivial abelian subvarieties. We have the following result.
\begin{teo}[Poincar\'e-Weil]
Every abelian variety over $k$ is isogenous to a product of powers of nonisogenous simple abelian varieties over $k$.
\end{teo}
\vspace{0.3cm}
Consider $\mathcal{C}_{z}: y^5=t(1-t)^4(1-zt)$ over the algebraically closed field $\overline{\ensuremath{\mathbb{Q}}}$ and let $\zeta:=e^{2\pi i/5}$ be a fifth root of unity. Then the map $[\zeta]:\mathcal{C}_{z}\to \mathcal{C}_{z}$ defined by $[\zeta](t,y)=(t,\zeta y)$ defines an automorphism on the curve $\mathcal{C}_{z}$. Denote $J_{z}:= \text{Jac}(\mathcal{C}_{z})$ and $J_{i,z}:=\text{Jac}(\mathcal{H}_{i,z})$, for $i=1,2$.
The automorphism $[\zeta]$ induces a map from $J_{z}$ to itself, hence
$$[\zeta]\in \text{End}(J_{z}).$$
\noindent On the other hand, as we mentioned above, we can find an isogeny over $\ensuremath{\mathbb{Q}}$
$$\phi: J_{1,z}\times J_{2,z}\to J_{z}.$$ Applying $\phi$ we get
$$\phi(J_{1,z})\subseteq J_{z}$$
where $J_{1,z}$ here denotes $J_{1,z}\times \{0\}$. Similarly
$$\phi(J_{2,z})\subseteq J_{z}.$$
We also have $$[\zeta](\phi(J_{i,z}))\subseteq J_{z}$$
for $i=1,2$.
\vspace{0.3cm}
Consider now the curve $y^5=t(1-t)^4(1-zt)$ defined over $\overline{\ensuremath{\mathbb{Q}}(z)}$.
We can apply to this curve the same argument as before, and we see that the $J_{i,z}$ are simple abelian varieties over $\overline{\ensuremath{\mathbb{Q}}(z)}$: if some $J_{i,z}$ were isogenous to a product of two elliptic curves, then for all $z$ the $L$-polynomial would have two quadratic factors, which is not the case (see the example in section \ref{example} at the end of this section). Therefore, $\phi(J_{1,z})$ and $[\zeta](\phi(J_{1,z}))$ are two simple abelian varieties inside $J_{z}$. By Poincar\'e's complete reducibility theorem,
either $\phi(J_{1,z})\cap [\zeta](\phi(J_{1,z}))$ is finite or $\phi(J_{1,z})=[\zeta](\phi(J_{1,z}))$.
\vspace{0.3cm}
\begin{itemize}
\item Case 1: $\phi(J_{1,z})\cap [\zeta](\phi(J_{1,z}))$ is finite.\\
In this case, by a dimension count we have
$$[\zeta](\phi(J_{1,z}))+\phi(J_{1,z})=J_{z}.$$
Then, since $\phi(J_{1,z})$ and $\phi(J_{2,z})$ are simple abelian varieties, the Poincar\'e-Weil Theorem implies that
$$[\zeta](\phi(J_{1,z})) \approx \phi(J_{2,z})$$
over $\ensuremath{\mathbb{Q}}(\zeta)$, where $\approx$ denotes isogeny. Notice that this isogeny exists over any field containing a fifth root of unity; in particular, it exists over the finite fields $\ensuremath{\mathbb{F}}_{q}$ with $q \equiv 1 \pmod{5}$. Hence, $[\zeta](\phi(J_{1,z}))$ is isogenous to $\phi(J_{2,z})$ over $\ensuremath{\mathbb{F}}_{q}$ for $q \equiv 1 \pmod{5}$.
\item Case 2: $[\zeta](\phi(J_{1,z}))=\phi(J_{1,z})$.\\
For this case, we recall first some facts about abelian varieties (for details see \cite{La83} or \cite{Mu85}). Suppose $A/\overline{k}$ is a simple abelian variety of dimension $g$, and denote $\Delta:= \text{End}_{\overline{k}}(A)\otimes _{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{Q}}$. Then, Poincar\'e's complete reducibility Theorem implies that $\Delta$ is a division algebra. Also, from the theory of division algebras we know that the dimension of a division algebra over its center is a perfect square, hence, if $K=\{x \in \Delta : xa=ax\,\, \text{for all} \,\, a \in \Delta\}$ is the center of $\Delta$, we have $[\Delta : K]=d^2$ for some integer $d$. On the other hand, if $[K:\ensuremath{\mathbb{Q}}]=e$ then $de|2g$, moreover, in characteristic zero, we have that $d^2e|2g$.
Having reviewed the results we need, we can go back to Case 2.
In this case, let $\Delta :=\text{End}(\phi(J_{1,z}))\otimes_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{Q}}$ and let $K$ be its center. By the results above, we can write $[\Delta : K]=d^2$ and $[K:\ensuremath{\mathbb{Q}}]=e$ for some integers $d$ and $e$. Since $\dim \phi(J_{1,z})=2$ and char$(\overline{\ensuremath{\mathbb{Q}}(z)})=0$, we have that $d^2e|4$.
By assumption, we have that
$$[\zeta] \in \text{End}(\phi(J_{1,z})),$$
hence
$$\ensuremath{\mathbb{Q}}(\zeta) \subseteq \text{End}(\phi(J_{1,z}))\otimes _{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Q}} :=\Delta.$$
Now, we have $$\ensuremath{\mathbb{Q}} \subseteq \ensuremath{\mathbb{Q}}(\zeta)\subseteq \Delta,$$
$$[\ensuremath{\mathbb{Q}}(\zeta):\ensuremath{\mathbb{Q}}]=4$$
and
$$[\Delta: \ensuremath{\mathbb{Q}}]|4.$$
Therefore, since $\ensuremath{\mathbb{Q}}(\zeta)\subseteq \Delta$, the degree $[\Delta:\ensuremath{\mathbb{Q}}]$ is a multiple of $4$, hence $[\Delta:\ensuremath{\mathbb{Q}}]=4=4\,[\Delta:\ensuremath{\mathbb{Q}}(\zeta)]$ and $[\Delta:\ensuremath{\mathbb{Q}}(\zeta)]=1$, so $$\ensuremath{\mathbb{Q}}(\zeta)=\Delta$$
i.e., $\text{End}(\phi(J_{1,z}))\otimes _{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{Q}}$ is a field of degree 4 over $\ensuremath{\mathbb{Q}}$.
Recall the following definition.
\begin{df}
A totally imaginary quadratic extension of a totally real field is called a CM field (Complex Multiplication).
\end{df}
Then, $\Delta$ is a CM field, since it equals $\ensuremath{\mathbb{Q}}(\zeta)$ and every cyclotomic field is a CM field (here $\ensuremath{\mathbb{Q}} \subseteq \ensuremath{\mathbb{Q}}(\zeta+\overline{\zeta}) \subseteq \ensuremath{\mathbb{Q}}(\zeta)$, with $\ensuremath{\mathbb{Q}}(\zeta+\overline{\zeta})$ totally real and $\ensuremath{\mathbb{Q}}(\zeta)$ a totally imaginary quadratic extension of it).
\begin{teo}[Shimura \cite{Sh05}]
Let $k$ be a field of characteristic zero. Over $k$ there do not exist non-constant families of abelian varieties with full CM (i.e., the endomorphism ring has maximal dimension).
\end{teo}
However, our family $\phi(J_{1,z})$ is non-constant, as can be checked computationally with Magma using Igusa invariants.
\end{itemize}
\noindent Therefore, only case 1 is possible, and we have
$$[\zeta](\phi(J_{1,z})) \approx \phi(J_{2,z}).$$
\vspace{0.4cm}
We now state and prove our theorem.
\begin{teo}\label{conjecture l=5}
Conjecture \ref{conject} holds for $l=5$ over $\ensuremath{\mathbb{F}}_{q}$, for a prime $q \equiv 1 \pmod{5}$.
\vspace{0.3cm}
\begin{proof}[Proof of Theorem \ref{conjecture l=5}]
By the same argument done in the proof of Conjecture \ref{conject} for $l=3$, it is enough to prove the conjecture for the curve $\mathcal{C}_{z}:y^5=t(1-t)^4(1-zt)$.
Also, by the previous argument, the $L$-polynomial of $\mathcal{C}_{z}$ is a perfect square, i.e., we may assume, after rearranging terms if necessary, that $a_{1,q}(z)=a_{4,q}(z)$ and $a_{2,q}(z)=a_{3,q}(z)$. We can then write
$$Z(\mathcal{C}_{z}/\ensuremath{\mathbb{F}}_{q};T)= \frac{(1-a_{1,q}(z)T+qT^2)^2(1-a_{2,q}(z)T+qT^2)^2}{(1-T)(1-qT)}$$
\vspace{0.3cm}
\noindent By Corollary \ref{conjugate_hpgf} in section \ref{preliminaries}, we know that $F_{1,q}(z)=F_{4,q}(z)$ and $F_{2,q}(z)=F_{3,q}(z)$.
Altogether, we get that
\begin{equation}\label{a1+a2}
-(a_{1,q}(z)+a_{2,q}(z))=F_{1,q}(z)+F_{2,q}(z).
\end{equation}
We want to prove that
$-a_{1,q}(z)=F_{1,q}(z)$ and $-a_{2,q}(z)=F_{2,q}(z)$. Recall from (\ref{formulapuntos}) that
\begin{equation}\label{a1^2+a2^2}
-(a_{1,q}(z)^2-2q+a_{2,q}(z)^2-2q)=F_{1,q^2}(z)+F_{2,q^2}(z).
\end{equation}
Also, keep in mind that for the hypergeometric functions $F_{1,q}$ and $F_{2,q}$ we are choosing a character $\eta_{q} \in \fqmc$ of order $5$, and for the hypergeometric functions $F_{1,q^2}$ and $F_{2,q^2}$ the character we are choosing is in $\widehat{\ensuremath{\mathbb{F}}_{q^2}^{\times}}$, also of order $5$.
\begin{claim}
It is enough to show that
\begin{equation}\label{relation_hypg_functions}
F_{i,q^2}(z)=-F_{i,q}(z)^2+2q
\end{equation}
for $ i=1,2$.
\begin{proof}[Proof of Claim]
We will write $a_{i,q}:=a_{i,q}(z)$ and $F_{i,q^k}:=F_{i,q^k}(z)$ for $i,k=1,2$.
If (\ref{relation_hypg_functions}) is true, from (\ref{a1+a2}) and (\ref{a1^2+a2^2}) we get the system of equations in $a_{1,q}$ and $a_{2,q}$
$$\begin{cases} -a_{1,q}-a_{2,q}= F_{1,q}+F_{2,q}\\ a_{1,q}^2+a_{2,q}^2=F_{1,q}^2+F_{2,q}^2.\end{cases}$$
which is equivalent to
$$\begin{cases} -a_{1,q}-a_{2,q}= F_{1,q}+F_{2,q}\\ a_{1,q}a_{2,q}=F_{1,q}F_{2,q}.\end{cases}$$
hence, $a_{1,q}=-F_{1,q}$ and $a_{2,q}=-F_{2,q}$.
\end{proof}
\end{claim}
\noindent Continuing with the proof of the conjecture for $l=5$, it only remains to show that (\ref{relation_hypg_functions}) holds. For that, let us write out explicitly the functions on both sides of (\ref{relation_hypg_functions}). We start with $F_{1,q}=F_{4,q}=q \,{}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q}, & \eta_{q} ^{4} \\
& \varepsilon \\
\end{array} \: z \right)$.
The other case will be the result of a similar argument.
\begin{align*}
F_{1,q}^2 &= \sum_{x,y \in \ensuremath{\mathbb{F}}_{q}} \eta_{q}^4(xy)\,\eta_{q}((1-x)(1-y))\,\eta_{q}^4((1-zx)(1-zy))\nonumber\\
& = \sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^4(s)\sum_{x\in\ensuremath{\mathbb{F}}_{q}^{\times}}\eta_{q}((1-x)(1-s/x))\,\eta_{q}^4((1-zx)(1-zs/x)) & (xy=s)\nonumber\\
& = \sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^4(s)\sum_{x\in\ensuremath{\mathbb{F}}_{q}^{\times}}\eta_{q}(1-x-s/x+s)\,\eta_{q}^4(1-z(x+s/x)+z^2s).
\end{align*}
\vspace{0.2cm}
\noindent On the other hand, define $\chi \in \widehat{\ensuremath{\mathbb{F}}_{q^2}^{\times}}$ by $\chi:=\eta_{q}\circ N_{\ensuremath{\mathbb{F}}_{q}}^{\ensuremath{\mathbb{F}}_{q^2}}$, i.e., for $\alpha \in \ensuremath{\mathbb{F}}_{q^2}$, $\chi(\alpha)=\eta_{q}(N_{\ensuremath{\mathbb{F}}_{q}}^{\ensuremath{\mathbb{F}}_{q^2}}(\alpha))=\eta_{q}(\alpha^{q+1})$, where $N_{\ensuremath{\mathbb{F}}_{q}}^{\ensuremath{\mathbb{F}}_{q^2}}$ denotes the norm from $\ensuremath{\mathbb{F}}_{q^2}$ down to $\ensuremath{\mathbb{F}}_{q}$. Since $N_{\ensuremath{\mathbb{F}}_{q}}^{\ensuremath{\mathbb{F}}_{q^2}}(\alpha) \in \ensuremath{\mathbb{F}}_{q}$ for all $\alpha \in \ensuremath{\mathbb{F}}_{q^2}$, $\chi$ is well defined and actually defines a character of $\ensuremath{\mathbb{F}}_{q^2}^{\times}$ (see \cite{IR90}, Chapter 11).
Moreover, since the order of $\eta_{q}$ is $5$ then the order of $\chi$ must divide $5$.
But if $x \in \fq$ then $N_{\ensuremath{\mathbb{F}}_{q}}^{\ensuremath{\mathbb{F}}_{q^2}}(x)=x^{q+1}=x^2$, therefore $\chi|_{\ensuremath{\mathbb{F}}_{q}}=\eta_{q}^2\neq \varepsilon$. Then $\chi
\in \widehat{\ensuremath{\mathbb{F}}_{q^2}^{\times}}$ is a character of order $5$. We choose this character for our computations and we have
\begin{align*}
F_{1,q^2} &:= \sum_{c \in \ensuremath{\mathbb{F}}_{q^2}} \chi^4(c)\chi(1-c)\chi^4(1-zc)\nonumber\\
&= \sum_{c \in \ensuremath{\mathbb{F}}_{q^2}} \eta_{q}^4(c^{q+1})\eta_{q}((1-c)^{q+1})\eta_{q}^4((1-zc)^{q+1}) \nonumber \\
& = \sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^4(s)
\sum_{\alpha \in\ensuremath{\mathbb{F}}_{q^2}^{\times}, \,\, \alpha^{q+1}=s}\eta_{q}(1-\alpha-s/\alpha +s)\eta_{q}^4(1-z(\alpha+s/\alpha)+z^2s)
\end{align*}
where the last equality follows by putting $c^{q+1}=s$ and noting that, since char$(\ensuremath{\mathbb{F}}_{q})=q$ and $\alpha^{q+1}=s$ then
$$(1-\alpha)^{q+1}=(1-\alpha)^q(1-\alpha)=(1-\alpha ^q)(1-\alpha)=1-\alpha-\alpha^{q}+\alpha^{q+1}=1-\alpha-s/\alpha +s.$$
A similar computation gives that
$$(1-zc)^{q+1}=1-z(\alpha+s/\alpha)+z^2s.$$
\noindent For $s \in \ensuremath{\mathbb{F}}_{q}^{\times}$ define $h:\ensuremath{\mathbb{F}}_{q^2}^{\times}\to \ensuremath{\mathbb{F}}_{q^2}$ such that $h(t)=t+s/t$ and let $f$ and $g$ be the restrictions of $h$ to the sets $\ensuremath{\mathbb{F}}_{q}^{\times}$ and $N^{-1}(s):=\{\alpha \in \ensuremath{\mathbb{F}}_{q^2}: \alpha^{q+1}=s\}\subset \ensuremath{\mathbb{F}}_{q^2}^{\times}$ respectively, i.e.,
$$f:=h|_{\ensuremath{\mathbb{F}}_{q}^{\times}}:\ensuremath{\mathbb{F}}_{q}^{\times} \to \ensuremath{\mathbb{F}}_{q}$$
$$g:=h|_{N^{-1}(s)}:N^{-1}(s) \to \ensuremath{\mathbb{F}}_{q}$$
Notice that, if $\alpha \in N^{-1}(s)$ then $g(\alpha)=\alpha + s/\alpha=\alpha + \alpha^{q}=\text{tr}(\alpha) \in \ensuremath{\mathbb{F}}_{q}$, hence Im$(g)\subset \ensuremath{\mathbb{F}}_{q}$. Making use of these functions, we can rewrite
\begin{equation*}
F_{1,q}^2 = \sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^4(s)\sum_{b \in \text{Im}(f)\subset \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s)\eta_{q}^4(1-bz+z^2s)
\end{equation*}
and
\begin{equation*}
F_{1,q^2}=\sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^4(s) \sum_{b \in \text{Im}(g)\subset \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s)\eta_{q}^4(1-bz+z^2s)
\end{equation*}
\noindent Combining both equations, we have
\begin{equation}\label{equation_with_b}
F_{1,q}^2+F_{1,q^2}=\sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^{4}(s) \sum_{\text{some} \,\,b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s)
\eta_{q}^{4}(1-bz+z^2s).
\end{equation}
\noindent Our next and last step will be to describe over which elements we are summing in the inner sum of (\ref{equation_with_b}). Fix $s \in \ensuremath{\mathbb{F}}_{q}^{\times}$. Note that $h$ is generically a 2-to-1 map. To see this, suppose $b \in \text{Im}(h)$; then there exists $t\in \ensuremath{\mathbb{F}}_{q^2}^{\times}$ such that $t+s/t=b$, or equivalently $t^2-bt+s=0$. Hence, $h$ is 2-to-1 except at the values $b$ with $b^2-4s=0$, which occur only when $s$ is a perfect square in $\ensuremath{\mathbb{F}}_{q}$.
\begin{itemize}
\item Case 1: $s$ is not a perfect square in $\ensuremath{\mathbb{F}}_{q}^{\times}$.\\
By the previous comment, we know that in this case $h$ is a 2-to-1 map. Also, it is not too hard to show that the images of the two restrictions $f$ and $g$ are disjoint and together cover $\ensuremath{\mathbb{F}}_{q}$. Therefore, in this case every element $b \in \ensuremath{\mathbb{F}}_{q}$ will appear exactly twice in the inner sum of (\ref{equation_with_b}).
\item Case 2: $s$ is a perfect square in $\ensuremath{\mathbb{F}}_{q}^{\times}$.\\
In this case, write $s=a^2$; then $b^2-4s=0$ forces $b=2a$ or $b=-2a$. As in the previous case, every $b \in \ensuremath{\mathbb{F}}_{q}$ different from $2a$ and $-2a$ will appear exactly twice in the inner sum of (\ref{equation_with_b}).
What about $b=2a$ and $b=-2a$? If $s$ is a perfect square then $\text{Im}(f)\cap\text{Im}(g)=\{2a,-2a\}$, hence both $2a$ and $-2a$ will also appear twice in the sum, once as part of the sum for $F_{1,q}^2$ and once as part of the sum for $F_{1,q^2}$.
\end{itemize}
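The two cases above are easy to test numerically for a small prime. The following sketch (an illustration only, not part of the proof) takes $q=11$, realizes $\ensuremath{\mathbb{F}}_{q^2}$ as $\ensuremath{\mathbb{F}}_{q}(w)$ with $w^2=2$ ($2$ is a non-square modulo $11$), and checks that for every $s\in\ensuremath{\mathbb{F}}_{q}^{\times}$ each $b\in\ensuremath{\mathbb{F}}_{q}$ arises exactly twice from the restrictions $f$ and $g$ together, counted with multiplicity.

```python
from collections import Counter

q = 11                                   # sample prime with q = 1 (mod 5)

def hit_counts(s):
    """Multiplicities with which each b in F_q arises from f = h|_{F_q^x}
    and g = h|_{N^{-1}(s)}, realizing F_{q^2} = F_q(w) with w^2 = 2."""
    # values of f: t + s/t for t in F_q^x (inverse via Fermat's little theorem)
    mult = Counter((t + s * pow(t, q - 2, q)) % q for t in range(1, q))
    # alpha = a + c*w lies in N^{-1}(s) iff a^2 - 2c^2 = s, and g(alpha) = tr(alpha) = 2a
    mult.update((2 * a) % q for a in range(q) for c in range(q)
                if (a * a - 2 * c * c) % q == s)
    return mult

for s in range(1, q):
    counts = hit_counts(s)
    assert all(counts[b] == 2 for b in range(q))   # every b appears exactly twice
```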
\noindent Summarizing, we have
\begin{align*}
F_{1,q}^2+F_{1,q^2} &= \sum_{\substack{s \in \ensuremath{\mathbb{F}}_{q}^{\times}\\ \left(\frac{s}{q}\right)=-1}} \eta_{q}^{4}(s) \sum_{\text{some}\,\,b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s) \eta_{q}^{4}(1-bz+z^2s)\\
& \qquad \qquad + \sum_{\substack{s \in \ensuremath{\mathbb{F}}_{q}^{\times}\\ \left(\frac{s}{q}\right)=1}} \eta_{q}^{4}(s) \sum_{\text{some}\,\,b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s) \eta_{q}^{4}(1-bz+z^2s)\\
& = 2\sum_{\substack{s \in \ensuremath{\mathbb{F}}_{q}^{\times}\\ \left(\frac{s}{q}\right)=-1}} \eta_{q}^{4}(s) \sum_{b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s) \eta_{q}^{4}(1-bz+z^2s)\\
& \qquad \qquad + 2\sum_{\substack{s \in \ensuremath{\mathbb{F}}_{q}^{\times}\\ \left(\frac{s}{q}\right)=1}} \eta_{q}^{4}(s) \sum_{b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s) \eta_{q}^{4}(1-bz+z^2s)\\
&= 2\sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^{4}(s) \sum_{b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s) \eta_{q}^{4}(1-bz+z^2s).
\end{align*}
\noindent To finish the proof we need to see that
$$\sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^{4}(s) \sum_{b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s) \eta_{q}^{4}(1-bz+z^2s)=q.$$
\noindent We begin by rewriting the inner sum in the above formula, but first recall that the action of $GL_{2}(\ensuremath{\mathbb{F}}_{q})$ on $\mathbb{P}^{1}(\ensuremath{\mathbb{F}}_{q})$ given by
\begin{displaymath}
\left( \begin{array}{cc}
a & b\\
c & d
\end{array}
\right)\cdot w:=\frac{a w +b}{c w+d}
\end{displaymath}
defines an automorphism of $\mathbb{P}^{1}(\ensuremath{\mathbb{F}}_{q})$. Now, since $\eta_{q}^5=\varepsilon$ and $\eta_{q}(0)=0$ we get
\begin{align*}
\sum_{b\in \ensuremath{\mathbb{F}}_{q}} \eta_{q}(1-b+s) \eta_{q}^{4}(1-bz+z^2s) &= \sum_{\substack{b \in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+zs)}}
\eta_{q}\left(\frac{1-b+s}{1-bz+z^2s}\right)\\
&= \sum_{\substack{b \in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+zs)}}
\eta_{q}(\gamma \cdot b)
\end{align*}
where $\gamma:=\left( \begin{array}{cc}
-1 & s+1\\
-z & z^2s+1
\end{array}
\right)$. Now, $\text{det}\gamma=(z-1)(1-sz)$, therefore, since $z\neq 1$ we see that as long as $s\neq z^{-1}$, $\gamma$ defines an automorphism of $\mathbb{P}^{1}(\ensuremath{\mathbb{F}}_{q})$. Then, by separating the sums according to whether $s=z^{-1}$ or not, we have:
\begin{align*}
\sum_{s \in \ensuremath{\mathbb{F}}_{q}^{\times}} \eta_{q}^{4}(s) \sum_{\substack{b\in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+zs)}} \eta_{q}(\gamma\cdot b) &=
\sum_{\substack{s\in\ensuremath{\mathbb{F}}_{q}^{\times}\\ s \neq z^{-1}}} \eta_{q}^{4}(s) \sum_{\substack{b\in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+zs)}} \eta_{q}(\gamma \cdot b) \\ &\qquad \qquad + \eta^{4}_{q}(z^{-1})\sum_{\substack{b\in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+1)}} \eta_{q}\left(\frac{1-b+z^{-1}}{1-bz+z}\right) \\
&= A+B
\end{align*}
where $A$ and $B$ are set to be the two sums appearing in the previous line. We now compute $A$ and $B$. First we have
\begin{align*}
B &= \sum_{\substack{b\in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+1)}} \eta^{4}_{q}(z^{-1}) \eta_{q}\left(\frac{1-b+z^{-1}}{1-bz+z}\right) \\
& = \sum_{\substack{b\in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+1)}} \eta_{q}\left(\frac{z-bz+1}{1-bz+z}\right) & (\eta_{q}^{4}(z^{-1})=\eta_{q}(z))\\
& = \sum_{\substack{b\in \ensuremath{\mathbb{F}}_{q}\\ b\neq (z^{-1}+1)}} 1 \\
& = q-1.
\end{align*}
\noindent Now we compute $A$. Since in this case the action of $\gamma$ defines an automorphism of $\mathbb{P}^{1}(\ensuremath{\mathbb{F}}_{q})$, and since $\gamma \cdot b$ runs over $\ensuremath{\mathbb{F}}_{q}- \{z^{-1}\}$ as $b$ runs over $\ensuremath{\mathbb{F}}_{q}-\{z^{-1}+sz\}$ we see that
\begin{align*}
A & = \sum_{\substack{s \in \ensuremath{\mathbb{F}}_{q}^{\times}\\ s\neq z^{-1}}} \eta_{q}^{4}(s) \sum_{\substack{u\in \ensuremath{\mathbb{F}}_{q}\\ u\neq z^{-1}}} \eta_{q}(u) \\
& = (-\eta_{q}^{4}(z^{-1})) (-\eta_{q}(z^{-1})) & \text{(orthogonality relations for characters)} \\
& = 1 & (\eta_{q}^{5}=\varepsilon)
\end{align*}
Therefore, combining our calculations for $A$ and $B$ we see that
\begin{equation}
F_{1,q}^2+F_{1,q^2} = 2(A+B)= 2q
\end{equation}
finishing the proof.
\end{proof}
\end{teo}
\begin{example}\label{example}
We illustrate with an example the result of the conjecture. Consider the smooth projective curve with affine model given by
$$\mathcal{C}_{3}: y^5=t^2(1-t)^3(1-3t)^2$$
over the finite field $\ensuremath{\mathbb{F}}_{11}$. $\mathcal{C}_{3}$ is a hyperelliptic curve of genus $4$, and using Magma we can compute its zeta function. We have that
$$Z(\mathcal{C}_{3}/\ensuremath{\mathbb{F}}_{11},T)= \frac{(121T^4+66T^3+26T^2+6T+1)^2}{(1-T)(1-11T)}.$$
\noindent Therefore, after doing some algebra, we find the values of $a_{i,11}(3)$ for $i=1,\dots,4$.
Specifically, if $\zeta_{5}:=e^{2\pi i/5}$ we have
\begin{align*}
a_{1,11}(3)= a_{4,11}(3)&=-4-2\zeta_{5}^2-2\zeta_{5}^3\\
a_{2,11}(3)=a_{3,11}(3)&=-2+2\zeta_{5}^2+2\zeta_{5}^3.
\end{align*}
\vspace{0.3cm}
\noindent On the other hand, consider the multiplicative character $\eta_{11} \in \widehat{\ensuremath{\mathbb{F}}_{11}^{\times}}$ defined by $\eta_{11}(a):=\zeta_{5}$, where $a$ is a primitive element of $\ensuremath{\mathbb{F}}_{11}^{\times}$, i.e., $a$ generates $\ensuremath{\mathbb{F}}_{11}^{\times}$, and recall that
$F_{i,11}(3)= 11\,{}_{2}F_{1}[\eta_{11} ^{3i},\eta_{11} ^{2i};\varepsilon|3]$.
Using Magma we get
\begin{align*}
F_{1,11}(3)=F_{4,11}(3)&=4+2\zeta_{5}^2+2\zeta_{5}^3\\
F_{2,11}(3)=F_{3,11}(3)&=2-2\zeta_{5}^2-2\zeta_{5}^3.
\end{align*}
\noindent Hence $$F_{i,11}(3)=-a_{i,11}(3)\,\,\,\,\, \text{for all}\,\, i=1,2,3,4.$$
\end{example}
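The Magma values above can be double-checked with a short script. The sketch below is illustrative only: it fixes the character by $\eta_{11}(2)=\zeta_{5}$ for the primitive root $2$, so the sum it computes corresponds to one of the $F_{i,11}(3)$ listed in the example (the indexing depends on the chosen character). It evaluates the character sums used in the proof of the $l=5$ case and verifies the key relation $F_{q}^2+F_{q^2}=2q$ of (\ref{relation_hypg_functions}).

```python
import cmath
import math

q, z = 11, 3
zeta5 = cmath.exp(2j * cmath.pi / 5)
dlog = {pow(2, k, q): k for k in range(q - 1)}     # discrete logs base 2 in F_11

def eta(x, power=1):
    """eta_q^power, an order-5 character of F_11^x, with eta(0) := 0."""
    x %= q
    return 0 if x == 0 else zeta5 ** ((power * dlog[x]) % 5)

# F with the (eta^4, eta, eta^4) pattern from the proof
F_q = sum(eta(x, 4) * eta(1 - x) * eta(1 - z * x, 4) for x in range(q))

# chi = eta o Norm on F_121 = F_11(w), w^2 = 2;  Norm(a + b*w) = a^2 - 2b^2
def chi(a, b, power=1):
    return eta((a * a - 2 * b * b) % q, power)

F_q2 = sum(chi(a, b, 4) * chi((1 - a) % q, -b % q) * chi((1 - z * a) % q, (-z * b) % q, 4)
           for a in range(q) for b in range(q))

print(F_q.real)                  # 3 + sqrt(5), one of the values in the example
print((F_q ** 2 + F_q2).real)    # 2q = 22, as the claimed relation predicts
```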
\vspace{0.3cm}
\section{Advances toward the general case}\label{generalconj}
\vspace{0.4cm}
Even though proving the conjecture in its full generality is still work in progress, some advances have already been made toward it; the purpose of this section is to present them.
\vspace{0.3cm}
Suppose now that $l$ and $q$ are odd primes, with $q \equiv 1 \pmod{l}$, and let $z \in \ensuremath{\mathbb{F}}_{q}\backslash\{0,1\}$. Recall that our conjecture relates values of certain hypergeometric functions over $\ensuremath{\mathbb{F}}_{q}$ to counting points on certain curves over $\ensuremath{\mathbb{F}}_{q}$. Recall also that the curves we are interested in are smooth projective curves of genus $l-1$ with affine model
$$\mathcal{C}_{z}^{(m,s)}: y^l=t^m(1-t)^s(1-zt)^m$$
where $1\leq m,s<l$ are integers such that $m+s=l$.
Now, as we mentioned in the previous section, Corollary \ref{curves_have_same_number_points} in section \ref{hgf and ac} states that, as $(m,s)$ varies over all pairs of positive integers with $m+s=l$, the curves $\mathcal{C}_{z}^{(m,s)}$ all have the same number of points over every finite extension of $\ensuremath{\mathbb{F}}_{q}$, hence they all have the same zeta function over $\ensuremath{\mathbb{F}}_{q}$. This, together with the fact that the hypergeometric functions appearing on the right hand side of equation (\ref{formula}) are the same for all these curves, implies that it is enough to prove the conjecture for only one of them, say
\begin{equation}\label{generic_curve}
\mathcal{C}_{z}^{(1,l-1)}: y^l=t(1-t)^{l-1}(1-zt).
\end{equation}
Throughout this section, we will denote this curve by $\mathcal{C}_{z}$.
\vspace{0.3cm}
So, the question is: what results would be enough to know in order to prove the conjecture for all primes $l$ and $q$ with $q \equiv 1 \pmod{l}$?
Recall that, by equations (\ref{formulapuntos}) and (\ref{puntoscurva}) we have that
\begin{equation}\label{F_and_alpha}
F_{1,q^n}(z)+F_{2,q^n}(z)+ \cdots +F_{l-1,q^n}(z) = -\sum_{i=1}^{l-1}(\alpha_{i,q}^n(z)+\overline{\alpha_{i,q}^n}(z))
\end{equation}
where $F_{i,q^n}(z)= q^n \,{}_{2}F_{1} \left(
\begin{array}{ll|}
\eta_{q^n} ^{i}, & \eta_{q^n} ^{i(l-1)} \\
& \varepsilon \\
\end{array} \: z \right)$ with $\eta_{q^n} \in \widehat{\ensuremath{\mathbb{F}}_{q^n}^{\times}}$ a character of order $l$,
and $\alpha_{i,q}(z)$ are the reciprocals of the roots of the zeta function of $\mathcal{C}_{z}$ over $\ensuremath{\mathbb{F}}_{q}$, i.e.,
$$Z(\mathcal{C}_{z}/\fq;T)=\frac{(1-\alpha_{1,q}(z)T)(1-\overline{\alpha_{1,q}(z)}T)\cdots(1-\alpha_{l-1,q}(z)T)
(1-\overline{\alpha_{l-1,q}(z)}T)}{(1-T)(1-qT)}.$$
From now on we will omit the dependency on $z$ of the hypergeometric functions and the roots of the zeta function, therefore, we will denote
$F_{i,q^n}:=F_{i,q^n}(z)$ and $\alpha_{i,q}:=\alpha_{i,q}(z)$. Also, as in the previous section, denote $a_{i,q}:=\alpha_{i,q}+\overline{\alpha_{i,q}}$, for $i=1,\cdots, l-1$. Since we want to relate the hypergeometric functions above with the values $a_{i,q}$, first we are going to express the values $\alpha_{i,q}^n+\overline{\alpha_{i,q}^n}$ in terms of $a_{i,q}$ and $q$. We have the following lemma:
\begin{lemma}\label{Dickson}
For $\alpha \in \ensuremath{\mathbb{C}}$ such that $|\alpha|=\sqrt{q}$ denote $\alpha+\overline{\alpha}:=a$, and let $n$ be a non-negative integer. Then:
\begin{equation}
\alpha^n+\overline{\alpha}^n=\sum_{i=0}^{\lfloor \frac{n}{2}\rfloor} (-1)^i\, T(n,i)\,q^i\, a^{n-2i}
\end{equation}
where $T(0,0):=2$, $T(n,0):=1$ for $n>0$ and
$$T(n,i):= \frac{n(n-i-1)!}{i!(n-2i)!},\,\,\,\, \text{for}\,\, n>0, \, i\geq 0.$$
\begin{proof}
We will prove the result by induction on $n$.
For $n=0$ and $n=1$ the result is clear.
\noindent Now suppose the result is true for all $k\leq n$. We want to show that it is also true for $n+1$.
Notice that
\begin{align*}
(\alpha^n+\overline{\alpha}^n)(\alpha+\overline{\alpha}) &= \alpha^{n+1}+\overline{\alpha}^{n+1}+\alpha^n \overline{\alpha}+\overline{\alpha}^n \alpha\\
& = \alpha^{n+1}+\overline{\alpha}^{n+1}+\alpha \overline{\alpha} (\alpha^{n-1}+\overline{\alpha}^{n-1})\\
& = \alpha^{n+1}+\overline{\alpha}^{n+1}+q (\alpha^{n-1}+\overline{\alpha}^{n-1}) & (\alpha \overline{\alpha}=q)
\end{align*}
Hence, since $a=\alpha+\overline{\alpha}$, we have
\begin{equation}\label{powers_alpha}
\alpha^{n+1}+\overline{\alpha}^{n+1}= (\alpha^n+\overline{\alpha}^n)\,a-q\,(\alpha^{n-1}+\overline{\alpha}^{n-1}).
\end{equation}
\noindent Combining equation (\ref{powers_alpha}) and the inductive hypothesis we have
\begin{align}\label{first_sum}
\alpha^{n+1}+\overline{\alpha}^{n+1} &= \sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}(-1)^i\, T(n,i)\,q^i\, a^{n+1-2i}-
\sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}(-1)^i\, T(n-1,i)\,q^{i+1}\, a^{n-1-2i}\nonumber \\
& = a^{n+1}+\sum_{i=1}^{\lfloor\frac{n}{2}\rfloor}(-1)^i\, T(n,i)\,q^i\, a^{n+1-2i} \nonumber\\
& \qquad \qquad -\sum_{j=1}^{\lfloor\frac{n-1}{2}\rfloor+1}(-1)^{j-1}\, T(n-1,j-1)\,q^{j}\, a^{n+1-2j}
\end{align}
after breaking apart the $i=0$ contribution in the first sum, and making the change of variables $i+1=j$ in the second sum.
\noindent Now we separate in two cases.
\begin{itemize}
\item Case 1: $n$ is even.\\
Notice that, in this case we have that $\lfloor \frac{n}{2}\rfloor=\lfloor \frac{n-1}{2}\rfloor+1$. Then, equation (\ref{first_sum}) becomes
\allowdisplaybreaks{
\begin{align*}
\alpha^{n+1}+\overline{\alpha}^{n+1}&= a^{n+1}+\sum_{i=1}^{\lfloor\frac{n}{2}\rfloor} (-1)^i\,
\left(\frac{n}{i!(n-2i)!}+\frac{(n-1)}{(i-1)!(n+1-2i)!}\right)\\
& \qquad \qquad \qquad \cdot (n-1-i)!\,q^i\,a^{n+1-2i}\\
&= a^{n+1}+\sum_{i=1}^{\lfloor\frac{n}{2}\rfloor} (-1)^i\, T(n+1,i)\,q^i\,a^{n+1-2i}\\
&= \sum_{i=0}^{\lfloor\frac{n+1}{2}\rfloor} (-1)^i\, T(n+1,i)\,q^i\,a^{n+1-2i}
\end{align*}
}after replacing $T(n,k)$ by its definition, doing some algebra, and noticing that, if $n$ is even then $\lfloor\frac{n}{2}\rfloor=\lfloor\frac{n+1}{2}\rfloor$. This proves the lemma for $n$ even.
\item Case 2: $n$ is odd.\\
In this case we have $\lfloor\frac{n}{2}\rfloor=\lfloor\frac{n-1}{2}\rfloor$ and $\lfloor\frac{n+1}{2}\rfloor=\lfloor\frac{n}{2}\rfloor+1$. Combining these, breaking apart the contribution of $i=\lfloor\frac{n}{2}\rfloor+1$ in the second sum, and using the previous computation, equation (\ref{first_sum}) becomes
\begin{align*}
\alpha^{n+1}+\overline{\alpha}^{n+1}&= a^{n+1}+ \sum_{i=1}^{\lfloor\frac{n+1}{2}\rfloor-1} (-1)^i\, T(n+1,i)\,q^i\,a^{n+1-2i}\\
& \qquad +(-1)^{\lfloor\frac{n+1}{2}\rfloor} \frac{(n-1)(n-1-\lfloor\frac{n+1}{2}\rfloor)!}{(\lfloor\frac{n+1}{2}\rfloor-1)!(n+1-2\lfloor\frac{n+1}{2}\rfloor)!}\,
q^{\lfloor\frac{n+1}{2}\rfloor}\,a^{n+1-2\lfloor\frac{n+1}{2}\rfloor}.
\end{align*}
To finish the proof, we need to see that
\begin{equation}\label{dickson_coeff}
\frac{(n-1)(n-1-\lfloor\frac{n+1}{2}\rfloor)!}{(\lfloor\frac{n+1}{2}\rfloor-1)!(n+1-2\lfloor\frac{n+1}{2}\rfloor)!}
= T(n+1,\lfloor\frac{n+1}{2}\rfloor).
\end{equation}
This is not a hard computation. Write $n=2m+1$ for some $m \in \ensuremath{\mathbb{N}}$; then $\lfloor\frac{n+1}{2}\rfloor=m+1$. Substituting this in equation (\ref{dickson_coeff}), both sides reduce to $2$, finishing the proof for $n$ odd.
\end{itemize}
\end{proof}
\end{lemma}
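Lemma \ref{Dickson} is easy to test numerically. The following sketch (sample values chosen arbitrarily, subject only to $a^2<4q$ so that $|\alpha|=\sqrt{q}$) checks the identity for small $n$.

```python
import math

def T(n, i):
    """Coefficients of the lemma: T(0,0)=2, T(n,0)=1, else n(n-i-1)!/(i!(n-2i)!)."""
    if i == 0:
        return 2 if n == 0 else 1
    return n * math.factorial(n - i - 1) // (math.factorial(i) * math.factorial(n - 2 * i))

def trace_power(a, q, n):
    """alpha^n + conj(alpha)^n for alpha with alpha + conj(alpha) = a and |alpha| = sqrt(q)."""
    alpha = complex(a / 2, math.sqrt(4 * q - a * a) / 2)
    return (alpha ** n + alpha.conjugate() ** n).real

def lemma_rhs(a, q, n):
    return sum((-1) ** i * T(n, i) * q ** i * a ** (n - 2 * i) for i in range(n // 2 + 1))

for q, a in [(11, 3), (11, -4), (31, 6)]:    # sample traces with a^2 < 4q
    for n in range(9):
        assert abs(trace_power(a, q, n) - lemma_rhs(a, q, n)) < 1e-6
```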
Now, equation (\ref{F_and_alpha}) and Lemma \ref{Dickson} allow us to relate explicitly the hypergeometric functions with the traces of Frobenius, giving
\begin{equation}\label{relation_hpgf_traces}
F_{1,q^n}+F_{2,q^n}+ \cdots +F_{l-1,q^n} =-\sum_{i=1}^{l-1} \sum_{j=0}^{\lfloor \frac{n}{2}\rfloor} (-1)^j\, T(n,j)\,q^j\, a_{i,q}^{n-2j}.
\end{equation}
Since in Conjecture \ref{conject} we want to prove that $F_{i,q}=-a_{i,q}$ for all $i=1,\ldots,l-1$, then we have the following result.
\begin{prop}\label{relation_hypergeometric_functions}
If for all $i,n=1,\ldots,l-1$ we have that
\begin{equation}\label{relation_powers_hpgf}
F_{i,q^n}=(-1)^{n+1}\sum_{j=0}^{\lfloor \frac{n}{2}\rfloor} (-1)^j\, T(n,j)\,q^j\, F_{i,q}^{n-2j}
\end{equation}
then Conjecture \ref{conject} is true.
\begin{proof}
Assume equation (\ref{relation_powers_hpgf}) is true. Then, substituting into equation (\ref{relation_hpgf_traces}) for $n=1,\ldots,l-1$ we get a system of equations relating sums of the hypergeometric functions $F_{i,q}$ and their powers to sums of the traces of Frobenius $a_{i,q}$ and their powers. After simplifying this system of equations, we get an equivalent one of the form
\begin{equation}\label{system1}
\begin{cases}
F_{1,q}+F_{2,q}+\cdots+F_{l-1,q}=-(a_{1,q}+a_{2,q}+\cdots+a_{l-1,q})\\
F_{1,q}^2+F_{2,q}^2+\cdots+F_{l-1,q}^2=a_{1,q}^2+a_{2,q}^2+\cdots+a_{l-1,q}^2\\
\hspace{4.5cm} \vdots \\
F_{1,q}^{n}+F_{2,q}^{n}+\cdots+F_{l-1,q}^{n}=(-1)^{n}(a_{1,q}^{n}+a_{2,q}^{n}+\cdots+a_{l-1,q}^{n})\\
\hspace{4.5cm} \vdots \\
F_{1,q}^{l-1}+F_{2,q}^{l-1}+\cdots+F_{l-1,q}^{l-1}=a_{1,q}^{l-1}+a_{2,q}^{l-1}+\cdots+a_{l-1,q}^{l-1}.
\end{cases}
\end{equation}
\vspace{0.4cm}
\noindent This fact can be seen by induction. For $n=1$ there is nothing to prove. Suppose now that
$F_{1,q}^{k}+F_{2,q}^{k}+\cdots+F_{l-1,q}^{k}=(-1)^{k}(a_{1,q}^{k}+a_{2,q}^{k}+\ldots+a_{l-1,q}^{k})$ for all $k< n$. Now, by equation (\ref{relation_powers_hpgf}) we have
\begin{align}\label{eq1}
F_{1,q^n}+\cdots+F_{l-1,q^n}&= (-1)^{n+1}\sum_{j=0}^{\lfloor \frac{n}{2}\rfloor} (-1)^j\, T(n,j)\,q^j\,
\left(F_{1,q}^{n-2j}+\cdots+F_{l-1,q}^{n-2j}\right)\nonumber\\
& =(-1)^{n+1}(F_{1,q}^{n}+\cdots+F_{l-1,q}^{n})\nonumber\\
& \qquad + (-1)^{n+1}\sum_{j=1}^{\lfloor \frac{n}{2}\rfloor} (-1)^j\, T(n,j)\,q^j\,
\left(F_{1,q}^{n-2j}+\cdots+F_{l-1,q}^{n-2j}\right)\nonumber\\
& =(-1)^{n+1}(F_{1,q}^{n}+\cdots+F_{l-1,q}^{n})\nonumber\\
& \qquad +(-1)^{n+1}\sum_{j=1}^{\lfloor \frac{n}{2}\rfloor} (-1)^{n-j}\, T(n,j)\,q^j\,
\left(a_{1,q}^{n-2j}+\cdots+a_{l-1,q}^{n-2j}\right)
\end{align}
where the last equality follows from the inductive hypothesis.
\noindent On the other hand, by breaking apart the contribution of $j=0$ in equation (\ref{relation_hpgf_traces}) we have
\begin{align}\label{eq2}
F_{1,q^n}+ \cdots +F_{l-1,q^n} &=-(a_{1,q}^n+\ldots+a_{l-1,q}^n)
+\sum_{j=1}^{\lfloor \frac{n}{2}\rfloor} (-1)^{j+1}\, T(n,j)\,q^j\, (a_{1,q}^{n-2j}+\ldots+a_{l-1,q}^{n-2j}).
\end{align}
Therefore, combining equations (\ref{eq1}) and (\ref{eq2}) and noticing that $(-1)^{2n+1-j}=(-1)^{j+1}$ we get
$$F_{1,q}^n+ \cdots +F_{l-1,q}^n =(-1)^n \left( a_{1,q}^n+\ldots+a_{l-1,q}^n\right)$$
as desired.
\noindent Next, by using the Newton-Girard formulas, which give relations between elementary symmetric polynomials and power sums, we see that the system (\ref{system1}) is equivalent to
\allowdisplaybreaks{
\begin{equation*}\label{system2}
\begin{cases}
F_{1,q}+\cdots+F_{l-1,q}=-(a_{1,q}+a_{2,q}+\cdots+a_{l-1,q})\\
\sum_{1\leq i<j\leq l-1}F_{i,q}F_{j,q}=\sum_{1\leq i<j\leq l-1}a_{i,q}a_{j,q}\\
\hspace{3.5cm} \vdots \\
F_{1,q}\ldots F_{l-1,q}=a_{1,q}\ldots a_{l-1,q}
\end{cases}
\end{equation*}
}
\vspace{0.2cm}
\noindent i.e., the elementary symmetric polynomials in the variables $F_{1,q},\ldots, F_{l-1,q}$ equal (up to a sign) the elementary symmetric polynomials in $a_{1,q},\ldots,a_{l-1,q}$. Then, the values $F_{1,q},\ldots,F_{l-1,q}$ and $-a_{1,q},\ldots,-a_{l-1,q}$ are roots of the same polynomial; therefore, after rearranging terms, we have that
$$F_{i,q}=-a_{i,q}\,\,\,\,\, \text{for all}\,\, i=1,\ldots,l-1$$
and Conjecture \ref{conject} follows.
\end{proof}
\end{prop}
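The Newton--Girard step in the proof can be illustrated with a small numerical experiment (the sample values below are arbitrary stand-ins for the traces $a_{i,q}$, not data from any actual curve): if the power sums of the $F_{i,q}$ match those of the $a_{i,q}$ up to the signs in system (\ref{system1}), then Newton's identities force the elementary symmetric functions to match up to sign as well, so the $F_{i,q}$ and $-a_{i,q}$ are roots of the same polynomial.

```python
from itertools import combinations
from math import isclose, prod

def power_sums(xs, m):
    return [sum(x ** k for x in xs) for k in range(1, m + 1)]

def elementary_from_power(p):
    """Newton's identities: k e_k = sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i."""
    e = [1.0]
    for k in range(1, len(p) + 1):
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1)) / k)
    return e[1:]

a = [2.0, -1.0, 3.0, 0.5]            # stand-in traces a_{i,q}  (here l - 1 = 4)
F = [-x for x in a]                  # conjectured F_{i,q} = -a_{i,q}

p_a, p_F = power_sums(a, 4), power_sums(F, 4)
# system (system1): sum_i F_i^n = (-1)^n sum_i a_i^n
assert all(isclose(p_F[n], (-1) ** (n + 1) * p_a[n]) for n in range(4))

# Newton's identities then give e_k(F) = (-1)^k e_k(a): same polynomial up to sign
e_a, e_F = elementary_from_power(p_a), elementary_from_power(p_F)
assert all(isclose(e_F[k], (-1) ** (k + 1) * e_a[k]) for k in range(4))

# cross-check e_k(F) against the direct definition
direct = [sum(prod(c) for c in combinations(F, k)) for k in range(1, 5)]
assert all(isclose(direct[k], e_F[k]) for k in range(4))
```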
\begin{remark}
Notice that it is enough to prove equation (\ref{relation_powers_hpgf}) for prime values of $n$, i.e., only for $1\leq n\leq l-1$ with $n$ prime. Otherwise, if $n=mr$ then
$\ensuremath{\mathbb{F}}_{q^{n}}| \ensuremath{\mathbb{F}}_{q^{m}}| \ensuremath{\mathbb{F}}_{q}$ is a tower of extensions, and we can use the relation for these extensions of lower degree.
\end{remark}
\vspace{0.3cm}
As we have seen above, proving equation (\ref{relation_powers_hpgf}) for prime values of $n$ would be enough to prove Conjecture \ref{conject} in the general case. However, equation (\ref{relation_powers_hpgf}) gets complicated as $n$ grows, so it would be helpful if this equation were needed for even fewer values of $n$ in order to prove the conjecture. This might be possible; in fact, this is what we did to prove the cases $l=3$ and $l=5$ in section \ref{conjecture}. Hence, looking at the proofs in the previous section, we see that
\begin{prop}\label{relation_half_terms}
If the $L$-polynomial of the smooth projective curve of genus $l-1$ with affine model $\mathcal{C}_{z}:y^l=t(1-t)^{l-1}(1-zt)$ is a perfect square, and equation (\ref{relation_powers_hpgf}) holds for all primes $n$ such that $1\leq n\leq (l-1)/2$, then Conjecture \ref{conject} holds.
\begin{proof}
Recall that
$$L(\mathcal{C}_{z}/\ensuremath{\mathbb{F}}_{q};T)=\prod_{i=1}^{l-1} (1-a_{i,q}T+qT^2).$$
Hence, if the $L$-polynomial is a perfect square, we have
$$L(\mathcal{C}_{z}/\ensuremath{\mathbb{F}}_{q};T)=\prod_{i=1}^{(l-1)/2} (1-a_{i,q}T+qT^2)^2$$
therefore, after rearranging terms, we have $a_{i,q}=a_{l-i,q}$, for all $i=1,\cdots,l-1$.
\noindent On the other hand, recall that by Corollary \ref{conjugate_hpgf} in section \ref{preliminaries} the hypergeometric functions $F_{i,q}$ come in pairs, i.e., $F_{i,q}=F_{l-i,q}$ for $i=1,\cdots, l-1$. Hence, system (\ref{system1}) gets reduced to half of it, having only $(l-1)/2$ unknowns. Then, it is enough to prove relation (\ref{relation_powers_hpgf}) only for primes $n$ up to $(l-1)/2$ in order to prove Conjecture \ref{conject}.
\end{proof}
\end{prop}
Now, the question is how we can determine whether the $L$-polynomial of $\mathcal{C}_{z}$ over $\ensuremath{\mathbb{F}}_{q}$ is a perfect square. One possible way is to use an argument similar to the one given for the cases $l=3$ and $l=5$. First, notice that we have the following result, analogous to Lemma \ref{curve_biratl}.
\begin{teo}\label{birational_curve_general_case}
The curve $\mathcal{C}_{z}: y^l=t(1-t)^{l-1}(1-zt)$ is birationally equivalent to
\begin{equation}\label{curve_bir_equiv_general}
\mathcal{C}: Y^2=X^{2l}+2(1-2z)X^l+1.
\end{equation}
\begin{proof}
The proof is analogous to the proof of Lemma \ref{curve_biratl}.
\end{proof}
\end{teo}
Also, analogous to Lemma \ref{curve_no_odd_terms}, by considering the fractional linear transformation
$$X\to \frac{X+1}{X-1}$$
$$Y \to \frac{Y}{(X-1)^l}$$
we see that the curve (\ref{curve_bir_equiv_general}) is equivalent to a curve of the form
$$Y^2= c_{l}X^{2l}+c_{l-1}X^{2(l-1)}+\cdots+c_{1}X^2+c_0$$
with no terms of odd degree in $X$, where the coefficients $c_{i}$ are polynomials in $z$. Then, as in the previous section, we can conclude that the jacobian of $\mathcal{C}_{z}$ is isogenous to the product of the jacobians of two curves of genus $(l-1)/2$, call them $\mathcal{H}_{1,z}$ and $\mathcal{H}_{2,z}$.
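The vanishing of the odd-degree terms can be confirmed directly: clearing denominators, the substitution turns $Y^2=X^{2l}+2(1-2z)X^l+1$ into $Y^2=(X+1)^{2l}+2(1-2z)(X^2-1)^l+(X-1)^{2l}$, and the odd coefficients cancel. A quick check (illustrative; the expansion is over $\mathbb{Z}$ and hence valid modulo any prime):

```python
from math import comb

def transformed_rhs(l, z):
    """Integer coefficients of (X+1)^{2l} + 2(1-2z)(X^2-1)^l + (X-1)^{2l},
    the right-hand side obtained from Y^2 = X^{2l} + 2(1-2z)X^l + 1 under
    X -> (X+1)/(X-1), Y -> Y/(X-1)^l after clearing denominators."""
    # (X+1)^{2l} + (X-1)^{2l}: the coefficient of X^k is C(2l,k)(1 + (-1)^k)
    c = [comb(2 * l, k) * (1 + (-1) ** k) for k in range(2 * l + 1)]
    # 2(1-2z)(X^2-1)^l contributes only to even degrees
    for j in range(l + 1):
        c[2 * j] += 2 * (1 - 2 * z) * comb(l, j) * (-1) ** (l - j)
    return c

for l, z in [(3, 2), (5, 3), (7, 5)]:
    coeffs = transformed_rhs(l, z)
    assert all(coeffs[k] == 0 for k in range(1, 2 * l, 2))   # no odd-degree terms
```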
Therefore, by Proposition \ref{relation_half_terms}, we have
\begin{teo}
Let $q \equiv 1 \pmod{l}$. If $\#\mathcal{H}_{1,z}(\ensuremath{\mathbb{F}}_{q^i})=\#\mathcal{H}_{2,z}(\ensuremath{\mathbb{F}}_{q^i})$ for all $i=1,\ldots, (l-1)/2$, and equation (\ref{relation_powers_hpgf}) holds for all primes $n$ such that $1\leq n\leq (l-1)/2$, then Conjecture \ref{conject} holds.
\begin{proof}
Notice that the fact that $\#\mathcal{H}_{1,z}(\ensuremath{\mathbb{F}}_{q^i})=\#\mathcal{H}_{2,z}(\ensuremath{\mathbb{F}}_{q^i})$ for all $i=1,\ldots, (l-1)/2$ implies that the curves $\mathcal{H}_{1,z}$ and $\mathcal{H}_{2,z}$ have the same L-polynomial over $\ensuremath{\mathbb{F}}_{q}$. Then, as we mentioned above, the system (\ref{system1}) gets reduced to half of it, having only $(l-1)/2$ unknowns. The rest of the proof follows from Proposition \ref{relation_half_terms}.
\end{proof}
\begin{remark}
Notice that, if Proposition \ref{relation_hypergeometric_functions} holds (i.e. Conjecture \ref{conject} is true over $\ensuremath{\mathbb{F}}_{q}$), using Lemma \ref{Dickson} we can get a result similar to Conjecture \ref{conject} over $\ensuremath{\mathbb{F}}_{q^n}$, for $n\in \ensuremath{\mathbb{N}}$.
\end{remark}
\end{teo}
\vspace{0.4cm}
\section{Acknowledgments}
\vspace{0.3cm}
The author would like to thank her advisor Matthew Papanikolas for suggesting this problem and for his advice and support during the preparation of this paper.
\vspace{0.4cm}
\section{Introduction}
The anomalous magnetic moment of the muon $a_\mu = (g - 2)_\mu / 2$ is one of the most precisely measured physical quantities.
Its current value from the Brookhaven National Laboratory E821 experiment is \cite{BNLg-2, BNLg-2_develop}
\begin{equation}
a_\mu(\mbox{exp}) = (11~659~208 \pm 6) \times 10^{-10},
\label{eq:a_exp}
\end{equation}
which is a $2.4 \sigma$ deviation from the Standard Model (SM) prediction
\begin{equation}
\Delta a_\mu \equiv a_\mu(\mbox{exp}) - a_\mu(\mbox{SM}) = (23.9 \pm 10.0) \times 10^{-10}
\label{eq:Delta_a}
\end{equation}
when the hadronic vacuum polarization information is taken directly from the annihilation of $e^+ e^-$ to hadrons \cite{hadronic_from_ee} measured at CMD-2 \cite{CMD-2}.
The uncertainties involved in Eq. (\ref{eq:Delta_a}) are $7.2 \times 10^{-10}$ from the leading-order hadronic contribution \cite{hadLO}, $3.5 \times 10^{-10}$ from the hadronic light-by-light scattering \cite{lbl}, and $6 \times 10^{-10}$ from the $a_\mu$ experiment.
The indirect hadronic information from the hadronic $\tau$ decay gives a higher SM value that does not indicate a significant discrepancy with the SM (only a $0.9 \sigma$ deviation)\footnote{For a recent review of the various SM predictions and the $a_\mu$ discrepancies, see Ref. \cite{Passera}.}.
Recently released KLOE data \cite{KLOE} show an overall agreement with the CMD-2 data \cite{CMD-2}, confirming that there is a discrepancy between the hadronic contributions from the $e^+ e^-$ data and the $\tau$ data obtained from ALEPH, CLEO and OPAL \cite{Hocker}.
New physics is expected to exist at the TeV-scale to resolve various theoretical problems, including Higgs mass stabilization, and new physics could give a significant contribution to $a_\mu$ to explain the above deviation \cite{NewPhysics}.
There have been extensive studies of $a_\mu$ in supersymmetric (SUSY) models \cite{susy_amu}, which show that supersymmetry can naturally explain the deviation of Eq. (\ref{eq:Delta_a}).
The $a_\mu$ data constrains the SUSY parameters, including the sign of $\mu$ \cite{signmu} and upper limits on relevant scalar and fermion superpartner masses \cite{sleptonmass}.
In the minimal supergravity model (mSUGRA) or the Minimal Supersymmetric Standard Model (MSSM), the dominant additional contribution to $a_\mu$ comes from the first-order radiative corrections of the chargino-sneutrino and the neutralino-smuon loops;
it is
\begin{equation}
\Delta a_\mu (\mbox{SUSY}) \sim 13 \times 10^{-10} \frac{\tan\beta~ \mbox{sign}(\mu)}{(M_{\mbox{\tiny SUSY}} / 100 \mbox{~GeV})^2}
\label{eq:Deltaamu}
\end{equation}
in the limit that all the supersymmetric masses are degenerate at $M_{\mbox{\tiny SUSY}}$ \cite{moroi}.
The 2-loop corrections involve sfermion subloops or chargino/neutralino subloops and are at about the few percent level, although full calculations are not yet complete \cite{2loop}.
The discrepancy in Eq. (\ref{eq:Delta_a}) shows that a supersymmetry solution can be found if sign$(\mu) > 0$ and $M_{\mbox{\tiny SUSY}} \lsim 700$ GeV for $\tan\beta \lsim 50$, in the limit that supersymmetric masses are degenerate.
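A rough numerical reading of this bound can be sketched as follows (illustrative only; it uses the degenerate-mass estimate of Eq. (\ref{eq:Deltaamu}) and the $1\sigma$ lower edge of the deviation in Eq. (\ref{eq:Delta_a})).

```python
import math

def delta_a_susy(tan_beta, m_susy_gev, sign_mu=1):
    """Degenerate-mass SUSY contribution of Eq. (eq:Deltaamu), in units of 1e-10."""
    return 13.0 * tan_beta * sign_mu / (m_susy_gev / 100.0) ** 2

# 1-sigma lower edge of the deviation in Eq. (eq:Delta_a): (23.9 - 10.0) x 1e-10
lower_edge = 23.9 - 10.0
# largest degenerate mass that still reaches the lower edge at tan(beta) = 50
m_max = 100.0 * math.sqrt(delta_a_susy(50, 100) / lower_edge)
print(round(m_max))   # 684 GeV, consistent with M_SUSY <~ 700 GeV
```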
The deviation of $a_\mu$ similarly gives constraints on the parameters of other new physics models including the mass of a second generation leptoquark \cite{Cheung:prd.64.033001}, the mass of the heavy photon in the little Higgs model \cite{Park:hep-ph/0306112} and the compactification scale of an extra dimension \cite{extraDimension}.
Given that $a_\mu$ has been a powerful tool for constraining new physics models, due to the accuracy of its measurement and of the SM evaluation, it is interesting to pursue what $a_\mu$ can tell us about recently emerging models.
The recent idea of split supersymmetry assumes large masses (e.g., $10^{10}$ GeV) for scalar superpartners (sleptons, squarks) while keeping fermionic superpartners (gauginos, higgsinos) at the TeV-scale \cite{splitSUSY}.
The large masses of the smuon and sneutrino would make the chargino-sneutrino and neutralino-smuon loop contributions to $a_\mu$ negligible; the split supersymmetry model would be rejected if the deviation of $a_\mu$ is in fact real.
Another interesting TeV-scale new physics model is the supersymmetric $U(1)'$ model \cite{onesinglet, smodel}.
It has a structure similar to the MSSM but has an extra $U(1)$ gauge symmetry ($U(1)'$), which is spontaneously broken at the TeV-scale by one or multiple Higgs singlets.
This model can provide natural solutions to some of the difficulties the MSSM faces, including the explanation of the electroweak scale of the $\mu$ parameter ($\mu$-problem \cite{muproblem}) and the lack\footnote{The required strong first-order phase transition for EWBG is allowed in the MSSM only if the light Higgs mass is only slightly above the LEP experimental bound and the light stop mass is smaller than the top mass \cite{EWBG_MSSM}.} of a sufficiently strong first-order phase transition for electroweak baryogenesis (EWBG) \cite{EWBG_U1}.
The Next-to-Minimal Supersymmetric Standard Model (NMSSM) \cite{NMSSM} can also resolve the $\mu$-problem but its discrete $\bf Z_3$ symmetry invokes a cosmological domain wall problem \cite{domainwall}; a variant which avoids this problem is discussed in Ref. \cite{nMSSM, Wagner}.
Besides the bottom-up reasons to introduce an additional $U(1)$ symmetry to supplement the MSSM, many new physics models, including grand unified theories (GUTs), extra dimensions \cite{extradim}, superstrings \cite{superstring}, little Higgs \cite{littleHiggs}, dynamical symmetry breaking \cite{dynamical} and Stueckelberg mechanism models \cite{Stueckelberg} predict extra $U(1)$ symmetries or gauge bosons.
The newly introduced particles such as the $U(1)'$ gauge boson ($Z'$) and the $U(1)'$ breaking Higgs singlet ($S$) and their superpartners $Z'$-ino ($\tilde Z'$) and singlino ($\tilde S$), alter the Higgs and neutralino spectra.
The modified Higgs spectrum \cite{Higgs_U1} and the neutralino relic density \cite{relic_U1} have been recently studied in the $U(1)'$ model with a secluded $U(1)'$ symmetry breaking sector \cite{smodel}, and the difference in the predictions from the MSSM detailed.
There have been studies of the muon anomalous magnetic moment in models with additional gauge groups or $E_6$ GUT \cite{Leveille, E6_g-2, exotic_g-2}.
They mostly concentrated on the loops including the $Z'$ or the exotic quarks in the model and their superpartners, and constraints were obtained on their masses or couplings.
To explain the $a_\mu$ deviation, the $Z'$ mass would typically need to be smaller than the experimental lower limits of $M_{Z'} \gsim 500 \sim 800$ GeV from direct searches at the Tevatron \cite{CDF, PDG}.
In this paper we quantitatively study a supersymmetric $U(1)'$ model with a secluded $U(1)'$ symmetry breaking sector (the $S$-model) \cite{smodel} to see how the extended neutralino sector contribution to $a_\mu$ is different from the MSSM prediction.
The superpotential in this model is
\begin{equation}
W = h_s S H_1 H_2 + \lambda_s S_1 S_2 S_3,
\end{equation}
where $h_s$ and $\lambda_s$ are dimensionless parameters.
The $\mu$-problem is solved by replacing the $\mu$ term by an effective $\mu$ parameter
\begin{equation}
\mu_{\rm eff} = h_s s / \sqrt{2}
\end{equation}
where $s / \sqrt{2}$ is the vacuum expectation value (VEV) of the Higgs singlet $S$, which is at the electroweak or TeV scale.
The $Z'$ has a large mass generated by the Higgs singlet fields $S_{1,2,3}$, which acquire large (TeV scale) VEVs for small $\lambda_s$ because of an almost $F$ and $D$ flat direction.
The extra Higgs singlets allow $\mu_{\rm eff}$ to be at the electroweak scale while keeping the $Z'$ heavier than the experimental limit.
The electroweak symmetry breaking is driven by electroweak scale trilinear soft terms, leading to small values for $\tan\beta \equiv v_2 / v_1$ ($\tan\beta \sim 1$ to $3$), while solutions without unwanted global minima at $\left< H_i^0 \right> = 0$ typically have $\left< S \right> \lsim 1.5 \left< H_i^0 \right>$ \cite{smodel}.
A small $\tan\beta$ would be problematic in the MSSM; for example, the light Higgs mass needs large $\tan\beta$ (for reasonable superpartner masses) to satisfy the LEP bound of $m_h > 115$ GeV.
The mixing of the Higgs doublets and singlets in the $S$-model, however, lowers the LEP $m_h$ bound significantly \cite{Higgs_U1}.
Furthermore, the maximum theoretical value for $m_h$ for a given $\tan\beta$ is increased, both by $F$-terms (similar to effects in the NMSSM \cite{NMSSM, Higgs_U1}) and by $D$-terms \cite{Dterms}.
As a result, $\tan\beta \sim 1$ to $3$ is experimentally allowed in the $S$-model.
The neutralinos in the $S$-model have extra components of the $Z'$-ino and singlinos.
The lightest neutralino of the $U(1)'$ model is often very light because of the mixing pattern of the extra components.
Eq. (\ref{eq:Deltaamu}) suggests that $a_\mu$ in this model may be significantly different from that of the MSSM.
We numerically investigate the differences and obtain constraints on $\tan\beta$ and the relevant soft breaking terms.
We do not include $Z'$ loops since the $Z'$ mass bounds from direct searches at colliders imply negligible effects.
In Section \ref{section:formalism} we describe the formalism to compute the chargino and neutralino contributions in the $S$-model.
In Section \ref{section:analysis}, we give the numerical analysis of $a_\mu$ and make comparisons to the MSSM results, before the conclusion in Section \ref{section:conclusion}.
\section{Supersymmetric Contributions to $a_\mu$ with $U(1)'$ symmetry}
\label{section:formalism}
In this section we describe the neutralino and chargino contributions to $a_\mu$ in the $S$-model.
The formalism is similar to the MSSM with a straightforward extension.
We assume that the VEVs of the $S_{1,2,3}$ are large compared to the VEVs of other Higgs fields ($H_1^0$, $H_2^0$ and $S$) and that their singlino components essentially decouple \cite{smodel, Higgs_U1}.
\subsection{Neutralino Contribution}
Ignoring the $\tilde S_{1,2,3}$ singlinos, the neutralino mass matrix in the basis $\left\{\tilde{B}\right.$, $\tilde{W}_3$, $\tilde{H}_1^0$, $\tilde{H}_2^0$, $\tilde{S}$, $\left.\tilde{Z'} \right\}$ is given by
\begin{eqnarray}
M_{\chi^0}=
\left( \matrix{M_1 & 0 & - g_1 v_1 / 2 & g_1 v_2 / 2 & 0 & 0 \cr
0 & M_2 & g_2 v_1 / 2 & - g_2 v_2 / 2 & 0 & 0 \cr
- g_1 v_1 / 2 & g_2 v_1 / 2 & 0 & - h_s s / \sqrt{2} & - h_s v_2 / \sqrt{2} & g_{Z'} Q'(H_1^0) v_1 \cr
g_1 v_2 / 2 & - g_2 v_2 / 2 & - h_s s / \sqrt{2} & 0 & - h_s v_1 / \sqrt{2} & g_{Z'} Q'(H_2^0) v_2 \cr
0 & 0 & - h_s v_2 / \sqrt{2} & - h_s v_1 / \sqrt{2} & 0 & g_{Z'} Q'(S) s \cr
0 & 0 & g_{Z'} Q'(H_1^0) v_1 & g_{Z'} Q'(H_2^0) v_2 & g_{Z'} Q'(S) s & M_{1'}} \right)
\label{eqn:neutralinomassmatrix}
\end{eqnarray}
where $e = g_1\cos\theta_W = g_2\sin\theta_W$;
$g_{Z'}$ is the $U(1)'$ gauge coupling constant, for which we take the GUT motivated value of $g_{Z'} = \sqrt{5/3} g_1$.
$Q'$ is the $U(1)'$ charge, and the anomaly-free charge assignments based on $E_6$ GUT can be found in Ref. \cite{E6model}.
The VEVs of the Higgs doublets are $\langle H_i^0 \rangle \equiv
\frac{v_i}{\sqrt{2}}$ with $\sqrt{v_1^2 + v_2^2} \simeq 246$ GeV.
The diagonalization of the mass matrix can be accomplished using a unitary matrix $N$,
\begin{equation}
N^T M_{\chi^0} N = Diag(M_{\chi^0_1}, M_{\chi^0_2}, M_{\chi^0_3}, M_{\chi^0_4}, M_{\chi^0_5}, M_{\chi^0_6}).
\end{equation}
The first $4\times4$ ($5\times5$) submatrix of Eq. (\ref{eqn:neutralinomassmatrix}) is the MSSM (NMSSM) limit.
Due to the singlino addition, there exists a kind of see-saw mechanism that makes the lightest neutralino very light \cite{lightneutralino, Wagner}, less than $100$ GeV in the case of $M_{1'} \gg M_1$ (where the $\tilde{Z'}$ practically decouples and the mass matrix becomes the NMSSM limit) \cite{relic_U1}.
We will consider both this limit and that in which the gaugino mass unification relation, $M_{1'} = M_1 = \frac{5}{3} \frac{g_1^2}{g_2^2} M_2 \simeq 0.5 M_2$, is satisfied.
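To make the see-saw statement concrete, the following sketch (ours; the gauge couplings $g_1 \simeq 0.36$, $g_2 \simeq 0.65$ and all parameter values are illustrative assumptions) diagonalizes the $5\times5$ NMSSM-limit submatrix of Eq. (\ref{eqn:neutralinomassmatrix}) and exhibits a light, mostly-singlino eigenstate.

```python
import numpy as np

def neutralino_masses_nmssm_limit(M1, M2, h_s, s, tan_beta,
                                  g1=0.36, g2=0.65, v=246.0):
    """NMSSM-limit (5x5) neutralino mass matrix in the basis
    {B~, W~3, H~1, H~2, S~}; since the matrix is real and symmetric,
    the physical masses are the absolute values of its eigenvalues."""
    beta = np.arctan(tan_beta)
    v1, v2 = v * np.cos(beta), v * np.sin(beta)
    mu_eff = h_s * s / np.sqrt(2)
    a, b = h_s * v2 / np.sqrt(2), h_s * v1 / np.sqrt(2)
    M = np.array([
        [M1,        0,         -g1*v1/2,  g1*v2/2,  0],
        [0,         M2,         g2*v1/2, -g2*v2/2,  0],
        [-g1*v1/2,  g2*v1/2,    0,       -mu_eff,  -a],
        [ g1*v2/2, -g2*v2/2,   -mu_eff,   0,       -b],
        [0,         0,         -a,       -b,        0],
    ])
    return np.sort(np.abs(np.linalg.eigh(M)[0])), M
```

With $M_1 = 100$ GeV, $h_s = 0.75$, and $s$ chosen so that $\mu_{\rm eff} \simeq 300$ GeV, the lightest state comes out near the see-saw estimate $2 \mu_{\rm eff} a b / (\mu_{\rm eff}^2 + a^2 + b^2) \sim 30$--$40$ GeV, i.e., well below $100$ GeV.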
\begin{figure}[t]
\begin{minipage}[b]{1\textwidth}
\centering\leavevmode
\epsfbox{g-2_loop.ps}
\end{minipage}
\caption{Supersymmetric contributions to $a_\mu$ involving charginos and neutralinos.}
\label{fig:diagram}
\end{figure}
The smuon mass-squared matrix is given by
\begin{eqnarray}
M_{\tilde\mu}^2 =
\left( \matrix{M_{LL}^2 & M_{LR}^2 \cr
M_{RL}^2 & M_{RR}^2} \right).
\label{eq:smuonmass}
\end{eqnarray}
\begin{eqnarray}
M_{LL}^2 &=& m_{\tilde L}^2 + m_\mu^2 + (T_{3 \mu} - Q_\mu \sin^2\theta_W) M_Z^2 \cos2\beta \\
M_{RR}^2 &=& m_{\tilde E}^2 + m_\mu^2 + (Q_\mu \sin^2\theta_W) M_Z^2 \cos2\beta \\
M_{LR}^2 &=& m_\mu (A_\mu^* - \mu_{\rm eff} \tan\beta) \label{eq:MLR2} \\
M_{RL}^2 &=& (M_{LR}^2)^* \label{eq:MRL2}
\end{eqnarray}
Its diagonalization can be accomplished through the unitary matrix $D$ as
\begin{equation}
D^\dagger M_{\tilde\mu}^2 D = Diag(M_{\tilde\mu_1}^2,M_{\tilde\mu_2}^2).
\end{equation}
The LEP2 SUSY Working Group analysis found $m_{\tilde \mu_R} \gsim 95$ GeV from $\tilde \mu \to \mu \chi^0_{1,2}$ searches \cite{LEPSUSY}.
Since $m_\mu$ is small compared to supersymmetric parameters, the off-diagonal terms of Eq. (\ref{eq:MLR2}) and Eq. (\ref{eq:MRL2}) are small and hence the mixing is small.
For $\tan\beta \gsim 1$, both $m_{\tilde L}^2$, $m_{\tilde E}^2 \gsim (95~ \mbox{GeV})^2$ are required to give $m_{\tilde \mu_1} \gsim 95$ GeV from the tree-level mass matrix of Eq. (\ref{eq:smuonmass}), while for $\tan\beta \lsim 1$, somewhat larger values of these parameters are required.
We require both $m_{\tilde L}^2$ and $m_{\tilde E}^2$ to be larger than $(100~ \mbox{GeV})^2$ and take $A_\mu = 0$ in our numerical analysis.
The neutralino contribution to $a_\mu$ is then \cite{susy_amu}
\begin{equation}
a_\mu(\chi^0) = a_\mu^1(\chi^0) + a_\mu^2(\chi^0)
\end{equation}
where
\begin{eqnarray}
a_\mu^1(\chi^0) &=& \sum_{j=1}^6 \sum_{k=1}^{2} \frac{m_\mu}{8 \pi^2 M_{\chi_j^0}} Re[L_{jk} R_{jk}^*] F_1(\frac{M_{\tilde{\mu}_k}^2}{M_{\chi_j^0}^2}) \\
a_\mu^2(\chi^0) &=& \sum_{j=1}^{6} \sum_{k=1}^{2} \frac{m_\mu^2}{16 \pi^2 M_{\chi_j^0}^2} \left( |L_{jk}|^2 + |R_{jk}|^2 \right) F_2(\frac{M_{\tilde{\mu}_k}^2}{M_{\chi_j^0}^2})
\end{eqnarray}
with the following $\mu$-$\tilde\mu$-$\chi^0$ chiral coupling:
\begin{eqnarray}
L_{j k} &=& \frac{1}{\sqrt{2}} \left( g_1 Y_{\mu_L} N_{1 j}^* - g_2 N_{2 j}^* + g_{Z'} Q'(\mu_L) N_{6 j}^* \right) D_{1 k} + \frac{\sqrt{2} m_\mu}{v_1} N_{3 j}^* D_{2 k} \\
R_{j k} &=& \frac{1}{\sqrt{2}} \left( g_1 Y_{\mu_R} N_{1 j} + g_{Z'} Q'(\mu_R) N_{6 j} \right) D_{2 k} + \frac{\sqrt{2} m_\mu}{v_1} N_{3 j} D_{1 k}.
\end{eqnarray}
$Y_{\mu_L} = -1$, $Y_{\mu_R} = 2$ are hypercharges and the terms with $g_{Z'}$ coupling are the additional contributions from the $U(1)'$.
Since $m_\mu$ is small compared to the supersymmetric masses, we can approximate it as zero in the loop integral functions to obtain
\begin{eqnarray}
F_1 (x) &=& \frac{1}{2} \frac{1}{(x-1)^3} (1 - x^2 + 2x\ln x) \\
F_2 (x) &=& \frac{1}{6} \frac{1}{(x-1)^4} (-x^3 + 6x^2 - 3x - 2 - 6x\ln x).
\end{eqnarray}
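As a cross-check (our snippet, not part of the paper), the loop integral functions can be coded directly; in our numerical evaluation both are smooth at the degenerate point $x = 1$, with $F_1 \to -1/6$ there.

```python
import numpy as np

def F1(x):
    """Neutralino loop function F1(x) = (1 - x^2 + 2 x ln x) / (2 (x-1)^3)."""
    return 0.5 * (1 - x**2 + 2 * x * np.log(x)) / (x - 1) ** 3

def F2(x):
    """Neutralino loop function
    F2(x) = (-x^3 + 6 x^2 - 3 x - 2 - 6 x ln x) / (6 (x-1)^4)."""
    return (-x**3 + 6 * x**2 - 3 * x - 2 - 6 * x * np.log(x)) / (6 * (x - 1) ** 4)
```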
\subsection{Chargino Contribution}
The chargino mass matrix is given by
\begin{eqnarray}
M_{\chi^\pm} =
\left( \matrix{M_2 & \sqrt{2} M_W \sin\beta \cr
\sqrt{2} M_W \cos\beta & h_s s / \sqrt{2}} \right).
\label{eq:charginomass}
\end{eqnarray}
It is essentially the same as in the MSSM except that $\mu$ is replaced by $\mu_{\rm eff} = h_s s / \sqrt{2}$.
$M_{\chi^\pm}$ can be diagonalized by two unitary matrices $U$ and $V$ as
\begin{equation}
U^* M_{\chi^\pm} V^{-1} = Diag(M_{\chi^\pm_1},M_{\chi^\pm_2}).
\end{equation}
The LEP light chargino mass limit $M_{\chi^-} \gsim 104$ GeV \cite{charginomass} gives constraints on $M_2$ and $\mu_{\rm eff}$ for a fixed value of $\tan\beta$.
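As a minimal numerical sketch (ours), the biunitary diagonalization of Eq. (\ref{eq:charginomass}) is simply a singular value decomposition, so the physical chargino masses are the singular values of the mass matrix; the parameter values below are illustrative.

```python
import numpy as np

def chargino_masses(M2, mu_eff, tan_beta, MW=80.4):
    """Physical chargino masses: singular values of the 2x2 chargino
    mass matrix; the SVD supplies the two unitary matrices U, V of
    the biunitary diagonalization. Returned in descending order."""
    beta = np.arctan(tan_beta)
    X = np.array([[M2,                              np.sqrt(2) * MW * np.sin(beta)],
                  [np.sqrt(2) * MW * np.cos(beta),  mu_eff]])
    return np.linalg.svd(X, compute_uv=False)
```

For instance, $M_2 = 200$ GeV, $\mu_{\rm eff} = 300$ GeV, $\tan\beta = 2.5$ gives a light chargino near $160$ GeV, safely above the LEP limit; switching off the off-diagonal entries ($M_W \to 0$) recovers the diagonal masses.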
The sneutrino mass-squared is given by
\begin{equation}
M_{\tilde \nu_\mu}^2 = m_{\tilde L}^2 + T_{3 \nu} M_Z^2 \cos 2\beta.
\end{equation}
As in the MSSM calculations, we do not include the right-handed neutrino or its superpartner.
The chargino loop contribution to $a_\mu$ is \cite{susy_amu}
\begin{equation}
a_\mu(\chi^-) = a_\mu^1(\chi^-) + a_\mu^2(\chi^-)
\label{eq:chargino_amu}
\end{equation}
where
\begin{eqnarray}
a_\mu^1(\chi^-) &=& \sum_{j=1}^{2} \sum_{k=1}^{1} \frac{m_\mu}{8 \pi^2 M_{\chi_j^-}} Re[L_{jk} R_{jk}^*] F_3(\frac{M_{\tilde\nu_\mu}^2}{M_{\chi_j^-}^2}) \\
a_\mu^2(\chi^-) &=& - \sum_{j=1}^{2} \sum_{k=1}^{1} \frac{m_\mu^2}{16 \pi^2 M_{\chi_j^-}^2} \left( |L_{jk}|^2 + |R_{jk}|^2 \right) F_4(\frac{M_{\tilde\nu_\mu}^2}{M_{\chi_j^-}^2})
\end{eqnarray}
with the chiral $\mu$-$\tilde\nu_\mu$-$\chi^-$ couplings
\begin{eqnarray}
L_{j1} = \frac{\sqrt{2} m_\mu}{v_1} U_{j2}^*, \qquad R_{j1} = - g_2 V_{j1}.
\end{eqnarray}
The loop integral functions are
\begin{eqnarray}
F_3 (x) &=& -\frac{1}{2} \frac{1}{(x-1)^3} (3x^2 - 4x + 1 - 2x^2\ln x) \\
F_4 (x) &=& -\frac{1}{6} \frac{1}{(x-1)^4} (2x^3 + 3x^2 - 6x + 1 -6x^2\ln x).
\end{eqnarray}
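As with the neutralino case, the chargino loop functions can be checked numerically (our snippet); in our evaluation $F_3 \to 1/3$ at the degenerate point $x = 1$.

```python
import numpy as np

def F3(x):
    """Chargino loop function
    F3(x) = -(3 x^2 - 4 x + 1 - 2 x^2 ln x) / (2 (x-1)^3)."""
    return -0.5 * (3 * x**2 - 4 * x + 1 - 2 * x**2 * np.log(x)) / (x - 1) ** 3

def F4(x):
    """Chargino loop function
    F4(x) = -(2 x^3 + 3 x^2 - 6 x + 1 - 6 x^2 ln x) / (6 (x-1)^4)."""
    return -(2 * x**3 + 3 * x**2 - 6 * x + 1
             - 6 * x**2 * np.log(x)) / (6 * (x - 1) ** 4)
```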
\section{Analysis}
\label{section:analysis}
\begin{figure}[t]
\begin{minipage}[c]{\textwidth}
\centering\leavevmode
\epsfxsize=3.2in
\epsfbox{contour_2models.ps}
\epsfxsize=3.2in
\epsfbox{contour_together.ps}
\end{minipage}
\caption{(a) $\Delta a_\mu$ in the MSSM [filled circles] and the NMSSM limit ($M_{1'} \gg M_2$) of the $S$-model [dark shading] for $\tan\beta = 2.5$ and $m_{\rm smuon} = 100~ \mbox{GeV}$. For the $S$-model, $h_s = 0.75$ and $\eta$-model charges are assumed. The region outside the NMSSM region has $\Delta a_\mu \times 10^{10} < 13.9$, while the island inside the NMSSM region (around $M_2 \sim 100$ GeV) has $\Delta a_\mu \times 10^{10} > 33.9$. The lightly shaded area outlined by solid curves is excluded by the LEP chargino mass limit $M_{\chi^-} > 104$ GeV. The plotted regions have values $13.9 \le \Delta a_\mu \times 10^{10} \le 33.9$ favored by Eq. (\ref{eq:Delta_a}). The models give similar $a_\mu$ results, and all of them allow small $\tan\beta$ when $m_{\rm smuon}$ is small ($\sim 100$ GeV). (b) The solution spaces for the acceptable neutralino relic density \cite{relic_U1} and the $(g-2)_\mu$ deviation are shown together for the same NMSSM limit. The filled squares have the WMAP $3\sigma$ allowed range of $0.09 < \Omega_{\chi^0} h^2 < 0.15$, the open circles have $\Omega_{\chi^0} h^2 < 0.09$ and the crosses have $0.15 < \Omega_{\chi^0} h^2 < 1.0$.}
\label{fig:contour_M2_mu}
\end{figure}
The MSSM can explain the $2.4 \sigma$ deviation between the E821 experiment and the SM prediction for most values of $\tan\beta$.
In this section, we compare predictions from the MSSM with those of the $S$-model.
Figure \ref{fig:contour_M2_mu} (a) shows $\Delta a_\mu$ in the $M_2$-$s$ (also $M_2$-$\mu$) plane for the MSSM [filled circles], the $S$-model and its NMSSM limit [dark shading], with $\tan\beta = 2.5$ and $m_{\rm smuon} \equiv m_{\tilde L} = m_{\tilde E} = 100$ GeV.
The comparison is made for $\mu = \mu_{\rm eff}$.
The corresponding plot for the $S$-model with $M_{1'} = M_1$ is hardly distinguishable from that of the NMSSM limit so it is not shown in the figure.
The plotted regions are the parameter spaces that give values $\Delta a_\mu \times 10^{10} = 13.9 \sim 33.9$ favored by Eq. (\ref{eq:Delta_a}).
The parameter space excluded by the LEP chargino mass limit of $M_{\chi^-} > 104$ GeV is shown as the lightly shaded area outlined by solid curves.
Throughout the following analysis (Figures \ref{fig:amu_tanbMsoft} and \ref{fig:amu_M2}), we do not include the parameter points that violate the LEP chargino mass constraint.
This figure shows that both the MSSM\footnote{It should be emphasized that the MSSM cannot accept this small $\tan\beta$ if we consider other constraints: the light Higgs mass would be too small to be compatible with the LEP Higgs mass bound of $m_h > 115$ GeV.} and the $S$-model (including the NMSSM limit) can explain the $\Delta a_\mu$ data with small $\tan\beta$ while satisfying $m_{\rm smuon} \gsim 100$ GeV from the experimental bound on the scalar muon mass.
The $S$-model gives only a slightly larger area in the parameter space than the MSSM.
For the $S$-model, $h_s = 0.75$, and the $E_6$ motivated $\eta$-model\footnote{The $\eta$-model is a $U(1)'$ model that is produced with a unique set of charge assignments when $E_6$ is broken to a rank-5 group.}
charge assignments are assumed \cite{E6model}.
It is interesting to note that in the $M_{1'} \gg M_2$ case\footnote{The CDM expectations for smaller $M_{1'}$ have not yet been examined.} (NMSSM limit), a sizable part of the $a_\mu$ $2.4 \sigma$ deviation solution area overlaps the solution area which reproduces the observed cold dark matter (CDM) relic density in the same framework and limit \cite{relic_U1} as shown in Figure \ref{fig:contour_M2_mu} (b).
The filled squares have the WMAP $3\sigma$ allowed range of $0.09 < \Omega_{\chi^0} h^2 < 0.15$, the open circles have $\Omega_{\chi^0} h^2 < 0.09$ and the crosses have $0.15 < \Omega_{\chi^0} h^2 < 1.0$.
In the event that the $\Delta a_\mu$ deviation from the SM is real, this CDM result enhances the viability of the model.
The chargino mass allowed by both the $a_\mu$ deviation and the relic density is $104$ GeV $\lsim M_{\chi^\pm} \lsim 220$ GeV, where the lower bound is the present experimental limit.
Since the chargino mass could be only slightly larger than the present LEP limit, a search for the SUSY trilepton signal at the Tevatron Run II will be very interesting \cite{trilepton}.
It should be noted that only the $Z$-pole annihilation channel was considered to show this model could reproduce the acceptable relic density.
There could be a larger solution space when more channels are considered.
\begin{figure}[t]
\begin{minipage}[c]{\textwidth}
\begin{minipage}[b]{.49\textwidth}
\centering\leavevmode
\epsfxsize=3in
\epsfbox{amu_U1tanb.ps}
\end{minipage}
\hfill
\begin{minipage}[b]{.49\textwidth}
\centering\leavevmode
\epsfxsize=3in
\epsfbox{amu_U1Msoft.ps}
\end{minipage}
\end{minipage}
\caption{Maximum $\Delta a_\mu$ versus (a) $\tan\beta$ and (b) $m_{\rm smuon}$ in the $S$-model with $M_{1'} = M_1$.
The MSSM result and the NMSSM limit $(M_{1'} \gg M_2)$ are nearly indistinguishable from the $M_{1'} = M_1$ curve and are not plotted.
$M_2$ is scanned from $100$ GeV to $1000$ GeV, and $\mu$($\mu_{\rm eff}$) from $100$ GeV to $1000$ GeV. $m_{\rm smuon}$ is scanned from $100$ GeV to $500$ GeV for (a), and $\tan\beta$ from $1$ to $3$ for (b).
Horizontal dashed-dot lines are boundaries of the measured deviation $\Delta a_\mu \times 10^{10} = 13.9 \sim 33.9$ of Eq. (\ref{eq:Delta_a}).}
\label{fig:amu_tanbMsoft}
\end{figure}
We now consider the limits on $\tan\beta$ and $m_{\rm smuon}$ that allow the favored range of $\Delta a_\mu$.
Figure \ref{fig:amu_tanbMsoft} (a) shows the maximum $\Delta a_\mu$ as a function of $\tan\beta$ in the $S$-model (with $M_{1'} = M_1$).
The MSSM curve is nearly the same, as is the NMSSM limit ($M_{1'} \gg M_2$) of the $S$-model.
$M_2$ is scanned from $100$ GeV to $1000$ GeV, $\mu$($\mu_{\rm eff}$) from $100$ GeV to $1000$ GeV (negative $\mu$ is nearly irrelevant for producing positive $\Delta a_\mu$), and $m_{\rm smuon}$ from $100$ GeV to $500$ GeV.
This figure demonstrates the almost linear dependence on $\tan\beta$, as in Eq. (\ref{eq:Deltaamu}), and shows that even with small $\tan\beta$, both models are able to produce the favored values of $\Delta a_\mu$.
Smaller $\tan\beta$, however, has less supersymmetric parameter space that explains the deviation.
The maximum values of $\Delta a_\mu$ have $m_{\rm smuon} \sim 100$ GeV (the lowest scan value).
Since the chargino contribution is the same in the MSSM and the $S$-model, the dominant difference in $a_\mu$ would come from the lightest neutralino ($\chi^0_1$) contribution.
The mass and the coupling of $\chi^0_1$ in the $S$-model are similar to those of the MSSM.
In the NMSSM limit ($M_{1'} \gg M_1$), $\chi^0_1$ is often significantly lighter than that of the MSSM but it is mostly singlino-like which couples to the muon only through mixing; $\chi^0_2$ is mostly similar to $\chi^0_1$ of the MSSM.
Figure \ref{fig:amu_tanbMsoft} (b) shows the maximum $\Delta a_\mu$ as a function of $m_{\rm smuon}$ in the $S$-model; the MSSM curve is nearly identical when $\tan\beta$ is scanned only from $1$ to $3$.
The maximum points have $\tan\beta \sim 3$ (the highest scan value).
A large value of $\tan\beta$, as more general $U(1)'$ models allow, would always ensure a sufficiently large maximum $\Delta a_\mu$ for a given $m_{\rm smuon}$.
This figure demonstrates the roughly inverse-squared dependence on $m_{\rm smuon}$ as in Eq. (\ref{eq:Deltaamu}).
It also shows that, for small $\tan\beta$, the scalar muon should be light ($100$ GeV $\lsim M_{\tilde \mu} \lsim 180$ GeV) to explain the $a_\mu$ deviation.
This could in principle lead to a concern for the neutralino cold dark matter candidate, since the scalar muon could be light enough to be the lightest supersymmetric particle (LSP).
A charged LSP would conflict with the observational absence of exotic isotopes.
In the $S$-model, however, the lightest neutralino is usually very light, e.g., $M_{\chi^0_1} \lsim 100$ GeV, while it produces the acceptable range of the relic density for a wide range of the parameter space \cite{relic_U1}.
One can therefore have a light slepton with an even lighter neutralino as the LSP.
A light slepton would be observable at future accelerators such as the CERN Large Hadron Collider (LHC) and the International Linear Collider (ILC); in particular, it could be detected easily at a linear collider with a moderate energy of $500$ GeV.
\begin{figure}[t]
\begin{minipage}[c]{\textwidth}
\begin{minipage}[b]{.49\textwidth}
\centering\leavevmode
\epsfxsize=3in
\epsfbox{amu_MSSMm2.ps}
\end{minipage}
\hfill
\begin{minipage}[b]{.49\textwidth}
\centering\leavevmode
\epsfxsize=3in
\epsfbox{amu_U1m2.ps}
\end{minipage}
\end{minipage}
\caption{Maximum $\Delta a_\mu$ versus $M_2$ in (a) the MSSM and (b) the $S$-model. The dotted curves take $M_1$ as a free parameter while the solid ones follow the gaugino mass unification relation ($M_{1'} = M_1 \simeq 0.5 M_2$). $\mu$($\mu_{\rm eff}$) is scanned from $100$ GeV to $1000$ GeV, $M_1$ (for dotted curves) from $50$ GeV to $1000$ GeV, $\tan\beta$ from $1$ to $3$, and $m_{\rm smuon}$ from $100$ GeV to $500$ GeV. When $M_1$ is a free parameter, we take $M_{1'} \gg M_2$ (NMSSM limit) for the $S$-model (dotted curve).
In both models, the maximum values of $\Delta a_\mu$ occur for $M_1 \sim 50$ GeV when $M_1$ is taken as a free parameter.}
\label{fig:amu_M2}
\end{figure}
Figure \ref{fig:amu_M2} shows the maximum $\Delta a_\mu$ as a function of $M_2$ in both models.
For the dotted curves, we do not impose the gaugino mass unification assumption of $M_{1'} = M_1 \simeq 0.5 M_2$, and we scan $M_1$ from $50$ GeV to $1000$ GeV.
This figure shows that independent $M_1$ can make quite a difference for a fixed $M_2$ when $M_2$ is large.
For Figure \ref{fig:amu_tanbMsoft}, a relaxation of gaugino mass unification would not make much difference since the maximum $\Delta a_\mu$ mostly happens for small $M_2$\footnote{Small $M_2$ results in a small chargino mass in Eq. (\ref{eq:charginomass}), and large $\Delta a_\mu$ in Eq. (\ref{eq:chargino_amu}).}.
In the $S$-model we do not relax $M_{1'}$ as a free parameter for the dotted curve, but rather take the NMSSM limit ($M_{1'} \gg M_2$) and relax only $M_1$.
Unless $M_{1'}$ is very small (smaller than the $M_1$ scan limit of $50$ GeV), it would not increase the maximum $\Delta a_\mu$ significantly.
For a wide range of $M_2$ (and practically for any $M_2$ in the case that $M_1$ is a free parameter), both models can produce the favored $\Delta a_\mu$.
The maximum values of $\Delta a_\mu$ have $\tan\beta \sim 3$, $m_{\rm smuon} \sim 100$ GeV (and $M_1 \sim 50$ GeV for the dotted curves).
\section{Conclusion}
\label{section:conclusion}
Unlike the MSSM, the electroweak symmetry breaking condition in the $S$-model requires small $\tan\beta$.
Moreover, the LEP smuon mass bound requires slepton masses above about $100$ GeV.
On the other hand, the $2.4 \sigma$ deviation of the muon anomalous magnetic moment favors large $\tan\beta$ and small slepton soft terms.
The $\Delta a_\mu$ determination gives the most severe constraint on $\tan\beta$ and slepton masses in the $S$-model; nonetheless, the $S$-model can still explain the deviation while satisfying the chargino and smuon mass limits.
Since the mass of the light smuon is constrained to be less than $180$ GeV, it would be easily observable at next-generation colliders.
It is remarkable that the parameter space that explains the $a_\mu$ deviation has a sizable overlap with the parameter space that produces an acceptable cold dark matter relic density even when only the $Z$-pole annihilation channel is considered.
The common solution space implies an upper bound of $220$ GeV on the lighter chargino mass, though this space would grow when annihilation channels beyond the $Z$-pole are included in the neutralino relic density calculation.
More general $U(1)'$ models that do not require small $\tan\beta$ could explain the $a_\mu$ deviation in a wider range of the parameter space.
Even though the lightest neutralino in the $S$-model is often very light, the difference of the $\Delta a_\mu$ predictions of the MSSM and the $U(1)'$ models for comparable parameters is not large.
This is because the lightest neutralino is mostly singlino-like when it is lighter than that of the MSSM, and it couples to the muon only through mixing; the properties of the other neutralinos are quite similar to those of the MSSM.
The relaxation of gaugino mass unification makes a sizable difference in both models.
The contribution of the $Z'$-loop is suppressed by the large $Z'$ mass, which is constrained by the CDF limit of $M_{Z'} \gsim 500 \sim 800$ GeV, with the limit depending on the model \cite{CDF}.
In the case that the right-handed neutrinos are $U(1)'$ charged and form Dirac particles, more severe constraints of $M_{Z'} \gsim$ multi-TeV are deduced from Big Bang Nucleosynthesis (BBN) \cite{BBN}.
Other possibilities for neutrino mass in these models are discussed in Ref. \cite{nuMass}.
\section*{Acknowledgments}
\vspace*{-1.5ex}
This research was supported in part by the U.S. Department of Energy
under Grants
No.~DE-FG02-95ER40896,
No.~DE-FG02-04ER41305,
No.~DE-FG02-03ER46040,
and
No.~DOE-EY-76-02-3071,
and in part by the Wisconsin Alumni Research Foundation.
\label{sec:intro}
In this paper we study the distributed optimization problem, in which each agent in a network of $n$ agents calculates a decision vector that minimizes a global additive objective function of the form $f(\cdot)=\sum_i f_i(\cdot)$, where $f_i$ denotes the local convex objective function known only to agent $i$. Specifically, each agent maintains a local estimate $x_i$ of the global minimizer
\begin{equation}
x_{\text{opt}} = \argmin_{\theta} \sum_i f_i(\theta)
\end{equation}
which we assume is unique.
The agents reach consensus $x_i = x_{\text{opt}}$ by computing the gradients of their local objective functions $\nabla\mkern-1.5mu f_i(x_i)$ and passing messages along the links of the communication network.
Distributed optimization problems of this form have broad application. For example, a distributed set of servers or sensors could perform a learning task (e.g., classification) using their local data without uploading it to a central server for bandwidth, resiliency, or privacy reasons~\cite{forcangia10}. Swarms of robots can use distributed optimization to plan motions to solve the rendezvous problem \cite{rig08}.
The optimization of a collective cost function in a network setting has seen considerable interest over the last decade \cite{sunvanles20,shiqingwuyin15,lishiyan19,jak19,nedolsshi17,quli18,yuayinzhasay19p1,yuayinzhasay19p2}.
Recently, several authors have adapted methods from control theory to study distributed optimization algorithms as linear systems in feedback with uncertainties constrained by integral quadratic constraints (IQCs) \cite{lesrecpac16,sunhules17,sunvanles20}. These works have made it possible to more easily compare the various known algorithms across general classes of cost functions and graph topologies.
The work \cite{sunvanles20} uses these techniques to describe several recent distributed optimization algorithms within a common framework, then describes a new algorithm within that framework that achieves a superior worst-case convergence rate. However, all of the algorithms considered in \cite{sunvanles20}, including the authors' SVL algorithm, share a common undesirable trait:
to reach the correct solution, their states must start in a particular subspace of the overall global state space and remain on it at every time step.
If for any reason the state trajectories
leave this
subspace (e.g., incorrect initialization, dropped packets,
computation errors, agents leaving the network, changes to objective functions due to continuous data collection), then the system will no longer converge to the minimizer.
Such methods cannot automatically recover from disturbances or other faults that displace their trajectories from this subspace; in other words, they are not \textit{self-healing}.
In this paper, we extend our results from dynamic average consensus estimators \cite{ridfrelyn20,kiavancorfrelynmar19} to design a family of distributed optimization algorithms whose trajectories
need not evolve on a
pre-defined subspace. We call such algorithms \emph{self-healing}.
In practice, this means that our algorithms
can be arbitrarily initialized,
agents can join or leave the network at will, packets can be lost or corrupted, and agents can change their objective functions as necessary, such as when they collect new data. In order to handle the particular case of lost packets, we modify our algorithms with a low-overhead packet loss protocol; this modification is possible because our methods are self-healing.
We refer to distributed optimization algorithms that communicate one or two variables (having the same vector dimension as the decision variable $x_i$) per time step as single- and double-Laplacian methods, respectively. Examples of single-Laplacian methods are SVL and NIDS, while examples of double-Laplacian methods are uEXTRA and DIGing \cite{sunvanles20,lishiyan19,jak19,nedolsshi17,quli18}. Our algorithms are the first self-healing single-Laplacian methods that converge to the exact (rather than an approximate) solution. They achieve self-healing by sacrificing internal stability, a fundamental trade-off for single-Laplacian methods.
In particular, each agent will have an internal state that grows linearly in time in steady state, but because such growth is not exponential it
will
not cause any numerical instabilities
unless run over long time horizons. Double-Laplacian methods can achieve both internal stability and self-healing, but they require twice as much communication per time step
and converge no faster than single-Laplacian methods \cite{sunvanles20,kiavancorfrelynmar19}.
\section{Preliminaries and Main Results}
\subsection{Notation and terminology}
We adopt notation similar to that in \cite{sunvanles20}.
%
Let $\ensuremath{\mathbb{1}}_n$ be the $n$-dimensional column vector of all ones, $I_n$ be the identity matrix in $\ensuremath{\mathbb{R}}^{n \times n}$, and $\Pi_n = \frac{1}{n}\ensuremath{\mathbb{1}} \ensuremath{\mathbb{1}}^\tr$ be the projection matrix onto the vector $\ensuremath{\mathbb{1}}_n$. We drop the subscript $n$ when the size is clear from context. We refer to the one-dimensional linear subspace of $\ensuremath{\mathbb{R}}^n$ spanned by the vector $\ensuremath{\mathbb{1}}_n$ as the \emph{consensus direction} or the \emph{consensus subspace}. We refer to the $(n-1)$-dimensional subspace of $\ensuremath{\mathbb{R}}^n$ associated with the projection matrix $(I_n - \Pi_n)$ as the \emph{disagreement direction} or subspace.
The variable $z$ represents the complex frequency of the $z$-transform.
Subscripts denote the agent index whereas superscripts denote the time index. The symbol $\otimes$ represents the Kronecker product. $A^+$ indicates the Moore-Penrose inverse of $A$. Symmetric quadratic forms $x^\tr\mkern-1.5mu A x$ are written as $[\star]^\tr\mkern-1.5mu A x$
to save space when $x$ is long. The local decision variables are $d$-dimensional and represented as a row vector, i.e., $x_i \in \ensuremath{\mathbb{R}}^{1 \times d}$, and the local gradients are a map $\nabla\mkern-1.5mu f_i : \ensuremath{\mathbb{R}}^{1 \times d} \rightarrow \ensuremath{\mathbb{R}}^{1 \times d}$. The symbol $||\cdot||$ refers to the Euclidean norm of vectors and the spectral norm of matrices.
We model a network of $n$ agents participating in a distributed computation as a
weighted digraph $\mathcal{G} = (\mathcal{V},\mathcal{E})$, where $\mathcal{V} = \{1,...,n\}$ is the set of $n$ nodes (or vertices) and $\mathcal{E}$ is the set of edges such that if $(i,j) \in \mathcal{E}$ then node $i$ can receive information from $j$. We make use of the \textit{weighted graph} \textit{Laplacian} $\ensuremath{\mathcal{L}} \in \ensuremath{\mathbb{R}}^{n \times n}$ associated with $\mathcal{G}$ such that $-\ensuremath{\mathcal{L}}_{ij}$ is the weight on edge $(i,j)\in \mathcal{E}$, $\ensuremath{\mathcal{L}}_{ij}=0$ when $(i,j)\not\in\mathcal{E}$ and $i\neq j$, and the diagonal elements of $\ensuremath{\mathcal{L}}$ are
$\ensuremath{\mathcal{L}}_{ii} = -\sum_{j\neq i} \ensuremath{\mathcal{L}}_{ij}$,
so that $\ensuremath{\mathcal{L}} \ensuremath{\mathbb{1}} = 0$.
We define $\sigma = ||I-\Pi-\ensuremath{\mathcal{L}}||$, which is a parameter related to the edge weights and the graph connectivity.
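As an illustration of these graph quantities, the following sketch builds the Laplacian of a hypothetical 4-node directed ring with weight $1/2$ on each edge (an example of our choosing, not a graph used in this paper) and checks that $\ensuremath{\mathcal{L}}\ensuremath{\mathbb{1}} = 0$, that the graph is weight balanced, and that $\sigma < 1$.

```python
import numpy as np

# Illustrative only: a hypothetical 4-node directed ring with weight 1/2
# on each edge, used to check the graph quantities defined above.
n = 4
L = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n      # node i receives from node j = i+1
    L[i, j] = -0.5       # -L_ij is the weight on edge (i, j)
    L[i, i] = 0.5        # diagonal entry makes the row sum zero

ones = np.ones(n)
Pi = np.outer(ones, ones) / n

assert np.allclose(L @ ones, 0)   # L1 = 0 by construction
assert np.allclose(ones @ L, 0)   # weight balanced (A3)
sigma = np.linalg.norm(np.eye(n) - Pi - L, 2)
assert sigma < 1                  # (A4) holds; here sigma = sqrt(2)/2
```
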
Throughout this work we stack variables and objective functions such that
\[x^k = \begin{bmatrix}
x_1^k \\
\vdots \\
x_n^k
\end{bmatrix} \in \ensuremath{\mathbb{R}}^{n\times d} \;\; \text{and} \;\;
\nabla\mkern-1.5mu F(x^k) = \begin{bmatrix}
\nabla\mkern-1.5mu f_1(x_1^k) \\
\vdots \\
\nabla\mkern-1.5mu f_n(x_n^k)
\end{bmatrix} \in \ensuremath{\mathbb{R}}^{n\times d}.\]
\subsection{Assumptions}
\begin{enumerate}[itemsep=0.25em,label=\textbf{(A\arabic*)},%
align=left,leftmargin=*,series=assumptions]
\item Given $0< m \leq L$, we assume that the local gradients are sector bounded on the interval $(m,L)$, meaning that they satisfy the quadratic inequality
\begin{equation*} \hspace*{-0.2in}
[\star]^\tr
\begin{bmatrix}
-2mLI_d & (L+m)I_d\\
(L+m)I_d & -2I_d
\end{bmatrix}
\begin{bmatrix}
(x_i-x_{\text{opt}})^\tr\\
(\nabla\mkern-1.5mu f_i(x_i)-\nabla\mkern-1.5mu f_i(x_{\text{opt}}))^\tr
\end{bmatrix} \geq 0
\end{equation*}
for all $x_i \in \ensuremath{\mathbb{R}}^{1\times d}$, where $x_{\text{opt}}$ satisfies $\sum_{i=1}^n \nabla\mkern-1.5mu f_i(x_{\text{opt}}) = 0$.
We define the condition ratio as $\kappa = \frac{L}{m}$, which captures the variation in the curvature of the objective function. \label{a:1}
\item The graph $\mathcal{G}$ is strongly connected. \label{a:2}
\item The graph $\mathcal{G}$ is weight balanced, meaning that $\ensuremath{\mathbb{1}}^{\mkern-1.5mu\tr}\mkern-1.5mu \ensuremath{\mathcal{L}} = 0$. \label{a:3}
\item The weights of $\mathcal{G}$ are such that $\sigma = ||I-\Pi-\ensuremath{\mathcal{L}}|| < 1$. \label{a:4}
\end{enumerate}
\begin{remark}
Assumption \ref{a:1} is known as a \textit{sector IQC} (for a more detailed description see \cite{lesrecpac16}) and is satisfied when the local objective functions are $m$-strongly convex with $L$-Lipschitz continuous gradients.
\end{remark}
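To make the sector condition concrete, consider the hypothetical one-dimensional function $f(x) = \log(1+e^x) + \frac{m}{2}x^2$, whose second derivative lies in $(m, m+\frac{1}{4}]$, so its gradient is sector bounded on $(m, L)$ with $L = m + \frac{1}{4}$ for any pair of points. A numerical sketch:

```python
import numpy as np

# f(x) = log(1 + e^x) + (m/2) x^2 has f'' in (m, m + 1/4], so its
# gradient is sector bounded on (m, L) with L = m + 1/4.
m = 0.1
L = m + 0.25
grad = lambda x: 1.0 / (1.0 + np.exp(-x)) + m * x

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-10, 10, size=2)  # y plays the role of x_opt
    dx, dg = x - y, grad(x) - grad(y)
    # the quadratic form from (A1), written out in scalars:
    form = -2*m*L*dx**2 + 2*(L + m)*dx*dg - 2*dg**2
    assert form >= -1e-12
```
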
\begin{remark}
Throughout this paper we assume without loss of generality that the dimension of the local decision and state variables is $d=1$.
\end{remark}
\begin{remark}
Under appropriate conditions on the communications network, the agents can self-balance their weights in a distributed way to satisfy \ref{a:3};
for example, they can use a scalar consensus filter like push-sum (see Algorithm~12 in \cite{haddomcha18}).
\end{remark}
\subsection{Results}
In the following sections we present a parameterized family of distributed, synchronous, discrete-time algorithms to be run on each agent such that, under assumptions \ref{a:1}-\ref{a:4}, we achieve the following:
\begin{description}[itemsep=0.25em]
\item[Accurate convergence:] in the absence of disturbances or other faults, the local estimates $x_i$ converge to the optimizer $x_{\text{opt}}$ with a linear rate.
\item[Self-healing:] the system state trajectories need not evolve on
a pre-defined
subspace and will recover from events such as arbitrary initialization, temporary node failure, computation errors,
or changes in local objectives.
\item[Packet loss protocol:] if agents are permitted a state of memory for each of their neighbors, they can implement a packet loss protocol that allows computations to continue in the event communication is temporarily lost. This extends the self-healing of the network to packet loss in a way that is not possible if the system state trajectories are required to evolve on a pre-defined subspace.
\end{description}
First we present the synthesis and analysis of our algorithm along with its performance relative to existing methods. Then we demonstrate via simulation that our algorithm still converges under high rates of packet loss.
\section{Synthesis of Self-Healing\\ Distributed Optimization Algorithms}
\subsection{Canonical first-order methods}
As a motivation for our algorithms, we use the canonical form first described in \cite{sunvanles19} and later used as the SVL template \cite{sunvanles20}. When the communication graph is constant, many single-Laplacian methods
such as SVL, EXTRA and Exact Diffusion can be described in this form \cite{shiqingwuyin15,yuayinzhasay19p1,yuayinzhasay19p2,sunvanles19,sunvanles20}, which is depicted as a block diagram in Figure~\ref{fig:block}. Algorithms representable by the SVL template can also be expressed as a state space system $G$ in feedback with an uncertain and nonlinear block containing the objective function gradients $\nabla\mkern-1.5mu F(\cdot)$ and the Laplacian $\mathcal{L}$ shown in Figure \ref{fig:feedback}, where
\begin{gather}
G = \left[ \begin{array}{c|c|c}
A & B_u & B_v \\
\hline
C_x & D_{xu} & D_{xv} \\
\hline
C_y & D_{yu} & D_{yv}\end{array}\right] = \left[ \begin{array}{cc|c|c}
1 & \beta & -\alpha & -\gamma \\
0 & 1 & 0 & -1 \\
\hline
1 & 0 & 0 & -\delta \\
\hline
1 & 0 & 0 & 0\end{array}\right] \otimes I_n.
\end{gather}
%
We would like to alert the reader to a small notational difference between our work and \cite{sunvanles20}: in this work, the variable $x$ is the input to the gradients and the variable $y$ is the input to the Laplacian, whereas in \cite{sunvanles20} $y$ is the input to the gradients and $z$ is the input to the Laplacian (we cannot use $z$ here because we already use it as the frequency variable of the $z$-transform).
Algorithms representable by the SVL template, and more broadly all existing first-order methods with a single Laplacian, require that the system trajectories evolve on
a pre-defined
subspace. From our work with average consensus estimators \cite{ridfrelyn20,kiavancorfrelynmar19}, we know that these drawbacks arise from the positional order of the Laplacian and integrator blocks. When the Laplacian feeds into the integrator, the output of the Laplacian cannot move the integrator state along the consensus direction, which leads to an observable but uncontrollable mode. If the integrator state is initialized with an incorrect component in the consensus direction, or is later disturbed in that direction, the estimate of the optimizer will contain an uncorrectable error.
Switching the order of the Laplacian and integrator renders the integrator state controllable but causes it to become inherently unstable because the integrator output in the consensus direction is disconnected from the rest of the system. We exploit this trade-off to develop self-healing distributed optimization algorithms with only a single Laplacian.
\begin{figure}
\centering
\begin{tikzpicture}[node distance=0.6,>=stealth]
\node (sum) [sum] {};
\node (sss) [below left=0.01cm of sum] {$-$};
\node (Int1) [block, right=0.8cm of sum] {${\dfrac{1}{z-1}I_n}$};
\node (X) [right=of Int1] {};
\node (ylabel) [above=0cm of X] {$y^k$};
\node (L) [block, right=of X] {${\mathcal{L}}$};
\node (d) [block, below= 0.2cm of L] {${\delta I_n}$};
\node (sum2) [sum, left=of d] {};
\node (xlabel) [above left=0 cm of sum2] {$x^k$};
\node (sss) [below left=0.01cm of sum2] {$-$};
\node (grad) [block, left=of sum2] {${\alpha \nabla\mkern-1.5mu F(\cdot)}$};
\node (sum3) [sum] at (grad-|sum) {};
\node (au) [above right=0cm of sum3] {$\alpha u^k$};
\node (gam) [block, below=0.2cm of d] {${\gamma I_n}$};
\node (sum4) [sum] at (gam-|sum3) {};
\node (Int2) [block, below=0.2cm of gam] {${\dfrac{1}{z-1} I_n}$};
\node (beta) [block, left=1.3cm of Int2] {${\beta I_n}$};
\draw [link] (sum) -- (Int1);
\draw [link] (Int1) -- (L);
\draw [link] (Int1) -- (Int1-|sum2) -- (sum2);
\node (off1) [right=of d] {};
\node (j1) at (off1|-L) {};
\node (v) [above=0cm of j1] {$v^k$};
\draw [link] (L) -- (L-|j1) -- (j1|-d) -- (d);
\draw [link] (L) -- (L-|j1) -- (j1|-gam) -- (gam);
\draw [link] (L) -- (L-|j1) -- (j1|-Int2) -- (Int2);
\draw [link] (d) -- (sum2);
\draw [link] (sum2) -- (grad);
\draw [link] (grad) -- (sum3);
\draw [link] (sum3) -- (sum);
\draw [link] (gam) -- (sum4);
\draw [link] (sum4) -- (sum3);
\draw [link] (Int2) -- (beta);
\draw [link] (beta) -- (beta-|sum4) -- (sum4);
\end{tikzpicture}
\caption{The SVL template from \cite{sunvanles20} for first-order, single-Laplacian distributed optimization.}
\label{fig:block}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\node (G) [block] {$G$};
\node (NL) [block, below= 0.2cm of G] {${\begin{bmatrix} \nabla\mkern-1.5mu F(\cdot) & 0 \\ 0 & \mathcal{L} \end{bmatrix}}$};
\node (off1) [left=of G] {};
\node (inlabel) [left=0cm of off1] {${\begin{bmatrix}u^k\\v^k\end{bmatrix}}$};
\node (off2) [right=of G] {};
\node (outlabel) [right=0cm of off2] {${\begin{bmatrix}x^k\\y^k\end{bmatrix}}$};
\draw [link] (G) -- (G-|off2) -- (off2|-NL) -- (NL);
\draw [link] (NL) -- (NL-|off1) -- (off1|-G) -- (G);
\end{tikzpicture}
\caption{Distributed optimization algorithms
represented as a feedback interconnection of an LTI system $G$ and an uncertain block containing the gradients and
the graph Laplacian.}
\label{fig:feedback}
\end{figure}
\subsection{Factorization and integrator location}
In the block diagram depicted in Figure \ref{fig:block}, it is unclear how to switch the Laplacian with the bottom integrator in a straightforward way. Instead we factor an integrator out of the $G(z)$ block of Figure \ref{fig:feedback},
\begin{gather}
\label{eq:Gz}
G(z) = \begin{bmatrix}
\dfrac{-\alpha}{z-1} & -\dfrac{\delta z^2 + (\gamma - 2\delta)z + (\beta+\delta-\gamma)}{(z-1)^2}\\[8pt]
\dfrac{-\alpha}{z-1} & -\dfrac{\gamma z + (\beta - \gamma)}{(z-1)^2}
\end{bmatrix}\otimes I_n\\
\label{eq:factor}
= \begin{bmatrix}
\dfrac{-\alpha}{z-1} & -\dfrac{z-1+\zeta}{z-1} \\[8pt]
\dfrac{-\alpha}{z-1} & \dfrac{-\gamma z - (\beta - \gamma)}{(z-1)(\delta z + \eta - \delta)}
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\[8pt]
0 & \dfrac{\delta z + \eta - \delta}{z-1}
\end{bmatrix}\otimes I_n,
\end{gather}
where
\begin{gather}
\eta = \gamma - \delta \zeta \;\;\text{and}\;\;\zeta = \begin{cases}
\dfrac{\beta}{\gamma}, & \delta = 0\\
\dfrac{\gamma - \sqrt{\gamma^2 - 4\beta \delta}}{2\delta}, & \text{otherwise.}
\end{cases}
\end{gather}
%
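The branch structure of $\zeta$ is easy to check numerically; for instance, the parameter values $\beta = 0.5$, $\gamma = 1$, $\delta = 0.5$ used later in our experiments give $\zeta = 1$ and $\eta = 1/2$. A sketch:

```python
import math

def factor_params(beta, gamma, delta):
    """Return (zeta, eta) for the factorization, assuming gamma^2 >= 4*beta*delta."""
    assert gamma**2 >= 4 * beta * delta, "zeros of G_s would be complex"
    if delta == 0:
        zeta = beta / gamma
    else:
        zeta = (gamma - math.sqrt(gamma**2 - 4 * beta * delta)) / (2 * delta)
    return zeta, gamma - delta * zeta

# parameter values used later in the experiments of this paper:
beta, gamma, delta = 0.5, 1.0, 0.5
zeta, eta = factor_params(beta, gamma, delta)
assert (zeta, eta) == (1.0, 0.5)
# zeta is a root of delta*z^2 - gamma*z + beta = 0:
assert abs(delta * zeta**2 - gamma * zeta + beta) < 1e-12
```
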
Swapping the order of the component matrices yields our new family of algorithms (where $G_{\mkern-1.5mu s}$ replaces $G$):
\begin{align}
G_{\mkern-1.5mu s}(z) &=
\begin{bmatrix}
1 & 0 \\[8pt]
0 & \dfrac{\delta z + \eta - \delta}{z-1}
\end{bmatrix}
\begin{bmatrix}
\dfrac{-\alpha}{z-1} & -\dfrac{z-1+\zeta}{z-1} \\[8pt]
\dfrac{-\alpha}{z-1} & \dfrac{-\gamma z - (\beta - \gamma)}{(z-1)(\delta z + \eta - \delta)}
\end{bmatrix}\otimes I_n\notag\\
&=
\begin{bmatrix}
-\alpha \dfrac{1}{z-1} & -\dfrac{z-1+\zeta}{z-1} \\[8pt]
-\alpha \dfrac{\delta z + \eta - \delta}{(z-1)^2} & -\dfrac{\gamma z + \beta - \gamma}{(z-1)^2}
\end{bmatrix}\otimes I_n.
\end{align}%
\begin{figure}
\begin{tikzpicture}
\node (L) [block] {$\ensuremath{\mathcal{L}}$};
\node (int) [block, left=of L] {${\dfrac{\delta z + \eta - \delta}{z-1}}I_n$};
\node (off1) [left=of int] {};
\node (off2) [right=of L] {};
\draw [link] (off1) -- (int);
\draw [link] (int) -- (L);
\draw [link] (L) -- (off2);
\end{tikzpicture}
\caption{The output of the integrator now feeds into the Laplacian, converting an uncontrollable and observable mode in the original SVL template to a controllable and unobservable one.}
\label{fig:int_order}
\end{figure}%
Now the output of the integrator feeds directly into the Laplacian, as depicted in Figure \ref{fig:int_order}. We assume that our parameter choices satisfy
\begin{align}
\gamma^2 &\geq 4\beta\delta
\end{align}
so that the zeros of $G_{\mkern-1.5mu s}$ remain real and thus the system can be implemented with real-valued signals. The corresponding distributed algorithm is described in Algorithm~\ref{alg:1}, where $w_1$ and $w_2$ are the internal states of $G_{\mkern-1.5mu s}$, and the compact state space form is
\begin{equation}
G_{\mkern-1.5mu s} = \left[ \begin{array}{cc|c|c}
1 & 0 & -\alpha & -\zeta \\
1 & 1 & 0 & -1 \\
\hline
1 & 0 & 0 & -1 \\
\hline
\delta & \eta & 0 & 0\end{array}\right] \otimes I_n.
\end{equation}
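As a sanity check, one can verify numerically that this realization reproduces the transfer matrix $G_{\mkern-1.5mu s}(z)$ derived above. The sketch below evaluates both at a sample frequency, using the illustrative parameters $\alpha = 1$, $\beta = 0.5$, $\gamma = 1$, $\delta = 0.5$ (so $\zeta = 1$, $\eta = 0.5$); with $n = 1$ the Kronecker factor is trivial.

```python
import numpy as np

# Spot-check: the compact state-space realization of G_s matches the
# transfer matrix derived above at a sample complex frequency z0.
alpha, beta, gamma, delta = 1.0, 0.5, 1.0, 0.5
zeta = (gamma - np.sqrt(gamma**2 - 4*beta*delta)) / (2*delta)
eta = gamma - delta * zeta

A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[-alpha, -zeta], [0.0, -1.0]])   # columns: inputs u, v
C = np.array([[1.0, 0.0], [delta, eta]])       # rows: outputs x, y
D = np.array([[0.0, -1.0], [0.0, 0.0]])

z0 = 2.0 + 0.5j
G_ss = C @ np.linalg.inv(z0 * np.eye(2) - A) @ B + D
G_tf = np.array([
    [-alpha/(z0 - 1), -(z0 - 1 + zeta)/(z0 - 1)],
    [-alpha*(delta*z0 + eta - delta)/(z0 - 1)**2,
     -(gamma*z0 + beta - gamma)/(z0 - 1)**2],
])
assert np.allclose(G_ss, G_tf)
```
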
\begin{remark}
The factorization in \eqref{eq:factor} is not unique; we chose it because it
leads to a method still having only two internal states per agent.
There may be other useful factorizations.
\end{remark}
\begin{algorithm}[t]
\SetAlgoLined
\LinesNotNumbered
\KwSty{Initialization:} Each agent $i\in\{1,...,n\}$ chooses $w_{1i}^0,w_{2i}^0 \in \ensuremath{\mathbb{R}}^{1 \times d}$ arbitrarily. $\ensuremath{\mathcal{L}} \in \ensuremath{\mathbb{R}}^{n \times n}$ is the graph Laplacian.\\
\For{$k=0,1,2,...$}{
\For{$i\in\{1,...,n\}$}{
\KwSty{Local communication}\\
$y_i^k = \delta w_{1i}^k + \eta w_{2i}^k$\\[2pt]
$v_i^k = \sum_{j=1}^n\mathcal{L}_{ij} y_j^k$\\
\KwSty{Local gradient computation}\\
$x_i^k = w_{1i}^k - v_i^k$\\[2pt]
$u_i^k = \nabla\mkern-1.5mu f_i(x_i^k)$\\
\KwSty{Local state update}\\
$w_{1i}^{k+1} = w_{1i}^k - \alpha u_i^k - \zeta v_i^k$\\[2pt]
$w_{2i}^{k+1} = w_{1i}^k + w_{2i}^k - v_i^k$
}}
\caption{\small Self-Healing Distributed Gradient Descent}
\label{alg:1}
\end{algorithm}
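To illustrate Algorithm~\ref{alg:1} and its self-healing property, the following sketch runs it on a toy problem of our choosing (not the experiment reported later): scalar quadratics $f_i(x) = (x - a_i)^2/2$ on a complete three-node graph with uniform weights $1/3$, so that $\sigma = 0$ and $x_{\text{opt}} = \operatorname{mean}(a)$. The step size $\alpha = 1$ and the parameters $\beta = 0.5$, $\gamma = 1$, $\delta = 0.5$ (hence $\zeta = 1$, $\eta = 0.5$) are illustrative choices. A disturbance injected mid-run is forgotten, while the internal state $w_2$ grows linearly as discussed above.

```python
import numpy as np

# Toy run of Algorithm 1 (illustrative, not the paper's experiment).
n = 3
a = np.array([0.0, 3.0, 6.0])             # f_i(x) = (x - a_i)^2 / 2
Lap = np.eye(n) - np.ones((n, n)) / n     # complete graph, weights 1/3
alpha, zeta, eta, delta = 1.0, 1.0, 0.5, 0.5

w1 = np.zeros(n)   # arbitrary initialization is allowed (self-healing)
w2 = np.zeros(n)
for k in range(60):
    y = delta * w1 + eta * w2                       # local communication
    v = Lap @ y
    x = w1 - v                                      # local gradient computation
    u = x - a                                       # grad f_i(x_i) = x_i - a_i
    w1, w2 = w1 - alpha*u - zeta*v, w1 + w2 - v     # local state update
    if k == 30:   # self-healing: an arbitrary mid-run disturbance
        w1 = w1 + np.array([5.0, -2.0, 1.0])
        w2 = w2 + np.array([1.0, 1.0, -4.0])

assert np.allclose(x, 3.0, atol=1e-6)  # every agent recovers x_opt = 3
assert np.mean(w2) > 100               # internal state grows linearly in time
```
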
\section{Stability and Convergence Rates Using IQCs}
\subsection{Projection onto the disagreement subspace}
As written, our family of algorithms is internally unstable. We use the projection matrix $(I-\Pi)$ to eliminate the instability from the global system without affecting $x^k$. This procedure is a centralized calculation that cannot be implemented in a distributed fashion, but it allows us to analyze the convergence properties of the distributed algorithm.
Consider the steady-state values $(w_1^\star,x^\star,u^\star,v^\star)$ and suppose $w_2^k$ contains a component in the $\ensuremath{\mathbb{1}}$ direction. Then that component does not affect the aforementioned values because it is an input to the Laplacian $\ensuremath{\mathcal{L}}$ (and lies in its nullspace); however, it grows linearly in time due to the $w_2$ update. Thus the system has an internal instability that is unobservable from the output of the bottom block in Figure~\ref{fig:feedback}. Since the component of $w_2^k$ in the consensus direction is unobservable to the variables $(w_1^k,x^k,u^k,v^k)$, we can throw it away without affecting their trajectories. Using the transformation $\hat{w}_2^k = (I - \Pi)w_2^k$, our state updates become
\begin{align}
\label{eq:up1}
w_1^{k+1} &= w_1^k - \alpha u^k - \zeta v^k\\
\label{eq:up2}
\hat{w}_2^{k+1} &= (I-\Pi)w_1^k + (I-\Pi)\hat{w}_2^k - (I-\Pi)v^k \\
\label{eq:up3}
x^k &= w_1^k - v^k \\
\label{eq:up4}
\hat{y}^k &= \delta w_1^k + \eta \hat{w}_2^k \\
u^k &= \nabla\mkern-1.5mu F(x^k) \\
\label{eq:up6}
v^k &= \mathcal{L} \hat{y}^k,
\end{align}
where $y^k$ was replaced with $\hat{y}^k$ in \eqref{eq:up4} and \eqref{eq:up6} to accommodate $\hat{w}_2^k$. These updates
lead to the state-space system
\begin{equation}\label{eq:gp}
G_m = \left[ \begin{array}{cc|c|c}
I & 0 & -\alpha I & -\zeta I \\
I-\Pi & I-\Pi & 0 & -(I-\Pi) \\
\hline
I & 0 & 0 & -I \\
\hline
\delta I & \eta I & 0 & 0\end{array}\right].
\end{equation}
\subsection{Existence and optimality of a fixed point}
Now that we have eliminated the inherent instability of the global system, we can state the following about the fixed points:
\begin{theorem}
For the system described by $G_m$, there exists at least one fixed point $(w_1^\star,\hat{w}_2^\star, x^\star,\hat{y}^\star,u^\star,v^\star)$, and any such fixed point has $x^\star$ in the consensus subspace such that $x_i^\star = x_{\textnormal{opt}}$ for all $i \in \{1,\dots, n\}$, i.e., any fixed point of the system is optimal.
\end{theorem}
\begin{proof}
First, assume that the fixed point $(w_1^\star,\hat{w}_2^\star, x^\star,\hat{y}^\star,u^\star,v^\star)$ exists. To prove that the variable $x^\star$ lies in the consensus direction, we show that $(I-\Pi)x^\star=0$. From \eqref{eq:up2} and \eqref{eq:up3} we have that
\begin{align}
(I-\Pi)w_1^\star &= (I-\Pi)v^\star \\
(I-\Pi)x^\star &= (I-\Pi)w_1^\star - (I-\Pi)v^\star\\
&= 0.
\end{align}
Thus $x_i^\star = x_j^\star$ for all $i,j \in \{1,\dots,n\}$. Next we show that $x_i^\star = x_{\text{opt}}$. From \eqref{eq:up1} then plugging in \eqref{eq:up6}, we have
\begin{align}
-\alpha u^\star - \zeta v^\star &= 0 \\
u^\star &= -\frac{\zeta}{\alpha} v^\star = -\frac{\zeta}{\alpha} \mathcal{L} \hat{y}^\star \\
\ensuremath{\mathbb{1}}^\tr u^\star &= -\frac{\zeta}{\alpha} \ensuremath{\mathbb{1}}^\tr \mathcal{L} \hat{y}^\star \\
\sum_{i=1}^n u_i^\star &= 0 \\
\rightarrow \sum_{i=1}^n \nabla\mkern-1.5mu f_i(x_i^\star) &= 0\\
\rightarrow x_i^\star &= x_{\text{opt}} \; \forall \; i \in \{1,\dots,n\}.
\end{align}
Thus any fixed point is optimal.
Next, to construct a fixed point we define
\begin{equation}
\begin{aligned}
x^\star &= \ensuremath{\mathbb{1}} x_{\text{opt}}, & u^\star &= \nabla\mkern-1.5mu f(x^\star) \\
v^\star &= -\frac{\alpha}{\zeta} u^\star, & w_1^\star &= x^\star + v^\star .
\end{aligned}
\end{equation}
Then $\hat{w}_2^\star$ is the solution to the equation
\begin{equation}
\label{eq:w2sol}
\zeta \eta \mathcal{L} \hat{w}_2^\star = -\alpha(I-\delta \mathcal{L})u^\star.
\end{equation}
Since $\hat{w}_2^k = \ensuremath{\mathcal{L}}^+ \ensuremath{\mathcal{L}} w_2^k$ (i.e., $\hat{w}_2^k$ is in the row space of $\ensuremath{\mathcal{L}}$), we write $\hat{w}_2^\star$ in closed form as
\begin{equation}
\hat{w}_2^\star = \frac{\alpha}{\zeta \eta}\ensuremath{\mathcal{L}}^+(\delta \ensuremath{\mathcal{L}} - I) u^\star.
\end{equation}
Finally, setting $\hat{y}^\star = \delta w_1^\star + \eta \hat{w}_2^\star$ completes the proof.
\end{proof}
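The constructed fixed point can be checked numerically. The sketch below uses a toy problem of our choosing (scalar quadratics $f_i(x) = (x - a_i)^2/2$ on a complete three-node graph with weights $1/3$, and the illustrative parameters $\alpha = \zeta = 1$, $\eta = \delta = 0.5$), builds $(w_1^\star, \hat{w}_2^\star, x^\star, \hat{y}^\star, u^\star, v^\star)$ from the formulas above, and verifies that one step of the projected updates leaves it unchanged.

```python
import numpy as np

# Verify the constructed fixed point on a toy quadratic problem.
n = 3
a = np.array([0.0, 3.0, 6.0])
Lap = np.eye(n) - np.ones((n, n)) / n
Pi = np.ones((n, n)) / n
alpha, zeta, eta, delta = 1.0, 1.0, 0.5, 0.5

x_opt = a.mean()                 # here sum_i grad f_i(x_opt) = 0
x_s = np.full(n, x_opt)
u_s = x_s - a                    # gradients at the fixed point
v_s = -(alpha / zeta) * u_s
w1_s = x_s + v_s
# closed form for w2_hat using the pseudoinverse of the Laplacian:
w2_s = (alpha / (zeta * eta)) * np.linalg.pinv(Lap) @ (delta * Lap - np.eye(n)) @ u_s
y_s = delta * w1_s + eta * w2_s

assert np.allclose(Lap @ y_s, v_s)                         # v = L y_hat
assert np.allclose(w1_s - v_s, x_s)                        # x = w1 - v
assert np.allclose(w1_s - alpha*u_s - zeta*v_s, w1_s)      # w1 is stationary
assert np.allclose((np.eye(n) - Pi) @ (w1_s + w2_s - v_s), w2_s)  # w2_hat too
```
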
\begin{remark}
If the graph is switching but converges in time such that the limit of the sequence of Laplacians exists, as with a weight balancer, then a solution to \eqref{eq:w2sol} still exists and an optimal fixed point can still be found. Furthermore, the proof techniques in the following section still hold for switching Laplacians (see \cite{sunvanles20} for more information).
\end{remark}
\subsection{Convergence}
Following the approaches in \cite{lesrecpac16,sunhules17,sunvanles20}, we prove stability using a set of linear matrix inequalities. First we split our modified system from \eqref{eq:gp} into consensus and disagreement components. We define
\begin{align}
A_m &= A_p \otimes \Pi + A_q \otimes (I-\Pi) \\
B_{mu} &= B_{pu} \otimes \Pi + B_{qu} \otimes (I-\Pi)\\
B_{mv} &= B_{pv} \otimes \Pi + B_{qv} \otimes (I-\Pi)
\end{align}
\begin{equation}
\begin{aligned}
A_p &= \begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}, & B_{pu} &= \begin{bmatrix}-\alpha\\0\end{bmatrix}, & B_{pv}&=\begin{bmatrix}-\zeta \\ 0\end{bmatrix}\\
A_q &= \begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}, & B_{qu} &= \begin{bmatrix}
-\alpha \\0\end{bmatrix}, & B_{qv} &=\begin{bmatrix} -\zeta\\ -1
\end{bmatrix}.
\end{aligned}
\end{equation}
We also define the matrices
\begin{equation}
M_0 = \begin{bmatrix}
-2mL & L+m \\
L+m & -2
\end{bmatrix} \; \; \text{and} \; \; M_1 =
\begin{bmatrix}
\sigma^2-1 & 1\\ 1 & -1
\end{bmatrix}.
\end{equation}
Notice that $M_0$ is associated with the sector bound from \ref{a:1} and that $M_1$ is associated with the $(1-\sigma,1+\sigma)$ sector bound on $\ensuremath{\mathcal{L}}$ with inputs from the disagreement subspace.
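The sector bound encoded by $M_1$ can also be checked numerically: for any $y$ in the disagreement subspace and $v = \ensuremath{\mathcal{L}} y$, the quadratic form equals $\sigma^2\|y\|^2 - \|y - v\|^2$, which is nonnegative because $\|(I - \Pi - \ensuremath{\mathcal{L}})y\| \le \sigma\|y\|$. A sketch on a hypothetical symmetric (hence weight-balanced) weighted complete graph of our choosing:

```python
import numpy as np

# Check the (1 - sigma, 1 + sigma) sector bound on the Laplacian for
# inputs in the disagreement subspace.
n = 5
rng = np.random.default_rng(1)
W = rng.uniform(0.05, 0.2, size=(n, n))
W = (W + W.T) / 2                      # symmetric weights => weight balanced
np.fill_diagonal(W, 0)
Lap = np.diag(W.sum(axis=1)) - W
Pi = np.ones((n, n)) / n
sigma = np.linalg.norm(np.eye(n) - Pi - Lap, 2)
assert sigma < 1                       # (A4) holds for these weights

for _ in range(200):
    y = rng.standard_normal(n)
    y = y - y.mean()                   # project so that (I - Pi) y = y
    v = Lap @ y
    # [y; v]^T (M_1 kron I) [y; v] = sigma^2 ||y||^2 - ||y - v||^2 >= 0
    form = (sigma**2 - 1) * (y @ y) + 2 * (y @ v) - (v @ v)
    assert form >= -1e-9
```
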
We now make a statement analogous to Theorem 10 in \cite{sunvanles20}.
\begin{theorem}
If there exists $P,Q \in \ensuremath{\mathbb{R}}^{2\times 2}$ and $\lambda_0,\lambda_1 \in \ensuremath{\mathbb{R}}$, with $P,Q\succ 0$ and $\lambda_0,\lambda_1\geq 0$ such that
\begin{gather}
\label{eq:LMI1}
[\star]^\tr
\left[ \begin{array}{cc|c}
P & 0 & 0\\
0 & -\rho^2P & 0\\
\hline
0 & 0 & \lambda_0M_0
\end{array}\right]
\left[ \begin{array}{cc}
A_p & B_{pu} \\
I & 0 \\
\hline
C_{x} & D_{xu} \\
0 & I
\end{array}\right] \preceq 0, \\
[\star]^\tr
\label{eq:LMI2}
\arraycolsep=2.5pt
\left[ \begin{array}{cc|c|c}
Q & 0 & 0 & 0\\
0 & -\rho^2Q & 0 & 0\\
\hline
0 & 0 & \lambda_0M_0 & 0 \\
\hline
0 & 0 & 0 & \lambda_1M_1 \\
\end{array}\right]
\left[ \begin{array}{ccc}
A_q & B_{qu} & B_{qv} \\
I & 0 & 0 \\
\hline
C_{x} & D_{xu} & D_{xv} \\
0 & I & 0 \\
\hline
C_{y} & D_{yu} & D_{yv}\\
0 & 0 & I
\end{array}\right]\preceq 0,
\end{gather}
then the following is true for the trajectories of $G_m$:
\begin{equation}\label{eq:main}
\Bigg \| \begin{bmatrix}
w_1^k - w_1^\star\\[.1cm] \hat{w}_2^k - \hat{w}_2^\star
\end{bmatrix} \Bigg \| \leq \sqrt{\operatorname{cond}(T)}\rho^k \Bigg \| \begin{bmatrix}
w_1^0 - w_1^\star\\[.1cm] \hat{w}_2^0 - \hat{w}_2^\star
\end{bmatrix} \Bigg \|
\end{equation}
for a fixed point $(w_1^\star,\hat{w}_2^\star, x^\star,\hat{y}^\star,u^\star,v^\star)$, where $T = P \otimes I_n + Q \otimes (I_n - \Pi_n)$ and $\operatorname{cond}(T) = \frac{\lambda_{\textnormal{max}}(T)}{\lambda_{\textnormal{min}}(T)}$ is the condition number of $T$. Thus the output $x^k$ of Algorithm 1 converges to the optimizer with the linear rate $\rho$.
\end{theorem}
\begin{proof}
Equation \eqref{eq:main} follows directly from Theorem 4 of \cite{lesrecpac16}. Since the states of $G_m$ are converging at a linear rate $\rho$, the rest of the signals in the system (including $x^k$) converge to the optimizer at the same rate. Additionally, the trajectories of $G_m$ and $G_{\mkern-1.5mu s}$ (Algorithm 1) are the same, save for $\hat{w}_2^k$ and $\hat{y}^k$, so $x^k$ in Algorithm 1 also converges to the optimizer with linear rate $\rho$.
\end{proof}
To test the performance of our algorithm, we used the parameters $\beta = 0.5, \gamma = 1, \delta=0.5$. These parameters were inspired by the NIDS/Exact Diffusion parameters presented in \cite{sunvanles19}; however, we have done no work to find parameters that optimize the convergence rate. We then solved the LMIs \eqref{eq:LMI1} and \eqref{eq:LMI2} using Convex.jl \cite{convexjl} with the MOSEK solver \cite{mosek}, performing a bisection search on $\rho$ to find the minimum worst-case convergence rate for a given $\kappa$, $\sigma$, and $\alpha$. We used Brent's method from Optim.jl \cite{optim} to determine the optimal $\alpha$. We plot our results for $\kappa=10$ in Figure~\ref{fig:perf} and include the results for SVL (reproduced from \cite{sunvanles20}) for comparison. Our algorithm with these parameter choices achieves the same performance as NIDS for the NIDS parameter choice $\mu = 1$ as shown in \cite{sunvanles20}. The worst-case convergence rate of our algorithm is subject to the same lower bound, $\rho\geq \max (\frac{\kappa-1}{\kappa+1},\sigma) $, found in \cite{sunvanles20}.
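The rate search just described can be organized as a simple bisection on $\rho$. The sketch below shows only the search logic, with the LMI feasibility test left as a stub: in our experiments that test solves \eqref{eq:LMI1} and \eqref{eq:LMI2} with an SDP solver (Convex.jl with MOSEK), whereas the stand-in predicate here merely encodes the known lower bound and is purely illustrative.

```python
def min_feasible_rho(is_feasible, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection for the smallest feasible rho, assuming feasibility
    is monotone in rho (true for these LMIs) and that hi is feasible."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Stand-in predicate: in practice is_feasible(rho) solves the LMIs and
# reports feasibility; here it just encodes the known lower bound.
kappa, sigma = 10.0, 0.5
lower_bound = max((kappa - 1) / (kappa + 1), sigma)
rho = min_feasible_rho(lambda r: r >= lower_bound)
assert abs(rho - lower_bound) < 1e-5   # recovers 9/11 for these values
```
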
\begin{remark}
We tested the convergence rates for our algorithm with Zames-Falb IQCs in place of Sector IQCs but saw no improvement.
\end{remark}
\begin{figure}
\includegraphics[width=0.5\textwidth]{perf_comparison.png}
\caption{Performance of our algorithm compared with SVL for $\kappa = 10$ and $\sigma \in [0,1)$. Our NIDS-inspired parameter choices result in performance identical to that of NIDS in \cite{sunvanles20}. We have not made any attempts to choose ``optimal'' parameters like those of SVL.}
\label{fig:perf}
\end{figure}
\section{Self-Healing Despite Packet Loss}
\subsection{Packet loss protocol}
We next give our agents some additional memory so that they can substitute previously transmitted values when a packet is lost. Each agent $i\in \{1,\dots,n\}$ maintains an edge state $e_{ij}^k$ for each $j\in \mathcal{N}_{\text{in}}(i)$ (the set of neighbors who transmit to $i$). Whenever agent $i$ receives a message from agent $j$, it updates the state $e_{ij}$ accordingly; however, if at time $k$ no message from neighbor $j$ is received, agent $i$ must estimate what would have likely been transmitted. One potential strategy is to substitute in the last message received, but because $y_j$ is growing linearly in quasi steady state, this naive strategy would ruin steady-state
accuracy. Instead we must account for the linear growth present in our algorithm, which we can do by analyzing the quantity $y_j^{k}-y_j^{k-1}$ at the quasi fixed point $(w_1^\star, x^\star,u^\star,v^\star)$:
\begin{align}
y_j^{k}-y_j^{k-1} &= \delta(w_{1j}^\star-w_{1j}^\star) + \eta(w_{2j}^k - w_{2j}^{k-1})\\
&= \eta(w_{1j}^\star-v_j^\star)\\
&= \eta x_j^\star \approx \eta x_i^k.
\end{align}
Therefore, when a packet from a neighbor is not received, agent $i$ scales its estimate of the optimizer by $\eta$ and adds it to the previously received (or estimated) message. The packet loss protocol is summarized in Algorithm 2. By construction, the modifications included in Algorithm 2 will not alter the quasi fixed points of Algorithm 1, though we do not have a stability condition like Theorem 2 to present at this time. Instead, we show simulation evidence that Algorithm 2 does indeed converge, and packet loss does not appear to have a substantial impact on the convergence rate, even when the rate of packet loss is large. In the absence of dropped packets, the state trajectories of Algorithm 2 are equivalent to those of Algorithm 1.
\begin{remark}
Algorithm 2 can be modified to include a forgetting factor $q$. If agent $i$ does not receive a packet from neighbor $j$ in $q$ time steps, then agent $i$ assumes that the communication link has been severed and clears $e_{ij}$ from memory.
\end{remark}
\begin{algorithm}[t]
\SetAlgoLined
\LinesNotNumbered
\KwSty{Initialization:} Each agent $i\in\{1,...,n\}$ chooses $w_{1i}^0,w_{2i}^0 \in \ensuremath{\mathbb{R}}^{1 \times d}$ arbitrarily. $\ensuremath{\mathcal{L}} \in \ensuremath{\mathbb{R}}^{n \times n}$ is the graph Laplacian. All $e_{ij}$ are initialized the first time a message is received from a neighbor.\\
\For{$k=0,1,2,...$}{
\For{$i\in\{1,...,n\}$}{
\KwSty{Local communication}\\
$y_i^k = \delta w_{1i}^k + \eta w_{2i}^k$\\[2pt]
\For{$j\in \mathcal{N}_{\text{in}}(i)$}{
\eIf{Packet from $j$ received by $i$}{
$e_{ij}^k = y_{j}^k$
}
{$e_{ij}^k = \eta x_i^{k-1} + e_{ij}^{k-1}$
}}
$v_i^k = \sum_{j=1}^n\mathcal{L}_{ij} e_{ij}^k$\\
\KwSty{Local gradient computation}\\
$x_i^k = w_{1i}^k - v_i^k$\\[2pt]
$u_i^k = \nabla\mkern-1.5mu f_i(x_i^k)$\\
\KwSty{Local state update}\\
$w_{1i}^{k+1} = w_{1i}^k - \alpha u_i^k - \zeta v_i^k$\\[2pt]
$w_{2i}^{k+1} = w_{1i}^k + w_{2i}^k - v_i^k$
}}
\caption{Packet loss protocol}
\label{alg:2}
\end{algorithm}
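The sketch below mirrors Algorithm~\ref{alg:2} on a toy problem of our choosing (scalar quadratics $f_i(x) = (x - a_i)^2/2$ on a complete three-node graph with weights $1/3$, and the illustrative parameters $\alpha = \zeta = 1$, $\eta = \delta = 0.5$). With no packets dropped it reproduces Algorithm~\ref{alg:1} exactly, and the final check confirms that each transmitted $y_j$ grows by approximately $\eta x_{\text{opt}}$ per step in quasi steady state, which is exactly the increment the loss-compensation update relies on.

```python
import numpy as np

# Toy run of Algorithm 2 with no packets dropped (illustrative).
n = 3
a = np.array([0.0, 3.0, 6.0])
Lap = np.eye(n) - np.ones((n, n)) / n
alpha, zeta, eta, delta = 1.0, 1.0, 0.5, 0.5

w1, w2 = np.zeros(n), np.zeros(n)
y_prev = np.zeros(n)
for k in range(60):
    y = delta * w1 + eta * w2
    growth = y - y_prev          # per-step increment of the transmitted y
    y_prev = y
    e = np.tile(y, (n, 1))       # e[i, j]: value of y_j as seen by agent i
    # (on a drop, agent i would instead set e[i, j] = eta*x_i + old e[i, j])
    v = (Lap * e).sum(axis=1)    # v_i = sum_j L_ij e_ij
    x = w1 - v
    u = x - a
    w1, w2 = w1 - alpha*u - zeta*v, w1 + w2 - v

assert np.allclose(x, 3.0)               # same optimizer as Algorithm 1
assert np.allclose(growth, eta * 3.0)    # y increments approach eta * x_opt
```
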
\subsection{Classification example}
To test the performance of our algorithm under packet loss, we solved a classification problem using the COSMO chip dataset \cite{cosmo} on an $n=7$ node directed ring lattice, shown in Figure \ref{fig:network}, such that $(i,j)\in \mathcal{E}$ when $j \in \{i+1,i+3,i+5\}\mod n$. All edge weights in the graph are set to 1/4 and $\sigma = ||I-\Pi-\ensuremath{\mathcal{L}}|| = 0.562$. We used the logistic loss function with $L_2$-regularization, yielding local cost functions
\begin{equation}
f_i(x_i) = \sum_{j\in S_i} \log(1+e^{-l_jx_i^\tr M(d_j)}) + \frac{1}{n}||x_i||^2,
\end{equation}
%
where $S_i$ is the set of data indices local to agent $i$, $l_j$ is the label of data point $j$, and $M(d_j)$ is the higher-order polynomial embedding of data point $j$ (for more details see the logistic regression example in the COSMO github \cite{cosmo}). Using this cost, the corresponding sector bound $(m,L)$ is approximated as $m = \frac{2}{n}$ and
\begin{gather}
L_i \leq \Big\|\frac{2}{n}I + \frac{1}{4}M_i^\tr M_i\Big\|, \; \;
L = \max_i L_i,
\end{gather}
where the rows of $M_i$ are $M(d_j)$ for $j\in S_i$.
\begin{figure}
\centering
\begin{tikzpicture}
\node (0) {};
\node (1) at ($(0)+(0:1.8)$) [vertex] {$1$};
\node (2) at ($(0)+(-51.43:1.8)$) [vertex] {$2$};
\node (3) at ($(0)+(-102.86:1.8)$) [vertex] {$3$};
\node (4) at ($(0)+(-154.29:1.8)$) [vertex] {$4$};
\node (5) at ($(0)+(-205.71:1.8)$) [vertex] {$5$};
\node (6) at ($(0)+(-257.14:1.8)$) [vertex] {$6$};
\node (7) at ($(0)+(-308.57:1.8)$) [vertex] {$7$};
\draw [link] (1) -- (2);
\draw [link] (1) -- (4);
\draw [link] (1) -- (6);
\draw [link] (2) -- (3);
\draw [link] (2) -- (5);
\draw [link] (2) -- (7);
\draw [link] (3) -- (4);
\draw [link] (3) -- (6);
\draw [link] (3) -- (1);
\draw [link] (4) -- (5);
\draw [link] (4) -- (7);
\draw [link] (4) -- (2);
\draw [link] (5) -- (6);
\draw [link] (5) -- (1);
\draw [link] (5) -- (3);
\draw [link] (6) -- (7);
\draw [link] (6) -- (2);
\draw [link] (6) -- (4);
\draw [link] (7) -- (1);
\draw [link] (7) -- (3);
\draw [link] (7) -- (5);
\end{tikzpicture}
\caption{The directed network topology for the classification example. All edge weights are 1/4.}
\label{fig:network}
\end{figure}
Using $(m,L)$ and $\sigma$, we computed the optimal step size $\alpha$ for our algorithm using Brent's method and computed the SVL parameters as detailed in \cite{sunvanles20}. We then simulated both Algorithm 2 and SVL with and without packet loss and took the maximum error between the distributed algorithms and a centralized solution found using Convex.jl and MOSEK. We ran our algorithms using random initial conditions on the interval $[0,1]$ and the SVL algorithm using zero initial conditions. For the packet loss run of SVL,
we held the previous message on each edge
so that the fixed points would be unaffected. Packets had a 30\% chance of being lost, independent of each other. The results of these simulations are shown in Figure \ref{fig:packet_loss}. In this scenario, Algorithm 2 with lossy channels still converges to the optimum at a similar rate as Algorithm 1 with lossless channels, despite the high rate of packet loss that causes SVL to converge with high error.
\begin{figure}
\includegraphics[width=0.5\textwidth]{traces.png}
\caption{Simulation of Algorithm 1 and SVL in lossless channels as well as Algorithm 2 and SVL in lossy channels. The lossy channels are modeled with an independent 30\% packet loss. Error is the maximum error over all agents relative to the centralized solution.}
\label{fig:packet_loss}
\end{figure}
\section{Summary and Future Work}
In this paper, we demonstrated the existence of a parameterized family of first-order algorithms for distributed optimization that do not require system trajectories to evolve on
a pre-defined
subspace, despite having a single communicated variable. These algorithms are self-healing; they do not require the system to be initialized
precisely
and will recover from events such as agents dropping out of the network or changes to objective functions that might otherwise introduce uncorrectable errors. Furthermore, our algorithms can be augmented with our packet loss protocol, thereby allowing the system to converge to the optimizer even in the presence of heavily lossy communication channels. Our algorithms converge with a linear rate to the optimizer but contain an internal instability that grows linearly in time; however, this instability is unlikely to cause issues unless run over long time horizons.
There is much left to investigate. We still need to consider the properties of other factorizations of $G(z)$ in \eqref{eq:Gz}, and possible factorizations of algorithms that are not subsumed by the SVL template. We need to explore the parameter space of the algorithm presented in this paper and, in particular, determine whether an optimization like that used to find the SVL parameters can be carried out. Finally, we aim to establish a formal proof that Algorithm 2 still converges in the presence of packet loss.
\balance
\renewcommand*{\bibfont}{\footnotesize}
\printbibliography
\end{document}
\smallskip\noindent
In the last few years the explicit construction of $\cal W$-algebras
$\q{1}$ has also reached the fast developing field of $N=2$
supersymmetric conformal field theories (SCFTs) \q{2-7}.
{}From a two dimensional quantum-gravity point of view $N=2$ SCFTs
are very important since on the one hand they describe the
string world sheet CFT of $N=1$ space-time supersymmetric heterotic
string compactifications \q{8-10} and on the
other hand the ghost sector of $\cal W$-gravity theories contains an
$N=2$ super $\cal W$-algebra \q{11}. In ref.\ $\q{7}$ non-linear
$N=2$ super $\cal W$-algebras, namely extensions of
the $N=2$ super Virasoro algebra ($N=2$ SVIR) by a pair of
super-primary fields of opposite $U(1)-$charge, have been treated in
a systematic way. The calculation of such objects is very involved but
using a manifestly covariant approach one can simplify the problem
tremendously. The extension of $N=2$ SVIR by superprimaries of
superconformal dimension $\Delta_1,\ldots,\Delta_n$ is denoted by
${\cal SW}(1,\Delta_1,\ldots,\Delta_n)$.
\smallskip\noindent
After presenting a further example, the ${\cal SW}(1,3)$ algebra
with zero $U(1)-$charge and non-vanishing self-coupling constant,
we investigate the representation theory of the former algebras
focusing on the question of whether these algebras admit rational models.
To this end we evaluate Jacobi identities on the lowest weight
states $|h,q\rangle$ yielding restrictions on the allowed
lowest weights $(h,q)$ \q{12,13}.
The results obtained by this method allow one to arrange all
known $N=2$ super $\cal W$-algebras with two generators and
vanishing self-coupling constant into essentially four series.
Only one series contains rational theories which can be explained by the
classification of modular invariants of $N=2$ SVIR \q{14,15}.
In the case of $N=2$ super $\cal W$-algebras with two generators there are
many examples of algebras existing for $c\ge 3$.
All these $c\ge 3$ theories have common features which can be
explained by the existence of the spectral flow of $N=2$ SVIR.
Using this fact, we can describe these $N=2$ super $\cal W$-algebras by a simple
formula containing the well known algebras ${\cal SW}(1,{d\over 2})$
for $Q=d$, $c=3d$ ($d\in{\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}$) $\q{16}$ as a subset.
The latter algebras occur in the compactification of the heterotic
string to $(10-2d)$ space-time dimensions~\q{17,16}.
\smallskip\noindent
\section{2.\ Construction of $N=2$ super $\cal W$-algebras: $\cal SW$(1,3)}
\smallskip\noindent
In this section we use results of \q{18,19} about
holomorphic $N=2$ superconformal field theories, and in particular
we will use the notation and formulae of section two of
ref.\ \q{7}. We apply the formalism of \q{7} to construct
the supplementary example $\cal SW$(1,3) with zero $U(1)-$charge and
non-vanishing self-coupling constant. Firstly, we have to write down all
super quasiprimary fields which can occur in the OPE
$\Phi_{3}^{0}(Z_1)\>\Phi_{3}^{0}(Z_2)$.
We present only the fields of dimension 5 and ${11}\over 2$.
The fields with lower dimension have been presented in~$\q{7}$.
\smallskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfil#\hfill\hskip0.15cm\cr
height2pt&\omit&&\omit&\cr
& $\Delta$ && $Q$ && quasi-primary fields &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& 5 && 0 && ${\cal N}_s({\cal N}_s({\cal N}_s({\cal N}_s({\cal L}{\cal L})
{\cal L}){\cal L}){\cal L}),\
{\cal N}_s({\cal N}_s({\cal N}_s({\cal L}[\o{D},D]{\cal L})
{\cal L}){\cal L}),\
{\cal N}_s({\cal N}_s({\cal L}\partial^2{\cal L}){\cal L})$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && \omit && ${\cal N}_s({\cal N}_s({\cal L}\o{D}
\partial{\cal L})D{\cal L}),
\ {\cal N}_s({\cal N}_s({\cal L}D\partial{\cal L})\o{D}{\cal L}),\
{\cal N}_s({\cal L}[\o{D},D]\partial^2{\cal L})$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && \omit && ${\cal N}_s({\cal N}_s({\Phi}{\cal L}){\cal L}),\
{\cal N}_s({\Phi}\partial{\cal L}),\
{\cal N}_s({\Phi}[\o{D},D]{\cal L})$ &\cr
height2pt&\omit&&\omit&\cr
& $11\over2$ && -1 &&
${\cal N}_s({\cal N}_s({\cal N}_s({\cal L}{D}
\partial{\cal L}){\cal L}){\cal L}),
\ {\cal N}_s({\cal N}_s({\cal N}_s({\cal N}_s({\cal L}{\cal L})
{\cal L}){\cal L})D{\cal L}),\
{\cal N}_s({\cal N}_s({\cal L}\partial^2{\cal L})D{\cal L})$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && \omit &&
${\cal N}_s({\cal N}_s({\cal N}_s({\cal L}[\o{D},D]{\cal L})
{\cal L})D{\cal L}),\ {\cal N}_s({\Phi}\partial{D}{\cal L}),
\ {\cal N}_s({\cal N}_s({\Phi}{D}{\cal L}){\cal L})$ &\cr
height2pt&\omit&&\omit&\cr
& $11\over2$ && 1 &&
${\cal N}_s({\cal N}_s({\cal N}_s({\cal L}\o{D}
\partial{\cal L}){\cal L}){\cal L}),
\ {\cal N}_s({\cal N}_s({\cal N}_s({\cal N}_s({\cal L}{\cal L})
{\cal L}){\cal L})\o{D}{\cal L}),\
{\cal N}_s({\cal N}_s({\cal L}\partial^2{\cal L})\o{D}{\cal L})$ &\cr
height2pt&\omit&&\omit&\cr
& \omit &&\omit && ${\cal N}_s({\cal N}_s({\cal N}_s({\cal L}
[\o{D},D]{\cal L}){\cal L})\o{D}{\cal L}),\
{\cal N}_s({\Phi}\partial\o{D}{\cal L}),\
{\cal N}_s({\cal N}_s({\Phi}\o{D}{\cal L}){\cal L})$ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 1: Quasi-primary fields of dimension 5
and ${11 \over 2}$ in ${\cal SW}(1,3)$} }}
\bigskip\noindent
For completeness we give also the Kac-determinants in the vacuum sector:
\smallskip\noindent
\centerline{\vbox{
\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfil#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $\Delta$ && $det(D_{\Delta})\sim$ &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& 5 && $(c-2)(c-1)^5 c^9 (c+1)(c+6)^3(c+12)
(2c-3)^3 (5c-9) (7c+18) (c^2+26c-75)$ &\cr
height2pt&\omit&&\omit&\cr
&$11\over2$&&$(c-1)^6 c^{14}(c+1)^2 (c+6)^4 (2c-3)^4 (5c-9)^2 (7c-18)^2$&\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 2: Kac-determinants}}}
\medskip\noindent
The next step is to determine the structure constants
$C_{ij}^k$, $\alpha_{ijk}$ for all normal
ordered products which are rational functions in $c$ and one
self-coupling constant $C_{33}^3\,\alpha$. We skip their presentation
and continue with the result that the Jacobi identities
yield the following expression for the coupling constant $C_{33}^3\,\alpha$:
$$\bigl(C_{33}^3\,\alpha\bigr)^2
={49c^3(7c-18)^2(c^2+26c-75)(3c^4+62c^3-129c^2-9c+54)\over
3(3-2c)(c-2)(c-1)(c+1)(c+6)(c+12)(5c-9)(3c^2-37c+60)}.
\eqno{\rm (1)}$$
All Jacobi identities are satisfied only if the central charge takes one of the
following rational values:
$$ c\in\left\lbrace{18\over7},{10\over3},-{9\over5}\right\rbrace.
\eqno{\rm (2)}$$
The corresponding values of the self-coupling constant are
$$ \bigl(C_{33}^3\,\alpha\bigr)^2\in\left\lbrace 0,{268960000\over88803},
-{106815267\over88000}\right\rbrace.
\eqno{\rm (3)}$$
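As an elementary illustration (this script is our own addition and does not reproduce the Jacobi-identity computation itself), the right-hand side of eq.\ (1) can be evaluated with exact rational arithmetic; the zero and sign structure at the three admissible central charges is then immediate:

```python
from fractions import Fraction

def coupling_squared(c):
    """Right-hand side of eq. (1) for (C_33^3 alpha)^2, evaluated exactly."""
    num = 49 * c**3 * (7*c - 18)**2 * (c**2 + 26*c - 75) \
        * (3*c**4 + 62*c**3 - 129*c**2 - 9*c + 54)
    den = 3 * (3 - 2*c) * (c - 2) * (c - 1) * (c + 1) * (c + 6) \
        * (c + 12) * (5*c - 9) * (3*c**2 - 37*c + 60)
    return num / den

# the factor (7c-18)^2 forces the self-coupling to vanish at c = 18/7:
assert coupling_squared(Fraction(18, 7)) == 0
# the square is positive at c = 10/3 and negative at c = -9/5,
# consistent with the signs of the values in eq. (3):
assert coupling_squared(Fraction(10, 3)) > 0
assert coupling_squared(Fraction(-9, 5)) < 0
```

In particular, the vanishing self-coupling at $c={18\over 7}$ is traced directly to the factor $(7c-18)^2$ in the numerator.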
\medskip\noindent
We conclude this section with a very brief review of the results of the
construction of $N=2$ super $\cal W$-algebras with two generators
with {\it vanishing self-coupling constant} \q{7}.
Using $\rm MATHEMATICA^{TM}$ and a special C-program \q{20}
we constructed ${\cal SW}(1,\Delta)$ algebras for
$\Delta\in\lbrace {3\over2},2,{5\over2},3\rbrace$.
In the following table we present the central charges $c$ and
$U(1)-$charges $Q$ for which these $\cal W$-algebras exist.
\medskip\noindent
\centerline{\vbox{
\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&&\omit&&\omit
&&\omit&&\omit&&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.6cm\hfil#\hfil\hskip0.6cm\cr
height2pt&\omit&&\omit&\cr
& $\Delta$ && $Q$ && $c$ && \omit && \omit&& $\Delta$ && $Q$ && $c$ &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& \omit && $0$ && $9\over4$, $3\over5$, $-{3\over2}$ && \omit&& \omit
&& \omit && $0$ && $5\over2$, $9\over7$, $-{9\over2}$ &\cr
height2pt&\omit&&\omit&\cr
& $3\over2$ && $\pm 3$ && $9$ &&\omit&&\omit&&$5\over2$&&$\pm 5$&& $15$&\cr
height2pt&\omit&&\omit&\cr
& \omit && $\pm 1$ && $3$ &&\omit&&\omit&&\omit&& $\pm 1$ && $3$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && \omit&&\omit&&\omit&&\omit&& \omit&&$\pm 2$ && $9\over2$ &\cr
\tablespace\noalign{\hrule}\tablespace
& \omit && 0 && $12\over5$, $-3$&&\omit&&\omit&&\omit&&$0$&&$18\over7$&\cr
height2pt&\omit&&\omit&\cr
& 2 && $\pm 4$ && $12$, $-9$, $-21$
&&\omit&& \omit && $3$ && $\pm 6$ && $18$, $-15$, $-33$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && $\pm {3\over2}$ && $15\over4$
&& \omit&& \omit && \omit && $\pm {4\over3}$ && $10\over3$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && \omit && \omit && \omit
&& \omit && \omit&& $\pm {5\over2}$ && $21\over4$ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.1cm Table 3: Central charges of
${\cal SW}(1,\Delta)$ algebras with vanishing self-coupling constant}}}
\vfill
\eject
\section{3.\ Representations of $N=2$ super $\cal W$-algebras}
\medskip\noindent
In this section we study the possible lowest weight representations
in the Neveu-Schwarz (NS) sector of the algebras presented in table 3.
To this end one evaluates Jacobi-identities on the lowest weight
states yielding necessary conditions for the quantum numbers of the
allowed representations. According to the results obtained this way, we
propose an arrangement of $N=2$ super $\cal W$-algebras with two generators
and vanishing self-coupling constant into the following four classes:
\bigskip\noindent
{\bf (1)} $c={6\Delta\over 2\Delta+1},Q=0,\quad 2\Delta\in{\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}$
\smallskip\noindent
These values of $c$ lie in the unitary minimal
series $c(k)={{3k}\over {k+2}},\ k\in {\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}$ of $N=2$ SVIR with $k=4\Delta$.
The lowest weights in the NS sector are given by (see e.g.\ $\q{15}$):
$$ h_{l,m}={{l(l+2)-m^2}\over{4(k+2)}},\quad q_m={m\over{k+2}},\quad
l=0,\dots,k\quad {\rm and\ } m=-l,-l+2,\dots,l.
\eqno{\rm (4)}$$
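The spectrum (4) is easily tabulated; the following minimal script (our own addition, for illustration) lists the NS lowest weights for $k=6$, i.e.\ $\Delta={3\over 2}$, and checks that the weight $h_{k,0}=k/4$ indeed equals $\Delta$ for $k=4\Delta$:

```python
from fractions import Fraction

def ns_spectrum(k):
    """All NS lowest weights (h_{l,m}, q_m) of eq. (4) at level k."""
    spec = []
    for l in range(k + 1):
        for m in range(-l, l + 1, 2):
            spec.append((Fraction(l * (l + 2) - m * m, 4 * (k + 2)),
                         Fraction(m, k + 2)))
    return spec

spec = ns_spectrum(6)               # k = 4*Delta with Delta = 3/2
assert len(spec) == 28              # sum of (l+1) weights for l = 0, ..., 6
assert (0, 0) in spec               # the vacuum
assert (Fraction(3, 2), 0) in spec  # the Z_2 simple current (h_{k,0}, 0) = (k/4, 0)
```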
The primary field corresponding to the lowest weight $(h_{k,0},0)$
is a ${\mathchoice{\BZT}{\BZT}{\BZS}{\BZSS}}_2$ simple current of conformal dimension $\Delta$ for $k=4\Delta$.
The classification of modular invariant partition functions
$\q{15}$ yields off-diagonal invariants for
$k\equiv 0,2 \ ({\rm mod}\ 4)$ which can be viewed as diagonal invariants
of the extended model with symmetry algebra ${\cal SW}(1,\Delta)$.
The explicit calculation of the lowest weights of the ${\cal SW}(1,\Delta)$
at $c(4\Delta)$ yields exactly those lowest weights which one expects from the
off-diagonal invariant. We stress that these models are the only rational
ones among the theories whose central charges are given in table 3 above.
The NS part of the non-diagonal modular invariant partition function for
$k\equiv 2\ ({\rm mod}\ 4)$ is given by $\q{15}$:
$$\eqalign{Z_{NS}=&
\sum_{{l=0,2,\dots,{k\over 2}-1}\atop{m=-l,-l+2,\dots,l}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert \chi^l_m+\chi^{k-l}_m \vert^2 +
\sum_{{l={k\over 2}+1,\dots,k-2,k}\atop{|m|< k-l}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert \chi^l_m+\chi^{k-l}_m \vert^2 +\cr
& \sum_{{l={k\over 2}+1,\dots,k-2,k}\atop{m<-(k-l)}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert \chi^l_m+\chi^l_{m+k+2} \vert^2 +
\sum_{{l={k\over 2}+1,\dots,k-2,k}\atop{m> k-l}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert\chi^l_m+\chi^l_{m-(k+2)}\vert^2.\cr
}\eqno{\rm (5)}$$
The $\chi^l_m$ are the characters of the
irreducible lowest weight representations of $N=2$ SVIR with
lowest weight $(h_{l,m},q_m)$. For $k\equiv 0\ ({\rm mod}\ 4)$
it reads $\q{15}$:
$$\eqalign{Z_{NS}=&
\sum_{{l=0,2,\dots,{k\over 2}-2}\atop{m=-l,-l+2,\dots,l}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert \chi^l_m+\chi^{k-l}_m \vert^2 +
\sum_{m=-{k\over 2},-{k\over 2}+2\dots,{k\over 2}}
\hskip -0.5cm 2\vert \chi^{k\over 2}_{m} \vert^2 +
\sum_{{l={k\over 2}+2,\dots,k-2,k}\atop{|m|< k-l}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert \chi^l_m+\chi^{k-l}_m\vert^2 +\cr
& \sum_{{l={k\over 2}+2,\dots,k-2,k}\atop{m<-(k-l)}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert \chi^l_m+\chi^l_{m+k+2}\vert^2 +
\sum_{{l={k\over 2}+2,\dots,k-2,k}\atop{m> k-l}}
\hskip -0.5cm {\textstyle{1\over 2}}\vert\chi^l_m+\chi^l_{m-(k+2)}\vert^2.\cr
}\eqno{\rm (6)}$$
\smallskip\noindent
{}From these expressions one can directly read off the lowest weights
of ${\cal SW}(1,\Delta)$ in the NS-sector.
The following picture shows the allowed lowest weights $(h,q)$ for
the first member of the series $(k=6)$. A cross represents a representation
module with non-degenerate lowest weight, i.e.\ the zero modes of
bosonic components act trivially in the NS sector. The rectangles denote
a representation module with doubly degenerate lowest weight, i.e.\ one
has $\vert h,-{1\over 2}\rangle \sim \o{\psi}_0 \vert h, {1\over 2}\rangle$
\footnote{${}^1)$}{The components $\phi,\psi,\o\psi,\chi$ of a super
field $\Phi$ are defined according
to $\Phi(Z)=\phi(z)+{1\over{\sqrt{2}}}\left(\theta\o\psi(z)-\o\theta\psi(z)
\right) + \theta\o\theta\chi(z)$.}.
\medskip\noindent
\vbox{
\line{\hfill\psfig{figure=susy1.ps}\hfill}
\vskip -3.5cm
\line{\hskip3.5cm $\scriptstyle{\vert h,-{1\over2}\rangle
\sim \o{\psi}_0\vert h,{1\over2}\rangle}$}}
\vskip 3cm
\bigskip\noindent
The Ramond sector is determined by the spectral flow
$$\eqalign{
L'_n &=L_n+\eta J_n+{c\over 6}\eta^2\delta_{n,0}\ ,\quad
J'_n=J_n+{c\over 3}\eta\delta_{n,0} \cr
\varphi'_r&=\varphi_{r+\eta Q },\quad\quad
\varphi\in\lbrace G,\o{G},{\rm components\ of\ \Phi}\rbrace \cr}
\eqno{\rm (7)}$$
\smallskip\noindent
for $\eta=\pm {1\over 2}$. Note that in the Ramond sector the bosonic fields
$\psi$ and $\bar\psi$ of dimension 2 carry half-integer modes.
\bigskip\noindent\medskip\noindent
{\bf (2a)} $c=3\left(1-{2\over \Delta+1}\right),Q=0\quad
{\rm for}\quad \Delta\in {\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}+{1\over 2}$
\medskip\noindent
In particular, the values $c={3\over5}$ ($\Delta={3\over 2}$)
and $c={9\over7}$ ($\Delta={5\over 2}$) are contained in this series.
These models have {\it infinitely} many lowest weight
representations with infinitely degenerate lowest weight.
One obtains for the minimal lowest weight
$h_{\rm min}={c-3\over24}$, i.e.\ one has
$c_{\rm eff}=c-24 h_{\rm min}=3$.
However, there exist finitely many representations
with finitely degenerate lowest weight.
Note that the $N=0$ ($c_{\rm eff}=1$) and $N=1$
($c_{\rm eff}={3\over 2}$) analogues are rational
theories (see e.g.\ \q{21-23}).
\bigskip\noindent
For $\Delta\in {\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}$ there are two other
series with $c_{\rm eff}=3$.
\bigskip\noindent
{\bf (2b)} $c=3\left(1-2\Delta\right),Q=2\Delta$\ \ and\ \
$c=3\left(1-4\Delta\right),Q=2\Delta$\ \ \ \ for $\Delta\in {\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}$
\medskip\noindent
The space of representations looks similar to the case (2a).
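The central charges of the $c_{\rm eff}=3$ series (2a) and (2b) can be checked mechanically against table 3; the following sketch (exact fractions; our own addition) reproduces the corresponding entries:

```python
from fractions import Fraction as F

def c_2a(Delta):   # series (2a): Delta half-odd-integer, Q = 0
    return 3 * (1 - F(2) / (Delta + 1))

def c_2b(Delta):   # series (2b): Delta a positive integer, Q = 2*Delta
    return 3 * (1 - 2 * Delta), 3 * (1 - 4 * Delta)

# series (2a) reproduces the corresponding Q = 0 entries of table 3:
assert c_2a(F(3, 2)) == F(3, 5)   # SW(1,3/2)
assert c_2a(F(5, 2)) == F(9, 7)   # SW(1,5/2)
# the minimal lowest weight h_min = (c-3)/24 gives c_eff = c - 24*h_min = 3:
for c in (c_2a(F(3, 2)), c_2a(F(5, 2))):
    assert c - 24 * F(c - 3, 24) == 3
# series (2b) reproduces the negative central charges of table 3:
assert c_2b(2) == (-9, -21)   # SW(1,2), Q = 4
assert c_2b(3) == (-15, -33)  # SW(1,3), Q = 6
```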
\vfill\eject
\smallskip\noindent
As examples we present the $c={3\over5}$ ($\Delta={3\over 2}$)
and $c=-9$ ($\Delta=2$) models:
\smallskip\noindent
\vbox{
\line{\quad\psfig{figure=susy2.ps}\hfill}
\vskip -4.5cm
\line{\hskip 10cm\hbox{$\scriptstyle
\vert h,-2\rangle\sim\varphi_0^-\vert h,2\rangle,\ h>0$}\hfil}
}
\vskip 4cm
\bigskip\noindent\medskip\noindent
Explicitly we obtained the following representations
for ${\cal SW}(1,{3\over 2})$ and $c={3\over 5}$:
\medskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfill#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $(h,q)$ && \omit &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& $(0,0),({3\over{10}},0),
({1\over 5},\pm{2\over 5}),({1\over{10}},\pm{1\over 5})$ &&
$\langle h,q\vert \psi_0 \o{\psi}_0 \vert h,q\rangle =
\langle h,q\vert \o{\psi}_0 {\psi}_0 \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $(-{1\over{10}},q)\quad {\rm for\ }q\in{\mathchoice{\BRT}{\BRT}{\BRS}{\BRSS}}$ &&
$\langle h,q\vert \psi_0 \o{\psi}_0 \vert h,q\rangle\sim
(1-5q)(2-5q)(3-5q)(4-5q) $ &\cr
&\omit && $\langle h,q\vert \o{\psi}_0 {\psi}_0 \vert h,q\rangle \sim
(1+5q)(2+5q)(3+5q)(4+5q) $ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 4:
${\cal SW}(1,{3\over 2}) {\rm \ at\ } c= {3\over 5},Q=0$}}}
\medskip\noindent
The relations for $h=-{1\over {10}}$ imply that for
$(\pm q)\in \{ {1\over 5}, {2\over 5}, {3\over 5}, {4\over 5} \}$
the degeneracy of the
lowest weight is partially removed but is nevertheless infinite.
That is to say, e.g.\ the states $\vert -{1\over {10}}, -{4\over 5}\rangle$
and $\vert -{1\over{10}}, {1\over 5}\rangle$ are not in the same representation
module, as is the case for $\vert -{1\over{10}}, q \rangle$
and $\vert -{1\over{10}}, q\pm 1 \rangle$ if $q$ is generic.
\bigskip\noindent
For ${\cal SW}(1,2)$ and $c=-9$ we obtained the following results:
\medskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfill#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $(h,q)$ && \omit &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& $(0,0),(-{3\over 8},0),
(-{1\over 4},\pm{1\over 2})$ &&
$\langle h,q\vert \phi_0^+ \phi_0^- \vert h,q\rangle=
\langle h,q\vert \phi_0^- \phi_0^+ \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $({1\over 8},\pm 2)$ &&
$\langle h,q\vert \phi_0^\pm \phi_0^\mp \vert h,q\rangle \not=0 ,\
\langle h,q\vert \phi_0^\mp \phi_0^\pm \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $(1,\pm 2)$ &&
$\langle h,q\vert \phi_0^\pm \phi_0^\mp \vert h,q\rangle \not= 0,\
\langle h,q\vert \phi_0^\mp \phi_0^\pm \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $(-{1\over 2},q)\quad {\rm for\ } q\in{\mathchoice{\BRT}{\BRT}{\BRS}{\BRSS}}$ &&
$\langle h,q\vert \phi_0^+ \phi_0^- \vert h,q\rangle \sim
(q-3)^2(q-1)^2$ &\cr
& \omit &&
$\langle h,q\vert \phi_0^- \phi_0^+ \vert h,q\rangle \sim
(q+3)^2(q+1)^2$ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 5: ${\cal SW}(1,2) {\rm \ at\ } c=-9,Q=4$}}}
\medskip\noindent
The implication of the relations for $h=-{1\over 2}$
is similar to the previous case. Note that the
representations with positive $h$ are doubly degenerate.
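The zeros of the polynomials in tables 4 and 5 can be verified directly; a short check (up to the omitted normalization; our own addition) makes the stated partial removal of the degeneracy explicit:

```python
from fractions import Fraction as F

# Table 4, h = -1/10: <psi_0 psibar_0> ~ (1-5q)(2-5q)(3-5q)(4-5q)
table4 = lambda q: (1 - 5*q) * (2 - 5*q) * (3 - 5*q) * (4 - 5*q)
# the expectation value vanishes exactly at q = 1/5, 2/5, 3/5, 4/5 ...
assert all(table4(F(n, 5)) == 0 for n in (1, 2, 3, 4))
# ... while for generic q the degeneracy remains infinite:
assert table4(F(1, 2)) != 0

# Table 5, h = -1/2: <phi_0^+ phi_0^-> ~ (q-3)^2 (q-1)^2, zeros at q = 1, 3
table5 = lambda q: (q - 3)**2 * (q - 1)**2
assert table5(1) == 0 and table5(3) == 0
assert table5(-1) != 0
```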
\vfill
\eject
\smallskip\noindent
{\bf (3)} $c=3\left(1-\Delta\right),Q=0,\quad 2\Delta\in{\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}$
\bigskip\noindent
These models are pathological because their spectrum
is not bounded from below.
\bigskip\noindent
\vbox{
\line{\hfill\psfig{figure=susy3.ps}\hfill}
\vskip -5cm
\line{\hskip 8cm\hbox{$\scriptstyle
\vert {1\over 4},-{1\over 2}\rangle\sim
{\o\psi}_0\vert {1\over 4},+{1\over 2}\rangle$}\hfil}
}
\vskip 4.5cm
\bigskip\noindent
To be explicit, from our calculations we could not exclude the
following representations:
\bigskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfill#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $(h,q)$ && \omit &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& $(0,0) ,\ (-{1\over 3}q^2-{1\over 6},q)\quad {\rm for\ }q\in{\mathchoice{\BRT}{\BRT}{\BRS}{\BRSS}}$ &&
$\langle h,q\vert \psi_0 \o{\psi}_0 \vert h,q\rangle =
\langle h,q\vert \o{\psi}_0 {\psi}_0 \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $({1\over 4},+{1\over 2})$ &&
$\langle h,q\vert \o{\psi}_0 {\psi}_0 \vert h,q\rangle = 0,\
\langle h,q\vert \psi_0 \o{\psi}_0 \vert h,q\rangle \not= 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $({1\over 4},-{1\over 2})$ &&
$\langle h,q\vert \o{\psi}_0 {\psi}_0 \vert h,q\rangle \not= 0,\
\langle h,q\vert \psi_0 \o{\psi}_0 \vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 6: ${\cal SW}(1,{3\over 2})\ {\rm at\ }
c=-{3\over 2},Q=0$ }}}
\bigskip\noindent
{\bf (4)} $c\ge3, Q {\rm \ rational}$
\bigskip\noindent
For these algebras there exists no known analogue in the
theory of $N=0$ and $N=1$ $\cal W$-algebras with two generators
(see e.g.\ \q{1} and references therein).
All representations of these algebras have a similar structure.
The theories are not rational and have the property that infinitely
many representations with fixed $U(1)$-charge $q$ exist, where
$q$ takes finitely many rational values. The representations with fixed
$h$ have a finitely degenerate lowest weight.
Furthermore, in all cases the field ${\cal N}_s(\Phi{\cal L})$
vanishes identically.
\medskip\noindent
For the subset $c=3d$ ($d\in{\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}$) it has been shown explicitly in \q{16}
that the extension of the $N=2$ SVIR by the local chiral
primaries $(\Delta,Q)=(c/6,\pm c/3)$ exists.
In $\q{16}$ the null field ${\cal N}_s(\Phi{\cal L})$ has been
used to find the unitary representations for $c=3d$.
Using the methods described above we were able to reproduce
these results without exploiting the null field:
\vfill\eject
\smallskip\noindent
\vbox{
\line{\hfill\psfig{figure=susy4.ps}\quad
\quad \quad\psfig{figure=susy5.ps}\hfill}
\vskip -4.5cm
\line{\hbox{$\scriptstyle
\vert h,-1\rangle\sim{\psi}_0^-\vert h,1\rangle,\ h>{1\over 2}$}
\hskip 8.5cm
\hbox{$\scriptstyle \vert 1,-2\rangle\sim{\phi}_0^-\vert 1,2\rangle$}
\hfil}}
\vskip 4.5cm
\smallskip\noindent
The following tables show the concrete results in more detail.
Note that we give only the unitary irreducible representations,
i.e.\ those satisfying the condition $h\ge {1\over 2}\vert q\vert$.
\smallskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfill#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $(h,q)$ && \omit &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
&$({1\over 2},+1),\ (h,-1),(h,0)\quad{\rm for\ }{{|q|}\over 2}\le h<\infty$&&
$\langle h,q\vert \o{\psi}_0^+ {\psi}_0^- \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
&$({1\over 2},-1),\ (h,0),(h,+1)\quad{\rm for\ }{{|q|}\over 2}\le h<\infty$&&
$\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle = 0 $ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 7: ${\cal SW}(1,{3\over 2})$
{\rm at }$c=9,Q=3$}}}
\smallskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfill#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $(h,q)$ && \omit &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& $(h,0),(h,\pm 1)\quad {\rm for\ } {{|q|}\over 2}\le h<\infty$ &&
$\langle h,q\vert \phi_0^+ \phi_0^- \vert h,q\rangle =
\langle h,q\vert \phi_0^- \phi_0^+ \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $(1,-2)$ && $\langle h,q\vert \phi_0^+ \phi_0^- \vert h,q\rangle = 0,\
\langle h,q\vert \phi_0^- \phi_0^+ \vert h,q\rangle \not= 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $(1,+2)$ && $\langle h,q\vert \phi_0^+ \phi_0^- \vert h,q\rangle \not= 0,\
\langle h,q\vert \phi_0^- \phi_0^+ \vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 8: ${\cal SW}(1,2)$ {\rm at } $c=12,Q=4$}}}
\medskip\noindent
For the new algebras with $c\ge 3$ we obtained a very similar structure
for the representations. As for the $c=3d$ algebras the theories are not
rational and we also obtain a quantization of the $U(1)-$charge $q$.
\medskip\noindent\smallskip\noindent
\vbox{
\line{\hfill\psfig{figure=susy6.ps}\quad
\quad \quad\psfig{figure=susy7.ps}\hfill}
\vskip -4.5cm
\line{\hbox{$\scriptstyle
\vert {3\over 4},-{3\over 2}
\rangle\sim\o{\psi}^-_0\vert {3\over 4},{3\over 2}\rangle$}
\hskip 8.5cm
\hbox{}
\hfil}
\line{\hbox{$\scriptstyle
\vert h,-{1\over 2}\rangle\sim{\psi}^-_0\vert h,
{1\over 2}\rangle,\ h>{1\over 4}$}}
}
\vskip 4.5cm
\vfill\eject
\smallskip\noindent
The explicit data are given in the following two tables:
\bigskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfill#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $(h,q)$ && \omit &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& $({1\over 2},\pm 1),\ (h,0)\quad {\rm for\ } 0\le h < \infty$ &&
$\langle h,q\vert \o{\psi}_0^+ {\psi}_0^- \vert h,q\rangle =
\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr
& \omit &&
$\langle h,q\vert {\psi}_0^+ \o{\psi}_0^- \vert h,q\rangle =
\langle h,q\vert \o{\psi}_0^- {\psi}_0^+\vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $(h,+{1\over 2})\quad {\rm for\ } {1\over 4}\le h < \infty$ &&
$\langle h,q\vert \o{\psi}_0^+ {\psi}_0^- \vert h,q\rangle \sim (4h-1)^2,\
\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr
&\omit && $\langle h,q\vert {\psi}_0^+ \o{\psi}_0^- \vert h,q\rangle =
\langle h,q\vert \o{\psi}_0^- {\psi}_0^+\vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $(h,-{1\over 2})\quad {\rm for\ } {1\over 4}\le h < \infty$ &&
$\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle\sim (4h-1)^2,\
\langle h,q\vert \o{\psi}_0^+ {\psi}_0^- \vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && $\langle h,q\vert {\psi}_0^+ \o{\psi}_0^- \vert h,q\rangle =
\langle h,q\vert \o{\psi}_0^- {\psi}_0^+\vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $({3\over 4}, +{3\over 2})$ && $\langle h,q\vert
{\psi}_0^+ \o{\psi}_0^- \vert h,q\rangle \not= 0,\
\langle h,q\vert \o{\psi}_0^- {\psi}_0^+\vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && $\langle h,q\vert \o{\psi}_0^+ {\psi}_0^- \vert h,q\rangle =
\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $({3\over 4}, -{3\over 2})$ && $\langle h,q\vert
{\psi}_0^+ \o{\psi}_0^- \vert h,q\rangle = 0,\
\langle h,q\vert \o{\psi}_0^- {\psi}_0^+\vert h,q\rangle \not= 0$ &\cr
height2pt&\omit&&\omit&\cr
& \omit && $\langle h,q\vert \o{\psi}_0^+ {\psi}_0^- \vert h,q\rangle =
\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 9: ${\cal SW}(1,{5\over 2})$
{\rm at } $c={9\over 2},Q=2$}}}
\medskip\noindent
\centerline{\vbox{\hbox{\vbox{\offinterlineskip
\defheight2pt&\omit&&\omit&\cr{height2pt&\omit&&\omit&\cr}
\def\tablespace\noalign{\hrule}\tablespace{height2pt&\omit&&\omit&\cr\noalign{\hrule}height2pt&\omit&&\omit&\cr}
\hrule\halign{&\vrule#&\strut\hskip0.2cm\hfill#\hfill\hskip0.2cm\cr
height2pt&\omit&&\omit&\cr
& $(h,q)$ && \omit &\cr
\tablespace\noalign{\hrule}\tablespace\tablerule
& $(h,0)\quad {\rm for\ } 0\le h < \infty$ &&
$6h^3+5\langle h,q\vert \o{\psi}_0^+{\psi}_0^-\vert h,q\rangle=0,\
\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle = 0 $ &\cr
height2pt&\omit&&\omit&\cr
& \omit && $\langle h,q\vert {\psi}_0^+ \o{\psi}_0^- \vert h,q\rangle =
\langle h,q\vert \o{\psi}_0^- {\psi}_0^+\vert h,q\rangle = 0$ &\cr
\tablespace\noalign{\hrule}\tablespace
& $({1\over 6},\pm{1\over 3}),({1\over 2},\pm{1\over 3}),
({1\over 3},\pm{2\over 3}),({1\over 2},\pm 1)$ &&
$\langle h,q\vert \o{\psi}_0^+{\psi}_0^-\vert h,q\rangle=
\langle h,q\vert {\psi}_0^- \o{\psi}_0^+ \vert h,q\rangle = 0 $ &\cr
height2pt&\omit&&\omit&\cr
& \omit && $\langle h,q\vert {\psi}_0^+ \o{\psi}_0^- \vert h,q\rangle =
\langle h,q\vert \o{\psi}_0^- {\psi}_0^+\vert h,q\rangle = 0$ &\cr
height2pt&\omit&&\omit&\cr}\hrule}}
\hbox{\hskip 0.5cm Table 10: ${\cal SW}(1,{5\over 2})$
{\rm at } $c=3,Q=1$ } }}
\bigskip\noindent\smallskip\noindent
In the following we will argue that all models with $c\ge 3$
can be understood from the existence of the spectral flow in
$N=2$ superconformal theories (see eq.\ (7)).
\smallskip\noindent
For $\eta\in {\mathchoice{\BZT}{\BZT}{\BZS}{\BZSS}}$ the Neveu-Schwarz sector flows to itself.
Starting with the vacuum state $(h,q)=(0,0)$ and applying
successively the spectral flow with $\eta=\pm 1$ one obtains the
following tower of $N=2$ Virasoro primary states
\smallskip\noindent
$$ h_m={c\over 6}m^2-{(m-1)^2\over 2}, \quad
q_m=\pm \left({mc\over 3}-m+1\right) \quad\quad m\in{\mathchoice{\BNT}{\BNT}{\BNS}{\BNSS}}.
\eqno{\rm (8)}$$
\smallskip\noindent
Note that this spectral flow does not, in general, map lowest weight
states onto lowest weight states, but these subtleties can easily be
taken care of. For $c=3d$ it was shown in \q{16} that
the vacuum representation of the extension of the $N=2$ SVIR
by the local chiral primaries $(h,q)=(h_1=c/6,q_1=\pm c/3)$
is the direct sum of an infinite number of $N=2$ Virasoro
representations with lowest weights $(h_m, q_m)$.
This implies that the extension of the $N=2$ SVIR by the two fields
$(h,q)=(h_m,\pm \vert q_m \vert )$ also exists for $c=3d$ where
the generators can be written as normal ordered products of
the fundamental fields $(h_1,\pm\vert q_1\vert)$. Furthermore, in all
these theories the field ${\cal N}_s(\Phi{\cal L})$ vanishes.
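Both the induced action of the flow (7) on the quantum numbers and the tower (8) are elementary to evaluate; the following sketch (our own addition) checks that one unit of flow maps the vacuum to the chiral primary $(c/6,\pm c/3)$, that flowing back is the identity, and that the $m=1$ members reproduce the $c\ge 3$ entries of table 3:

```python
from fractions import Fraction as F

def flow(h, q, c, eta):
    """Induced map of the spectral flow (7) on the quantum numbers (h, q)."""
    return h + eta * q + F(c, 6) * eta**2, q + F(c, 3) * eta

def tower(c, m):
    """Lowest weights (h_m, |q_m|) of the tower (8)."""
    return F(c, 6) * m**2 - F((m - 1)**2, 2), abs(F(m * c, 3) - m + 1)

# one unit of flow maps the vacuum to the chiral primary (c/6, c/3):
assert flow(0, 0, 9, 1) == tower(9, 1) == (F(3, 2), 3)
# flowing forward and back is the identity on (h, q):
assert flow(*flow(F(1, 2), 1, 9, 1), c=9, eta=-1) == (F(1, 2), 1)
# the m = 1 members reproduce c >= 3 algebras of table 3:
assert tower(12, 1) == (2, 4)        # SW(1,2)   at c = 12, Q = 4
assert tower(15, 1) == (F(5, 2), 5)  # SW(1,5/2) at c = 15, Q = 5
assert tower(9, 2) == (F(11, 2), 5)  # second member of the c = 9 tower
```

Note that the naive iteration of the flow differs from (8) for $m\ge 2$ by exactly the $(m-1)^2/2$ shift, reflecting the fact mentioned above that the flow does not always map lowest weight states onto lowest weight states.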
\smallskip\noindent
Application of these arguments to $d=1$ implies, for instance, that all
${\cal SW}(1,\Delta)$ algebras with $\Delta\in{\mathchoice{\BZT}{\BZT}{\BZS}{\BZSS}}+1/2$ and $Q=1$ exist for $c=3$,
in agreement with our calculations.
Assuming that the construction of $\q{16}$ can be
generalized to arbitrary rational values of $c\ge 3$, yielding
generically a parafermionic fundamental algebra \q{24}, we
conjecture the ${\cal SW}(1,\Delta)$ algebra to exist for
$$ c={6\over m^2}\left( \Delta+{(m-1)^2\over 2}\right),\quad
q=\pm {1\over m}(2\Delta-m+1)
\eqno{\rm (9)}$$
where $m\in\lbrace 1,\ldots, \lbrack \Delta+{1\over2} \rbrack\rbrace$.
\smallskip\noindent
We emphasize that the results of table 3 are in perfect agreement
with this conjecture.
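Indeed, the conjecture (9) can be checked mechanically; the following script (our own addition) reproduces every $c\ge 3$ entry of table 3:

```python
from fractions import Fraction as F

def conjecture(Delta):
    """(c, |q|) pairs of eq. (9) for SW(1, Delta), m = 1, ..., [Delta + 1/2]."""
    pairs = []
    for m in range(1, int(Delta + F(1, 2)) + 1):
        c = F(6, m * m) * (Delta + F((m - 1)**2, 2))
        q = F(2 * Delta - m + 1, m)
        pairs.append((c, q))
    return pairs

# every c >= 3 entry of table 3 is reproduced:
assert conjecture(F(3, 2)) == [(9, 3), (3, 1)]
assert conjecture(2) == [(12, 4), (F(15, 4), F(3, 2))]
assert conjecture(F(5, 2)) == [(15, 5), (F(9, 2), 2), (3, 1)]
assert conjecture(3) == [(18, 6), (F(21, 4), F(5, 2)), (F(10, 3), F(4, 3))]
```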
\bigskip\noindent
\section{4.\ Conclusion}
\medskip\noindent
Investigating the representation theory of $N=2$ $\cal W$-algebras with
two generators and vanishing self-coupling constant we found that
all known algebras can be arranged into four series. Only the algebras
existing for central charges in the unitary minimal series
of $N=2$ SVIR yield rational models which can be explained by
the non-diagonal modular invariants of $N=2$ SVIR. The $\cal W$-algebras
existing for $c\ge 3$ can be interpreted by the existence of the spectral
flow for $N=2$ supersymmetric CFTs. Every extension of the $N=2$ super
Virasoro algebra by the spectral flow operators implies a whole hierarchy
of $\cal W$-algebras existing for the same central charge. From our results
we expect that these algebras do not admit rational models. However, one
may expect a $U(1)-$charge quantization of the representations of these
algebras, since this has turned out to be true in all cases considered so far.
In order to find new rational $N=2$ SCFTs one has to investigate
$N=2$ $\cal W$-algebras with more generators.
We repeat that the only rational models known so far lie in the unitary
minimal series of $N=2$ SVIR. However, the question of whether unitarity is a
necessary condition for an $N=2$ supersymmetric CFT to be rational lies
beyond the scope of this letter and should
be investigated in the near future.
\bigskip\noindent
{\bf Acknowledgements:} It is a pleasure to thank
J.M.\ Figueroa-O'Farrill and W.\ Nahm for interesting
discussions and A.\ Honecker and R.\ Schimmrigk for
careful reading of the manuscript.
\bigskip\noindent\medskip\noindent
\section{References}
\tolerance=10000
\medskip\noindent
\bibitem{1} P.\ Bouwknegt, K.\ Schoutens,
{\it $\cal W$-Symmetry in Conformal Field Theory},
Phys.\ Rep.\ {\bf 223} (1993) p.\ 183
\bibitem{2} T.\ Inami, Y.\ Matsuo, I.\ Yamanaka,
{\it Extended Conformal Algebra with $N=2$ Supersymmetry},
Int.\ J.\ Mod.\ Phys.\ {\bf A5} (1990) p.\ 4441
\bibitem{3} H.\ Lu, C.N.\ Pope, L.J.\ Romans, X.\ Shen, X.J.\ Wang,
{\it Polyakov Construction of the $N=2$ Super-${\cal W}_3$ Algebra},
Phys.\ Lett.\ {\bf B264} (1991) p.\ 91
\bibitem{4} L.J.\ Romans, {\it The $N=2$ Super-${\cal W}_3$ Algebra},
Nucl.\ Phys.\ {\bf B369} (1992) p.\ 403
\bibitem{5} D.\ Nemeschansky, S.\ Yankielowicz,
{\it $N=2$ $\cal W$-Algebras, Kazama-Suzuki Models and Drinfeld Sokolov
Reduction}, preprint USC-91/005
\bibitem{6} K.\ Ito, {\it Quantum Hamiltonian Reduction
and $N=2$ Coset Models}, Phys.\ Lett.\ {\bf B259} (1991) p.\ 73
\bibitem{7} R.\ Blumenhagen, {\it $N=2$ Supersymmetric $\cal W$-Algebras},
Nucl.\ Phys.\ {\bf B405} (1993) p.\ 744
\bibitem{8} T.\ Banks, L.J.\ Dixon, D.\ Friedan, E.\ Martinec,
{\it Phenomenology and Conformal Field Theory Or
Can String Theory predict the Weak Mixing Angle?},
Nucl.\ Phys.\ {\bf B299} (1988) p.\ 613
\bibitem{9} D. Gepner, {\it Space-time Supersymmetry in
Compactified String Theory and Superconformal Models},
Nucl.\ Phys.\ {\bf B296} (1988) p.\ 757
\bibitem{10} Y.\ Kazama, H.\ Suzuki, {\it New $N=2$
Superconformal Field Theories and Superstring Compactification},
Nucl.\ Phys.\ {\bf B321} (1989) p.\ 232
\bibitem{11} M.\ Bershadsky, W.\ Lerche, D.\ Nemeschansky, N.P.\ Warner,
{\it Extended $N=2$ Superconformal Structure of Gravity and
$\cal W$-Gravity Coupled to Matter}, Nucl.\ Phys.\ {\bf B401} (1993) p.\ 304
\bibitem{12} R.\ Varnhagen, {\it Characters and Representations of
New Fermionic $\cal W$-Algebras}, Phys.\ Lett.\ {\bf B275} (1992) p.\ 87
\bibitem{13} W.\ Eholzer, M.\ Flohr, A.\ Honecker, R.\ H\"ubel,
W.\ Nahm, R.\ Varnhagen, {\it Representations of $\cal W$-Algebras
with Two Generators and New Rational Models},
Nucl.\ Phys.\ {\bf B383} (1992) p.\ 249
\bibitem{14} F.\ Ravanini, S.-K.\ Yang, {\it Modular Invariance
in $N=2$ Superconformal Field Theories},
Phys.\ Lett.\ {\bf B195} (1987) p.\ 202
\bibitem{15} Z.\ Qiu, {\it Modular Invariant Partition
Functions for $N=2$ Superconformal Field Theories},
Phys.\ Lett.\ {\bf B198} (1987) p.\ 497
\bibitem{16} S.\ Odake, {\it $c=3d$ Conformal Algebra with
Extended Supersymmetry}, Mod.\ Phys.\ Lett.\ {\bf A5} (1990) p.\ 561
\bibitem{17} T.\ Eguchi, H.\ Ooguri, A.\ Taormina, S.-K.\ Yang,
{\it Superconformal Algebras and String
Compactification on Manifolds with $SU(n)$ Holonomy},
Nucl.\ Phys.\ {\bf B315} (1989) p.\ 193
\bibitem{18} P.\ West, {\it Introduction to Supersymmetry
and Supergravity}, second edition 1990, World Scientific, Singapore
\bibitem{19} G.\ Mussardo, G.\ Sotkov, M.\ Stanishkov,
{\it $N=2$ Superconformal Minimal Models},
Int.\ J.\ Mod.\ Phys.\ {\bf A4} (1989) p.\ 1135
\bibitem{20} A.\ Honecker, {\it A Note on the Algebraic
Evaluation of Correlators in Local Chiral Conformal Field Theory},
preprint BONN-HE-92-25, hep-th/9209029
\bibitem{21} P.\ Ginsparg, {\it Curiosities at $c=1$},
Nucl.\ Phys.\ {\bf B295} (1988) p.\ 153
\bibitem{22} E.B.\ Kiritsis, {\it Proof of the Completeness of the
Classification of Rational Conformal Theories with $c=1$},
Phys.\ Lett.\ {\bf B217} (1989) p.\ 427
\bibitem{23} M.\ Flohr, {\it $\cal W$-Algebras, New Rational Models
and Completeness of the $c=1$ Classification},
Commun.\ Math.\ Phys.\ {\bf 157} (1993) p.\ 179
\bibitem{24} V.A.~Fateev, A.B.~Zamolodchikov,
{\it Nonlocal (Parafermion) Currents in Two-Dimensional Conformal
Quantum Field Theory and Self-Dual Critical Points
in ${\mathchoice{\BZT}{\BZT}{\BZS}{\BZSS}}_N$-Symmetric Statistical Systems},
Sov.\ Phys.\ JETP {\bf 62} (1985) p.\ 215
\vfill
\end
\section{Introduction}
Environments with (locally) planar-layered profiles are encountered in diverse applications such as geophysical exploration, ground penetrating radar, conformal antenna design, and so on~\cite{sainath,sainath3,lambot1}. To facilitate electromagnetic (EM) radiation analysis in such environments, eigenfunction (plane wave) expansions (PWE) have long been used because of their relative computational efficiency versus brute-force numerical methods such as finite difference and finite element methods. Moreover, PWE can accommodate linear, but otherwise arbitrary anisotropic layers characterized by arbitrary (diagonalizable) 3$\times$3 material tensors~\cite{sainath}. This proves useful when rigorously modeling planar media simultaneously exhibiting both electrical and magnetic anisotropy, such as (i) isoimpedance beam-shifting devices and (to facilitate proximal antenna placement) ground-plane-coating slabs systematically designed via transformation optics (T.O.) techniques~\cite{sainath3,pendry5}, (ii) more practically realizable (albeit not necessarily isoimpedance) approximations to T.O.-inspired media such as metamaterial-based thin, wide-angle, and polarization-robust absorbers to facilitate (for example) radar cross section control~\cite{huangfu}, as well as (iii) numerous other media such as certain types of liquid crystals, elastic media subject to small deformations, and superconductors at high temperatures~\cite{boulanger1}. 
These modeling scenarios, amongst others, share in common the potential presence of a particular class of anisotropic media in which the magnetic permeability ($\boldsymbol{\bar{\mu}}_r$) and electric permittivity ($\boldsymbol{\bar{\epsilon}}_r$) tensor properties are ``matched" to each other and hence together define media supporting four ``degenerate" plane wave eigenfunctions that, while possessing four linearly independent field polarization states (eigenvectors) as usual, share only two unique (albeit, critically still, \emph{non-defective}) eigenvalues~\cite{felsen}. Alternatively stated, propagation characteristics within such media are still (in general) dependent on propagation direction but \emph{independent} of polarization, eliminating ``double refraction" (``birefringence") effects~\cite{felsen,boulanger1}. Hence our proposed moniker ``Non-Birefringent Anisotropic Medium" (NBAM), rather than the ``pseudo-isotropic" label~\cite{boulanger1}.
From an analytical standpoint, said PWE constitute spectral integrals exactly quantifying the radiated fields~\cite{sainath3}. Except for some very simple cases however, these expansions must almost always be evaluated by means of numerical quadratures or cubatures, whose robust computation (with respect to varying source and layer properties) is far from trivial and requires careful choice of appropriate quadrature rules, complex-plane integration contours, etc. to mitigate discretization and truncation errors as well as accelerate convergence~\cite{sainath,sainath3,mich2}. In addition to such considerations of primarily {\it numerical} character, a distinct problem occurs, due to said eigenvalue degeneracy, when sources radiate within NBAM layers. Indeed, this case requires proper analytical ``pre-treatment" of the fundamental spectral-domain field expressions to avoid two sources of ``breakdown": (i) Numerically unstable calculations (namely, divisions by zero) during the computation chain, as well as (ii) Corruption of the correct form of the eigenfunctions, viz. $z$exp[$ik_zz$] instead of the proper form exp[$ik_zz$], the former resulting from a naive, ``blanket" application of Cauchy's integral theorem to the canonical field expressions~\cite{hanson1,Weber1980}.
To this end, we first show the key results detailing the degenerate ``direct" (i.e., homogeneous medium) radiated fields in the ``principal material basis" (PMB) representation with respect to which the material tensors are assumed simultaneously diagonalized by an orthogonal basis~\cite{pendry5}.\footnote{\label{ortho}Note: The material tensor eigenvectors $\{\bold{\hat{v}}_1,\bold{\hat{v}}_2,\bold{\hat{v}}_3\}$ are \emph{not} to be confused with the field polarization eigenvectors.} Subsequently, we transform these PMB expressions to the Cartesian basis (the PWE's employed basis). Finally, we employ a robust, numerically-stable NBAM polarization decomposition scheme to obtain the Cartesian-basis direct field polarization amplitudes. The two-dimensional (2-D) Fourier integral-based PWE algorithm, resulting from implanting these derived field expressions into an otherwise highly robust PWE algorithm~\cite{sainath3}, comprises this paper's central contribution.
\section{\label{intro2}Problem Statement}
We assume the $\mathrm{e}^{-i\omega t}$ time convention in what follows.
Within a homogeneous medium of material properties $\{\boldsymbol{\bar{\epsilon}}_{r},\boldsymbol{\bar{\mu}}_{r} \}$, the electric field $\bm{\mathcal{E}}(\bold{r})$ radiated by electric ($\bm{\mathcal{J}}$) and (equivalent) magnetic ($\bm{\mathcal{M}}$) current sources satisfies\footnote{$k_0=\omega\sqrt{\mu_0\epsilon_0}$, $\epsilon_0$, $\mu_0$, $\eta_0=\sqrt{\mu_0/\epsilon_0}$, $\boldsymbol{\bar{\epsilon}}_{r}$, and $\boldsymbol{\bar{\mu}}_{r}$ are the vacuum wave number, vacuum permittivity, vacuum permeability, vacuum plane wave impedance, NBAM relative permittivity tensor, and NBAM relative permeability tensor, respectively. An infinitesimal point/Hertzian dipole current resides at $\bold{r}'=(x',y',z')$, the observation point resides at $\bold{r}=(x,y,z)$, $\Delta \bold{r}=\bold{r}-\bold{r}'=(\Delta x, \Delta y, \Delta z)$, $u(\cdot )$ denotes the Heaviside step function, and $\bold{k}=(k_x,k_y,k_z)$ denotes the wave vector. Furthermore, $\boldsymbol{\bar{\tau}}_{r}=\boldsymbol{\bar{\mu}}_{r}^{-1}$ and $d_0=k_0^2\epsilon_{zz}(\tau_{xy}\tau_{yx}-\tau_{xx}\tau_{yy})$, where $\gamma_{ts}=\bold{\hat{t}}\cdot\boldsymbol{\bar{\gamma}}_{r}\cdot\bold{\hat{s}}$ ($\gamma=\tau,\epsilon$; $t,s=x,y,z$). All derivations are performed for the electric field, but duality in Maxwell's Equations makes immediate the magnetic field solution. Finally, a tilde over variables denotes they are Fourier/wave-number domain quantities.}
\begin{equation}\label{WaveOp}
\bm{\mathcal{\bar{A}}}(\cdot)=\nabla \times \boldsymbol{\bar{\mu}}^{-1}_{r} \cdot \nabla \times (\cdot) -
k_0^2 \boldsymbol{\bar{\epsilon}}_{r}
\cdot (\cdot)\end{equation}
\begin{equation}\bm{\mathcal{\bar{A}}} (\bm{\mathcal{E}}) = ik_0\eta_0\bm{\mathcal{J}}-\nabla \times
\boldsymbol{\bar{\mu}}^{-1}_{r} \cdot \bm{\mathcal{M}} \end{equation}
and can be expressed via a 3-D Fourier integral over
the field's plane wave constituents $\{\bold{\tilde{E}}(\bold{k})\mathrm{e}^{i\bold{k} \cdot \bold{r}}\}$:\footnote{\label{deteq}Adj($\cdot$) and Det($\cdot$) denote the adjugate and determinant of said argument, respectively. Det($\bold{\tilde{\bar{A}}}$)$=d_0(k_z-\tilde{k}_{1z})(k_z-\tilde{k}_{2z})(k_z-\tilde{k}_{3z})(k_z-\tilde{k}_{4z})$, where $\{\tilde{k}_{nz}\}$ are the eigenvalues (i.e., longitudinal [$z$] propagation constants).}
\begin{align}
\bold{\tilde{\bar{A}}}^{-1}&=\mathrm{Adj}\left(\bold{\tilde{\bar{A}}}\right)/\mathrm{Det}\left(\bold{\tilde{\bar{A}}}\right) \\
\bold{\tilde{E}}(\bold{k})&=\bold{\tilde{\bar{A}}}^{-1}\cdot \left[ ik_0\eta_0\bold{\tilde{J}}-\tilde{\nabla} \times \boldsymbol{\bar{\mu}}^{-1}_{r} \cdot \bold{\tilde{M}} \right] \\
\bm{\mathcal{E}}(\bold{r})&= \left(\frac{1}{2\pi}\right)^3\iiint\limits_{-\infty}^{+\infty} \bold{\tilde{E}}(\bold{k}) \, \mathrm{e}^{i\bold{k} \cdot \bold{r}} \, \mathrm{d}k_z \, \mathrm{d}k_x \, \mathrm{d}k_y \numberthis \label{fte}
\end{align}
where, anticipating planar layering along $z$, the $k_z$ spectral integral is ``analytically" evaluated for every ($k_x,k_y$) doublet manifest in the (typically \emph{numerically}) evaluated outer 2-D Fourier integral. That is to say, by ``analytically" evaluated we mean that the general (symbolic) closed-form solution of the $k_z$ integral for arbitrary $(k_x,k_y)$ doublet, obtained by equivalently viewing the $k_z$ real-axis integral as a contour integral evaluated using Jordan's Lemma and residue calculus, is well-known and can be numerically evaluated at the $(k_x,k_y)$ doublets~\cite{sainath,chew}. In particular, analytical evaluation of the $k_z$ integral yields the ``direct" field $\bm{\mathcal{E}}^d(\bold{r})$~\cite{sainath}:
\begin{multline}
\bm{\mathcal{E}}^d(\bold{r})=\frac{i}{(2\pi)^{2}} \iint \limits_{-\infty}^{+\infty}\Bigg[u(\Delta z)\sum_{n=1}^2{\tilde{a}_{n}^d\bold{\tilde{e}}_{n}\mathrm{e}^{i\tilde{k}_{nz}\Delta z}} + \\
u(-\Delta z) \sum_{n=3}^4{\tilde{a}_{n}^d\bold{\tilde{e}}_{n}\mathrm{e}^{i\tilde{k}_{nz}\Delta z}} \Bigg] \mathrm{e}^{ik_x\Delta x+ik_y\Delta y} \, \mathrm{d}k_x \, \mathrm{d}k_y \label{Eeqn1} \end{multline}
where $\tilde{a}_{n}^d(k_x,k_y)$ is the (source dependent) direct field amplitude of the $n$th polarization, while $\bold{\tilde{e}}_{n}(k_x,k_y)$ and $\tilde{k}_{nz}(k_x,k_y)$ are (resp.) the electric field eigenvector (i.e., polarization state) and eigenvalue of the $n$th mode ($n=1,2,3,4$)~\cite{sainath}. Modes labeled with $n=1,2$ correspond to up-going polarizations, and similarly for down-going modes ($n=3,4$).\footnote{Please see~\cite{sainath,chew} for other relevant layered-medium expressions.}
The problem with the canonical \emph{numerical implementation} of this residue calculus approach lies in its tacit assumption of non-degeneracy (distinctness) in the eigenvalues $\{\tilde{k}_{1z},\tilde{k}_{2z},\tilde{k}_{3z},\tilde{k}_{4z}\}$, which does not hold for NBAM media. As an illustration of the polarization-independent dispersion behavior of NBAM, consider the dispersion relations of a uniaxial-anisotropic medium slab $\{\boldsymbol{\bar{\epsilon}}_{r}=\mathrm{Diag}\left[a,a,b\right],\boldsymbol{\bar{\mu}}_{r}=\mathrm{Diag}\left[c,c,d\right]\}$ ($k_{\rho}^2=k_x^2+k_y^2$)~\cite{felsen,chew}: $\tilde{k}_{1z}=\left[k^2_0ac-(c/d)k_{\rho}^2\right]^{1/2}$, $\tilde{k}_{2z}=\left[k^2_0ac-(a/b)k_{\rho}^2\right]^{1/2}$, $\tilde{k}_{3z}=-\tilde{k}_{1z}$, and $\tilde{k}_{4z}=-\tilde{k}_{2z}$. Setting $\{a=\bar{y}^2c,b=\bar{y}^2d\}$ ($\bar{y}$ is an arbitrary, non-zero multiplicative constant) renders $\tilde{k}_z^+ = \tilde{k}_{1z}=\tilde{k}_{2z}$ and $\tilde{k}_z^- = \tilde{k}_{3z}=\tilde{k}_{4z}$, demonstrating the plane wave propagation direction \emph{dependent}, but polarization \emph{independent}, dispersion characteristics of uniaxial NBAM~\cite{boulanger1}. This conclusion applies also for more general uniaxial NBAM material tensors possessing PMB rotated with respect to the Cartesian basis~\cite{boulanger1}. Similarly, for biaxial NBAM with PMB-expressed tensors $\{\boldsymbol{\bar{\mu}}_{r}^{\mathrm{pmb}}=\mathrm{Diag}\left[a,b,c\right],\boldsymbol{\bar{\epsilon}}_{r}^{\mathrm{pmb}}=\bar{y}^2\boldsymbol{\bar{\mu}}_{r}^{\mathrm{pmb}}\}$, the polarization-independent dispersion relations are:
\begin{align}
\tilde{k}_{1z}^{\mathrm{pmb}}&=\left[(\bar{y}k_0)^2ab-(a/c)k_x^2-(b/c)k_y^2\right]^{1/2} \numberthis \label{e1}\\
\tilde{k}_{3z}^{\mathrm{pmb}}&=-\left[(\bar{y}k_0)^2ab-(a/c)k_x^2-(b/c)k_y^2\right]^{1/2} \numberthis \label{e2}
\end{align}
with $\tilde{k}_z^+ =\tilde{k}_{2z}^{\mathrm{pmb}}=\tilde{k}_{1z}^{\mathrm{pmb}}$ and $\tilde{k}_z^- =\tilde{k}_{4z}^{\mathrm{pmb}}=\tilde{k}_{3z}^{\mathrm{pmb}}$.
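The collapse of the two branches under the matching condition is easily checked numerically; the following sketch (all parameter values are illustrative) evaluates the squared uniaxial dispersion relations quoted above, with and without $\{a=\bar{y}^2c,\,b=\bar{y}^2d\}$:

```python
def uniaxial_kz_squared(k0, krho, a, b, c, d):
    """Squared longitudinal wave numbers k1z^2, k2z^2 for
    eps_r = Diag[a, a, b], mu_r = Diag[c, c, d]."""
    k1z_sq = k0**2 * a * c - (c / d) * krho**2
    k2z_sq = k0**2 * a * c - (a / b) * krho**2
    return k1z_sq, k2z_sq

k0, krho, ybar = 1.0, 0.6, 1.4
c, d = 2.0, 0.5
g1, g2 = uniaxial_kz_squared(k0, krho, 3.0, 1.2, c, d)                  # generic: birefringent
m1, m2 = uniaxial_kz_squared(k0, krho, ybar**2 * c, ybar**2 * d, c, d)  # NBAM-matched
assert g1 != g2                 # two distinct branches in the generic medium
assert abs(m1 - m2) < 1e-12     # branches coincide once eps = ybar^2 * mu
```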
Now, the two-fold degenerate eigenvalue $\tilde{k}_z^+$ has associated with it two linearly independent field polarizations describing up-going plane waves~\cite{boulanger1}; this holds likewise for the two down-going polarizations with common eigenvalue $\tilde{k}_z^-$. Mathematically speaking, the eigenvalues $\{\tilde{k}_z^+,\tilde{k}_z^-\}$ are each twice-repeating (i.e., algebraic multiplicity of two) but have associated with each of them two linearly independent eigenvectors (i.e., geometric multiplicity of two), making them non-defective and rendering the four NBAM polarization states suitable as a local EM field basis within NBAM layers~\cite{hanson1}. Despite the existence of four linearly independent eigenvectors, it is worthwhile to further exhibit the key results of the systematic analytical treatment of the two fictitious double-poles of $\bold{\tilde{\bar{A}}}^{-1}$ to render numerical PWE-based EM field evaluation robust to the two said sources of ``breakdown"; this treatment is performed in the next section.
Let us first make two preliminary remarks, however. First, assume that the source-containing layer is a biaxial NBAM with $\boldsymbol{\bar{\mu}}_{r}^{\mathrm{pmb}}=\mathrm{Diag}\left[a,b,c\right]$ and $\boldsymbol{\bar{\epsilon}}_{r}^{\mathrm{pmb}}=\bar{y}^2\boldsymbol{\bar{\mu}}_{r}^{\mathrm{pmb}}$. Second, the orthogonal matrix $\bold{\bar{U}}=\begin{bmatrix} \bold{\hat{v}}_1 & \bold{\hat{v}}_2 & \bold{\hat{v}}_3\end{bmatrix}$ transforms vectors between the PMB and Cartesian basis. For example, the relationship between the $n$th PMB eigenmode wave vector $\bold{k}^{\mathrm{pmb}}_{n}=(k_{nx}^{\mathrm{pmb}},k_{ny}^{\mathrm{pmb}},\tilde{k}_{nz}^{\mathrm{pmb}})$ and the (assumed available\footnote{The Cartesian basis wave vectors and polarization eigenvectors are assumed available (e.g., via the state matrix method~\cite{chew}). Indeed, recall that the operations discussed herein are performed within the backdrop of numerical 2-D Fourier integral evaluations~\cite{sainath}.}) $n$th Cartesian-basis wave vector $\bold{k}_{n}=(k_x,k_y,\tilde{k}_{nz})$ writes as $\bold{k}^{\mathrm{pmb}}_{n}=\bold{\bar{U}}^{-1}\cdot\bold{k}_{n}$.
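A concrete sketch of this basis transformation (the $30^\circ$ rotation about $z$ and the wave-vector values are assumed, purely illustrative choices):

```python
import numpy as np

th = np.pi / 6
U = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])   # columns = v1, v2, v3
assert np.allclose(U.T @ U, np.eye(3))           # orthogonal, so U^{-1} = U^T

k_cart = np.array([0.3, 0.4, 0.87])              # Cartesian-basis wave vector
k_pmb = U.T @ k_cart                             # k^pmb = U^{-1} . k
assert np.allclose(U @ k_pmb, k_cart)            # round trip back to Cartesian
```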
\section{\label{form}Direct Electric Field Radiated within NBAM}
The (Cartesian basis) Fourier domain representation of the electric field, radiated in a homogeneous NBAM, writes as $\bold{\tilde{E}}=-\bold{\tilde{\bar{A}}}^{-1} \cdot \bold{\tilde{\nabla}} \times \boldsymbol{\bar{\mu}}^{-1}_{r}\cdot \bold{\tilde{M}}$ for an (equivalent) magnetic current source or $\bold{\tilde{E}}= ik_0\eta_0 \bold{\tilde{\bar{A}}}^{-1} \cdot\bold{\tilde{J}}$ for an electric current source. These two equations, moreover, hold equally when re-represented in the NBAM's PMB (i.e., adding ``pmb" superscript to all quantities), which is what we will employ. Indeed, the components $\{A_{mw}\}$ ($m,w=1,2,3$) of $\bold{\tilde{\bar{A}}}^{-1,\mathrm{pmb}}(\cdot)$ write as ($A_{mw}= A_{wm}$, and $\bold{\bar{k}}=\bold{k}^{\mathrm{pmb}}/k_0$):
\begin{align}
\tilde{B}&= -c\bar{y}^2k_0^2\left(\bar{k}_z^2-\left[ab\bar{y}^2-(a/c)\bar{k}_x^2-(b/c)\bar{k}_y^2\right]\right) \numberthis \label{exp1}\\
A_{11}&= \left(\bar{k}_x^2-bc\bar{y}^2\right)/\tilde{B}, \ A_{12}= \bar{k}_x\bar{k}_y/\tilde{B} \\
A_{13}&= \bar{k}_x\bar{k}_z/\tilde{B}, \ A_{22}= \left(\bar{k}_y^2-ac\bar{y}^2\right)/\tilde{B} \\
A_{23}&= \bar{k}_y\bar{k}_z/\tilde{B}, \ A_{33}=\left(\bar{k}_z^2-ab\bar{y}^2\right)/\tilde{B} \numberthis \label{exp2}
\end{align}
while the components of $-\bold{\tilde{\bar{A}}}^{-1,\mathrm{pmb}}\cdot\bold{\tilde{\nabla}}^{\mathrm{pmb}}\times\boldsymbol{\bar{\mu}}^{-1,\mathrm{pmb}}_{r}(\cdot)$ $\{ \dot{A}_{mw} \}$ write as ($\dot{A}_{mw}=-\dot{A}_{wm}$):
\begin{align}
\tilde{B}'&= \tilde{B}/(\bar{y}^2), \ \dot{A}_{12}=-ick_z/\tilde{B}' \numberthis \label{exp3a} \\
\dot{A}_{13}&=ibk_y/\tilde{B}', \ \dot{A}_{23}=-iak_x/\tilde{B}' \numberthis \label{exp3}
\end{align}
The expressions within Eqns. \eqref{exp1}-\eqref{exp2} describe the electric field from an electric current source, while the expressions within Eqns. \eqref{exp3a}-\eqref{exp3} describe the electric field from an (equivalent) magnetic current source. Duality in Maxwell's Equations makes immediate the magnetic field results.
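Eqns. \eqref{exp1}-\eqref{exp2} admit a direct numerical sanity check: assemble $\bold{\tilde{\bar{A}}}^{\mathrm{pmb}}$ in the Fourier domain (where $\nabla\times\rightarrow i\bold{k}\times$) and compare its numerical inverse, entry by entry, against the closed-form expressions. The material parameters and wave vector below are arbitrary illustrative choices:

```python
import numpy as np

k0, ybar = 1.0, 1.3
a, b, c = 1.7, 0.9, 2.2
mu = np.diag([a, b, c])                 # PMB permeability tensor
eps = ybar**2 * mu                      # NBAM matching condition
kx, ky, kz = 0.4, 0.7, 0.55             # generic (off-shell) wave vector

K = np.array([[0.0, -kz,  ky],
              [ kz, 0.0, -kx],
              [-ky,  kx, 0.0]])         # matrix form of k x (.)
A = -K @ np.linalg.inv(mu) @ K - k0**2 * eps   # Fourier-domain wave operator
Ainv = np.linalg.inv(A)

kb = np.array([kx, ky, kz]) / k0        # normalized wave vector
B = -c * ybar**2 * k0**2 * (kb[2]**2 - (a*b*ybar**2
        - (a/c)*kb[0]**2 - (b/c)*kb[1]**2))
pred = np.array([
    [kb[0]**2 - b*c*ybar**2, kb[0]*kb[1],            kb[0]*kb[2]],
    [kb[0]*kb[1],            kb[1]**2 - a*c*ybar**2, kb[1]*kb[2]],
    [kb[0]*kb[2],            kb[1]*kb[2],            kb[2]**2 - a*b*ybar**2],
]) / B
assert np.allclose(Ainv, pred)          # closed-form entries match the numerical inverse
```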
Next the PMB electric field $\bold{E}^{\mathrm{pmb}}(k_x,k_y;z,z')$, after re-expressing Eqns. \eqref{exp1}-\eqref{exp3} in terms of $\{k_x,k_y,k_z\}$ to identify the $k_z$ (rather than $k_z^{\mathrm{pmb}}$) eigenvalues $\{\tilde{k}_{nz}\}$ (using the relation $\bold{k}=\bold{\bar{U}}\cdot \bold{k}^{\mathrm{pmb}}$) as well as ``analytically" performing the $k_z$ contour integral, can be decomposed into a linear combination of the degenerate up-going modes $\{\bold{\tilde{e}}_1^{\mathrm{pmb}},\bold{\tilde{e}}_2^{\mathrm{pmb}}\}$ (for $z > z'$) or down-going modes $\{\bold{\tilde{e}}_3^{\mathrm{pmb}},\bold{\tilde{e}}_4^{\mathrm{pmb}}\}$ (for $z < z'$).\footnote{When $z=z'$, assuming the source does not lie exactly at a planar material interface, one can write the direct fields as a linear combination of either the up-going \emph{or} down-going modes since both combinations lead to identical field results (save at $\bold{r}'$) on the plane $z=z'$~\cite{sainath,chew}.} For an electric source, we have for $\bold{\tilde{e}}^{\pm,\mathrm{pmb}}$:
\begin{equation}
\pm 2\pi i\left[ik_0\eta_0\left(k_z-\tilde{k}_z^{\pm}\right)\bold{\tilde{\bar{A}}}^{-1,\mathrm{pmb}}\cdot \bold{\tilde{J}}^{\mathrm{pmb}}\right]\Bigg|_{k_z=\tilde{k}_z^{\pm}} \numberthis \label{e1a}
\end{equation}
and similarly for an (equivalent) magnetic source upon replacing $ik_0\eta_0\bold{\tilde{\bar{A}}}^{-1,\mathrm{pmb}}\cdot \bold{\tilde{J}}^{\mathrm{pmb}}$ with $-\bold{\tilde{\bar{A}}}^{-1,\mathrm{pmb}}\cdot\bold{\tilde{\nabla}}^{\mathrm{pmb}} \times \boldsymbol{\bar{\mu}}^{-1,\mathrm{pmb}}_{r} \cdot \bold{\tilde{M}}^{\mathrm{pmb}}$ in Eqn. \eqref{e1a}. Next, the degenerate PMB modal electric fields are re-expressed in the Cartesian basis ($\bold{\tilde{e}}^{\pm}=\bold{\bar{U}}\cdot\bold{\tilde{e}}^{\pm,\mathrm{pmb}}$), from which the Cartesian-basis direct field modal amplitudes $\{\tilde{a}_{1}^{d},\tilde{a}_{2}^{d},\tilde{a}_{3}^{d},\tilde{a}_{4}^{d}\}$ can be robustly extracted using the polarization decomposition method proposed previously for sources radiating within isotropic layers~\cite{sainath}:
\begin{equation}
\begin{bmatrix}
\tilde{a}_{1}^{d} \\
\tilde{a}_{2}^{d}
\end{bmatrix}=\begin{bmatrix}
\tilde{e}_{x1} & \tilde{e}_{x2} \\
\tilde{e}_{y1} & \tilde{e}_{y2}
\end{bmatrix}^{-1}
\begin{bmatrix}
\tilde{e}^{+}_x \\
\tilde{e}^{+}_y
\end{bmatrix},\
\begin{bmatrix}
\tilde{a}_{3}^{d} \\
\tilde{a}_{4}^{d}
\end{bmatrix}=
\begin{bmatrix}
\tilde{e}_{x3} & \tilde{e}_{x4} \\
\tilde{e}_{y3} & \tilde{e}_{y4}
\end{bmatrix}^{-1}
\begin{bmatrix}
\tilde{e}^{-}_x \\
\tilde{e}^{-}_y
\end{bmatrix}
\label{decomp}
\end{equation}
where $\{\tilde{e}_{xn},\tilde{e}_{yn},\tilde{e}_{zn}\}$ are the $x$, $y$, and $z$ components of the (Cartesian basis) NBAM's $n$th electric field eigenvector $\bold{\tilde{e}}_n$. Moreover, if the above-inverted matrices are suspected (with respect to, say, the Euclidean matrix norm measure) of being ill-conditioned, one can always instead utilize, say, the $y$ and $z$, or alternatively the $x$ and $z$, components of the field eigenvectors~\cite{sainath}. Indeed, this decomposition procedure is well-defined due to the non-defective nature of the eigenvalues, and hence the linear independence of the four NBAM field eigenvectors $\{\bold{\tilde{e}}_{n}\}$~\cite{boulanger1}.
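A minimal numerical sketch of this extraction (the eigenvectors and amplitudes below are arbitrary illustrative values; a linear solve replaces the explicit matrix inverse, which is generally preferable numerically):

```python
import numpy as np

# two linearly independent up-going field eigenvectors (illustrative values)
e1 = np.array([1.0 + 0.2j, 0.1 - 0.3j, 0.5 + 0.0j])
e2 = np.array([0.2 + 0.0j, 1.1 + 0.4j, 0.0 - 0.7j])

# synthesize a "direct field" from known modal amplitudes ...
a_true = np.array([0.7 - 0.2j, -1.1 + 0.4j])
e_plus = a_true[0] * e1 + a_true[1] * e2

# ... and recover the amplitudes from the x and y components alone
M = np.array([[e1[0], e2[0]],
              [e1[1], e2[1]]])
a_rec = np.linalg.solve(M, e_plus[:2])
assert np.allclose(a_rec, a_true)
```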
\section{\label{res}Results}
Now we exhibit some illustrative results demonstrating the developed algorithm's performance. We investigate both the electric field $\mathcal{E}_z$ radiated by a vertical (i.e., $z$-directed) Hertzian electric current dipole (VED), as well as the magnetic field $\mathcal{H}_z$ radiated by a $z$-directed Hertzian (equivalent) magnetic current dipole (VMD); both sources radiate at $f=2$MHz. In both scenarios, the source resides at depth $z'=0$m within a three-layer NBAM, occupying the region $-1 \leq z \leq 1$ [m], of material properties $\boldsymbol{\bar{\epsilon}}_r=\boldsymbol{\bar{\mu}}_r=\mathrm{Diag}[10,10,1/10]$, $\mathrm{Diag}[5,5,1/5]$, and $\mathrm{Diag}[2,2,1/2]$ within the regions $-1 < z < -1/4$ [m], $-1/4 < z < 1/4$ [m], and $1/4 < z < 1$ [m] (resp.); see Fig. \ref{geo1}. The top layer ($z \geq 1$m) is vacuum ($\epsilon_{r1}=\mu_{r1}=1$) while the bottom layer ($z \leq -1$m) is a perfect electric conductor (PEC); note that this layered-medium configuration was specifically chosen to facilitate comparison with closed-form solutions through invocation of T.O. and EM Image theory~\cite{sainath4}. Indeed the EM field solution within $z\geq -1$m, for our five-layered configuration involving a VED source, can be shown identical to the closed-form field result of two VED's (located at depths $z=-1.75$m and $z=-19.25$m) of identical orientation to the original VED and radiating in homogeneous, unbounded vacuum. Note that within the NBAM, an added step to compute the closed-form result must be taken, appropriately mapping the observation points within the NBAM to vacuum observation points by viewing a $d$-meter thick NBAM layer $\boldsymbol{\bar{\epsilon}}_r=\boldsymbol{\bar{\mu}}_r=\mathrm{Diag}[n,n,1/n]$ as equivalent to a $nd$-meter thick vacuum layer. 
Similarly, the VMD problem can be shown to be identical to that of two VMDs (located at depths $z=-1.75$m and $z=-19.25$m) radiating in homogeneous, unbounded vacuum; in this scenario, however, image theory prescribes that the $z=-1.75$m VMD possess identical orientation to the original VMD, but that the $z=-19.25$m VMD possess \emph{opposite} orientation.\footnote{The amplitudes of the VED and VMD (i.e., lying within the central NBAM layer) must be scaled by a factor of 1/5 (relative to the vacuum sources) to facilitate field comparisons. Moreover, the normal field components $\{\mathcal{E}_z,\mathcal{H}_z\}$, within the NBAM layer with properties $\boldsymbol{\bar{\epsilon}}_r=\boldsymbol{\bar{\mu}}_r=\mathrm{Diag}[n,n,1/n]$, are also scaled (artificially, for both visual display and error computation purposes) by $1/n$ to account for their discontinuity across material interfaces.}
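The image depths quoted above follow from the stated equivalence (a $d$-meter slab of $\mathrm{Diag}[n,n,1/n]$ behaves as an $nd$-meter vacuum slab). The sketch below, using the layer bounds of Fig. \ref{geo1} and pinning the top interface at $z=1$ m in the stretched coordinates, reproduces values consistent with interpreting the two depths as the stretched source location and its mirror image across the stretched PEC plane:

```python
def stretch(z):
    """Map a physical depth z (-1 <= z <= 1) inside the slab stack to the
    equivalent vacuum coordinate, keeping the top interface fixed at z = 1."""
    layers = [(0.25, 1.0, 2), (-0.25, 0.25, 5), (-1.0, -0.25, 10)]  # (zlo, zhi, n), top down
    zs = 1.0
    for zlo, zhi, n in layers:
        if z >= zhi:
            return zs
        zs -= n * (zhi - max(z, zlo))   # stretched thickness traversed in this layer
        if z >= zlo:
            return zs
    return zs

z_src = stretch(0.0)       # source mapped to vacuum coordinates: -1.75 m
z_pec = stretch(-1.0)      # PEC plane in vacuum coordinates: -10.5 m
z_img = 2 * z_pec - z_src  # mirror image across the PEC: -19.25 m
```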
Observing Figs. \ref{geom3c}-\ref{geom3e}, we note that the relative errors in both the electric field ($\delta_e$) and magnetic field ($\delta_h$) are very low,\footnote{Let $E_c$ be the computed electric field, and let $E_v$ be the closed-form reference solution. Then $\delta_e=|E_c-E_v|/|E_v|$ (likewise for $\delta_h$).} approaching, over most of the observation plane, the limits of numerical noise associated with double-precision floating-point arithmetic (approximately $-150$ on a [dB] scale); for reference, Figs. \ref{geom3z}-\ref{geom3a} are the computed field distributions themselves from our algorithm. This is consistent with our having set an adaptive relative integration error tolerance of $\sim\!1.2 \times 10^{-14}$. We do observe, however, that the error noticeably increases (for fixed observer/source radial separation) as the observation angle tends closer to ``horizon" (i.e., source depth $z'$ and observer depth $z$ coinciding). This error variation trend versus angle has been observed before~\cite{sainath4}, even when the source resided in non-NBAM media, and hence the increased error versus observation angle is not likely due to instabilities in the presented NBAM-robust algorithm. We conjecture rather that the increasing error (versus observation angle) arises due to commensurately increasing numerical cancellation\footnote{Namely, cancellation from radiation field contributions arising from numerical integration along contour sub-sections symmetrically located about the imaginary $k_x$ and $k_y$ axes. By contrast, our algorithm robustly ensures (irrespective of observation angle) that the \emph{evanescent} field contribution introduces little numerical cancellation-induced error and rapid convergence~\cite{sainath3}.} that can only be partially offset by a (computer-resource-limited) \emph{finite} extent of $hp$ integration refinement performed using \emph{finite} precision arithmetic.
This numerical cancellation, we remark, is well known to be predominantly induced by integrand oscillation, which worsens as the observation angle tends to horizon~\cite{sainath4,sainath}. One remedy is to use a constant-phase path~\cite{lambot1}, but a robust remedy for 2-D integrals (needed for generally anisotropic media) remains an open question. Moreover, this path would change as one varies the outer integration variable. Finally we emphasize that given the design of our \emph{particular} implementation, which always first computes the direct electric field and \emph{then} (if need be) computes the magnetic field using ancillary relations~\cite{chew}[Ch. 2], we have in fact tested the soundness of both Eqns. \eqref{exp1}-\eqref{exp2} (VED scenario) and Eqns. \eqref{exp3a}-\eqref{exp3} (VMD scenario).
\begin{figure}[H]
\centering
\includegraphics[width=2in]{geo_NBAM}
\caption{\label{geo1}\small Vertically-oriented Hertzian dipole current source within a three-layer NBAM. The purple (air) and blue (NBAM) regions form the plane on which the fields are observed in Fig. \ref{f2}. The parameter $n$ equals ten, five, and two within the regions $-1 < z < -1/4$ [m], $-1/4 < z < 1/4$ [m], and $1/4 < z < 1$ [m], respectively.}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[\label{geom3z}]{\includegraphics[width=1.75in,height=1.35in]{NBAM_Ez}}
\subfloat[\label{geom3a}]{\includegraphics[width=1.75in,height=1.35in]{NBAM_Hz}}
\subfloat[\label{geom3c}]{\includegraphics[width=1.75in,height=1.35in]{NBAM_EzErr}}
\subfloat[\label{geom3e}]{\includegraphics[width=1.75in,height=1.35in]{NBAM_HzErr}}
\caption{\label{f2}\small (a) $\mathcal{E}_z$ radiated by a VED. (b) $\mathcal{H}_z$ radiated by a VMD. (c) Relative error: $\mathcal{E}_z$. (d) Relative error: $\mathcal{H}_z$.}
\label{geom3}
\end{figure}
\section{\label{conc}Conclusion}
We addressed a fundamental origin of breakdown in the spectral-domain-based (PWE) evaluation of EM fields radiated by sources embedded within NBAM planar slabs, leading to a robust formulation that can accurately compute EM fields despite the NBAM-induced modal degeneracy that would ordinarily lead to numerical instabilities and/or corruption of the functional form of the plane wave eigenfunctions. Indeed, this instability arises from eigenvalues that, while non-defective, have an algebraic multiplicity of two rather than one. The remedy is a proper (analytical) ``pre-treatment" of the spectral-domain tensor operators prior to polarization amplitude extraction, resulting in robust analysis of EM fields in arbitrary anisotropic planar-layered media. Results validated the high accuracy of numerical computations based on this analytical pre-treatment.
\section{Introduction}
One of the most useful by-products of a ground-state density functional theory (DFT) calculation is
the set of Kohn-Sham (KS) eigenvalues\cite{Kohn1965}, which leads to a \emph{non-interacting} spectrum.
Even though the KS equations represent an auxiliary non-interacting system
whose states and eigenvalues may be quite different from those of the true
quasi-particle system,
empirical evidence shows that in many cases this single-particle KS
spectrum is in agreement with x-ray photoemission spectroscopy (XPS) and
Bremsstrahlung isochromat spectroscopy (BIS) experiments\cite{bis,elp91a,elp91b,sawat84}.
However, for strongly correlated materials this KS spectrum is found to be in fundamental
disagreement with experimental reality. In the absence of spin ordering,
all modern exchange-correlation (xc) functionals within DFT fail
to predict an insulating ground state for
transition-metal monoxides (TMOs), the prototypical Mott insulators. On the
other hand, it is well
known experimentally that these materials are insulating in nature even at
elevated temperatures (much above the N\'eel temperature)\cite{tjer,jauch}.
This indicates that magnetic order is not the driving mechanism for the
gap, but merely a co-occurring phenomenon. In fact, not only DFT but also most
modern many-body techniques, such as the $GW$ method, fail to
capture the insulating behavior of TMOs without explicit long-range spin
ordering \cite{rodl09,arya95,kobayashi08}.
In this regard, the two many-body techniques that are able to capture the correct physics of
strong correlations are dynamical mean-field theory (DMFT)\cite{ren06,kunes08,oki08} and reduced density matrix functional
theory (RDMFT)\cite{sharma08}; these two methods predict TMOs to be insulators, even in the absence of
long range spin-order. This clearly points towards the ability of these techniques to capture physics well beyond
the reach of most modern day ground-state methods.
Despite this success the effectiveness of RDMFT as a ground state theory has been seriously hampered due to the absence
of a technique for the determination of the spectral information.
Recently, this final hurdle has also been removed and the spectral information thus obtained for TMOs was shown to be
in good agreement with experiments\cite{sharma13}. However, these spectra were calculated in the presence of anti-ferromagnetic order.
The question then arises as to how effective RDMFT is in describing the insulating state of Mott insulators in the absence of
long range spin order. In order to answer this question, in the present work, we study the spectral
properties of non-magnetic NiO and MnO. Here the former is insulating due to an interplay of Mott localization and charge-transfer
effects, while the latter is insulating purely due to strong Mott localization.
A detailed analysis of RDMFT and KS orbitals is performed, which shows that, unlike in the case of band insulators, for Mott
insulators the natures of the two sets of orbitals are very different, and this difference is indeed crucial for the success of
RDMFT in describing Mott physics.
\section{Theory}
Within RDMFT the one-body reduced density
matrix (1-RDM) is the basic variable \cite{lodwin,gilbert}
\begin{align}
\gamma({\bf r}, {\bf r'})=N\int\!d{\bf r}_2 \ldots d{\bf r}_N
\Phi^*({\bf r'},{\bf r}_2 \ldots {\bf r}_N)\Phi({\bf r},{\bf r}_2 \ldots {\bf r}_N),
\end{align}
where $\Phi$ denotes the many-body wave function.
Diagonalization of this matrix produces a set of natural orbitals\cite{lodwin},
$\phi_{j{\bf k}}$, and occupation numbers, $n_{j{\bf k}}$, leading to the spectral
representation
\begin{align}\label{srep}
\gamma({\bf r},{\bf r}')=\sum_{j,{\bf k}}
n_{j{\bf k}} \phi_{j{\bf k}}({\bf r})\phi_{j{\bf k}}^*({\bf r}'),
\end{align}
where the necessary and sufficient conditions for ensemble $N$-representability
of $\gamma$ \cite{coleman} require $0\le n_{j{\bf k}} \le 1$ for all $j,{\bf k}$,
and $\sum_{j,{\bf k}} n_{j{\bf k}}=N$. Here $j$ represents the band index and
${\bf k}$ the crystal momentum.
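These constraints are easy to verify numerically. The following minimal Python sketch (our own illustration, not code from the paper; the basis size and occupation numbers are arbitrary toy values) builds a 1-RDM in its spectral representation and checks the ensemble $N$-representability conditions:

```python
import numpy as np

# Illustrative sketch: build a 1-RDM from orthonormal "natural orbitals"
# and occupation numbers, then verify the ensemble N-representability
# conditions 0 <= n_j <= 1 and sum_j n_j = N.
rng = np.random.default_rng(0)
M, N_el = 6, 3                                # basis size, particle number

# Random orthonormal orbitals phi_j (columns of Q) via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(M, M)))

# Occupation numbers chosen in [0, 1] and summing to N.
n = np.array([0.95, 0.80, 0.60, 0.40, 0.20, 0.05])
assert np.all((0 <= n) & (n <= 1)) and np.isclose(n.sum(), N_el)

# gamma(r, r') = sum_j n_j phi_j(r) phi_j*(r') as a matrix.
gamma = (Q * n) @ Q.T

assert np.allclose(gamma, gamma.T)            # the 1-RDM is hermitian
assert np.isclose(np.trace(gamma), N_el)      # integrates to N electrons
```

Any minimization over $\gamma$, as described below, must preserve exactly these constraints.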
In terms of $\gamma$, the total ground state energy \cite{gilbert} of the
interacting system is (atomic units are used throughout)
\begin{align} \label{etot} \nonumber
E[\gamma]=&-\frac{1}{2} \int\lim_{{\bf r}\rightarrow{\bf r}'}
\nabla_{\bf r}^2 \gamma({\bf r},{\bf r}')\,d^3r'
+\int\rho({\bf r}) V_{\rm ext}({\bf r})\,d^3r \\
&+\frac{1}{2} \int
\frac{\rho({\bf r})\rho({\bf r}')}
{|{\bf r}-{\bf r}'|}\,d^3r\,d^3r'+E_{\rm xc}[\gamma],
\end{align}
where $\rho({\bf r})=\gamma({\bf r},{\bf r})$, $V_{\rm ext}$ is a given
external potential, and $E_{\rm xc}$ we call the xc
energy functional. In principle, Gilbert's \cite{gilbert} generalization of the
Hohenberg-Kohn theorem to the 1-RDM guarantees the existence of a functional
$E[\gamma]$ whose minimum yields the exact $\gamma$ and the exact ground-state
energy of systems characterized by the external potential $V_{\rm ext}({\bf
r})$. In practice, however, the correlation energy is an unknown functional of
the 1-RDM and must be approximated. Although there are several known approximations
for the xc energy functional\cite{M1984,GPB2005,pernal2010,AC3,pade,pnof1,PINOS1,PNOF5,KP2014,localrdmft,piris_jcp2013,GPGB2009,PMLU2012,CP2012,MGGB2013,localrdmftappl},
the most promising for extended systems is the power functional\cite{sharma08,sharma13} where the xc energy reads
\begin{align} \label{exc}
&E_{\rm xc}[\gamma]=E_{\rm xc}[\{\phi_{i{\bf k}}\},\{n_{i{\bf k}}\}] = -\frac{1}{2}\int \, \int d^3r' d^3r
\frac{|\gamma^{\alpha}({\bf r},{\bf r}')|^2}{|{\bf r}-{\bf r}'|},
\end{align}
where $\gamma^{\alpha}$ denotes the power in the operator sense, i.e.
\begin{equation}
\gamma^{\alpha}({\bf r},{\bf r}')=\sum_i n^{\alpha}_i \phi_i({\bf r})\phi_i^*({\bf r}'),
\end{equation}
For $\alpha=1/2$ this is the M\"{u}ller functional\cite{mueller}, which is known to severely
overestimate electron correlation \cite{csanyi,gritsenko,herbert,nekjel}, while for $\alpha=1$ this functional is
equivalent to the Hartree-Fock method, which includes no correlation.
If $\alpha$ is chosen such that $1/2 < \alpha < 1$, the power functional interpolates between the uncorrelated Hartree-Fock limit
and the over-correlating M\"{u}ller functional.
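In the natural-orbital basis the operator power reduces to a power of the occupation numbers, which is how $\gamma^\alpha$ is evaluated in practice. A minimal Python sketch (ours; the function name and the toy occupation numbers are illustrative, not from the paper):

```python
import numpy as np

# Hedged sketch: the operator power gamma^alpha acts only on the
# occupation numbers, since the natural orbitals diagonalize gamma.
def gamma_power(orbitals, occ, alpha):
    """Return gamma^alpha = sum_i occ_i^alpha |phi_i><phi_i| as a matrix."""
    return (orbitals * occ**alpha) @ orbitals.conj().T

Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(4, 4)))
occ = np.array([0.9, 0.7, 0.3, 0.1])

g1 = gamma_power(Q, occ, 1.0)     # alpha = 1: Hartree-Fock limit
gh = gamma_power(Q, occ, 0.5)     # alpha = 1/2: Mueller limit

# Consistency of the operator power: (gamma^{1/2})^2 = gamma.
assert np.allclose(gh @ gh, g1)
```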
All calculations are performed using the full-potential linearized augmented plane wave code Elk\cite{elk},
with practical details of the calculations following the schemes described in Refs.~\onlinecite{sharma08} and
\onlinecite{sharma13}.
\section{Results}
\begin{figure}[ht]
\centerline{\includegraphics[width=\columnwidth]{./mag-non-mag.pdf}}
\caption{(color online) Density of states as a function of energy (in eV) for NiO (left panel) and MnO (right panel). Results are
obtained with (black) and without (red) long range (anti-ferromagnetic) spin order. For comparison experimental data taken from Refs. \onlinecite{elp91a}
and \onlinecite{sawat84} are also shown (grey shaded area). The chemical potential is indicated by the dotted vertical line.}
\label{dos}
\end{figure}
Presented in Fig. \ref{dos} are the spectra for the Mott insulators under consideration. It is immediately apparent that RDMFT
captures the essence of Mott-Hubbard physics: both NiO and MnO present substantial gaps at the Fermi energy and are thus insulating in the absence of spin order.
This fact was already noticed in a previous work\cite{sharma08}, in which the presence of a gap without any spin order was
deduced via a very different technique, namely the discontinuity in the chemical potential as a function of the particle number.
A comparison of the non-magnetic spectra with the experimental data shows that the shape of the conduction band is well
reproduced for both materials, but that the shape of the valence band is not in very good agreement with experiments. This agreement
improves on inclusion of the spin order, indicating that even though the insulating nature of TMOs is not \emph{driven} by spin order,
spin polarization significantly affects the spectra of these materials. This is hardly surprising given that NiO and MnO have very
large local moments of 1.9$\mu_B$ and 4.7$\mu_B$ respectively.
\begin{figure}
\centerline{\begin{tabular}{c}
\includegraphics[width=0.5\textwidth, clip]{./nio-alpha.pdf}\\
\\
\includegraphics[width=0.5\textwidth, clip]{./mno-alpha.pdf} \\
\\
\includegraphics[width=0.5\textwidth, clip]{./si-alpha.pdf}
\end{tabular}}
\caption{\label{dos-a} Density of states as a function of energy (in eV) for NiO (upper panels), MnO (middle panels),
and Si (lower panels). The results are obtained using different values of $\alpha$ in Eq. \ref{exc}.}
\end{figure}
Correct treatment of correlations is crucial for TMOs, the prototypical strongly correlated materials. As mentioned above,
the power functional interpolates between two limits -- the strongly over-correlating M\"uller functional ($\alpha=0.5$) and the totally
uncorrelated Hartree-Fock method ($\alpha=1$). We now look at the effect of correlations, by varying $\alpha$, on the spectra of
the Mott insulators (NiO and MnO) and of a band insulator (Si), see Fig. \ref{dos-a}. The behaviour of the spectra as a function of
$\alpha$ is rather trivial for the band insulator Si; the valence bands rigidly shift lower in energy, leading to an increase in
the band gap. The behaviour for the Mott insulators is different in that the shape of the bands changes as a function of
$\alpha$. For both NiO and MnO the over-correlating M\"uller functional incorrectly gives a metallic ground-state. For NiO, which
has an even number of electrons in the unit cell, the Hartree-Fock method leads to a very large band gap insulator. In contrast,
for MnO, with an odd number of electrons in the unit cell, a single-particle theory such as Hartree-Fock can only give rise to a metallic ground state.
This leads to a highly non-trivial behaviour for MnO as a function of $\alpha$, which must lie within a small range (between 0.65 and 0.7)
in which the correct insulating ground-state is obtained. Reassuringly, this is also the range of $\alpha$ in which the correct ground-state behaviour is seen for NiO.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.9\columnwidth]{./wf-min.pdf}}
\caption{Density of states as a function of energy (in eV) for NiO (top panel), MnO (middle panel) and Si (lower panel).
Results are obtained with (black) and without (red) optimization of the natural orbitals within RDMFT. KS results (green) are obtained
using the local density approximation\cite{lda}.}
\label{wfm}
\end{figure}
Within RDMFT there are no Kohn-Sham-like equations to solve, and a direct minimization over
natural orbitals and occupation numbers is required while maintaining the
ensemble $N$-representability conditions. The minimization over occupation numbers is computationally very
efficient (for details see Ref. \onlinecite{sharma08}), but the same cannot be said about the minimization over the natural orbitals.
In practical terms, the natural orbitals (see Eq. (\ref{srep})) are expanded in a set of previously converged
KS states, and optimization of the natural orbitals is performed by varying the expansion
coefficients. This procedure allows us to examine how different KS states are from fully optimized natural orbitals.
In the present work these KS states were obtained using local density approximation (LDA)\cite{lda}.
In Fig. \ref{wfm} three sets of results are shown: (i) the KS density of states, (ii) the RDMFT density of states obtained without optimizing
the natural orbitals, i.e. using the KS orbitals as natural orbitals but fully optimizing the occupation numbers, and (iii) the fully
optimized RDMFT results, i.e. with full optimization over both the natural orbitals and the occupation numbers. From these results it is
clear that for the band insulator Si it is sufficient to optimize the occupation numbers to increase the band gap in line with experiment;
the KS states are evidently already a very good representation of the natural orbitals. These results are in line with
our experience with finite systems, which shows that orbital optimization accounts for roughly 25\% of the
total correlation energy, while the remaining 75\% comes from the optimization of the occupation numbers.
As may be seen in Fig.~\ref{wfm} the opposite situation holds for the
case of the Mott insulators NiO and MnO: clearly the KS states differ profoundly from the natural orbitals. In this case it is crucial
to optimize the natural orbitals. The reason for this is that in the case of Mott insulators it is the localization of electrons
which leads to the formation of the gap; the KS orbitals are not sufficiently localized, and thus optimization over the natural orbitals is required.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.9\columnwidth,angle=-0]{./rho2d.pdf}}
\caption{(Color online) Difference between the LSDA charge density and charge density
calculated using LSDA+$U$ and RDMFT, ($\rho({\bf r})-\rho_{LSDA}({\bf r})$) for
NiO. Positive values indicate localization of charge as compared to LSDA.}
\label{rho2d}
\end{figure}
A confirmation of this charge localization may be seen in the charge density.
In Fig. \ref{rho2d} we plot the difference $\rho({\bf r})-\rho_{LSDA}({\bf r})$,
for (i) RDMFT (lower panel) and (ii) the LSDA+$U$ functional\cite{ldapu} (upper panel) within DFT for NiO.
The LSDA+$U$ method is chosen because, like RDMFT, it also finds the correct insulating ground state for NiO\cite{rodl09,nio-sharma}.
The LSDA+$U$ method achieves this via both spin order and an on-site Hubbard $U$ and, in contrast to RDMFT,
cannot treat the non-magnetic insulating state of this material. The impact of this difference on the charge
density is clear in Fig. \ref{rho2d}: significant charge localization is seen only in the RDMFT density. Interestingly,
one observes an almost spherical charge accumulation at the oxygen
site, a result in agreement with experiment\cite{dudarev00}, but different
from that found in the corresponding LSDA+$U$ result.
\section{Summary}
To summarize, in this work we demonstrate that RDMFT in conjunction with the power functional is able to capture the insulating state of
NiO and MnO in the absence of long range spin order. However, while spin order does not drive the insulating ground state,
the large local moments in these materials require that spin
be explicitly taken into account for excellent agreement with experimental spectra to be obtained. The power, $\alpha$,
in the power functional is an indicator of the amount of correlation, and a detailed analysis
shows a highly non-trivial behaviour of the spectra of Mott insulators as a function of $\alpha$, which must lie
within a small range (between 0.65 and 0.7) for the correct insulating ground-state to be obtained. It is further
shown that the natural orbitals for the strongly correlated materials NiO and MnO
are much more localized than the Kohn-Sham orbitals, which enables them to capture the physics of Mott localization in these materials.
\section{Introduction}
It is well known that a $N$-dimensional real symmetric [complex
hermitian] matrix $V$ is congruent to a diagonal matrix modulo
an orthogonal [unitary] matrix\cite{[1]}. That is, $V=S^\dagger
D S$ where $ D$ is diagonal and $S\in SO(N)$ [$S\in SU(N)$]. If,
in addition, $V$ is also positive definite, new possibilities
arise for establishing its congruence to a diagonal matrix. For
$N$ even, it was shown by Williamson\cite{[2]} some sixty years
ago, and subsequently by several authors\cite{others,recent},
that such a $V$ is also congruent to a diagonal matrix modulo a
symplectic matrix in $Sp(N,{\cal R})$ [$Sp(N,{\cal C})$]. That
is, $V>0$ implies $ V=S^\dagger D^\prime S$ where $ D^\prime$ is
diagonal and $S\in Sp(N,{\cal R})$ [$S\in Sp(N,{\cal C})$].
Williamson's theorem has recently been exploited in defining
quadrature squeezing and symplectically covariant formulation of
the uncertainty principle for multimode states\cite{[3]}. In
this work we establish yet another kind of congruence of a real
symmetric [complex hermitian] positive definite matrix to a
diagonal matrix valid, for both odd and even dimensions. We show
that an $N$-dimensional real symmetric [complex hermitian]
positive definite matrix $V$ is congruent to a diagonal matrix
modulo a pseudo-orthogonal [pseudo-unitary] matrix. That is,
$V>0$ implies $V=S^\dagger D^{\prime\prime} S$ where $
D^{\prime\prime}$ is diagonal and $S\in SO(m,n)$ [$S\in
SU(m,n)$], for any choice of partition $N=m+n$. A simple proof
of this result is given. The strategy adopted in proving this
result, with appropriate modification, works for the Williamson
case as well, and affords a particularly simple proof of
Williamson's theorem. Needless to add that the diagonal entries
of neither $D^\prime$ nor $D^{\prime\prime}$ correspond to the
eigenvalues of $V$.
The theorems established here play a crucial role in enabling
one to construct pseudo-orthogonal and symplectic bases from a
given set of linearly independent vectors via an extremum
principle in the spirit of the work of Schweinler and
Wigner\cite{[5]}. In an important contribution to the age old
``orthogonalization problem'' -- the problem of constructing an
orthonormal set of vectors from a given set of linearly
independent vectors -- Schweinler and Wigner proposed an
orthonormal basis which, unlike the familiar Gram-Schmidt basis
(which depends on the particular initial order in which the
given linearly independent vectors are listed), treats all the
linearly independent vectors on an equal footing and has since
found important application in wavelet analysis\cite{wavelet}.
More significantly, they showed that this special basis follows
from {\em an extremum principle}. In this work, we exploit our
results on congruence to obtain generalizations of the
Schweinler-Wigner exremum principle leading to pseudo-orthogonal
and symplectic bases from a given set of linearly independent
vectors. Conversely, the extremum principle, once formulated,
can be interpreted as a procedure for finding the appropriate
congruence transformation to effect the desired diagonalization.
\section{Congruence of a positive matrix
under pseudo-orthogonal [pseudo-unitary] transformations }
The fact that a real symmetric [complex hermitian] matrix is
congruent to a diagonal matrix modulo an orthogonal [unitary]
matrix is well known. While congruence coincides with
conjugation in the real orthogonal and complex unitary cases,
they become distinct when more general sets of transformations
are involved. A question which naturally arises is whether
congruence to a diagonal form can also be achieved through a
pseudo-orthogonal [pseudo-unitary] transformation. The answer to
this question turns out to be in the affirmative with the caveat
that the matrix in question be positive definite, and can be
formulated as the following theorem:
\noindent
{\it Theorem} 1: Let $V$ be a real symmetric positive definite
matrix of dimension $N$. Then, for any choice of partition
$N=m+n$, there exists an $S\in SO(m,n)$ such that
\begin{equation}
S^TVS=D^2={\rm diagonal}~({\rm and} >0).
\end{equation}
\vskip0.25cm
\noindent{\it Proof:}
We begin by recalling that the group $SO(m,n)$ consists of all
real matrices which satisfy $S^TgS=g,~ \mbox{det}\,S=1$, where
$g=$diag$(\:\underbrace{1,1,\cdots,1}_m\,,\,\underbrace{-1,\cdots,-1}_n\:)$.
Consider the matrix $V^{-1/2}gV^{-1/2}$ constructed from the
given matrix $V$. Since $V^{-1/2}gV^{-1/2}$ is real symmetric,
there exists a rotation matrix $R\in SO(N)$ which diagonalizes
$V^{-1/2}gV^{-1/2}$ :
\begin{equation}
R^TV^{-1/2}gV^{-1/2}R=\mbox{diagonal}\equiv \Lambda\,.
\end{equation}
This may be viewed also as a congruence of $g$ using
$V^{-1/2}R$, and signatures are preserved under congruence.
(Indeed, signatures are the only invariants if we allow
congruence over the full linear group $GL(N,{\cal R})$ ). As a
consequence, the diagonal matrix $\Lambda$ can be expressed as
the product of a positive diagonal matrix and $g$ :
\begin{equation}
R^TV^{-1/2} g V^{-1/2} R = D^{-2} g = D^{-1} g D^{-1}\,.
\end{equation}
Here $D$ is diagonal and positive definite.
Taking the inverse of the matrices on both sides of (3) we find
that the diagonal entries of $gD^2 = D^2 g$ are the eigenvalues
of $V^{1/2} g V^{1/2}$ and that the columns of $R$ are the
eigenvectors of $V^{1/2} g V^{1/2}$. Since $V^{1/2} g V^{1/2}$,
$gV$, and $Vg$ are conjugate to one another, we conclude that
$D^2$ is determined by the eigenvalues of $gV \sim Vg$.
Define $S=V^{-1/2}RD$. It may be verified that $S$ satisfies the
following two equations :
\begin{eqnarray}
S^TgS&=&g\,,\nonumber\\ S^TVS&=&D^2=\mbox{diagonal}\,.
\end{eqnarray}
The first equation says that $S\in SO(m,n)$ and the second says
that $V$ is diagonalized through congruence by $S$. Hence the
proof.
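The proof is constructive, and the construction can be checked numerically. The following Python sketch (our own illustration, not part of the original text; the test matrix and the partition are arbitrary) assembles $S=V^{-1/2}RD$ and verifies both equations in (4):

```python
import numpy as np

# Numerical illustration of Theorem 1: given symmetric positive definite V
# and signature matrix g, build S = V^{-1/2} R D and verify that
# S^T g S = g (pseudo-orthogonality) while S^T V S is diagonal.
rng = np.random.default_rng(2)
N, m = 5, 3                                   # partition N = m + n
n = N - m
g = np.diag([1.0]*m + [-1.0]*n)

A = rng.normal(size=(N, N))
V = A @ A.T + N*np.eye(N)                     # symmetric positive definite

w, U = np.linalg.eigh(V)
V_ih = U @ np.diag(w**-0.5) @ U.T             # V^{-1/2}

# Diagonalize V^{-1/2} g V^{-1/2}; since signature is preserved under
# congruence, the eigenvalues split into m positive and n negative values.
lam, R = np.linalg.eigh(V_ih @ g @ V_ih)
order = np.argsort(-lam)                      # positive eigenvalues first
lam, R = lam[order], R[:, order]

D = np.diag(np.abs(lam)**-0.5)
S = V_ih @ R @ D

assert np.allclose(S.T @ g @ S, g)            # S lies in O(m, n)
StVS = S.T @ V @ S
assert np.allclose(StVS, np.diag(np.diag(StVS)))  # diagonal congruence
```

The first check establishes membership in $O(m,n)$; if needed, flipping the sign of one column of $R$ enforces $\det S=+1$.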
By replacing the superscript $^T$ by $^\dagger$, the
group $SO(m,n)$ by $SU(m,n)$, and $R\in SO(N)$ by $U\in SU(N)$
in the statement and proof of the above theorem, we have the
following theorem which applies to the complex case.
\noindent
{\it Theorem} 2: Let $V$ be a hermitian positive definite
matrix of dimension $N$. Then, for any partition $N=m+n$, there
exists an $S\in SU(m,n)$ such that
\begin{equation}
S^\dagger \,VS=D^2=~{\rm diagonal}~ ({\rm and}~>~0).
\end{equation}
\section{A simple proof of Williamson's theorem}
It turns out that the above procedure when applied to the real
symplectic group of linear canonical transformations leads to a
particularly simple proof of Williamson's theorem.
\noindent
{\it Theorem} 3: Let $V$ be a $2n$-dimensional real symmetric
positive definite matrix. Then there exists an $S\in Sp(2n,{\cal
R})$ such that
\begin{eqnarray}
S^TVS& =&D^2 >0\,,\nonumber\\
D^2&=&\mbox{diag}(\kappa_1,\kappa_2,\cdots,\kappa_n,\kappa_1,\kappa_2,\cdots,
\kappa_n).
\end{eqnarray}
\noindent
{\it Proof:} Note that the $2n$-dimensional diagonal matrix $D$
has only $n$ independent entries. The group $Sp(2n,{\cal R})$
consists of all real matrices $S$ which obey the condition
\begin{equation}
S^T\beta S=\beta\,,~~~\beta= \left(\,
\begin{array}{cc}0 & 1 \\
-1 & 0\,\end{array}\,\right)\,,
\end{equation}
with $1$ and $0$ denoting the $n\times n$ unit and zero
matrices respectively. Even though $S^T\beta S=\beta$ may appear
to suggest that $\mbox{det}S\, =\pm 1$, it turns out that $
\mbox{det}\,S = 1$. In other words, $Sp(2n,{\cal R})$ consists
of just one connected (though not simply connected) piece.
Indeed, for every $n\ge 1$ the connectivity property of $Sp(2n,
{\cal R})$ is the same as that of the circle.
The most general $S\in GL(2n,{\cal R})$ which solves $S^TVS=D^2$
is $S=V^{-1/2}RD$, where $R\in O(2n)$. Note that none of the
factors $D,R$ or $V^{-1/2}$ is an element of $Sp(2n,{\cal R})$.
However, a $V$-dependent choice of $D,R$ can be so made that
the product $V^{-1/2}RD$ is an element of $Sp(2n,{\cal R})$ as
we shall now show.
Since $\beta^T=-\beta$, it follows that ${\cal M} =
V^{-1/2}\beta V^{-1/2}$ is antisymmetric. Hence there exists an
$R\in SO(2n)$ such that\cite{[6]}
\begin{equation}
R^T V^{-1/2} \beta V^{-1/2} R = \left(\,
\begin{array}{cc}
0 & \Omega\\ -\Omega & 0\,\end{array}\,\right),~~\Omega =
\mbox{diagonal} > 0\,.
\end{equation}
Define a diagonal positive definite matrix
\begin{equation}
D =
\left(\,\begin{array}{cc}
\Omega^{-1/2} & 0\\
0 & \Omega^{-1/2}\end{array}\,\right)\,.
\end{equation}
Then we have
\begin{equation}
DR^TV^{-1/2} \beta V^{-1/2} RD = \beta\,.
\end{equation}
Now define $S=V^{-1/2}RD$. It may be verified that $S$ enjoys
the following properties: \begin{eqnarray} S^T\beta S &=&
\beta\,,\nonumber\\ S^TVS&=&D^2 ={\rm diagonal}.
\end{eqnarray}
The first equation says that $S\in Sp(2n, {\cal, R})$ and the
second one says that $V$ is diagonalized by congruence through
S. This completes the proof of the Willianson theorem. To
appreciate the simplicity of the present the reader may like to
compare it with two recently published proofs of the Williamson
theorem\cite{recent}.
We wish to explore the structure underlying the above proof a
little further so that the relationship between $D$ and $S$ in
(11) on the one hand and the eigenvalues and eigenvectors of
$\beta V^{-1}\,$(or $V^{-1/2}\beta V^{-1/2})$ on the other
becomes transparent. Again consider the matrix ${\cal M} =
V^{-1/2}\beta V^{-1/2}$. It is a real, non-singular,
anti-symmetric matrix and hence its eigenvalues $i\omega_\alpha$
and eigenvectors $\eta_\alpha$ have the following properties:
\begin{eqnarray}
{\cal M}\eta_\alpha &=& i\,\omega_\alpha
\eta_\alpha\,,~~~\alpha= 1,\cdots, 2n;\nonumber\\
\omega_k &>& 0\,,~~~k=1,\cdots,n\,;~~~~ \omega_{n+k} = -\omega_k\,;\nonumber\\
\eta_{n+k} &=& \eta_{k}^{*}\,;~~~~ k= 1,\cdots, n\,.
\end{eqnarray}
The eigenvectors $\eta_\alpha$ can be chosen to be orthonormal
even when the eigenvalues $i\omega_\alpha$ are degenerate.
Arrange the eigenvectors $\eta_\alpha$ as columns of a matrix $U$.
The matrix $U$ thus obtained clearly belongs to the unitary
group $U(2n)$, and satisfies
\begin{equation}
U^\dagger{\cal M} U = \Lambda, ~~~
\Lambda=\left(\,\begin{array}{cc}i\Omega & 0\\
0 & -i\Omega\end{array}\,\right)\,,
\end{equation}
where $\Omega = {\rm diag}(\omega_1,\cdots,\omega_n) > 0$. Now
define the following $2n\times 2n$ unitary matrices
\begin{equation}
\Sigma= \left(\,\begin{array}{cc} 0 & 1\\
1 & 0 \end{array}\,\right), ~~~~
\Delta = \frac{1}{{\sqrt 2}}\left(\,\begin{array}{cc} 1& -i\\
1& \,i\end{array}\,\right)\,.
\end{equation}
These two matrices have the properties $\Sigma^2 = 1$,~
$U\Sigma= U^{*}$, and $\Sigma\Delta = \Delta^{*}\,$($^*$ denotes
complex conjugate of a matrix). As a useful consequence of these
properties we have
\begin{equation}
U^{*}\Delta^{*} = U^{*} \Sigma \Sigma \Delta^{*} = U\Delta\,.
\end{equation}
We find that the unitary matrix $U\Delta$ is real: $U\Delta
\in O(2n)$.
Now consider $ S= V^{-1/2}U\Delta D$, where $D$ is a diagonal
matrix to be determined. It follows from the definition of $S$
and the reality of $U\Delta\in O(2n)$ that
\begin{equation}
S^T V S = S^\dagger V S = D^2\,.
\end{equation}
Further, recalling that $U^\dagger{\cal M} U = \Lambda$ we
obtain
\begin{eqnarray}
S^T\beta S = S^\dagger\beta S &=& D\Delta^\dagger U^\dagger
{\cal M} U\Delta D
\nonumber\\
&=&D\Delta^\dagger \Lambda \Delta D = D
\left(\,\begin{array}{cc} O & \Omega\\ -\Omega & O
\end{array}\,\right)D\,.
\end{eqnarray}
It is now evident that the following choice for $D$ ensures
that $S$ is an element of $Sp(2n,{\cal R})$:
\begin{equation}
D= \left(\,\begin{array}{cc} \Omega^{-1/2} & O\\ O &
\Omega^{-1/2} \end{array}\,\right) \,.
\end{equation}
\noindent
This completes our analysis of the manner in which $S$ and $D$
are related to the eigenvalues and eigenvectors of the matrix
$\beta V^{-1}$.
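As a numerical illustration of this eigenvector construction (a Python sketch of ours, not part of the original text; the test matrix is arbitrary), one can build $R=U\Delta=\sqrt{2}\,[\,\mathrm{Re}\,\eta\;|\;\mathrm{Im}\,\eta\,]$ from the positive-frequency eigenvectors of ${\cal M}$ and verify both properties in Eq.~(11):

```python
import numpy as np

# Williamson diagonalization via the eigenvectors of M = V^{-1/2} beta V^{-1/2}.
rng = np.random.default_rng(3)
n = 2
Z, I = np.zeros((n, n)), np.eye(n)
beta = np.block([[Z, I], [-I, Z]])

A = rng.normal(size=(2*n, 2*n))
V = A @ A.T + 2*n*np.eye(2*n)                 # symmetric positive definite

w, U = np.linalg.eigh(V)
V_ih = U @ np.diag(w**-0.5) @ U.T             # V^{-1/2}

Mmat = V_ih @ beta @ V_ih                     # real antisymmetric
vals, vecs = np.linalg.eig(Mmat)
pos = np.imag(vals) > 0                       # eigenvalues +i*omega_k
omega = np.imag(vals[pos])
eta = vecs[:, pos]

# R = U Delta = sqrt(2) [Re(eta) | Im(eta)] is real orthogonal.
R = np.sqrt(2)*np.hstack([eta.real, eta.imag])
D = np.diag(np.concatenate([omega**-0.5, omega**-0.5]))
S = V_ih @ R @ D

assert np.allclose(S.T @ beta @ S, beta)      # S is symplectic
StVS = S.T @ V @ S
assert np.allclose(StVS, np.diag(np.diag(StVS)))
```

The diagonal of $S^TVS$ indeed appears with doubled entries $(\kappa_1,\ldots,\kappa_n,\kappa_1,\ldots,\kappa_n)$, as in Eq.~(6).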
As in the pseudo-orthogonal case, by replacing the superscript
$^T$ by $^\dagger$ in the statement and proof of Theorem 3, one
obtains the following result.
\noindent
{\it Theorem} 4: Let $V$ be a $2n$-dimensional hermitian
positive definite matrix. Then there exists an $S\in Sp(2n,{\cal
C})$ such that
\begin{eqnarray}
S^\dagger VS& =&D^2 >0\,,\nonumber\\
D^2&=&\mbox{diag}(\kappa_1,\kappa_2,\cdots,\kappa_n,\kappa_1,\kappa_2,\cdots,
\kappa_n).
\end{eqnarray}
An immediate consequence of the theorems stated above is that
for a real symmetric [complex hermitian] positive definite
matrix we cannot talk about {\em the} canonical form under
congruence, for there are $m+n$ possible choices of $SO(m,n)$
[$SU(m,n)$], and in the case of even dimension one more choice
coming from Williamson's theorem. Needless to add that for the
same matrix $V$, the diagonal matrix $D$ will be different for
different choices.
\section{Orthogonalization Procedures}
Assume that we are given a set of linearly independent
$N$-dimensional vectors $v_1,\cdots,v_N$. Let $G$ denote the
associated Gram matrix of pairwise inner products: $ G_{ij}=
(v_i,v_j)$. The Gram matrix is hermitian by construction, and
positive definite by virtue of the linear independence of the
given vectors. The orthogonalization problem, i.e., constructing
a set of orthonormal vectors out of the given set of linearly
independent vectors, amounts to finding a matrix $S$ that
solves
\begin{equation}
S^\dagger G S = 1,~~\mbox{i.e.},~ G^{-1} = SS^{\dagger}\,.
\end{equation}
Each such $S$ defines an orthogonalization procedure.
Let us arrange the set of $N$ vectors as the entries of a row
${\bf v} = (v_1,v_2,\cdots,v_N)$, and let ${\bf z} =
(z_1,z_2,\cdots,z_N)$ represent a generic orthonormal basis. The
orthonormal set of vectors {\bf z} corresponding to a chosen $S$
are related to the given set of linearly independent vectors
through ${\bf z}= {\bf v}S $. Clearly, there are infinitely
many choices for $S$ satisfying $(20)$: given an $S$ satisfying
$(20)$, any $S^\prime = SU$ where $U$ is an arbitrary unitary
matrix also satisfies $(20)$. Thus the freedom available for the
solution of the orthonormalization problem is exactly as large
as the unitary group $U(N)$, and this was to be expected.
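As a concrete illustration (a Python sketch of ours; choosing $S=G^{-1/2}$, i.e. the symmetric orthogonalization, is just one convenient particular solution and is not singled out by the text), one can verify Eq.~(20) and the $U(N)$ freedom:

```python
import numpy as np

# Build a Gram matrix from linearly independent complex vectors, solve
# S^dagger G S = 1 with S = G^{-1/2}, and exhibit the U(N) freedom S -> S U.
rng = np.random.default_rng(4)
N = 4
v = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))  # columns = vectors
G = v.conj().T @ v                            # Gram matrix, hermitian > 0

p, W = np.linalg.eigh(G)
S = W @ np.diag(p**-0.5) @ W.conj().T         # S = G^{-1/2}
assert np.allclose(S.conj().T @ G @ S, np.eye(N))

# Any S' = S U with U unitary is an equally valid solution of (20).
Uq, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))
Sp = S @ Uq
assert np.allclose(Sp.conj().T @ G @ Sp, np.eye(N))
```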
Schweinler and Wigner\cite{[5]} posed and answered the following
question: is there a way of discriminating between various
choices of $S$ that solve (20), and hence between various
orthogonalization procedures? They argued that a particular
choice of orthogonalization procedure should correspond
ultimately to the extremization of a suitable scalar function
over the manifold of all orthonormal bases, with the given
linearly independent vectors appearing as parameters in the
function. Different choices of onthonormal bases will then
correspond to different functions to be extremized. They
preferred the function to be symmetric under permutation of the
given vectors. As an example they considered the following
function which is quartic in the given vectors:
\begin{equation}
m({\bf z})=\sum_{k}\left(\sum_{l} {\mid (z_k,v_l)\mid }^2
\right)^2\,.
\end{equation}
They showed that the extremum (maximum in this case) value of
$m({\bf z})$ is given by ${\rm tr}(G^{2})$, and this value
corresponds to the orthonormal basis ${\bf z} ={\bf v} U_0
P^{-1/2}$, where $U_0$ is the unitary matrix which diagonalizes
$G$: $U_0\:^\dagger G U_0 = P $. We may refer to this as the
Schweinler-Wigner basis, and the function $m({\bf z})$ as the
Schweinler-Wigner quartic form. It is clear that $U_0$ and hence
the Schweinler-Wigner basis is essentially unique if the
eigenvalues of the Gram matrix $G$ are all distinct. We may
note in passing that, unlike the Gram-Schmidt orthogonalization
procedure, the Schweinler-Wigner procedure is democratic in that
it treats all the linearly independent vectors ${\bf v}$ on an
equal footing.
The content of the work of Schweinler and Wigner has recently
been reformulated\cite{[7]} in a manner that offers a clearer
and more general picture of the Schweinler-Wigner quartic form
$m{(\bf z)}$ and of the orthonormal basis which maximizes it.
This perspective on the orthogonalization problem plays an
important role in our generalizations of the Schweinler-Wigner
extremum principle, and hence we summarise it briefly.
Since every orthonormal basis is the eigenbasis of a suitable
hermitian operator, it is of interest to characterize the
Schweinler-Wigner basis in terms of such an operator. Given
linearly independent $N$-dimensional vectors ${\bf v} =
(v_1,v_2,\cdots,v_N)$, the operator
$\hat{M}=\displaystyle{\sum_j} v_jv_j^\dagger$ is hermitian
positive definite. In a {\it generic orthonormal} basis ${\bf
z}$, it is represented by a hermitian positive definite matrix
$M({\bf z}):\; M({\bf z})_{ij} = (z_i, \hat{M}z_j)$. Under a
change of orthonormal basis ${\bf z}\rightarrow {\bf z}^\prime =
{\bf z}S$, $M({\bf z})$ transforms as follows
\begin{equation}
M({\bf z}) \to M({\bf z}') = S^\dagger M({\bf z})S\,,\,\,\; S
\in U(N)\,. \end{equation} Recall that $U(N)$ acts transitively
on the set of all orthonormal bases and that ${\rm tr}(M({\bf
z})^2)=\displaystyle{\sum_{j,k}}|M({\bf z})_{jk}|^2$ is
invariant under such a change of basis, and hence is independent
of ${\bf z}$. The Schweinler-Wigner quartic form $m({\bf z})$
can easily be identified as $\displaystyle{\sum_k} (M({\bf
z})_{kk})^2$. In view of the above invariance, maximization of
$\displaystyle{\sum_k} (M({\bf z})_{kk})^2$ is the same as
minimization of $\displaystyle{\sum_{j\ne k}} |M({\bf
z})_{jk}|^2$. The absolute minimum of $\displaystyle{\sum_{j\ne
k}}|M({\bf z})_{jk}|^2$ equals zero, and obtains when $M({\bf
z})$ is diagonal. Thus, the orthonormal basis which maximizes
$\displaystyle{\sum_k} (M({\bf z})_{kk})^2$ is the same as the
one in which $\hat{M}$ is diagonal, and we arrive at the
following important conclusion of Ref.~\cite{[7]}:
\noindent
{\em Theorem} 5: The distinguished orthonormal basis which
extremizes the Schweinler-Wigner quartic form $m({\bf z})$ over
the manifold of all orthonormal bases is the same as the
orthonormal basis in which the positive definite matrix $M({\bf
z})$ becomes diagonal.
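Theorem 5 is easy to probe numerically. The following Python sketch (an illustrative check of ours, on an arbitrary positive definite test matrix) confirms that $\sum_k (M({\bf z})_{kk})^2$ over random orthonormal bases never exceeds the value ${\rm tr}(M^2)$ attained in the eigenbasis:

```python
import numpy as np

# The Schweinler-Wigner quartic form sum_k M_kk^2 is maximized in the basis
# that diagonalizes M, where it equals tr(M^2) = sum of squared eigenvalues.
rng = np.random.default_rng(5)
N = 4
A = rng.normal(size=(N, N))
M = A @ A.T + N*np.eye(N)                     # positive definite "M(z)"

m_max = np.sum(np.linalg.eigvalsh(M)**2)      # value in the diagonalizing basis
assert np.isclose(m_max, np.trace(M @ M))

# Random orthonormal bases z' = z Q give M(z') = Q^T M Q; the diagonal part
# of tr(M^2) can only shrink when off-diagonal elements are nonzero.
for _ in range(100):
    Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
    Mz = Q.T @ M @ Q
    assert np.sum(np.diag(Mz)**2) <= m_max + 1e-9
```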
Important for the above structure is the fact that the invariant
${\rm tr}(M({\bf z})^2)$ is the sum of non-negative quantities,
and therefore a part of it is necessarily bounded. It is
precisely this property, which can be traced to the underlying
unitary symmetry, that is not available when we try to
generalize the Schweinler-Wigner procedure to construct
pseudo-orthonormal and symplectic bases wherein the underlying
symmetries are the noncompact groups $SO(m,n)$ and $Sp(2n,{\cal
R})$, respectively.
\section{ Lorentz basis with an extremum property}
In this Section we show how the Schweinler-Wigner procedure can
be generalized to construct a pseudo-orthonormal basis based on an
extremum principle. We begin with the case of real vectors.
We are given a set of linearly independent real $N$-dimensional
vectors ${\bf v} = (v_1,\cdots,v_N)$ and we want to construct
out of it a pseudo-orthonormal basis [$SO(m,n)$ Lorentz basis
with $N=m+n$], i.e., a set of vectors ${\bf z}=(z_1,
\cdots,z_N)$ satisfying
\begin{equation}
(z_k, gz_l) = g_{kl}\,,~~g=
\mbox{diag}(\:\underbrace{1,1,\cdots,1}_m\,,\,\underbrace{-1,\cdots,-1}_n\:).
\end{equation}
Let $\hat{M}=\displaystyle{\sum_j} v_j v_j ^T$ as before, and
let the symmetric positive definite matrix $M({\bf z}):~~M({\bf
z})_{ij} = (z_i, \hat{M}z_j)$ represent $\hat{M}$ in a {\em
generic pseudo-orthonormal} basis ${\bf z}$. Under a
pseudo-orthogonal change of basis ${\bf z} \to {\bf z}' = {\bf
z}S$, the matrix $M({\bf z})$ transforms as follows:
\begin{equation}
M({\bf z}) \to M({\bf z}') = S^T M({\bf z})S\,,~~ S\in
SO(m,n)\,. \end{equation} Since $S^T g S = g$ (or $gS^T =
S^{-1}g$) by definition, we have
\begin{equation}
S:~~gM({\bf z}) \to gM({\bf z}') = S^{-1}gM({\bf z})S.
\end{equation}
That is, as $M({\bf z})$ undergoes congruence, $gM({\bf z})$
undergoes conjugation. Thus, ${\rm tr}(gM({\bf z}))^l$,
$l=1,2,\cdots,$ are invariant. In what follows we shall often
leave implicit the dependence of $M$ on the generic
pseudo-orthonormal basis ${\bf z}$.
Consider the invariant ${\rm tr}(gM({\bf z})gM({\bf z}))$
corresponding to $l=2$. Write $M=M^{\rm even} + M^{\rm odd}$
where
\begin{equation}
M^{\rm even} = {1\over2}(M+gMg)\,,\,\,M^{\rm odd} =
{1\over2}(M-gMg)\,.
\end{equation}
In the above decomposition we have exploited the fact that $g$
is, like parity, an {\em involution}.
With $M$ expressed in the $(m,n)$ block form
\begin{equation}
M=\left(\,\begin{array}{cc} A & C\\ C^T &
B\end{array}\,\right)\,,\,\; A^T=A\,,\,\,B^T=B\,,
\end{equation}
we have
\begin{equation}
M^{\rm even}= \left(\,\begin{array}{cc} A & 0\\ 0 &
B\,\end{array}\right)\,,\,\; M^{\rm odd} =
\left(\begin{array}{cc} 0 & C\\ C^T & 0\,\end{array}\right)\,.
\end{equation}
Symmetry of $M$ implies that $M^{\rm odd}$ and $M^{\rm even}$
are symmetric. Further, $M^{\rm odd}$ and $M^{\rm even}$ are
trace orthogonal: ${\rm tr}(M^{\rm odd} M^{\rm even})=0$. Thus,
\begin{equation}
{\rm tr}(gMgM) = {\rm tr}(M^{\rm even})^2 - {\rm tr}(M^{\rm
odd})^2\,,
\end{equation}
which can also be written as
\begin{equation}
{\rm tr}(MgMg) = {\rm tr}(M^2) - 2 {\rm tr}(M^{\rm odd})^2\,.
\end{equation}
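The trace orthogonality and the two expressions above for the $l=2$ invariant can be checked numerically; a small sketch (Python with numpy; the values of $m$, $n$ and the random matrix are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 3
g = np.diag([1.0] * m + [-1.0] * n)

# a random symmetric positive definite M on R^(m+n)
X = rng.standard_normal((m + n, m + n))
M = X @ X.T + np.eye(m + n)

# even/odd decomposition with respect to the involution g
M_even = 0.5 * (M + g @ M @ g)
M_odd = 0.5 * (M - g @ M @ g)

# trace orthogonality of the even and odd parts
assert np.isclose(np.trace(M_odd @ M_even), 0.0)

# the two forms of the invariant tr(gMgM)
lhs = np.trace(g @ M @ g @ M)
assert np.isclose(lhs, np.trace(M_even @ M_even) - np.trace(M_odd @ M_odd))
assert np.isclose(lhs, np.trace(M @ M) - 2 * np.trace(M_odd @ M_odd))
```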
A few observations are in order:
\begin{itemize}
\item In contradistinction to the original unitary case, the invariant in the
present case is no more a sum of squares. This can be traced to
the non-compactness of the underlying $SO(m,n)$ symmetry. As one
consequence, $\displaystyle{\sum_k} (M_{kk})^2$ is not bounded.
As an example, consider the simplest case $m=1,\;n=1$ and let
\begin{equation}
M= \left(\,\begin{array}{cc}a & 0\\0 &
b\,\end{array}\right)\,,~~a, b > 0.
\end{equation}
Under congruence by the $SO(1,1)$ element
\begin{equation}
S= \left(\,\begin{array}{cc}
\cosh\mu & \sinh\mu\\
\sinh\mu & \cosh\mu\,\end{array}\right)\,,
\end{equation}
the value of $\displaystyle{\sum_k} (M_{kk})^2$ changes from
$a^2 + b^2$ to $ a^2 +b^2 + 2(a+b)^2\sinh^2\mu \cosh^2 \mu$, which
grows with $\mu$ without bound, showing that
$\displaystyle{\sum_k} (M_{kk})^2$ and hence $\mbox{tr} (M^2)$
is not bounded. Thus, in contrast to the unitary case,
extremization of the Schweinler-Wigner quartic form
$\displaystyle{\sum_k} (M_{kk})^2$ will make no sense in the
absence of further restrictions.
\item The structure of the invariant ${\rm tr}(gMgM)$ in (30) suggests
the further restriction needed to be imposed: within the
submanifold of pseudo-orthogonal bases ${\bf z}$ which keep
${\rm tr}(M({\bf z})^{\rm odd})^2$ (and hence ${\rm tr}(M({\bf
z})^2)$) at a fixed value we can maximize
$\displaystyle{\sum_k}M({\bf z})^2_{kk}$. In particular we can
do this within the submanifold which minimizes ${\rm tr}(M({\bf
z})^{\rm odd})^2$, and hence ${\rm tr}(M({\bf z})^2)$. Clearly,
zero is the absolute minimum of the nonnegative object ${\rm
tr}(M({\bf z})^{\rm odd})^2$. But by theorem 1 there exists a
Lorentz basis ${\bf z}$ in which $M({\bf z})$ is diagonal and
hence $M({\bf z})^{\mbox{odd}} = 0$. Thus the minimum ${\rm
tr}(M({\bf z})^{\rm odd})^2=0$, and hence the minimum of ${\rm
tr}(M({\bf z})^2)$, namely ${\rm tr}(gM({\bf z})gM({\bf z}))$,
is attainable.
\end{itemize}
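The unboundedness in this $SO(1,1)$ example is confirmed by the following numerical sketch (numpy; the values of $a$, $b$ and of the boost parameter are arbitrary choices), which also checks that the invariant ${\rm tr}(gMgM)$ stays fixed along the boost:

```python
import numpy as np

a, b = 2.0, 3.0
M = np.diag([a, b])
g = np.diag([1.0, -1.0])

def S(mu):
    # an SO(1,1) boost
    return np.array([[np.cosh(mu), np.sinh(mu)],
                     [np.sinh(mu), np.cosh(mu)]])

vals = []
for mu in [0.0, 1.0, 2.0, 3.0]:
    Mp = S(mu).T @ M @ S(mu)
    # the quartic form sum_k (M_kk)^2 grows without bound along the boost
    vals.append(np.sum(np.diag(Mp) ** 2))
    # while tr(gMgM) is an SO(1,1) invariant
    assert np.isclose(np.trace(g @ Mp @ g @ Mp), np.trace(g @ M @ g @ M))

assert all(x < y for x, y in zip(vals, vals[1:]))
```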
The above observations suggest the following {\em two step
analogue of the Schweinler-Wigner extremum principle for Lorentz
bases}. Choose the submanifold of Lorentz bases which minimize
the quartic form ${\rm tr}(M({\bf z})^{\rm odd})^2$, and
maximize the Schweinler-Wigner quartic form $ m({\bf z}) =
\displaystyle{\sum_k} (M({\bf z})_{kk})^2$ within this
submanifold. Clearly, the first step takes $M$ to a
block-diagonal form, and the second one diagonalizes it. Thus
we have established the following generalization of Theorem 5 to
the pseudo-orthonormal case:
\noindent
{\em Theorem} 6: The distinguished pseudo-orthonormal basis
which extremizes the ``Schweinler-Wigner'' quartic form $m({\bf
z})$ over the submanifold of pseudo-orthonormal bases which
minimize the quartic form $\mbox{tr}(M({\bf z})^2)$ is the same
as the pseudo-orthonormal basis in which the positive definite
matrix $M({\bf z})$ becomes diagonal.
The submanifold under reference consists of Lorentz bases which
are related to one another through the maximal compact
(connected) subgroup of $SO(m,n)$, namely $SO(m)\times SO(n)$.
This subgroup consists of matrices of the block-diagonal form
\begin{equation}
\left(\begin{array}{cc}
R_1 & 0\\ 0 & R_2\end{array}\right)\,,\;\;\; R_1 \in
SO(m)\,\,,\;\;R_2\in SO(n)\,,
\end{equation}
and this is precisely the subgroup of $SO(m,n)$ transformations
that do not mix the even and odd parts of $M({\bf z})$.
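In the simplest case $m=n=1$ the two-step principle can be carried out in closed form: $(S^T M S)_{12}=\frac{A+B}{2}\sinh 2\mu + C\cosh 2\mu$, and positive definiteness guarantees $|2C/(A+B)|<1$, so the boost parameter below is well defined. A numerical sketch (the matrix entries are arbitrary choices of ours):

```python
import numpy as np

# a positive definite 2x2 matrix in an initial Lorentz basis
A, B, C = 3.0, 2.0, 1.0            # A, B > 0 and AB > C^2
g = np.diag([1.0, -1.0])
M = np.array([[A, C], [C, B]])

# Step 1: kill the odd (off-diagonal) part with an SO(1,1) boost;
# (S^T M S)_{12} vanishes when tanh(2*mu) = -2C/(A+B)
mu = -0.5 * np.arctanh(2 * C / (A + B))
S = np.array([[np.cosh(mu), np.sinh(mu)],
              [np.sinh(mu), np.cosh(mu)]])

assert np.allclose(S.T @ g @ S, g)      # S is indeed in SO(1,1)
Mp = S.T @ M @ S
assert np.isclose(Mp[0, 1], 0.0)        # M is now diagonal;
# Step 2 (maximizing over SO(1) x SO(1)) is trivial in this dimension.
```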
To conclude this Section we may note that the above construction
carries over to the complex case, with obvious changes like
replacing $^T$ by $^\dagger$ and $SO(m,n)$ by $SU(m,n)$.
\section{\bf Symplectic Basis with an Extremum Property}
Our construction in the pseudo-orthogonal case suggests a scheme
by which the Schweinler-Wigner extremum principle can
be generalized to construct a symplectic basis. Suppose that we
are given a set of linearly independent vectors $ {\bf
v}=(v_1,v_2,\cdots,v_{2n})$ in ${\cal R}^{2n}$. The natural
symplectic structure in $R^{2n}$ is specified by the standard
symplectic ``metric'' $\beta$ defined in (7). Let ${\bf z} =
(z_1, z_2, \cdots, z_{2n})$ denote a generic symplectic basis.
That is, $(z_j,\beta z_k)=\beta _{jk}\,,\,\,j,k=1,2,\cdots,2n$.
The real symplectic group $Sp(2n,{\cal R})$ acts transitively on the set
of all symplectic bases.
To generalize the Schweinler-Wigner principle to the symplectic
case, we begin by defining
$\hat{M}=\displaystyle{\sum_{j=1}^{2n}} v_j v_j^T$. Let $M({\bf
z}):~M({\bf z})_{ij} = (z_i, \hat{M}z_j)$ be the symmetric
positive definite matrix representing the operator $\hat{M}$ in
a {\em generic symplectic} basis {\bf z}. Under a symplectic
change of basis ${\bf z} \to {\bf z}' = {\bf z}S,\;S \in
Sp(2n,{\cal R})$, the matrix $M({\bf z})$ undergoes the
following transformation:
\begin{equation}
M({\bf z})\to M({\bf z}^\prime) = S^TM({\bf z})S\,,\,\,\, S \in
Sp(2n,R)\,.
\end{equation}
Since $S^T \beta S = \beta$ implies $\beta S^{T} = S^{-1}
\beta$, we have
\begin{equation}
S:~~ \beta M({\bf z}) \to \beta M({\bf z}^\prime) = S^{-1}\beta
M({\bf z}) S.
\end{equation}
That is, under a symplectic change of basis $ M({\bf z})$
undergoes congruence, but $\beta M({\bf z})$ undergoes
conjugation. Hence $\mbox{tr}(\beta M({\bf z}))^{2l},\,\,
l=1,2,\cdots,n$ are invariant (Note that ${\rm tr}(\beta
M({\bf z}))^{2l+1}=0$ in view of $\beta ^T = -\beta,
\;M({\bf z})^T = M({\bf z})$).
Since $i\beta$ is an {\em involution} we can use it to separate
$M({\bf z})$ into even and odd parts :
\begin{eqnarray}
M({\bf z})&=&M({\bf z})^{\rm even} + M({\bf z})^{\rm
odd}\,,\nonumber\\ M({\bf z})^{\rm even} &=& {1\over2}(M({\bf
z})+\beta M({\bf z})\beta^T)\,,\nonumber\\ M({\bf z})^{\rm odd}
&=& {1\over2} (M({\bf z})-\beta M({\bf z}) \beta^T)\,.
\end{eqnarray}
The even and odd parts of $M({\bf z})$ satisfy the symmetry
properties
\begin{equation}
\beta M({\bf z})^{\rm even}\beta ^T=M({\bf z})^{\rm
even}\,,\,\,\,\beta M({\bf z})^{\rm odd}\beta^T = -M({\bf
z})^{\rm odd}\,.
\end{equation}
Further, $M({\bf z})^{\rm odd}$ and $M({\bf z})^{\rm even}$ are
trace orthogonal: $\mbox{tr}\left(M({\bf z})^{\rm odd}M({\bf
z})^{\rm even}\right) = 0$.
The structure of the even and odd parts of $M({\bf z})$ may be
appreciated by writing $M({\bf z})$ in the block form
\begin{equation} M({\bf z})=\left(\begin{array}{cc} A & C\\ C^T
& B\,\end{array}\right),\,\,A^T=A\,,\,\,B^T=B\,.
\end{equation}
We have
\begin{eqnarray}
M({\bf z})^{\rm even} &=& \left(\begin{array}{cc} {1\over2}(A+B)
& {1\over2}(C-C^T)\\ & \\ -{1\over2}(C-C^T) &
{1\over2}(A+B)\end{array}\right),\nonumber\\ \nonumber\\ M({\bf
z})^{\rm odd} &=& \left(\begin{array}{cc} {1\over2}(A-B) &
{1\over2}(C+C^T)\\ & \\ {1\over2}(C+C^T) &
{1\over2}(B-A)\end{array}\right).
\end{eqnarray}
Now consider the invariant $-{\rm tr} (\beta M({\bf z})\beta
M({\bf z}))={\rm tr}(\beta ^T M({\bf z})\beta M({\bf z}))$. We
have
\begin{equation}
{\rm tr}(\beta^T M({\bf z})\beta M({\bf z})) = {\rm tr}(M({\bf
z})^{\rm even})^2 - {\rm tr}(M({\bf z})^{\rm odd})^2\,,
\end{equation}
which can also be written as
\begin{equation}
{\rm tr}(\beta^TM({\bf z})\beta M({\bf z})) = {\rm tr}(M({\bf
z})^2) - 2{\rm tr}(M({\bf z})^{odd})^2\,.
\end{equation}
The structural similarity of this invariant to that in the
pseudo-orthogonal case should be appreciated.
Now, by an argument similar to the pseudo-orthogonal case one
finds that, owing to the noncompactness of $Sp(2n,{\cal R})$,
the function $\mbox{tr}(M({\bf z})^2)$ and hence the
Schweinler-Wigner quartic form $\displaystyle{\sum_{k=1}^{2n}}
(M({\bf z})_{kk})^2$ is unbounded if ${\bf z}$ is allowed to
run over the entire manifold of all symplectic bases. For
instance, in the lowest dimensional case $n=1$ with $M$ chosen
to be
\begin{equation}
M= \left(\,\begin{array}{cc}a & c\\c & b\,\end{array}\right)
,~~a, b > 0,~~ab-c^2>0,
\end{equation}
under congruence by the $Sp(2,{\cal R})$ matrix
\begin{equation}
S= \left(\,\begin{array}{cc}
\mu & 0\\
0 & 1/\mu\,\end{array}\right),
\end{equation}
the value of $\displaystyle{\sum_k} (M_{kk})^2$ changes from
$a^2 + b^2$ to $ \mu^4 a^2 +\mu^{-4} b^2 $ which, by an
appropriate choice of $\mu$, can be made as large as one wishes.
However, it follows from (41) that over the submanifold of
symplectic bases which leave ${\rm tr}(M({\bf z})^{\rm odd})^2$
fixed, the function ${\rm tr}(M({\bf z})^2)$ remains invariant
and so the quartic form $\sum (M({\bf z})_{kk})^2$ is bounded
within this restricted class of symplectic bases and hence can
be maximised. In particular the nonnegative ${\rm tr}(M({\bf
z})^{\rm odd})^2$ can be chosen to take its minimum value.
Williamson's theorem implies that there are symplectic bases
which realize the absolute minimum ${\rm tr}(M({\bf z})^{\rm
odd})^2 = 0$.
We can now formulate the {\em analogue of the Schweinler-Wigner
extremum principle for symplectic bases} in the following way:
Take the subfamily of symplectic bases in which ${\rm tr}(M({\bf
z})^{\rm odd})^2$ and hence ${\rm tr}(M({\bf z})^2)$ is minimum.
[This minimum of $\mbox{tr}(M({\bf z})^2)$ equals the invariant
${\rm tr}(\beta^TM({\bf z})\beta M({\bf z}))$]. Then maximise
the Schweinler-Wigner quartic form $ m({\bf z}) =
\displaystyle{\sum_k} (M({\bf z})_{kk})^2$ within this
submanifold of symplectic bases. This will lead, not just to a
basis in which $M({\bf z})$ is diagonal, but to one where
$M({\bf z})$ has the Williamson canonical form $M({\bf z})={\rm
diag}(\kappa_1,\cdots, \kappa_n;
\kappa_1,\cdots, \kappa_n)$. We have thus established the following
generalization of the Schweinler-Wigner extremum principle to
the symplectic case.
\noindent
{\em Theorem} 7: The distinguished symplectic basis which
extremizes the ``Schweinler-Wigner'' quartic form $m({\bf z})$
over the submanifold of symplectic bases which minimize the
quartic form $\mbox{tr}(M({\bf z})^2)$ is the same as the
symplectic basis in which the positive definite matrix $M({\bf
z})$ assumes the Williamson canonical diagonal form.
Note that once $M({\bf z})^{\rm odd}=0$ is reached, as implied
by ${\rm tr}(M({\bf z})^{\rm odd})^2=0$, $M({\bf z})$ has the
special even form
\begin{equation}
{\left(\begin{array}{cc} A & C\\ -C &
A\end{array}\right)},~~A^T=A,\;C^T = -C,
\end{equation}
so that $A+iC$ is hermitian. The subgroup of symplectic
transformations which do not mix $M({\bf z})^{\rm even}$ with
$M({\bf z})^{\rm odd}$, and hence maintain the property $M({\bf
z})^{\rm odd}=0$ have the special form
\begin{equation}
S = \displaystyle{\left(\begin{array}{cc} X & Y\\ -Y &
X\end{array}\right)},~~~ X+iY \in ~U(n).
\end{equation}
This subgroup, isomorphic to the unitary group $U(n)$, is the
maximal compact subgroup\cite{dutta} of $Sp(2n,{\cal R})$. Thus,
diagonalizing $M({\bf z})$ using symplectic change of basis,
after it has reached the even form, is the same as diagonalizing
an $n$-dimensional hermitian matrix using unitary
transformations.
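This reduction can be demonstrated numerically. The sketch below (numpy) assumes the standard block form $\beta=\left(\begin{smallmatrix}0 & I\\ -I & 0\end{smallmatrix}\right)$ for the symplectic metric; it builds a matrix $M$ of the even form above, diagonalizes the hermitian matrix $A+iC$, embeds the resulting unitary into $Sp(2n,{\cal R})$ as in the block form of Eq.\ (47), and checks that the congruence produces the doubled (Williamson-type) diagonal:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
beta = np.block([[np.zeros((n, n)), np.eye(n)],
                 [-np.eye(n), np.zeros((n, n))]])

# an "even" M: A symmetric, C antisymmetric, so that A + iC is hermitian
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T) + 4 * np.eye(n)
C = rng.standard_normal((n, n)); C = 0.5 * (C - C.T)
M = np.block([[A, C], [-C, A]])

# diagonalize the hermitian matrix A + iC with a unitary U = X + iY ...
w, U = np.linalg.eigh(A + 1j * C)
X, Y = U.real, U.imag

# ... and embed U(n) into Sp(2n,R) via the block form S = [[X, Y], [-Y, X]]
S = np.block([[X, Y], [-Y, X]])
assert np.allclose(S.T @ beta @ S, beta)                 # S is symplectic
D = S.T @ M @ S
assert np.allclose(D, np.diag(np.concatenate([w, w])))   # doubled diagonal
```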
\section{\bf Concluding Remarks}
To conclude, we have shown that an $N\times N$ real symmetric
[complex hermitian] positive definite matrix is congruent to a
diagonal form modulo a pseudo-orthogonal [pseudo-unitary] matrix
belonging to $SO(m,n)$ [$SU(m,n)$], for any choice of partition
$N=m+n$. The method of proof of this result is adapted to
provide a simple proof of Williamson's theorem. An important
consequence of these theorems is that while a real-symmetric
[complex-hermitian] positive definite matrix has a unique
diagonal form under conjugation, it has several different
canonical diagonal forms under congruence. The theorems
developed here are used to formulate an extremum principle
\`a la Schweinler and Wigner for constructing
pseudo-orthonormal [pseudo-unitary] and symplectic bases from a
given set of linearly independent vectors. Conversely, the
extremum principle thus formulated can be used for finding the
congruence transformation which brings about the desired
diagonalization.
It is interesting that pseudo-orthonormal basis and symplectic
basis could be constructed by extremizing {\em precisely the
same Schweinler-Wigner quartic form} $ m({\bf z}) =
\displaystyle{\sum_k} (M({\bf z})_{kk})^2$ that was originally
used to construct orthonormal basis in the unitary case.
However, it must be borne in mind that the similarity in the
structure of the quartic form to be extremized in the three
cases considered is only at a formal level. In reality, the
three quartic forms are very different objects, for they are
functions over topologically very different manifolds: ${\bf z}$
runs over the group manifold $U(N)$ of orthogonal frames in the
original Schweinler-Wigner case, the group manifold $SO(m,n)$ of
pseudo-orthogonal frames in the Lorentz case, and over the group
manifold $Sp(2n,{\cal R})$ in the symplectic case. This has the
consequence that, unlike the orthogonal case, this quartic form
is unbounded in the noncompact $SO(m,n) [SU(m,n)]$ and $Sp(2n,
{\cal R}) [Sp(2n, {\cal C})] $ cases. Insight into the structure
of these groups was used to achieve constrained extremization
within a natural maximal compact submanifold.
\section{Introduction}
In \cite{luddea}, a method for ``lifting'' representations from the Artin braid group on $n+1$ strands $B_{n+1}$ to the Artin braid group on $n$ strands $B_n$ is given by Lüdde and Toppan.
Based on this method, we construct a variant of the Burau representation of the virtual braid group on $n$ strands $VB_n$, which is the subject of Theorem \ref{dernièreproposition}.
\newline
The extension of the Gassner representation of the pure braid group on $n$ strands $P_n$ was constructed independently by Bardakov \cite{barda05} and by Rubinsztein \cite{rubinsztein}. In the present paper, we propose an iterative procedure to recover and generalize this representation, taking inspiration from the works \cite{mirko,luddea,luddeb}. We proceed as follows: we start from a subgroup $B$ of the automorphism group $Aut(F)$ of a free group $F$ of finite rank. The natural embedding $B \hookrightarrow Aut(F)$ gives a canonical way to form a semi-direct product $F\rtimes B.$ Let $\tilde{\mathfrak{p}}:F \rtimes B \twoheadrightarrow B$ be the canonical projection, whose kernel is $F,$ and write $\mathfrak{p}:\mathbb{C}[F \rtimes B] \twoheadrightarrow \mathbb{C}[B]$ for the induced map of group algebras. The kernel $\overline{I}_F$ of $\mathfrak{p}$ is the \emph{relative augmentation ideal} of $F$ in $\mathbb{C}[F \rtimes B].$ Using the embedding $B\hookrightarrow Aut(F),$ it is possible to define an action of $B$ on $\overline{I}_F,$ which yields a representation of $B$ by matrices whose entries lie in $\mathbb{C}[F \rtimes B].$ In Section \ref{sec:generalized Gassner}, we apply this procedure to the pure welded braid group to obtain a faithful matrix representation of this group in Theorem \ref{4.3.6}, generalizing the Gassner representation of the pure Artin braid group.
\section{Two generalizations of the classical braid group}
\label{sec:two generalizations}
In this section we recall the standard definitions, in terms of generators and relations, of the virtual and welded braid groups on $n$ strands. Let $VB_n$ denote the virtual braid group on $n$ strands. This group, first mentioned in \cite{kauffman}, is defined by the presentation with generators $\sigma_1,\dots,\sigma_{n-1},\tau_1,\dots,\tau_{n-1}$ and the following relations
\begin{itemize}
\item[(V1)] $\sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1}$ for $i= 1,2,\dots,n-2$
\item[(V2)] $\sigma_{i}\sigma_{j} = \sigma_{j}\sigma_{i}$ for $|i-j| \geq 2$
\item[(V3)] $\tau_i^2=1$ for $i=1,\dots,n-1$
\item[(V4)] $\tau_{i}\tau_{i+1}\tau_{i}=\tau_{i+1}\tau_{i}\tau_{i+1}$ for $i= 1,2,\dots,n-2$
\item[(V5)] $\tau_{i}\tau_{j} = \tau_{j}\tau_{i}$ for $|i-j| \geq 2$
\item[(V6)] $\sigma_i \tau_j =\tau_j \sigma_i$ for $|i-j| \geq 2$
\item[(V7)] $\sigma_{i} \tau_{i+1}\tau_{i} = \tau_{i+1}\tau_{i} \sigma_{i+1}$ for $i= 1 \dots n-2$
\label{Eq:virtualrelations}
\end{itemize}
It is well-known that the relations (V1)-(V2) are defining relations of the classical braid group on $n$ strands, $B_n$, where the elements $\sigma_1,\dots,\sigma_{n-1}$ generate the classical braid group $B_n$ in $VB_n$. It is also known that the relations (V3)-(V5) are defining relations of the symmetric group $S_n$ on $n$ letters, where the elements $\tau_1,\dots,\tau_{n-1}$ generate the symmetric group $S_n$ in $VB_n.$ Hence $VB_n$ can be presented as the free product of $B_n$ and $S_n$ modulo the relations (V6)-(V7). The mirror of the relation (V7) (i.e. when read backwards) holds in $VB_n:$
\begin{eqnarray}
\tau_i \tau_{i+1} \sigma_{i}=\sigma_{i+1}\tau_{i}\tau_{i+1} \text{ $ $ } (i=1,\dots,n-2).
\end{eqnarray}
However, the following relation does not hold in $VB_n$ (see \cite{goussarovpolyakviro}):
\begin{eqnarray}
\tau_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i} \tau_{i+1} \text{ $ $ } (i=1,\dots,n-2)
\label{Eq:forbiddenrelation1}
\end{eqnarray}
The welded braid group on $n$ strands $WB_n$ is by definition the quotient of $VB_n$ by this relation, usually called the forbidden relation. Thus, $WB_n$ is the group obtained from $VB_n$ by introducing the forbidden relation \eqref{Eq:forbiddenrelation1} and, moreover, contains important elements of the form
\begin{eqnarray}
\xi_{i,j} &=& \tau_{i}\tau_{i+1}\cdots \tau_{j-2}\tau_{j-1}\sigma_{j-1} \tau_{j-2}\cdots\tau_{i+1} \tau_{i} \text{ $ $ } (i<j)\nonumber\\
\xi_{i,j} &=& \tau_{i-1}\tau_{i-2}\cdots\tau_{j-2}\tau_{j-1}\sigma_j\tau_j\tau_{j-1} \cdots \tau_{i-1} \text{ $ $ } (j<i)\nonumber\\
\end{eqnarray}
The elements $\xi_{i,j}$ ($1\leq i\neq j\leq n$) generate the welded pure braid group $PW_n$ also known as the group of basis-conjugating automorphisms in \cite{savushikina}, or the McCool group in \cite{berceanupapadima} which is defined as the kernel of the projection $WB_n\twoheadrightarrow S_n$ by sending $\sigma_{i}\mapsto \tau_{i}$ and $\tau_{i}\mapsto \tau_{i}.$ In \cite{mccool}, McCool proved that the following relations known as the \emph{McCool relations} determine a presentation of $PW_n$:
\begin{eqnarray}
\left[\xi_{i,j}, \; \xi_{s,t}\right] &=&1 \text{ if } \; \{i,j\} \cap \{s,t\}= \emptyset \nonumber,\\
\left[\xi_{i,j}, \; \xi_{k,j}\right ]&=&1 \text{ for } \; i,j, k \text{ distinct }\nonumber,\\
\left[\xi_{i,j}\cdot \xi_{k,j}, \; \xi_{i,k}\right ]&=&1 \text{ for } \; i,j, k \text{ distinct }.
\label{Eq:2.7}
\end{eqnarray}
where $\left[a,b\right]:=a^{-1}b^{-1}ab$. In \cite{savushikina}, it is proved that the welded braid group $WB_n$ splits as a semidirect product, $WB_n=PW_n \rtimes S_n$, and there is an analogous statement for the virtual braid group \cite{bardakov04}. Moreover, $WB_n$ has a ``twin'' group denoted by $\mathcal{WB}_n$. This is the group corresponding to $WB_n$ as a set, with all the relations as in $WB_n$ except that the forbidden relation \eqref{Eq:forbiddenrelation1} is replaced by
\begin{eqnarray}
\tau_{i+1} \sigma_{i} \sigma_{i+1}=\tau_{i}\sigma_{i+1}\sigma_{i}.
\end{eqnarray}
Both are isomorphic and the isomorphism is given by the map $\mathfrak{V}_n:\mathcal{WB}_n \rightarrow WB_n$ that sends $\sigma_{i}\mapsto \sigma^{-1}_{i}$ and $\tau_{i}\mapsto \tau_{i}.$
\begin{remark}
The relation $\tau_{i+1} \sigma_{i} \sigma_{i+1}=\tau_{i}\sigma_{i+1}\sigma_{i}$ which holds in $\mathcal{WB}_n$ does not hold in $WB_n.$
\end{remark}
We conclude this section with a representation for the welded braid group by automorphisms of the free group of rank $n.$ Recall that the subgroup of $PW_{n+1}$ generated by the elements $\xi_{n+1,1},\dots, \xi_{n+1,n}$ is a free group of rank $n,$ which we denote by $F_n$ (see \cite{barda03}). For $1\leq i \leq n- 1,$ let $\rho_i:F_n \rightarrow F_n$ and $\vartheta_i:F_n \rightarrow F_n$ be the automorphisms defined respectively by
\begin{eqnarray}
\rho_i: \left\{
\begin{array}{ll}
\xi_{n+1,i} &\mapsto \xi_{n+1,i} \xi_{n+1,i+1} \xi_{n+1,i}^{-1} \\
\xi_{n+1,i+1}&\mapsto \xi_{n+1,i}\\
\xi_{n+1,j} &\mapsto \xi_{n+1,j} $ $ \mbox{ $ $ } \forall j\neq \{i,i+1\}.
\end{array}
\right. \text{ and } \vartheta_i: \left\{
\begin{array}{ll}
\xi_{n+1,i}&\mapsto \xi_{n+1,i+1} \\
\xi_{n+1,i+1} &\mapsto \xi_{n+1,i}\\
\xi_{n+1,j} &\mapsto \xi_{n+1,j} $ $ \mbox{ $ $ } \forall j\neq \{i,i+1\}.
\end{array}
\right.
\end{eqnarray}
One can easily show the following.
\begin{prop}
The mapping $\sigma_i\mapsto \rho_i$ and $\tau_{i}\mapsto \vartheta_i$, $1\leq i\leq n-1$, determines a representation $\psi:WB_n\rightarrow Aut(F_n)$.
\end{prop}
The above representation $\psi:WB_n\rightarrow Aut(F_n)$ is commonly known as the Artin representation and it is faithful (for a proof see \cite{frimrourke}).
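These automorphisms are easy to check computationally. In the sketch below (plain Python; the encoding of words as tuples of (generator, exponent) pairs and the left-to-right composition, matching a right action, are our conventions) we verify the braid, symmetric-group and mixed relations on a free group of rank $3$:

```python
def reduce_word(word):
    # free reduction: cancel adjacent x x^{-1} pairs with a stack
    out = []
    for g, e in word:
        if out and out[-1][0] == g and out[-1][1] == -e:
            out.pop()
        else:
            out.append((g, e))
    return tuple(out)

def apply_auto(auto, word):
    out = []
    for g, e in word:
        img = auto[g] if e == 1 else [(h, -f) for h, f in reversed(auto[g])]
        out.extend(img)
    return reduce_word(out)

def compose(a, b):
    # apply a first, then b
    return {g: list(apply_auto(b, a[g])) for g in a}

n = 3
ident = {g: [(g, 1)] for g in range(1, n + 1)}

def rho(i):        # sigma_i: x_i -> x_i x_{i+1} x_i^{-1}, x_{i+1} -> x_i
    a = dict(ident)
    a[i] = [(i, 1), (i + 1, 1), (i, -1)]
    a[i + 1] = [(i, 1)]
    return a

def theta(i):      # tau_i: swap x_i and x_{i+1}
    a = dict(ident)
    a[i] = [(i + 1, 1)]
    a[i + 1] = [(i, 1)]
    return a

def chain(*autos):
    out = ident
    for x in autos:
        out = compose(out, x)
    return {g: reduce_word(list(w)) for g, w in out.items()}

assert chain(rho(1), rho(2), rho(1)) == chain(rho(2), rho(1), rho(2))              # (V1)
assert chain(theta(1), theta(1)) == chain()                                        # (V3)
assert chain(theta(1), theta(2), theta(1)) == chain(theta(2), theta(1), theta(2))  # (V4)
assert chain(rho(1), theta(2), theta(1)) == chain(theta(2), theta(1), rho(2))      # (V7)
```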
\section{Generalized Burau and Gassner representation}
Here we construct matrix representations that generalize the classical Burau and Gassner representations.
\subsection{The virtual braid group lift}
\label{sec:Un relèvement de représentation de tresses virtuelles}
The idea for this section is based on \cite{luddea}. Let $\mathbb{C}$ denote the field of complex numbers. Take $\mathbb{C}[WB_{n+1}]$ to be the group algebra of $WB_{n+1}$ and consider an abstract set $\Delta^{(1)}_n:=\{\delta^{(1)}_1,\dots,\delta^{(1)}_n\}$. From this set we formally construct a free left $\mathbb{C}[WB_{n+1}]$-module of rank $n$, denoted by $\mathcal{F}^{(1)}_n$ and generated by the elements $\delta^{(1)}_1,\dots,\delta^{(1)}_n.$ This module carries the following representation of the virtual braid group $VB_n$, defined as a right action of $VB_n$ on $\mathcal{F}^{(1)}_n:$ the generators $\left(\sigma_i\right)^{n-1}_{i=1}$ and $\left(\tau_i\right)^{n-1}_{i=1}$ act according to
\begin{eqnarray}
\delta^{(1)}_j.\sigma_{i} =\sigma_{i} &\times& \left(1-\alpha\cdot\xi_{n+1,i} \xi_{n+1,i+1} \; \xi^{-1}_{n+1,i}\right)\cdot \delta^{(1)}_{i}+ \alpha\cdot\xi_{n+1,i}\cdot \delta^{(1)}_{i+1} \text{ $ $ if $ $ } j=i \nonumber\\
\sigma_i & \times & \delta^{(1)}_i \text{ $ $ if $ $ } j=i+1\nonumber \\
\sigma_i& \times & \delta^{(1)}_j \text{ $ $ if $ $ } j\neq \{i,i+1\} \nonumber\\
\nonumber\\
\delta^{(1)}_j.\tau_{i}=\tau_{i} &\times& \beta^{-1} \cdot \delta^{(1)}_{i+1} \text{ $ $ if $ $ } j=i \nonumber\\
\tau_i&\times& \beta\cdot \delta^{(1)}_i \text{ $ $ if $ $ } j=i+1 \nonumber \\
\tau_i&\times& \delta^{(1)}_j \text{ $ $ if $ $ } j\neq \{i,i+1\} \nonumber\\
\label{Eq:7}
\end{eqnarray}
where $\alpha$ and $\beta$ are nonzero complex parameters (not necessarily distinct). The right action \eqref{Eq:7} can be written as follows:
\begin{eqnarray}
\delta_j.\sigma_{i}&=&\sum_{k=1}^{n} \mathcal{V}(\sigma_i)^k_j \; \delta_k, \text{ $$ $$ } (1\leq j \leq n)\nonumber \\
\nonumber\\
\delta_j.\tau_{i}&=& \sum_{k=1}^{n} \mathcal{V}(\tau_i)^k_j \; \delta_k,\text{ $$ $$ } (1\leq j \leq n)
\label{Eq:8}
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{V}(\sigma_i)^k_j:=\bordermatrix {
& & i & & & i+1 & \cr
& \sigma_i\cdot I_{i-1} & & & & \cr
& & \sigma_i\cdot(1- \alpha\cdot\xi_{n+1,i} \cdot \xi_{n+1,i+1} \cdot \xi^{-1}_{n+1,i}) & & & \sigma_i \cdot \alpha\cdot\xi_{n+1,i} & \cr
& & & & & & \cr
& & \sigma_i & & &0 & \cr
& & & & & & \cr
& & & & & &\sigma_i \cdot I_{n-i-1} \cr
} \nonumber\\
\label{Eq:9}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{V}(\tau_i)^k_j:= \bordermatrix {
& & i & & i+1 & \cr
& \tau_i\cdot I_{i-1} & & & & \cr
& & 0 & &\beta^{-1}\cdot\tau_i & \cr
& & & & & \cr
& & \beta\cdot\tau_i & &0& & \cr
& && & &\tau_i\cdot I_{n-i-1} \cr
}
\label{Eq:10}.
\end{eqnarray}
where $\alpha,\beta \in \mathbb{C}^{\ast}.$
\begin{prop}
Let $\alpha,\beta$ be nonzero complex numbers with $\alpha \neq \beta.$ Then the following assignment defines a matrix representation of $VB_n$:
\begin{eqnarray}
\phi_n:VB_n \longrightarrow GL(n,\mathbb{C}[WB_{n+1}] )
\end{eqnarray} defined by
\begin{eqnarray}
\sigma_i \mapsto \mathcal{V}(\sigma_i):=\bordermatrix {
& & i & & & i+1 & \cr
& \sigma_i\cdot I_{i-1} & & & & \cr
& & \sigma_i\cdot (1- \alpha\cdot\xi_{n+1,i} \; \xi_{n+1,i+1} \; \xi^{-1}_{n+1,i}) & & & \alpha\cdot\sigma_i\cdot\xi_{n+1,i} & \cr
& & & & & & \cr
& & \sigma_i & & &0 & \cr
& & & & & & \cr
& & & & & &\sigma_i\cdot I_{n-i-1} \cr
}
\end{eqnarray}
and
\begin{eqnarray}
\tau_{i} \mapsto \mathcal{V}(\tau_i):=\bordermatrix {
& & i & & i+1 & \cr
& \tau_i\cdot I_{i-1} & & & & \cr
& & 0 & &\beta^{-1}\cdot\tau_i & \cr
& & & & & \cr
& &\beta\cdot \tau_i & &0& & \cr
& && & &\tau_i\cdot I_{n-i-1} \cr
} .
\end{eqnarray}
Moreover, the representation $\phi_n$ factors through $\mathcal{WB}_n$ if and only if $\beta=\alpha$, and through $WB_n$ if and only if $\beta=1.$
\label{theofaithful}
\end{prop}
From Proposition \ref{theofaithful} we derive a variant of the Burau representation of the virtual braid group which, as far as we know, is new; we state it in the following theorem.
\begin{theo}
Let $\alpha,\beta$ denote two nonzero complex parameters with $\alpha \neq \beta$ and $\beta \neq 1.$ The mapping $\tilde {h}:WB_{n+1}\rightarrow \mathbb{C}$ such that $\sigma_i \mapsto 1$ and $\tau_i \mapsto 1$, applied to each entry of the above matrix yields the variant of the Burau representation ${}^{\tilde{h}}\phi_n: VB_n \longrightarrow GL(n,\mathbb{C})$ defined by
\begin{eqnarray}
\sigma_i \mapsto {}^ {\tilde{h}} \mathcal{V}(\sigma_i)=\bordermatrix {
& & i & &i+1 & \cr
& I_{i-1} && & & \cr
& & 1-\alpha & &\alpha & \cr
& & & & & \cr
& & 1 & &0 & \cr
& &&&& I_{n-i-1} \cr
} \text{ and } \tau_i \mapsto {}^ {\tilde{h}} \mathcal{V}(\tau_i)=\bordermatrix {
& & i && i+1 & \cr
& I_{i-1} & && & \cr
& & 0 && \beta^{-1} & \cr
& & & & & \cr
& & \beta &&0 & \cr
& &&&
& I_{n-i-1} \cr}\text{ $ $ }
\end{eqnarray}
that does not factor through the welded braid group on $n$ strands. In contrast, ${}^{\tilde{h}}\phi_n$ factors through $\mathcal{WB}_n$ if and only if $\beta=\alpha$, and ${}^{\tilde{h}}\phi_n$ factors through $WB_n$ if and only if $\beta= 1.$
\label{dernièreproposition}
\end{theo}
\begin{proof}
This is a consequence of the defining relations of $VB_n$, which are verified by a direct calculation.
\end{proof}
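The relations asserted in Theorem \ref{dernièreproposition} can be verified symbolically for $n=3$. The sketch below (Python with sympy; reading the matrices ${}^{\tilde h}\mathcal{V}(\sigma_i)$, ${}^{\tilde h}\mathcal{V}(\tau_i)$ as a homomorphic image, an ordering convention of ours) checks the defining relations of $VB_3$ and that the forbidden relation holds precisely at $\beta=1$:

```python
import sympy as sp

a, b = sp.symbols('alpha beta', nonzero=True)

# the matrices of the variant Burau representation for n = 3
S1 = sp.Matrix([[1 - a, a, 0], [1, 0, 0], [0, 0, 1]])
S2 = sp.Matrix([[1, 0, 0], [0, 1 - a, a], [0, 1, 0]])
T1 = sp.Matrix([[0, 1/b, 0], [b, 0, 0], [0, 0, 1]])
T2 = sp.Matrix([[1, 0, 0], [0, 0, 1/b], [0, b, 0]])

Z = sp.zeros(3)
assert (S1*S2*S1 - S2*S1*S2).expand() == Z      # (V1)
assert (T1*T1 - sp.eye(3)).expand() == Z        # (V3)
assert (T1*T2*T1 - T2*T1*T2).expand() == Z      # (V4)
assert (S1*T2*T1 - T2*T1*S2).expand() == Z      # (V7)

# the forbidden relation tau_1 sigma_2 sigma_1 = sigma_2 sigma_1 tau_2
# fails for generic beta but holds at beta = 1
F = (T1*S2*S1 - S2*S1*T2).expand()
assert F != Z
assert F.subs(b, 1).expand() == Z
```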
\subsection{Generalized Gassner representation}
\label{sec:generalized Gassner}
Here, we propose an iterative procedure to recover and generalize the extension of the Gassner representation of the pure Artin braid group on $n$ strands $P_n$. Recall the faithful representation $\psi:WB_n \rightarrow Aut(F_n)$ defined in Section \ref{sec:two generalizations}. It follows from $WB_n=PW_n\rtimes S_n$ that $PW_n$ is embedded into the automorphism group $Aut(F_n)$ of the free group $F_n$. Then, using $PW_n \hookrightarrow Aut(F_n)$, a semidirect product $F_n\rtimes PW_n$ can be defined. Let $\overline{I}_{F_n}$ be the kernel of the canonical projection $\pi:\mathbb{C}[F_n\rtimes PW_n]\twoheadrightarrow \mathbb{C}[PW_n].$ The kernel $\overline{I}_{F_n}$ is called the relative augmentation ideal of $F_n$ in $\mathbb{C}[F_n\rtimes PW_n]$ and is generated by the elements $\{\xi_{n+1,1}-1,\xi_{n+1,2}-1,\dots,\xi_{n+1,n}-1\}$. For more details about the relative augmentation ideal, we refer to \cite{thetheoryofnilpotent,robinson}. Any generator $(\xi_{n+1,i}-1)$ of $\overline{I}_{F_n}$ can be written using the fundamental formula of the free differential calculus [\cite{fox},(2.3)]:
\begin{eqnarray}
\xi_{n+1,i}-1= \sum \limits_{k=1} ^{n} \frac{\partial \xi_{n+1,i}}{ \partial \xi_{n+1,k}}( \xi_{n+1,k}-1) \text{ $$ $$ for all } i=1,\dots, n.
\end{eqnarray}
Excellent references for the Fox derivative are Birman's book \cite{birman} and Fox's original article \cite{fox}. Thanks to [Theorem 3.9, \cite{birman}], we define a right action of $PW_{n+1}$ on $\overline{I}_{F_n}$ by setting
\begin{eqnarray}
(\xi_{n+1,l}-1)\cdot\xi_{i,j}&=&\sum \limits_{k=1} ^{n} \frac{\partial (\xi_{n+1,l} \;\xi_{i,j})}{ \partial \xi_{n+1,k}}( \xi_{n+1,k}-1) \text{ $$ $$ } ( 1\leq l \leq n).
\label{Eq:actiondemccool}
\end{eqnarray}
By computing the coefficients $\left(\frac{\partial(\xi_{n + 1, l}\cdot \xi_{i, j})}{\partial\xi_{n + 1, k}} \right)_{l, k}$ of \eqref{Eq:actiondemccool}, we obtain
\begin{eqnarray}
(\xi_{n+1,l}-1)\cdot\xi_{i,j}=\sum \limits_{k=1} ^{n} \mathcal{C}(\xi_{i,j})_l^{k} \cdot( \xi_{n+1,k}-1)
\label{Eq:actiondemccoolenmatrice}
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{C}(\xi_{i,j})_l^{k}=\bordermatrix {
& & i & & & j & \cr
& \xi_{i,j}\cdot I_{i-1} & & & & \cr
& & \xi_{i,j}\cdot \xi_{n+1,j} & & & \xi_{i,j} \cdot(1-\xi_{n+1,j}\cdot \xi_{n+1,i} \cdot\xi^{-1}_{n+1,j})& \cr
& & & & & \cr
& & 0 & & &\xi_{i,j}\cdot I_{j-i} & \cr
& & & & & & \cr
& & & & & &\xi_{i,j}\cdot I_{n-j} \cr
} \text{ $ $ } (i<j)
\nonumber\\
\nonumber\\
\nonumber\\
\nonumber\\
\mathcal{C}(\xi_{i,j})_l^{k}=\bordermatrix {
& & j & & & i & \cr
& \xi_{i,j}\cdot I_{j-1} & & & & \cr
& & \xi_{i,j}\cdot I_{i-j} & & & 0& \cr
& & & & & \cr
& & \xi_{i,j} \cdot(1-\xi_{n+1,j}\cdot \xi_{n+1,i} \cdot\xi^{-1}_{n+1,j}) & & &\xi_{i,j}\cdot \xi_{n+1,j} & \cr
& & & & & & \cr
& & & & & &\xi_{i,j}\cdot I_{n-i} \cr
} \text{ $ $ }(j<i) \nonumber\\
\label{Eq:matriceassociéeàmccool}
\end{eqnarray}
where $1\leq i \neq j \leq n.$ Passing from the generators $\xi_{i,j}$ of $PW_{n+1}$ to their corresponding matrices $\mathcal{C}(\xi_{i,j})_l^{k}$, the index drops from $n+1$ to $n$, and we assign this matrix to the corresponding generator $\xi_{i,j}$ of $PW_n$. We arrive at the following result.
\begin{theo}
The mapping $\xi_{i,j} \mapsto \mathcal{C}(\xi_{i,j})$, $1\leq i\neq j\leq n$, determines a faithful matrix representation $\Psi_n:PW_n \rightarrow GL(n,\mathbb{C}[F_n\rtimes PW_n])$ where
\begin{eqnarray}
\mathcal{C}(\xi_{i,j})&=&\bordermatrix {
& & i & & & j & \cr
& \xi_{i,j}\cdot I_{i-1} & & & & \cr
& & \xi_{i,j}\cdot \xi_{n+1,j} & & & \xi_{i,j} \cdot(1-\xi_{n+1,j}\cdot \xi_{n+1,i} \cdot\xi^{-1}_{n+1,j})& \cr
& & & & & \cr
& & 0 & & &\xi_{i,j}\cdot I_{j-i} & \cr
& & & & & & \cr
& & & & & &\xi_{i,j}\cdot I_{n-j} \cr
}\text{ } (i<j)\nonumber\\
\label{Eq:basisconj}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{C}(\xi_{i,j})&=&\bordermatrix {
& & j & & & i & \cr
& \xi_{i,j}\cdot I_{j-1} & & & & \cr
& & \xi_{i,j}\cdot I_{i-j} & & & 0& \cr
& & & & & \cr
& & \xi_{i,j} \cdot(1-\xi_{n+1,j}\cdot \xi_{n+1,i} \cdot\xi^{-1}_{n+1,j}) & & &\xi_{i,j}\cdot \xi_{n+1,j} & \cr
& & & & & & \cr
& & & & & &\xi_{i,j}\cdot I_{n-i} \cr
} \qquad (j<i) \nonumber\\
\label{Eq:basisconj1}
\end{eqnarray}
\label{4.3.6}
\end{theo}
\begin{proof}
The faithful action of $PW_n$ on $F_n$ extends to the free group algebra $\mathbb{C}[F_n]$, and the matrix representation given in this theorem is equivalent to the right action of $PW_n$ on the augmentation ideal $I_{F_n}$, the kernel of the augmentation map $\epsilon:\mathbb{C}[F_n]\rightarrow \mathbb{C}$. Faithfulness of $\Psi_n$ then follows from the faithfulness of this action.
\end{proof}
\begin{remark}
By Theorem \ref{4.3.6} and Formula \eqref{Eq:actiondemccoolenmatrice}, the action on $\overline{I}_{F_n}$ can be written in the following abstract form:
\begin{eqnarray}
\overline{I}_{F_n}\cdot \mathfrak{s}=\sum \mathcal{C}(\mathfrak{s})\cdot \overline{I}_{F_n}
\label{Eq:écritureabstraite}
\end{eqnarray}
where $\mathfrak{s}$ ranges over the generators of $PW_{n+1}$ and $\mathcal{C}(\mathfrak{s})$ denotes the corresponding matrices, regarded as generators of $PW_n.$
\end{remark}
\begin{prop}
Let $\mathfrak{a}:\mathbb{C}[F_n\rtimes PW_n]\rightarrow \mathbb{C}[t^{\pm1}_1,\dots,t^{\pm1}_n]$ be the homomorphism defined by $\xi_{n+1,k}\mapsto t_k$ and $\xi_{i,j}\mapsto 1$ for all $1\leq k \leq n$ and $1\leq i \neq j \leq n.$ Applying $\mathfrak{a}$ to each entry of the above matrix yields the representation ${}^{\mathfrak{a}}\Psi_n:PW_n\rightarrow GL(n,\mathbb{C}[t^{\pm1}_1,\dots,t^{\pm1}_n])$, such that $\xi_{i,j} \mapsto {}^ {\mathfrak{a}} \mathcal{C}(\xi_{i,j})$, where
\small{ \begin{eqnarray}
{}^ {\mathfrak{a}} \mathcal{C}(\xi_{i,j})=\bordermatrix {
& & i & & j & \cr
& I_{i-1} & & & & \cr
& & t_j & &1-t_i& \cr
& & & & \cr
& & 0 & & I_{j-i} & \cr
& & & & & I_{n-j} \cr
} (i<j) \quad\text{and}\quad {}^ {\mathfrak{a}} \mathcal{C}(\xi_{i,j})=\bordermatrix {
& & j & & & i & \cr
& I_{j-1} & & & & \cr
& & I_{i-j} & & &0& \cr
& & & & & \cr
& & 1-t_i & & &t_j & \cr
& & & & & & \cr
& & & & & & I_{n-i} \cr
} (j<i)
\end{eqnarray} }
with $1\leq i\neq j \leq n.$
\label{extensiondegassner}
\end{prop}
\begin{remark}
The representation ${}^{\mathfrak{a}}\Psi_n$ ($n\geq1$) in Proposition \ref{extensiondegassner} was constructed by Bardakov \cite{barda05}, using the Magnus construction in the Birman sense defined in \cite{birman}, and by Rubinsztein \cite{rubinsztein}, using the construction in the original sense considered by Magnus in \cite{magnussurvey}. This representation is not faithful for $n\geq 2$ and is an extension of the Gassner representation of the pure braid group $P_n$ (see \cite{barda05}). In addition, M. Nasser and M. Abdulrahim recently studied the irreducibility of ${}^{\mathfrak{a}}\Psi_n$ in \cite{nasserabdul}.
\end{remark}
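As a quick sanity check (our own illustration, not part of the paper), one can verify numerically that the matrices of Proposition \ref{extensiondegassner} respect McCool-type relations. Assuming the standard relations $\xi_{i,j}\xi_{k,j}\xi_{i,k}=\xi_{i,k}\xi_{i,j}\xi_{k,j}$ and $\xi_{i,j}\xi_{k,j}=\xi_{k,j}\xi_{i,j}$ for distinct $i,j,k$, the sketch below checks them for $n=3$ with arbitrary exact rational parameter values.

```python
from fractions import Fraction as F

# Sanity check: the matrices of Proposition \ref{extensiondegassner} satisfy
# two standard McCool relations for n = 3, with exact rational parameters.
t = {1: F(2, 3), 2: F(5, 7), 3: F(3, 11)}
n = 3

def C(i, j):
    """Matrix of xi_{i,j}: identity except row i, with t_j at (i,i) and 1-t_i at (i,j)."""
    M = [[F(int(r == c)) for c in range(n)] for r in range(n)]
    M[i - 1][i - 1] = t[j]
    M[i - 1][j - 1] = 1 - t[i]
    return M

def mul(*Ms):
    """Product of several n x n matrices given as nested lists."""
    P = Ms[0]
    for M in Ms[1:]:
        P = [[sum(P[r][k] * M[k][c] for k in range(n)) for c in range(n)]
             for r in range(n)]
    return P

i, j, k = 1, 2, 3
assert mul(C(i, j), C(k, j), C(i, k)) == mul(C(i, k), C(i, j), C(k, j))
assert mul(C(i, j), C(k, j)) == mul(C(k, j), C(i, j))
print("McCool relations hold for the n = 3 matrices")
```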
Let us now see how to iterate the above procedure. We first iteratively construct the semi-direct products
\begin{equation}
\begin{split}
PW_n(1) &:= F_{n} \rtimes PW_n \leq PW_{n+1}\\
PW_n(2) &:= F_{n} \rtimes( F_{n-1} \rtimes PW_{n-1}) \leq PW_{n+1}\\
\vdots\\
PW_n(r) &:= F_{n} \rtimes(F_{n-1} \rtimes( \dots ( F_{n+1-r} \rtimes PW_{n+1-r})\dots )) \leq PW_{n+1} \qquad (1\leq r\leq n),
\end{split}
\label{Eq:4.54}
\end{equation}
where $F_k$ is a free group on $\{\xi_{k+1,1},\dots,\xi_{k+1,k}\}$ (see \cite{barda03,cpvw}). From the decompositions \eqref{Eq:4.54}, we iteratively define relative augmentation ideals $\overline{I}_{F_{n+1-r}}$ associated with the free groups $F_{n + 1-r}$, and then consider the action of $PW_{n + 1}$ on the product $\mathfrak{d}_{l^r}\cdot\mathfrak{d}_{l^{r-1}}\cdots\mathfrak{d}_{l^2}\cdot\mathfrak{d}_{l ^ 1}$, where $\mathfrak{d}_{l^s}:=(\xi_{n-r+2,l^s}-1)$ for each $s=1,\dots,r$. Here the $\xi_{i,j}$-action ($1\leq i \neq j \leq n + 1$) on the product $\mathfrak{d}_{l^r}\cdot\mathfrak{d}_{l^{r-1}}\cdots\mathfrak{d}_{l^2}\cdot\mathfrak{d}_{l ^ 1}$ is given by iteration of \eqref{Eq:actiondemccool} as follows:
\footnotesize{\begin{eqnarray} \mathfrak{d}_{l^r}\cdot\mathfrak{d}_{l^{r-1}}\cdots\mathfrak{d}_{l^2}\cdot \mathfrak{d}_{l^1}\cdot \xi_{i,j}&=&\mathfrak{d}_{l^r}\cdot\mathfrak{d}_{l^{r-1}}\cdots\mathfrak{d}_{l^{2}}\cdot\sum \limits_{k^1=1} ^{n} \mathcal{C}(\xi_{i,j})_{l^1}^{k^1} \cdot\mathfrak{d}_{k^1}\text{ $ $ by } \eqref{Eq:actiondemccoolenmatrice} \nonumber\\ &=& \mathfrak{d}_{l^r}\cdot\mathfrak{d}_{l^{r-1}}\cdots \mathfrak{d}_{l^3}\cdot\sum \limits_{k^{2}=1} ^{n-1}\sum \limits_{k^1=1} ^{n} \mathcal{C}(\mathcal{C}(\xi_{i,j})_{l^1}^{k^1})_{l^{2}}^{k^{2}}\cdot\mathfrak{d}_{k^1}\cdot\mathfrak{d}_{k^{2}}\text{ $ $ by } \eqref{Eq:écritureabstraite} \text{ $ $ and } \eqref{Eq:actiondemccoolenmatrice} \nonumber\\ &\vdots&\nonumber\\
&=&\sum \limits_{k^r=1} ^{n+1-r}\cdots \sum \limits_{k^{2}=1} ^{n-1}\sum \limits_{k^1=1} ^{n} \mathcal{C}(\cdots\mathcal{C}(\mathcal{C}(\xi_{i,j})_{l^1}^{k^1})_{l^{2}}^{k^{2}}\cdots)_{l^r}^{k^r}\cdot \mathfrak{d}_{k^1}\cdots\mathfrak{d}_{k^r} \text{ by iteration of } \eqref{Eq:écritureabstraite} \text{ and } \eqref{Eq:actiondemccoolenmatrice},\nonumber\\
\end{eqnarray} }
where the coefficients $\mathcal{C}(\mathcal{C}(\xi_{i,j})_{l^1}^{k^1})_{l^{2}}^{k^{2}}$ are exactly like the matrices \eqref{Eq:matriceassociéeàmccool}, but the matrix elements $\xi_{i,j}\in PW_{n+1}$ present in these matrices are replaced by their corresponding matrix representations $\mathcal{C}(\xi_{i,j})_{l^1}^{k^1}.$ Similarly, $\mathcal{C}(\cdots\mathcal{C}(\mathcal{C}(\xi_{i,j})_{l^1}^{k^1})_{l^{2}}^{k^{2}}\cdots)_{l^r}^{k^r}$ are also matrices obtained iteratively from the previously introduced matrices by iteratively replacing the matrix elements present in \eqref{Eq:matriceassociéeàmccool} with their corresponding matrix representations. The substitutions of $\xi_{i,j}$ by $\mathcal{C}(\xi_{i, j})_{l^1}^{k^1}$ reduce the number of generators of $PW_{n + 1}$ by two and, moreover, since these substitutions respect the McCool relations \eqref{Eq:2.7}, it must be clear that the mapping $\xi_{i,j}\mapsto \mathcal{C}(\mathcal{C}(\xi_{i,j})_{l^1}^{k^1})_{l^{2}}^{k^{2}}$ establishes a matrix representation of $PW_{n-1}$ in $GL(2(n-1), \mathbb{C}[PW_n(2)]).$ For the same reason as above, the mapping $\xi_{i,j}\mapsto \mathcal{C}(\cdots\mathcal{C}(\mathcal{C}(\xi_{i,j})_{l^1}^{k^1})_{l^{2}}^{k^{2}}\cdots)_{l^r}^{k^r}$ defines a representation of $PW_{n+1-r}$ in $GL(r(n+1-r),\mathbb{C}[PW_n(r)]).$ To illustrate more precisely the form of this matrix representation, if we use $\xi^{(r)}_{i, j}$ to represent the matrix representation of $\xi_{i,j}$ at the $r$-th iteration, then the matrix representation $\Psi^{(r)}_n$ of $PW_{n + 1-r}$ in $GL(r(n+1-r),\mathbb{C}[PW_n(r)])$ is given by
\footnotesize{ \begin{eqnarray}
\xi^{(0)}_{i,j}&=&\xi_{i,j} \in PW_{n+1} \qquad {\color{blue}\text{(\emph{initial representation})}}
\nonumber\\
\xi^{(r)}_{i,j}&=&\bordermatrix {
& & i & & & j & \cr
& \xi^{(r-1)}_{i,j}\cdot I_{i-1} & & & & \cr
& & \xi^{(r-1)}_{i,j}\cdot t^{(r)}_j \cdot \xi^{(r-1)}_{n+1,j} & & & \xi^{(r-1)}_{i,j} \cdot(I^{(r)}-t^{(r)}_i\cdot\xi^{(r-1)}_{n+1,j}\cdot \xi^{(r-1)}_{n+1,i} \cdot\xi^{-(r-1)}_{n+1,j})& \cr
& & & & & \cr
& & & & & \cr
& & 0 & & &\xi^{(r-1)}_{i,j}\cdot I_{j-i} & \cr
& & & & & & \cr
& & & & & &\xi^{(r-1)}_{i,j}\cdot I_{n+1-r-j}\cr
}(i<j)\nonumber\\
\label{Eq:itérationdemccool}
\end{eqnarray} }
and
\footnotesize{\begin{eqnarray}
\xi^{(r)}_{i,j}&=& \bordermatrix {
& & j & & & i & \cr
&\small{ \xi^{(r-1)}_{i,j}\cdot I_{j-1} } & & & & \cr
& & \xi^{(r-1)}_{i,j}\cdot I_{i-j} & & & 0& \cr
& & & & & \cr
& &\small{ \xi^{(r-1)}_{i,j} \cdot(I^{(r)}-t^{(r)}_i\cdot\xi^{(r-1)}_{n+1,j}\cdot \xi^{(r-1)}_{n+1,i} \cdot\xi^{-(r-1)}_{n+1,j}) }& & &\small {\xi^{(r-1)}_{i,j}\cdot t^{(r)}_j \cdot \xi^{(r-1)}_{n+1,j} } & \cr
& & & & & & \cr
& & & & & &\small {\xi^{(r-1)}_{i,j}\cdot I_{n+1-r-i} } \cr
} \text{ }(j<i)\nonumber\\
\label{Eq:itérationdemccool1}
\end{eqnarray} }
where $r=1,\dots,n$, $t^{(r)}_j,t^{(r)}_i$ are nonzero, not necessarily distinct, parameters in $\mathbb{C}$ introduced at each iteration of each matrix representation, and $I^{(r)}$ is the identity matrix of the same size as the recurrent matrices $\xi^{(r-1)}_{i,j}$. Applied to the trivial representation $\xi^{(0)}_{i,j}=1$ for all $1\leq i\neq j\leq n+1$ of $PW_{n+1},$ the formulas \eqref{Eq:itérationdemccool}-\eqref{Eq:itérationdemccool1} yield, at the first iteration, the extension ${}^ {\mathfrak{a}} \Psi_n$ of the Gassner representation of $P\Sigma_n$ (see Proposition \ref{extensiondegassner}) and, at the second iteration, a tensor product ${}^ {\mathfrak{a}} \Psi_n\otimes {}^ {\mathfrak{a}} \Psi_n$ of two copies of this extension. Let us illustrate this with a simple example.
\begin{example}
Applied to the trivial representation $\xi^{(0)}_{i,j}=1$ for all $1\leq i\neq j\leq 5$ of $P\Sigma_{5},$ the formulas \eqref{Eq:itérationdemccool}-\eqref{Eq:itérationdemccool1} yield, as expected, at the first iteration the representation ${}^ {\mathfrak{a}} \Psi_4$ of $P\Sigma_4.$ For the second iteration $r=2,$ applying the formulas \eqref{Eq:itérationdemccool}-\eqref{Eq:itérationdemccool1} to ${}^ {\mathfrak{a}} \Psi_4(\xi_{1,2})$ gives, for example:
\begin{eqnarray}
\xi^{(2)}_{1,2}= \bordermatrix{ & & & & & & & & & & & & \cr
& s_2t_2 & s_2(1-t_1) & 0 & 0 & t_2(1-s_1) & (1-t_1)(1-s_1) & 0 & 0 & 0 & 0 & 0 &0\cr
& 0 & s_2 & 0 & 0 & 0 & 1-s_1 & 0 & 0 & 0 & 0 & 0 &0 \cr
& 0 & 0 & s_2 & 0 & 0 & 0 & 1-s_1 & 0 & 0 & 0 & 0 &0\cr
& 0 & 0 & 0 &s_2t_2 & 0 & 0 & 0 & 1-s_1t_1 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & t_2 & 1-t_1 & 0 & 0 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & t_2 & 1-t_1 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \cr
}\text{ }
\end{eqnarray}
As we can see, omitting columns 4, 8, and 12 and the corresponding rows of the above matrix turns $\xi^{(2)}_{1,2}$ into a tensor product of two matrices of the form $\xi^{(1)}_{1,2}:$
\begin{eqnarray}
\widehat{\xi^{(2)}}_{1,2}= \bordermatrix{ & & & & & & & & & & & & \cr
& s_2t_2 & s_2(1-t_1) & 0 & t_2(1-s_1) & (1-t_1)(1-s_1) & 0 & 0 & 0 & 0 \cr
& 0 & s_2 & 0 & 0 & 1-s_1 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & s_2 & 0 & 0 & 1-s_1 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & t_2 & 1-t_1 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & t_2 & 1-t_1 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \cr
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \cr
}=\xi^{(1)}_{1,2}\otimes \xi^{(1)}_{1,2}. \nonumber\\
\end{eqnarray}
In other words, it is a tensor product $\left({}^{\mathfrak{a}}\Psi_3\otimes {}^{\mathfrak{a}}\Psi_3 \right)(\xi_{1 ,2})$ of $P\Sigma_3.$
\end{example}
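As a sanity check on this example (our own illustration, not part of the paper's argument), the reduction can be verified mechanically: the sketch below builds the $12\times 12$ matrix entry by entry from the display above, with arbitrary integer values for the parameters $s_1,s_2,t_1,t_2$, deletes columns 4, 8, and 12 and the corresponding rows, and compares the result with the Kronecker product of the two first-iteration $3\times 3$ matrices.

```python
# Verify that deleting rows/columns 4, 8, 12 of xi^{(2)}_{1,2} yields the
# Kronecker product of two first-iteration matrices. The parameter values
# are arbitrary nonzero integers, so all comparisons are exact.
s1, s2, t1, t2 = 2, 3, 5, 7

def first_iter(a, b):
    """First-iteration 3x3 matrix for (i, j) = (1, 2): b on the diagonal, 1-a off it."""
    return [[b, 1 - a, 0],
            [0, 1,     0],
            [0, 0,     1]]

A_s, A_t = first_iter(s1, s2), first_iter(t1, t2)

def kron(A, B):
    """Kronecker (tensor) product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

# The 12x12 matrix xi^{(2)}_{1,2}, copied entry by entry from the example.
X = [
 [s2*t2, s2*(1-t1), 0,  0,     t2*(1-s1), (1-t1)*(1-s1), 0,    0,       0,  0,    0, 0],
 [0,     s2,        0,  0,     0,         1-s1,          0,    0,       0,  0,    0, 0],
 [0,     0,         s2, 0,     0,         0,             1-s1, 0,       0,  0,    0, 0],
 [0,     0,         0,  s2*t2, 0,         0,             0,    1-s1*t1, 0,  0,    0, 0],
 [0,     0,         0,  0,     t2,        1-t1,          0,    0,       0,  0,    0, 0],
 [0,     0,         0,  0,     0,         1,             0,    0,       0,  0,    0, 0],
 [0,     0,         0,  0,     0,         0,             1,    0,       0,  0,    0, 0],
 [0,     0,         0,  0,     0,         0,             0,    1,       0,  0,    0, 0],
 [0,     0,         0,  0,     0,         0,             0,    0,       t2, 1-t1, 0, 0],
 [0,     0,         0,  0,     0,         0,             0,    0,       0,  1,    0, 0],
 [0,     0,         0,  0,     0,         0,             0,    0,       0,  0,    1, 0],
 [0,     0,         0,  0,     0,         0,             0,    0,       0,  0,    0, 1],
]

drop = {3, 7, 11}                      # columns 4, 8, 12 (0-indexed)
keep = [k for k in range(12) if k not in drop]
reduced = [[X[i][j] for j in keep] for i in keep]

assert reduced == kron(A_s, A_t)
print("reduced xi^(2)_{1,2} equals the Kronecker product of the two factors")
```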
\bibliographystyle{abbrv}
\section{Proofs}
\subsection{Lemma \ref{thm:gradient-flow}}
\label{subsec:proof_gradient-flow}
\begin{proof}
Gradient flows in the Wasserstein space take the form of the continuity equation (see \citet{ambrosio2008gradient}, page 281), i.e.,
\begin{align}
\partial_t\rho_t + \nabla\cdot(\rho_t {\mathbf{v}}) = 0.
\label{eq:app-continuity-eq}
\end{align}
The velocity field ${\mathbf{v}}$ in Eq. (\ref{eq:app-continuity-eq}) is given by
\begin{align}
{\mathbf{v}}({\mathbf{x}}) = -\nabla_{\mathbf{x}}\frac{\delta {\mathcal{F}}}{\delta\rho}(\rho),\label{eq:velocity-formula}
\end{align}
where $\frac{\delta {\mathcal{F}}}{\delta\rho}(\rho)$ denotes the first variation of the functional ${\mathcal{F}}$. The first variation is defined as
\begin{align}
\frac{d}{d \varepsilon} {\mathcal{F}}(\rho + \varepsilon\chi)\bigg|_{\varepsilon = 0} = \int \frac{\delta {\mathcal{F}}}{\delta\rho}(\rho)\chi,
\end{align}
where $\chi = \nu - \rho$ for some $\nu \in {\mathcal{P}}_2(\Omega)$.
Let us derive an expression for the first variation of ${\mathcal{F}}$. In the following, we drop the explicit dependence on ${\mathbf{x}}$ for clarity,
\begin{align}
\frac{d}{d \varepsilon} {\mathcal{F}}(\rho + \varepsilon\chi)\bigg|_{\varepsilon = 0} &= \frac{d}{d \varepsilon} \int f\left(\frac{\rho + \varepsilon\chi}{\mu}\right)\mu + \gamma\int (\rho + \varepsilon\chi) \log (\rho + \varepsilon\chi)\Bigg|_{\varepsilon = 0}\\
&=\int f'\left(\frac{\rho + \varepsilon\chi}{\mu}\right)\chi + \gamma\int (\log(\rho + \varepsilon\chi) + 1)\chi\Bigg|_{\varepsilon = 0}\\
&=\int \left[f'\left(\frac{\rho}{\mu}\right) + \gamma\log(\rho) + \gamma\right]\chi.
\end{align}
Substituting $\frac{\delta {\mathcal{F}}}{\delta\rho}(\rho)$ in Eq. (\ref{eq:velocity-formula}) we get,
\begin{align}
{\mathbf{v}}({\mathbf{x}}) &= -\nabla_{\mathbf{x}} \left[f'\left(\frac{\rho}{\mu}\right) + \gamma\log(\rho) + \gamma\right]\\
&= -\nabla_{\mathbf{x}} f'\left(\frac{\rho}{\mu}\right)-\frac{\gamma}{\rho}\nabla_{\mathbf{x}} \rho.
\end{align}
Substituting ${\mathbf{v}}$ in Eq. (\ref{eq:app-continuity-eq}) we get the gradient flow,
\begin{align}
\partial_t\rho_t - \nabla_{\mathbf{x}}\cdot\left(\rho_t \nabla_{\mathbf{x}} f'\left(\frac{\rho}{\mu}\right) + \rho_t\frac{\gamma}{\rho}\nabla_{\mathbf{x}} \rho\right) = 0\\
\partial_t\rho_t({\mathbf{x}}) - \nabla_{\mathbf{x}}\cdot\left(\rho_t\nabla_{\mathbf{x}} f'\left(\frac{\rho({\mathbf{x}})}{\mu({\mathbf{x}})}\right)\right) - \gamma\Delta_{{\mathbf{x}}\rvx}\rho_t({\mathbf{x}}) = 0,
\end{align}
where $\Delta_{{\mathbf{x}}\rvx}$ and $\nabla_{\mathbf{x}}\cdot$ denote the Laplace and the divergence operators respectively.
\end{proof}
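The PDE derived above can be checked numerically on a toy problem. The sketch below is our own illustration (not taken from the paper): it evolves a 1D density under the flow with $f(r) = r\log r$, target $\mu = \mathcal{N}(0,1)$, and initial density $\rho_0 = \mathcal{N}(2,1)$, using an explicit flux-form finite-difference scheme with zero-flux boundaries. The KL divergence to the target decreases along the flow, as expected of a gradient flow of this functional.

```python
import math

# 1D toy check of the derived PDE
#   d/dt rho = d/dx( rho * d/dx f'(rho/mu) ) + gamma * d2/dx2 rho
# with f(r) = r log r (KL), target mu = N(0,1), initial rho_0 = N(2,1).
gamma, dx, dt, steps = 0.01, 0.2, 0.005, 200
xs = [-8.0 + dx * k for k in range(int(16 / dx) + 1)]

def gauss(x, m):
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

mu = [gauss(x, 0.0) for x in xs]
rho = [gauss(x, 2.0) for x in xs]
eps = 1e-300  # positivity floor

def kl(rho, mu):
    return sum(r * math.log(max(r, eps) / m) for r, m in zip(rho, mu)) * dx

kl0 = kl(rho, mu)
n = len(xs)
for _ in range(steps):
    # u = f'(rho/mu) = log(rho/mu) + 1 for f(r) = r log r
    u = [math.log(max(r, eps) / m) + 1.0 for r, m in zip(rho, mu)]
    # face quantities G_{k+1/2} = rho_face * du/dx + gamma * drho/dx
    G = [0.5 * (rho[k] + rho[k + 1]) * (u[k + 1] - u[k]) / dx
         + gamma * (rho[k + 1] - rho[k]) / dx for k in range(n - 1)]
    G = [0.0] + G + [0.0]  # zero-flux boundaries
    rho = [max(rho[k] + dt * (G[k + 1] - G[k]) / dx, eps) for k in range(n)]

assert kl(rho, mu) < kl0  # the flow decreases KL(rho_t || mu)
print(kl0, "->", kl(rho, mu))
```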
\subsection{Lemma \ref{lemma:latent-density-ratio}}
\label{subsec:proof_densityratio}
\begin{proof}
Let $f$ be an integrable function on ${\mathcal{X}}$. If $\mathbf{J}_g$ has full column rank and $g$ is an injective function, then we have the following change-of-variables equation~\citep{ben1999change,gemici2016normalizing},
\begin{align}
\int_{{\mathcal{X}}} f({\mathbf{x}})d{\mathbf{x}} = \int_{{\mathcal{Z}}}(f \circ g)({\mathbf{z}})\sqrt{\det \mathbf{J}_g^{\top} \mathbf{J}_g({\mathbf{z}})}d{\mathbf{z}}.
\end{align}
This implies that the infinitesimal volumes $d{\mathbf{x}}$ and $d{\mathbf{z}}$ are related as $d{\mathbf{x}} = \sqrt{\det \mathbf{J}_g^{\top} \mathbf{J}_g({\mathbf{z}})}d{\mathbf{z}}$ and the densities $p_Z({\mathbf{z}})$ and $q_X({\mathbf{x}})$ are related as $p_Z({\mathbf{z}}) = q_X(g({\mathbf{z}}))\sqrt{\det \mathbf{J}_g^{\top} \mathbf{J}_g({\mathbf{z}})}$.
Similarly, $p_{\hat{Z}}(\hat{{\mathbf{z}}}) = q_{\hat{X}}(g(\hat{{\mathbf{z}}}))\sqrt{\det \mathbf{J}_g^{\top} \mathbf{J}_g(\hat{{\mathbf{z}}})}$. Finally, the density-ratio $p_{\hat{Z}}({\mathbf{u}})/p_{Z}({\mathbf{u}})$ at the point ${\mathbf{u}} \in {\mathcal{Z}}$ is given by
\begin{align}
\frac{p_{\hat{Z}}({\mathbf{u}})}{p_{Z}({\mathbf{u}})} = \frac{q_{\hat{X}}(g({\mathbf{u}}))\sqrt{\det \mathbf{J}_g^{\top} \mathbf{J}_g({\mathbf{u}})}}{q_{X}(g({\mathbf{u}}))\sqrt{\det \mathbf{J}_g^{\top} \mathbf{J}_g({\mathbf{u}})}} = \frac{q_{\hat{X}}(g({\mathbf{u}}))}{q_{X}(g({\mathbf{u}}))}.
\end{align}
\end{proof}
\section{A Discussion on \textsc{DG}$f$low{} for WGAN}
\label{sec:wgan-discussion}
We apply \textsc{DG}$f$low{} to WGAN models by treating the output from their critics as the logit for the estimation of density-ratio. However, it is well-known that WGAN critics are not density-ratio estimators as they are trained to maximize the 1-Wasserstein distance with an unconstrained output. In this section, we provide theoretical justification for the good performance of \textsc{DG}$f$low{} on WGAN models. We show that \textsc{DG}$f$low{} is related to the gradient flow of the entropy-regularized 1-Wasserstein functional ${\mathcal{F}}_\mu^{\mathcal{W}}: {\mathcal{P}}_2(\Omega) \rightarrow {\mathbb{R}}$,
\begin{align}
{\mathcal{F}}_\mu^{\mathcal{W}}(\rho) \triangleq \underset{\text{1-Wasserstein distance}}{\underbrace{\sup_{\|d\|_{\text{Lip}} \leq 1} \int d\left({\mathbf{x}}\right)\mu({\mathbf{x}})d{\mathbf{x}} - \int d\left({\mathbf{x}}\right)\rho({\mathbf{x}})d{\mathbf{x}}}} +\gamma\underset{\text{negative entropy}}{\underbrace{\int \rho({\mathbf{x}}) \log \rho({\mathbf{x}})d{\mathbf{x}}}},\label{eq:-er-1-wasserstein}
\end{align}
where $\mu$ denotes the target density and $\|d\|_{\text{Lip}}$ denotes the Lipschitz constant of the function $d$.
Let $d^{*}$ be the function that achieves the supremum in Eq. (\ref{eq:-er-1-wasserstein}). This results in the functional,
\begin{align}
{\mathcal{F}}_\mu^{\mathcal{W}}(\rho) = \int d^{*}\left({\mathbf{x}}\right)\mu({\mathbf{x}})d{\mathbf{x}} - \int d^{*}\left({\mathbf{x}}\right)\rho({\mathbf{x}})d{\mathbf{x}} +\gamma\int \rho({\mathbf{x}}) \log \rho({\mathbf{x}})d{\mathbf{x}}.\label{eq:sup-er-1-wasserstein}
\end{align}
Following a similar derivation as in Appendix \ref{subsec:proof_gradient-flow}, the gradient flow of ${\mathcal{F}}_\mu^{\mathcal{W}}(\rho)$ is given by the following PDE,
\begin{align}
\partial_t\rho_t({\mathbf{x}}) + \nabla_{\mathbf{x}}\cdot\left(\rho_t \nabla_{{\mathbf{x}}}d^*({\mathbf{x}})\right) - \gamma\Delta_{{\mathbf{x}}\rvx}\rho_t({\mathbf{x}}) = 0.
\end{align}
If $d^*$ is approximated using the critic ($d_\phi$) of WGAN, we get the following gradient flow,
\begin{align}
\partial_t\rho_t({\mathbf{x}}) + \nabla_{\mathbf{x}}\cdot\left(\rho_t \nabla_{{\mathbf{x}}}d_\phi({\mathbf{x}})\right) - \gamma\Delta_{{\mathbf{x}}\rvx}\rho_t({\mathbf{x}}) = 0,\label{eq:gradient-flow-w1}
\end{align}
which is the same as the gradient flow of the entropy-regularized $f$-divergence with $f = r\log r$ (i.e., the KL divergence) when $d_\phi$ is treated as a density-ratio estimator. The gradient flow of the entropy-regularized $f$-divergence with $f = r\log r$ is simplified below,
\begin{align}
\partial_t\rho_t({\mathbf{x}}) &- \nabla_{\mathbf{x}}\cdot\left(\rho_t\nabla_{\mathbf{x}} f'\left(\exp(-d_\phi({\mathbf{x}}))\right)\right) - \gamma\Delta_{{\mathbf{x}}\rvx}\rho_t({\mathbf{x}}) = 0\\
\partial_t\rho_t({\mathbf{x}}) &- \nabla_{\mathbf{x}}\cdot\left(\rho_t\nabla_{\mathbf{x}} \left(\log(\exp(-d_\phi({\mathbf{x}}))) + 1\right)\right) - \gamma\Delta_{{\mathbf{x}}\rvx}\rho_t({\mathbf{x}}) = 0\\
\partial_t\rho_t({\mathbf{x}}) &+ \nabla_{\mathbf{x}}\cdot\left(\rho_t\nabla_{\mathbf{x}} d_\phi({\mathbf{x}})\right) - \gamma\Delta_{{\mathbf{x}}\rvx}\rho_t({\mathbf{x}}) = 0.\label{eq:gradient-flow-kl}
\end{align}
The equality of Eq. (\ref{eq:gradient-flow-w1}) and Eq. (\ref{eq:gradient-flow-kl}) implies that \textsc{DG}$f$low{} approximates the gradient flow of the 1-Wasserstein distance when the critic of WGAN is used for density-ratio estimation.
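At the particle level, the flow in Eq. (\ref{eq:gradient-flow-kl}) can be simulated with an Euler--Maruyama discretization, $x_{k+1} = x_k + \eta\,\nabla_{\mathbf{x}} d_\phi(x_k) + \sqrt{2\gamma\eta}\,\xi_k$. The sketch below is a 1D toy of our own (not the paper's implementation): the ``critic'' is the exact log density-ratio between a Gaussian target $\mathcal{N}(0,1)$ and a Gaussian generator output $\mathcal{N}(2,1)$, and a few update steps move the generated samples toward the target.

```python
import math, random

# Particle-level discretization of Eq. (gradient-flow-kl):
#   x_{k+1} = x_k + eta * grad d(x_k) + sqrt(2 * gamma * eta) * xi_k
# Toy setup: target mu = N(0,1), generator output rho_0 = N(2,1), and an
# idealized critic whose output is the exact log density-ratio
#   d(x) = log mu(x) - log rho_0(x) = 2 - 2x.
random.seed(0)
eta, gamma, steps = 0.1, 0.01, 5

def grad_d(x):
    # gradient of d(x) = 2 - 2x, the exact logit for these two Gaussians
    return -2.0

xs = [random.gauss(2.0, 1.0) for _ in range(2000)]  # "generated" samples
mean0 = sum(xs) / len(xs)
for _ in range(steps):
    xs = [x + eta * grad_d(x) + math.sqrt(2 * gamma * eta) * random.gauss(0, 1)
          for x in xs]
mean1 = sum(xs) / len(xs)

assert abs(mean1) < abs(mean0)  # samples moved toward the target mean 0
print(mean0, "->", mean1)
```

Note that, as in the refinement setting, the density-ratio estimator is held fixed during the updates, so the number of steps acts as a hyperparameter; too many steps with this frozen drift would overshoot the target.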
\section{Further Discussion on Related Work}
\label{app:relatedwork}
\paragraph{Energy-based \& Score-based Generative Models}
\textsc{DG}$f$low{} is related to recently proposed energy-based generative models~\citep{arbel2020generalized, Deng2020Residual} --- one can view the energy functions used in these methods as a component that improves some base model. For example, the Generalized Energy-Based Model (GEBM) \citep{arbel2020generalized} jointly trains an implicit generative model with an energy function and uses Langevin dynamics to sample from the combination of the two. Similarly, in \citet{Deng2020Residual}, a discriminator that estimates the energy function is combined with a language model to train an energy-based text-generation model.
Score-based generative modeling (SBGM)~\citep{song2019generative,song2020improved} is another active area of research closely related to energy-based models. Noise Conditional Score Network (NCSN)~\citep{song2019generative,song2020improved}, an SBGM, trains a neural network to estimate the score function of a probability density at various noise levels. Once trained, this score network is used to evolve samples from noise to the data distribution using Langevin dynamics. NCSN can be viewed as a gradient flow that refines a sample right from noise to data; however, unlike \textsc{DG}$f$low{}, NCSN is a complete generative model in itself and not a sample refinement technique that can be applied to other generative models.
\paragraph{Other Related Work} Monte Carlo techniques have been used for improving various components in generative models, e.g., \citet{grover18variational} proposed Variational Rejection Sampling, which performs rejection sampling in the latent space of VAEs to improve the variational posterior, and \citet{grover2019bias} used likelihood-free importance sampling for bias correction in generative models. \citet{wu2020logan} proposed Latent Optimization GAN (LOGAN), which optimizes the latent vector as part of the training process, unlike \textsc{DG}$f$low{}, which refines the latent vector post training.
\section{Implementation Details}
\subsection{2D Datasets}
\paragraph{Datasets} The 25 Gaussians dataset was constructed by generating 100000 samples from a mixture of 25 equally likely 2D isotropic Gaussians with means $\{-4, -2, 0, 2, 4\} \times \{-4, -2, 0, 2, 4\} \subset {\mathbb{R}}^2$ and standard deviation 0.05. Once generated, the data-points were normalized by $2\sqrt{2}$ following \citet{tanaka2019discriminator}. The 2DSwissroll dataset was constructed by first generating 100000 samples of the 3D swissroll dataset using \texttt{make\_swiss\_roll} from \texttt{scikit-learn} with \texttt{noise=0.25} and then only keeping dimensions $\{0, 2\}$. The generated samples were normalized by 7.5.
\paragraph{Base Models} We trained a WGAN-GP model for both the datasets. The generator was a fully-connected network with ReLU non-linearities that mapped $z \sim {\mathcal{N}}(0, I_{2\times 2})$ to $x \in {\mathbb{R}}^2$. Similarly, the discriminator was a fully-connected network with ReLU non-linearities that mapped $x \in {\mathbb{R}}^2$ to ${\mathbb{R}}$. We refer the reader to \citet{gulrajani2017improved} for the exact network structures. The gradient penalty factor was set to 10. The models were trained for 10K generator iterations with a batch size of 256 using the Adam optimizer with a learning rate of $10^{-4}$, $\beta_1=0.5$, and $\beta_2=0.9$. We updated the discriminator 5 times for each generator iteration.
\paragraph{Hyperparameters} We ran DOT for 100 steps and performed gradient descent using the Adam optimizer with a learning rate of 0.01 and $\beta = (0., 0.9)$ as suggested by \citet{tanaka2019discriminator}. DDLS was run for 50 iterations with a step-size of 0.01 and the Gaussian noise was scaled by a factor of 0.1 as suggested by \citet{che2020your}. For \textsc{DG}$f$low{}, we set the step-size $\eta = 0.01$, the number of steps $n = 100$, and the noise regularizer $\gamma = 0.01$. We used the output from the WGAN-GP discriminator directly as a logit for estimating the density ratio for DDLS and \textsc{DG}$f$low{}.
\paragraph{Metrics} We compared the different methods quantitatively on two metrics: \% high quality samples and kernel density estimate (KDE) score. A sample is classified as a high quality sample if it lies within 4 standard deviations of its nearest Gaussian. The KDE score is computed by first estimating the KDE using generated samples and then computing the log-likelihood of the training samples under the KDE estimate. KDE was performed using \texttt{sklearn.neighbors.KernelDensity} with a Gaussian kernel and a kernel bandwidth of 0.1. The quantitative metrics were averaged over 10 runs with 5000 samples from each method.
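For concreteness, the high-quality-sample criterion described above can be sketched as follows (our reading of the description, not the authors' exact evaluation script): the mixture means and standard deviation are normalized by $2\sqrt{2}$, and a sample counts as high quality if it lies within 4 standard deviations of the nearest mean.

```python
import math

# Sketch of the "% high quality samples" metric for the 25 Gaussians data.
SCALE = 2 * math.sqrt(2)
MEANS = [(i / SCALE, j / SCALE)
         for i in (-4, -2, 0, 2, 4) for j in (-4, -2, 0, 2, 4)]
STD = 0.05 / SCALE

def is_high_quality(p):
    # high quality <=> within 4 standard deviations of the nearest mean
    d = min(math.hypot(p[0] - m[0], p[1] - m[1]) for m in MEANS)
    return d <= 4 * STD

def pct_high_quality(samples):
    return 100.0 * sum(map(is_high_quality, samples)) / len(samples)

assert is_high_quality(MEANS[0])                # a point exactly at a mean
assert not is_high_quality((1.0 / SCALE, 0.0))  # midway between two means
print(pct_high_quality([MEANS[0], (1.0 / SCALE, 0.0)]))  # -> 50.0
```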
\subsection{Image Experiments}
\begin{table*}
\centering
\caption{\small Network architectures used for MMDGAN and VAE models.}
\label{tab:network-arch}
\subfloat[Generator or Decoder]{\begin{tabular}{l}
\toprule
Input Shape: \texttt{(b, d, 1, 1)}\\
\midrule
\texttt{Upconv(256)}\\
\texttt{BatchNorm}\\
\texttt{ReLU}\\
\texttt{Upconv(128)}\\
\texttt{BatchNorm}\\
\texttt{ReLU}\\
\texttt{Upconv(64)}\\
\texttt{BatchNorm}\\
\texttt{ReLU}\\
\texttt{Upconv(3)}\\
\texttt{Tanh}\\
\midrule
Output Shape: \texttt{(b, 3, 32, 32)}\\
\bottomrule
\end{tabular}}\qquad
\subfloat[Discriminator or Encoder]{\begin{tabular}{l}
\toprule
Input Shape: \texttt{(b, 3, 32, 32)}\\
\midrule
\texttt{Conv(64)}\\
\texttt{LeakyReLU(0.2)}\\
\texttt{Conv(128)}\\
\texttt{BatchNorm}\\
\texttt{LeakyReLU(0.2)}\\
\texttt{Conv(256)}\\
\texttt{BatchNorm}\\
\texttt{LeakyReLU(0.2)}\\
\texttt{Conv(m)}\\
\midrule
Output Shape: \texttt{(b, m, 1, 1)}\\
\bottomrule
\end{tabular}}
\end{table*}
\paragraph{Datasets} CIFAR10~\citep{krizhevsky2009learning} is a dataset of 60K natural RGB images of size $32 \times 32$ from 10 classes. STL10 is a dataset of 100K natural RGB images of size $96 \times 96$ from 10 classes. We resized the STL10~\citep{coates2011analysis} dataset to $48 \times 48$ for SNGAN and WGAN-GP, and to $32 \times 32$ for MMDGAN, OCFGAN-GP, and VAE since the respective base models were trained on these sizes.
\paragraph{Base Models for CIFAR10} We used the publicly available pre-trained models for WGAN-GP, SN-DCGAN (hi), and SN-DCGAN (ns). We refer the reader to \citet{tanaka2019discriminator} for exact details about these models. For SN-ResNet-GAN and OCFGAN-GP we used the pre-trained models from \citet{miyato2018spectral} and \citet{ansari2020characteristic} respectively. We used the respective discriminators of SN-DCGAN (ns), SN-DCGAN (hi), and WGAN-GP for density-ratio estimation when refining their generators. For the SN-ResNet-GAN (hi) generator, we used SN-DCGAN (ns) discriminator as the non-saturating loss provides a better density-ratio estimation than a discriminator trained using the hinge loss.
We trained our own models for MMDGAN, VAE, and Glow. We used the generator and discriminator architectures shown in Table \ref{tab:network-arch} for MMDGAN with $d=32$. VAE used the same architecture with $d=64$. Our Glow model was trained using the code available at \url{https://github.com/y0ast/Glow-PyTorch} with a batch size of 56 for 150 epochs. The density ratio correctors, $D_\lambda$ (see section \ref{sec:refinement-for-all}), were initialized with the weights from the SN-DCGAN (ns) released by \citet{tanaka2019discriminator}. $D_\lambda$ was then fine-tuned on images from SN-DCGAN (ns)'s generator and the generator being improved (e.g., MMDGAN and OCFGAN-GP) using SGD with a learning rate of $10^{-4}$ and momentum of 0.9. We fine-tuned $D_\lambda$ for 10000 iterations with a batch size of 64.
\paragraph{Base Models for STL10} We used the publicly available pre-trained models~\citep{tanaka2019discriminator,ansari2020characteristic} for WGAN-GP, SN-DCGAN (hi), SN-DCGAN (ns), and OCFGAN-GP. We trained our own models for MMDGAN and VAE with the same architecture and training details as CIFAR10. We fine-tuned the density ratio correctors for STL10 for 5000 iterations with other details being the same as CIFAR10.
\paragraph{Hyperparameters} We performed 25 updates of \textsc{DG}$f$low{} for CIFAR10 and STL10 with a step size of 0.1 for models that do not require density ratio corrections. For STL10 models that require a density ratio correction, we performed 15 updates with a step size of 0.05. The noise regularizer ($\gamma$), whenever used, was set to 0.01.
\paragraph{Metrics} We used the Fr\'echet Inception Distance (FID)~\citep{heusel2017gans} and Inception Score (IS)~\citep{salimans2016improved} metrics to evaluate the quality of generated samples before and after refinement. The IS denotes the confidence in classification of the generated samples using a pretrained InceptionV3 network whereas the FID is the Fr\'echet distance between multivariate Gaussians fitted to the 2048 dimensional feature vectors extracted from the InceptionV3 network for real and generated data. Both the metrics were computed using 50K samples for all the models, except Glow where we used 10K samples. Following \citet{tanaka2019discriminator}, we used the entire training and test set (60K images) for CIFAR10 and the entire unlabeled set (100K images) for STL10 as the set of real images used to compute FID.
\subsection{Character Level Language Modeling}
\paragraph{Dataset} We used the Billion Words dataset~\citep{chelba2013one} which was pre-processed into 32-character long strings.
\paragraph{Base Model} Our generator was a 1D CNN which followed the architecture used by \citet{gulrajani2017improved}.
\paragraph{Hyperparameters} We performed 50 updates of \textsc{DG}$f$low{} with a step size of 0.1 and noise factor $\gamma = 0$.
\paragraph{Metrics} The JS-4 and JS-6 scores were computed using the code provided by \citet{gulrajani2017improved} at \url{https://github.com/igul222/improved_wgan_training}. We used 10000 samples from the models to compute the JS-4 score.
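The JS-4 score can be sketched as follows (our reading: the Jensen--Shannon divergence, in bits, between empirical 4-gram distributions of generated and reference strings; the authors use the script from the repository linked above, so details may differ).

```python
import math
from collections import Counter

# Sketch of the JS-n metric: Jensen-Shannon divergence (in bits) between
# empirical character n-gram distributions of two string collections.
def ngram_dist(strings, n=4):
    counts = Counter(s[i:i + n] for s in strings for i in range(len(s) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def js(p, q):
    m = {g: 0.5 * (p.get(g, 0.0) + q.get(g, 0.0)) for g in set(p) | set(q)}
    def kl(a, b):
        return sum(pa * math.log2(pa / b[g]) for g, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

real = ["the cat sat on the mat ", "the dog sat on the rug "]
fake = ["the cat sat on the mat ", "the dog sat on the rug "]
assert js(ngram_dist(real), ngram_dist(fake)) == 0.0  # identical corpora
# disjoint 4-gram supports give the maximum value of 1 bit
assert abs(js(ngram_dist(["aaaaa"]), ngram_dist(["bbbbb"])) - 1.0) < 1e-12
```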
\begin{table*}
\centering
\footnotesize
\caption{\small $f$-divergences and their derivatives.}
\label{tab:f-divergences}
\begin{tabular}{llll}
\toprule
$f$-divergence & \multicolumn{1}{c}{$f$} & \multicolumn{1}{c}{$f'$} & \multicolumn{1}{c}{$f''$}\\
\midrule
KL & $r \log r$ & $\log r + 1$ & $\frac{1}{r}$ \\
JS & $r \log r - (r+1)\log\frac{r+1}{2}$ & $\log \frac{2r}{r+1}$ & $\frac{1}{r^2 + r}$ \\
$\log$ D & $(r+1)\log (r+1) - 2\log2$ & $\log(r+1) +1$ & $\frac{1}{r + 1}$ \\
\bottomrule
\end{tabular}
\end{table*}
\section{Additional Results}
\label{sec:additional-results}
\begin{figure}[htb]
\centering
\subfloat{\includegraphics[width=0.7\textwidth]{comp-f-wgan-25gaussians}}\\
\subfloat{\includegraphics[width=0.7\textwidth]{comp-f-wgan-swissroll}}
\caption{\small Qualitative comparison of \textsc{DG}$f$low{} with different $f$-divergences on the 25Gaussians and 2DSwissroll datasets.}
\label{fig:f-div-comparison-2d-datasets}
\end{figure}
\begin{figure}[htb]
\centering
\subfloat{\includegraphics[width=0.7\textwidth]{velocity-wgan-25gaussians}}\\
\subfloat{\includegraphics[width=0.7\textwidth]{velocity-wgan-swissroll}}
\caption{\small A vector plot showing the deterministic component of the velocity, i.e., the drift $-\nabla_{\mathbf{x}} f'(\rho_0/\mu)({\mathbf{x}}_0)$, for different $f$-divergences on the 25Gaussians and 2DSwissroll dataset.}
\label{fig:velocity-plot}
\end{figure}
\begin{wrapfigure}{R}{.4\textwidth}
\vspace{-3em}
\centering
\subfloat[25 Gaussians]{\includegraphics[width=0.4\textwidth]{wgan-25gaussians-ls}}\\
\subfloat[2D Swissroll]{\includegraphics[width=0.4\textwidth]{wgan-swissroll-ls}}
\caption{\small Latent space recovered by \textsc{DG}$f$low{} (right) for the 2D datasets is same as the one derived by \citet{che2020your} (left).}
\label{fig:induced-latent-space}
\end{wrapfigure}
Fig. \ref{fig:f-div-comparison-2d-datasets} shows the samples generated by WGAN-GP (leftmost, blue) and refined samples generated using \textsc{DG}$f$low{} with different $f$-divergences (red). Fig. \ref{fig:velocity-plot} shows the deterministic component, $-\nabla_{\mathbf{x}} f'(\rho_0/\mu)({\mathbf{x}}_0)$, of the velocity for different $f$-divergences on the 2D datasets. Fig. \ref{fig:induced-latent-space} (right) shows the latent space distribution recovered by \textsc{DG}$f$low{} when applied in the latent space for the 2D datasets. This latent space is same as the one derived by \citet{che2020your}, i.e., $p_{t}({\mathbf{z}}) \propto p_Z({\mathbf{z}})\exp(d(g_\theta({\mathbf{z}})))$ which is shown in Fig. \ref{fig:induced-latent-space} (left) for both datasets.
Table \ref{tab:gans-iscore} shows the comparison of \textsc{DG}$f$low{} with DOT in terms of the inception score for the CIFAR10 and STL10 datasets. \textsc{DG}$f$low{} outperforms DOT significantly for all the base GAN models on both the datasets. Table \ref{tab:others-iscore} compares different variants of \textsc{DG}$f$low{} applied to MMDGAN, OCFGAN-GP, VAE, and Glow generators in terms of the inception score. \textsc{DG}$f$low{} leads to a significant improvement in the quality of samples for all the models. Tables \ref{tab:gans-without-noise} and \ref{tab:cifar-inception-no-noise} compare the deterministic variant of \textsc{DG}$f$low{} ($\gamma = 0$) against DOT and DDLS. These results show that the diffusion term only serves as an enhancement for \textsc{DG}$f$low{}, not a necessity, and it outperforms competing methods even without added noise. Table \ref{tab:other-gen-models-fid-without-correction} shows the results of \textsc{DG}$f$low{} on MMDGAN, OCFGAN-GP, and VAE models when the SN-DCGAN (ns) discriminator is directly used as a density-ratio estimator without an additional density-ratio corrector. Figures \ref{fig:cifar10-samples1}, \ref{fig:cifar10-samples2}, \ref{fig:stl10-samples1}, and \ref{fig:stl10-samples2} show the samples generated by the base model (left) and the refined samples (right) using \textsc{DG}$f$low{} for the CIFAR10 and STL10 datasets.
\paragraph{Runtime}
\textsc{DG}$f$low{} performs a backward pass through $d_\phi \circ g_\theta$ to compute the gradient of the density-ratio with respect to the latent vector. This results in the same runtime complexity as that of DOT and DDLS. Table \ref{tab:runtime-comparison} compares the runtimes of DOT, DDLS, and \textsc{DG}$f$low{} on the 25Gaussians dataset under the same conditions. As expected, these refinement methods have similar runtimes in practice. The wall-clock time required for \textsc{DG}$f$low{}\textsubscript{(KL)} to refine 100 samples from different base models on the CIFAR10 and STL10 datasets is reported in Tables \ref{tab:runtime-nodrc} and \ref{tab:runtime-drc}.
\begin{table*}
\centering
\caption{\small Runtime comparison of DOT, DDLS, and \textsc{DG}$f$low{}\textsubscript{(KL)} on the 25Gaussians dataset. The runtime is averaged over 100 runs with standard deviation reported in parentheses.}
\label{tab:runtime-comparison}
\begin{tabular}{lr}
\toprule
Method & Runtime (s) per 5K samples\\
\midrule
DOT & 2.24 (0.18)\\
DDLS & 2.23 (0.14) \\
\textsc{DG}$f$low{} & 2.22 (0.15)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{\small Runtime of \textsc{DG}$f$low{}\textsubscript{(KL)} for models that do not require density-ratio correction on a single GeForce RTX 2080 Ti GPU. The runtime is averaged over 100 runs with standard deviation reported in parentheses.}
\label{tab:runtime-nodrc}
\begin{tabular}{llr}
\toprule
& Model & Runtime (s) per 100 samples \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}} & WGAN-GP & 0.897 (0.017)\\
& SN-DCGAN (hi) & 0.952 (0.008)\\
& SN-DCGAN (ns) & 0.952 (0.007)\\
& SN-ResNet-GAN & 1.982 (0.013)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}}&
WGAN-GP & 1.376 (0.025)\\
& SN-DCGAN (hi) & 1.413 (0.015)\\
& SN-DCGAN (ns) & 1.415 (0.013)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{\small Runtime of \textsc{DG}$f$low{}\textsubscript{(KL)} for models that require density-ratio correction on a single GeForce RTX 2080 Ti GPU. The runtime is averaged over 100 runs with standard deviation reported in parentheses.}
\label{tab:runtime-drc}
\begin{tabular}{llr}
\toprule
& Model & Runtime (s) per 100 samples\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}} & MMDGAN & 1.192 (0.007)\\
& OCFGAN-GP & 1.186 (0.011)\\
& VAE & 1.186 (0.012)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}}& MMDGAN & 1.036 (0.004)\\
& OCFGAN-GP & 1.029 (0.010)\\
& VAE & 1.028 (0.011)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\footnotesize
\caption{\small Comparison of different variants of \textsc{DG}$f$low{} with DOT on the CIFAR10 and STL10 datasets. Higher scores are better.}
\label{tab:gans-iscore}
\centering
\begin{tabular}{llrrrrr}
\toprule
& \multirow{3}{*}{Model} & \multicolumn{5}{c}{Inception Score}\\
\cmidrule{3-7}
& & \multicolumn{1}{c}{Base Model} & \multicolumn{1}{c}{DOT} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(KL)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(JS)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{($\log$ D)}} \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}}
& WGAN-GP & 6.51 (.02) & 7.45 & \textbf{7.99 (.02)} & 7.71 (.02) & 7.11 (.03)\\
& SN-DCGAN (hi) & 7.35 (.03) & 8.02 & \textbf{8.13 (.02)} & 7.98 (.01) & 7.85 (.02)\\
& SN-DCGAN (ns) & 7.38 (.03) & 7.97 & \textbf{8.14 (.03)} & 7.98 (.04) & 7.94 (.01)\\
& SN-ResNet-GAN & 8.38 (.03) & -- & \textbf{9.35 (.03)} & 9.13 (.04) & 9.05 (.03)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}}
& WGAN-GP & 8.72 (.02) & 9.31 & \textbf{10.41 (.02)} & 8.85 (.06) & 9.80 (.03)\\
& SN-DCGAN (hi) & 8.77 (.03) & 9.35 & \textbf{9.74 (.04)} & 9.50 (.05) & 9.41 (.07)\\
& SN-DCGAN (ns) & 8.61 (.04) & 9.45 & \textbf{9.66 (.01)} & 9.49 (.03) & 9.18 (.03)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\footnotesize
\caption{\small Comparison of different variants of \textsc{DG}$f$low{} applied to MMDGAN, OCFGAN-GP, VAE, and Glow models. Higher scores are better.}
\label{tab:others-iscore}
\centering
\begin{tabular}{llrrrr}
\toprule
& \multirow{2}{*}{Model} & \multicolumn{4}{c}{Inception Score}\\
\cmidrule{3-6}
& & Base Model & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(KL)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(JS)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{($\log$ D)}} \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}}
& MMDGAN & 5.74 (.02) & \textbf{6.27 (.05)} & 5.99 (.03) & 6.02 (.01)\\
& OCFGAN-GP & 6.52 (.02) & \textbf{7.21 (.05)} & 6.93 (.03) & 6.92 (.03)\\
& VAE & 3.20 (.01) & \textbf{3.85 (.01)} & 3.21 (.02) & 3.57 (.02)\\
& Glow & 3.64 (.02) & \textbf{4.57 (.02)} & 3.91 (.03) & 4.47 (.03)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}}
& MMDGAN & 6.07 (.02) & \textbf{6.16 (.01)} & 6.12 (.03) & 6.12 (.03)\\
& OCFGAN-GP & 7.09 (.01) & \textbf{7.46 (.04)} & 7.10 (.03) & 7.33 (.02)\\
& VAE & 3.25 (.01) & \textbf{3.72 (.04)} & 3.27 (.01) & 3.65 (.03)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\footnotesize
\caption{\small Comparison of different variants of \textsc{DG}$f$low{} without diffusion (i.e., $\gamma=0$) on the CIFAR10 and STL10 datasets. Lower scores are better.}
\label{tab:gans-without-noise}
\centering
\begin{tabular}{llrrrrr}
\toprule
& \multirow{3}{*}{Model} & \multicolumn{5}{c}{Fr\'echet Inception Distance}\\
\cmidrule{3-7}
& & \multicolumn{1}{c}{Base Model} & \multicolumn{1}{c}{DOT} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(KL)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(JS)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{($\log$ D)}} \\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}}
& WGAN-GP & 28.34 (.11) & 24.14 & 24.64 (.13) & \textbf{23.30 (.11)} & 24.42 (.19)\\
& SN-DCGAN (hi) & 20.67 (.09) & 17.12 & \textbf{15.79 (.07)} & 16.79 (.09) & 17.79 (.05)\\
& SN-DCGAN (ns) & 20.94 (.12) & 15.78 & \textbf{15.47 (.11)} & 16.32 (.11) & 16.97 (.08)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}}
& WGAN-GP & 51.34 (.21) & 44.45 & \textbf{38.96 (.08)} & 50.44 (.09) & 39.35 (.12)\\
& SN-DCGAN (hi) & 40.82 (.16) & \textbf{34.85} & 35.18 (.09) & 36.53 (.13) & 36.75 (.13)\\
& SN-DCGAN (ns) & 41.83 (.20) & \textbf{34.84} & \textbf{34.81 (.08)} & 35.75 (.10) & 37.68 (.08)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\footnotesize
\caption{\small Comparison of DDLS with \textsc{DG}$f$low{} (with and without diffusion) on the CIFAR10 dataset. Higher scores are better.}
\label{tab:cifar-inception-no-noise}
\centering
\begin{tabular}{lr}
\toprule
\multirow{1}{*}{Model} & \multicolumn{1}{c}{Inception Score}\\
\midrule
SN-ResNet-GAN~\citep{miyato2018spectral} & 8.22 (.05)\\
SN-ResNet-GAN + DDLS (cal)~\citep{che2020your} & 9.09 (.10)\\
\midrule
SN-ResNet-GAN (our evaluation) & 8.38 (.03)\\
SN-ResNet-GAN + \textsc{DG}$f$low{}\textsubscript{(KL)} ($\gamma = 0$) & \textbf{9.35 (.04)}\\
SN-ResNet-GAN + \textsc{DG}$f$low{}\textsubscript{(KL)} ($\gamma = 0.01$) & \textbf{9.35 (.03)}\\
\midrule
BigGAN & 9.22\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\footnotesize
\caption{\small Comparison of different variants of \textsc{DG}$f$low{} applied to MMDGAN, OCFGAN-GP, and VAE models without density-ratio correction. Lower scores are better.}
\label{tab:other-gen-models-fid-without-correction}
\centering
\begin{tabular}{lrrrrr}
\toprule
\multicolumn{2}{c}{\multirow{2}{*}{Model}} & \multicolumn{3}{c}{Fr\'echet Inception Distance}\\
\cmidrule{3-6}
& & \multicolumn{1}{c}{Base Model} & \multicolumn{1}{c}{KL} & \multicolumn{1}{c}{JS} & \multicolumn{1}{c}{$\log$ D} \\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}} &
MMDGAN & 42.03 (.06) & \textbf{39.06 (.08)} & 39.68 (.06) & 39.47 (.07)\\
& OCFGAN-GP & 31.95 (.07) & 27.92 (.08) & 29.25 (.06) & \textbf{28.82 (.10)}\\
& VAE & 129.49 (.19) & \textbf{127.50 (.15)} & 128.24 (.11) & 128.3 (.14)\\
& Glow & 100.7 (.14) & \textbf{93.47 (.09)} & 97.50 (.11) & 97.78 (.14)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}} &
MMDGAN & 47.22 (.04) & \textbf{45.75 (.10)} & 45.96 (.07) & 46.26 (.13)\\
& OCFGAN-GP & 36.60 (.15) & \textbf{34.17 (.18)} & 34.42 (.04) & 34.99 (.07)\\
& VAE & \textbf{150.49 (.07)} & 151.76 (.01) & 152.03 (.05) & 151.88 (.11)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}
\centering
\subfloat[WGAN-GP]{\includegraphics[width=0.32\textwidth]{cifar10_wgan_before}}\qquad
\subfloat[WGAN-GP + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_wgan_after}}\\
\subfloat[SN-DCGAN (hi)]{\includegraphics[width=0.32\textwidth]{cifar10_snganhi_before}}\qquad
\subfloat[SN-DCGAN (hi) + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_snganhi_after}}\\
\subfloat[SN-DCGAN (ns)]{\includegraphics[width=0.32\textwidth]{cifar10_snganns_before}}\qquad
\subfloat[SN-DCGAN (ns) + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_snganns_after}}\\
\subfloat[SN-ResNet-GAN]{\includegraphics[width=0.32\textwidth]{cifar10_snresnet_before}}\qquad
\subfloat[SN-ResNet-GAN + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_snresnet_after}}\\
\vspace{-0.7em}
\caption{\small Samples from different models for the CIFAR10 dataset before (left) and after (right) refinement using \textsc{DG}$f$low{}.}
\label{fig:cifar10-samples1}
\end{figure}
\begin{figure}
\centering
\subfloat[MMDGAN]{\includegraphics[width=0.32\textwidth]{cifar10_mmdgan_before}}\qquad
\subfloat[MMDGAN + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_mmdgan_after}}\\
\subfloat[OCFGAN-GP]{\includegraphics[width=0.32\textwidth]{cifar10_ocfgan_before}}\qquad
\subfloat[OCFGAN-GP + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_ocfgan_after}}\\
\subfloat[VAE]{\includegraphics[width=0.32\textwidth]{cifar10_vae_before}}\qquad
\subfloat[VAE + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_vae_after}}\\
\subfloat[Glow]{\includegraphics[width=0.32\textwidth]{cifar10_glow_before}}\qquad
\subfloat[Glow + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{cifar10_glow_after}}\\
\vspace{-0.7em}
\caption{\small Samples from different models for the CIFAR10 dataset before (left) and after (right) refinement using \textsc{DG}$f$low{}.}
\label{fig:cifar10-samples2}
\end{figure}
\begin{figure}
\centering
\subfloat[WGAN-GP]{\includegraphics[width=0.32\textwidth]{stl10_wgan_before}}\qquad
\subfloat[WGAN-GP + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{stl10_wgan_after}}\\
\subfloat[SN-DCGAN (hi)]{\includegraphics[width=0.32\textwidth]{stl10_snganhi_before}}\qquad
\subfloat[SN-DCGAN (hi) + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{stl10_snganhi_after}}\\
\subfloat[SN-DCGAN (ns)]{\includegraphics[width=0.32\textwidth]{stl10_snganns_before}}\qquad
\subfloat[SN-DCGAN (ns) + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{stl10_snganns_after}}
\vspace{-0.7em}
\caption{\small Samples from different models for the STL10 dataset before (left) and after (right) refinement using \textsc{DG}$f$low{}.}
\label{fig:stl10-samples1}
\end{figure}
\begin{figure}
\centering
\subfloat[MMDGAN]{\includegraphics[width=0.32\textwidth]{stl10_mmdgan_before}}\qquad
\subfloat[MMDGAN + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{stl10_mmdgan_after}}\\
\subfloat[OCFGAN-GP]{\includegraphics[width=0.32\textwidth]{stl10_ocfgan_before}}\qquad
\subfloat[OCFGAN-GP + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{stl10_ocfgan_after}}\\
\subfloat[VAE]{\includegraphics[width=0.32\textwidth]{stl10_vae_before}}\qquad
\subfloat[VAE + \textsc{DG}$f$low{}]{\includegraphics[width=0.32\textwidth]{stl10_vae_after}}
\vspace{-0.7em}
\caption{\small Samples from different models for the STL10 dataset before (left) and after (right) refinement using \textsc{DG}$f$low{}.}
\label{fig:stl10-samples2}
\end{figure}
\section{Conclusion}
\vspace{-1em}
In this paper, we proposed a technique to improve samples from deep generative models by refining them using the gradient flow of $f$-divergences between the real and the generator data distributions. We also presented a simple framework that extends the proposed technique to commonly used deep generative models: GANs, VAEs, and Normalizing Flows. Experimental results indicate that gradient flows provide an excellent alternative methodology for refining generative models. Moving forward, we are considering several technical enhancements to improve \textsc{DG}$f$low{}'s performance. At present, \textsc{DG}$f$low{} uses a stale estimate of the density-ratio, which could adversely affect sample evolution when the gradient flow is simulated for a larger number of steps; how to efficiently update this estimate is an open question. Another related question is when the evolution of the samples should be stopped; running chains for too long may modify characteristics of the original sample (e.g., orientation and color), which may be undesirable. This issue does not just affect \textsc{DG}$f$low{}; a method for automatically stopping sample evolution could improve results across refinement techniques.
\section*{Acknowledgements}
This research is supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2019-011) to H. Soh. Thank you to J. Scarlett for his comments regarding the proofs.
\section{Experiments}
\vspace{-.8em}
In this section, we present empirical results on various deep generative models trained on multiple synthetic and real world datasets. Our primary goals were to determine if (a) \textsc{DG}$f$low{} is effective in improving the quality of samples from generative models, (b) the proposed extension to other generative models improves their sample quality, and (c) \textsc{DG}$f$low{} is generalizable to different types of data and metrics. Note that we did not seek to achieve state-of-the-art results for the datasets studied but to demonstrate that \textsc{DG}$f$low{} is able to significantly improve samples from the bare generators for different models.
We experimented with three $f$-divergences, namely the Kullback-Leibler (KL) divergence, the Jensen-Shannon (JS) divergence, and the $\log$ D divergence~\citep{gao2019deep}. The specific forms of the functions $f$ and corresponding derivatives are tabulated in Table \ref{tab:f-divergences} (appendix). We compare \textsc{DG}$f$low{} with two state-of-the-art competing methods: DOT and DDLS. In this section we discuss the main results and relegate details to the appendix. Our code is available online at \url{https://github.com/clear-nus/DGflow}.
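For concreteness, the KL and JS cases admit simple closed forms, sketched below using common textbook conventions (up to constant factors; the exact conventions used in the experiments are those of Table \ref{tab:f-divergences}):

```python
import math

# Generator functions f and derivatives f' for two f-divergences,
# D_f(P || Q) = E_Q[f(p/q)]. Standard conventions, up to constants;
# see the appendix table for the exact forms used in the experiments.
F_DIVERGENCES = {
    "KL": (lambda u: u * math.log(u),
           lambda u: math.log(u) + 1.0),
    "JS": (lambda u: u * math.log(u) - (u + 1.0) * math.log((u + 1.0) / 2.0),
           lambda u: math.log(2.0 * u / (u + 1.0))),
}

def check_derivative(name, u, h=1e-6):
    """Sanity check: compare f' against a central finite difference."""
    f, fprime = F_DIVERGENCES[name]
    return abs((f(u + h) - f(u - h)) / (2.0 * h) - fprime(u))
```

Since $f'$ enters the velocity field only through its gradient, additive constants such as the $+1$ in the KL case have no effect on the flow.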
\begin{figure}[!htb]
\centering
\subfloat{\includegraphics[width=0.7\textwidth]{comp-wgan-25gaussians}}\\
\subfloat{\includegraphics[width=0.7\textwidth]{comp-wgan-swissroll}}
\caption{\small{Qualitative comparison of \textsc{DG}$f$low{}\textsubscript{(KL)} with DOT and DDLS on synthetic 2D datasets.}}
\label{fig:2ddatasets}
\vspace{-1em}
\end{figure}
\subsection{2D Datasets}
\begin{wraptable}{R}{.5\textwidth}
\vspace{-.8em}
\scriptsize
\caption{\small{Quantitative comparison on the 25Gaussians dataset. Higher scores are better.}}
\label{tab:2ddatasets}
\centering
\begin{tabular}{lrr}
\toprule
& \% High Quality & KDE Score\\
\midrule
GAN & 26.5 $\pm$ .8 & -7037 $\pm$ 64\\
DOT & 69.8 $\pm$ .7 & -4149 $\pm$ 39\\
DDLS & \textbf{89.3 $\pm$ .6} & -2997 $\pm$ 17\\
\midrule
\textsc{DG}$f$low{}\textsubscript{(KL)} & \textbf{89.5 $\pm$ .4} & \textbf{-2893 $\pm$ 07}\\
\textsc{DG}$f$low{}\textsubscript{(JS)} & 82.6 $\pm$ .4 & -3118 $\pm$ 19\\
\textsc{DG}$f$low{}\textsubscript{($\log$ D)} & 84.5 $\pm$ .3 & -3036 $\pm$ 14\\
\bottomrule
\end{tabular}
\end{wraptable}
We first tested \textsc{DG}$f$low{} on two synthetic datasets, 25Gaussians and 2DSwissroll, to visually inspect the improvement in the quality of generated samples. We generated 5000 samples from a trained WGAN-GP generator and refined them using DOT, DDLS, and \textsc{DG}$f$low{}. We performed refinement in the latent space for DDLS and directly in the data space for DOT and \textsc{DG}$f$low{}. Fig. \ref{fig:2ddatasets} shows the samples generated from the WGAN-GP generator (blue) and the refined samples using different techniques (red) against the real samples from the training dataset (brown). Although the WGAN-GP generator learned the overall structure of the dataset, it also learned a number of spurious modes. DOT refines the spurious samples, but only to a limited degree. In contrast, DDLS and \textsc{DG}$f$low{} correct almost all spurious samples and recover the correct structure of the data. Visualizations for \textsc{DG}$f$low{} with different $f$-divergences can be found in the appendix (Fig. \ref{fig:f-div-comparison-2d-datasets}).
We also compared the different methods quantitatively on two metrics: \% high quality samples and kernel density estimate (KDE) score. A sample is classified as high quality if it lies within 4 standard deviations of its nearest Gaussian. The KDE score is computed by first estimating a KDE from the generated samples and then computing the log-likelihood of the training samples under this estimate. We computed both metrics 10 times using 5000 samples and report the means in Table \ref{tab:2ddatasets}. The quantitative metrics reinforce the qualitative analysis and show that DDLS and \textsc{DG}$f$low{} significantly improve the samples from the generator, with \textsc{DG}$f$low{} performing slightly better than DDLS in terms of the KDE score.
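The two metrics can be sketched as follows (an illustrative implementation, not the exact evaluation code: the 4-standard-deviation threshold follows the description above, while the KDE bandwidth is an assumed hyperparameter):

```python
import numpy as np

def percent_high_quality(samples, centers, sigma, k=4.0):
    """Fraction of samples lying within k standard deviations of the
    nearest mode of a mixture of isotropic Gaussians."""
    # pairwise distances to all mode centers; keep the nearest for each sample
    d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= k * sigma))

def kde_score(generated, real, bandwidth=0.1):
    """Log-likelihood of `real` under a Gaussian KDE fit on `generated`."""
    n, dim = generated.shape
    d2 = ((real[:, None, :] - generated[None, :, :]) ** 2).sum(-1)
    log_k = -0.5 * d2 / bandwidth ** 2 \
            - dim * (np.log(bandwidth) + 0.5 * np.log(2.0 * np.pi))
    # numerically stable log-mean-exp over the n kernel centres
    m = log_k.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(log_k - m).mean(axis=1))).sum())
```

A higher value is better for both metrics, since the KDE score is a log-likelihood.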
\begin{wrapfigure}[20]{r}{.6\textwidth}
\vspace{-1.5em}
\centering
\subfloat[CIFAR10]{\includegraphics[width=0.25\textwidth]{cifar_resnet}}\quad
\subfloat[STL10]{\includegraphics[width=0.25\textwidth]{stl}}
\caption{\small{Improvement in the quality of samples generated from the base model (leftmost columns) over the steps of \textsc{DG}$f$low{} for SN-ResNet-GAN and SN-DCGAN on the CIFAR10 and STL10 datasets respectively.}}
\label{fig:refinement-over-steps}
\vspace{-1em}
\end{wrapfigure}
\vspace{-.8em}
\subsection{Image Experiments}
\vspace{-.5em}
We conducted experiments on the CIFAR10 and STL10 datasets to demonstrate the efficacy of \textsc{DG}$f$low{} in the real-world setting. We followed the setup of \citet{tanaka2019discriminator} for our image experiments. We used the Fr\'echet Inception Distance (FID)~\citep{heusel2017gans} and Inception Score (IS)~\citep{salimans2016improved} metrics to evaluate the quality of generated samples before and after refinement.
Higher IS values and lower FID values correspond to higher-quality samples.
We first applied \textsc{DG}$f$low{} to GANs with scalar-valued discriminators (e.g., WGAN-GP, SNGAN) trained on the CIFAR10 and the STL10 datasets. %
Table \ref{tab:scalar-valued-fid} shows that \textsc{DG}$f$low{} significantly improves the quality of the samples in terms of the FID score and outperforms DOT on multiple models. The corresponding Inception scores can be found in the appendix (Table \ref{tab:gans-iscore}), which shows that \textsc{DG}$f$low{} outperforms DOT on all models. In Table \ref{tab:cifar-inception}, we reproduce previously reported IS results for generative models and other sample improvement methods (DRS, MH-GAN, and DDLS) for completeness. \textsc{DG}$f$low{} performs the best in terms of relative improvement over the base score and even outperforms the state-of-the-art BigGAN~\citep{brock2018large}, a conditional generative model, without the need for additional labels. Qualitatively, \textsc{DG}$f$low{} improves the vibrance of the samples and corrects deformations in the foreground object. Fig. \ref{fig:refinement-over-steps} shows how sample quality changes with \textsc{DG}$f$low{}: the leftmost columns show images generated from the base models, and successive columns show the refined samples after increments of 5 update steps.
We then evaluated the ability of \textsc{DG}$f$low{} to refine samples from generative models \emph{without corresponding discriminators}, namely MMDGAN, OCFGAN-GP, VAEs, and Normalizing Flows (Glow). We used the SN-DCGAN (ns) as the surrogate discriminator $D_\phi$ for these models and fine-tuned density-ratio correctors $D_\lambda$ for each model as described in Section \ref{sec:refinement-for-all}. Table \ref{tab:other-gen-models-fid} shows the FID scores achieved by these models without and with refinement using \textsc{DG}$f$low{}. We observe a clear improvement in the quality of samples when these generative models are combined with \textsc{DG}$f$low{}.
\begin{table*}
\footnotesize
\caption{\small{Comparison of different variants of \textsc{DG}$f$low{} with DOT on the CIFAR10 and STL10 datasets. For SN-DCGAN, (hi) denotes the hinge loss and (ns) denotes the non-saturating loss. Lower scores are better. \textsc{DG}$f$low{}'s results have been averaged over 5 random runs with the standard deviation in parentheses.}}
\label{tab:scalar-valued-fid}
\centering
\begin{tabular}{llrrrrr}
\toprule
& \multirow{3}{*}{Model} & \multicolumn{5}{c}{Fr\'echet Inception Distance}\\
\cmidrule{3-7}
& & \multicolumn{1}{c}{Base Model} & \multicolumn{1}{c}{DOT} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(KL)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(JS)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{($\log$ D)}} \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}} &
WGAN-GP & 28.37 (.08) & 24.14 & 24.68 (.09) & \textbf{23.15 (.07)} & 24.53 (.11)\\
& SN-DCGAN (hi) & 20.70 (.05) & 17.12 & \textbf{15.68 (.07)} & 16.45 (.06) & 17.36 (.05)\\
& SN-DCGAN (ns) & 20.90 (.11) & 15.78 & \textbf{15.30 (.08)} & 15.90 (.11) & 16.42 (.05)\\
& SN-ResNet-GAN & 14.10 (.06) & -- & \textbf{9.62 (.03)} & 9.79 (.02) & 9.73 (.05)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}}
& WGAN-GP & 51.50 (.15) & 44.45 & \textbf{39.07 (.07)} & 50.83 (.06) & 39.71 (.29)\\
& SN-DCGAN (hi) & 40.54 (.17) & \textbf{34.85} & 34.95 (.06) & 36.37 (.12) & 36.56 (.08)\\
& SN-DCGAN (ns) & 41.86 (.12) & 34.84 & \textbf{34.60 (.11)} & 35.37 (.12) & 37.07 (.14)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\footnotesize
\caption{\small{Inception scores of different generative models, DRS, MH-GAN, DDLS, and \textsc{DG}$f$low{} on the CIFAR10 dataset. Higher scores are better.}}
\label{tab:cifar-inception}
\centering
\begin{tabular}{lr}
\toprule
\multirow{1}{*}{Model} & \multicolumn{1}{c}{Inception Score}\\
\midrule
WGAN-GP~\citep{gulrajani2017improved} & 7.86 (.07)\\
ProgressiveGAN~\citep{karras2017progressive} & 8.80 (.05)\\
SN-ResNet-GAN~\citep{miyato2018spectral} & 8.22 (.05)\\
NCSN~\citep{song2019generative} & 8.87 (.12)\\
\midrule
DCGAN & 2.88\\
DCGAN + DRS (cal)~\citep{azadi2018discriminator} & 3.07\\
DCGAN + MH (cal)~\citep{turner2019metropolis} & 3.38\\
\midrule
SN-ResNet-GAN (our evaluation) & 8.38 (.03)\\
SN-ResNet-GAN + DDLS (cal)~\citep{che2020your} & 9.09 (.10)\\
SN-ResNet-GAN + \textsc{DG}$f$low{}\textsubscript{(KL)} & \textbf{9.35 (.03)}\\
\midrule
BigGAN & 9.22\\
\bottomrule
\end{tabular}
\vspace{-.8em}
\end{table*}
\begin{table*}
\footnotesize
\caption{\small{Comparison of different variants of \textsc{DG}$f$low{} applied to MMDGAN, OCFGAN-GP, VAE, and Glow models. Lower scores are better. Results have been averaged over 5 random runs with the standard deviation in parentheses.}}
\label{tab:other-gen-models-fid}
\centering
\begin{tabular}{llrrrr}
\toprule
& \multirow{2}{*}{Model} & \multicolumn{4}{c}{Fr\'echet Inception Distance}\\
\cmidrule{3-6}
& & Base Model & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(KL)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{(JS)}} & \multicolumn{1}{c}{\textsc{DG}$f$low{}\textsubscript{($\log$ D)}} \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR10}}}
& MMDGAN & 41.98 (.12) & \textbf{36.75 (.09)} & 38.06 (.14) & 37.75 (.10)\\
& OCFGAN-GP & 31.98 (.12) & \textbf{26.89 (.06)} & 28.20 (.06) & 27.82 (.09)\\
& VAE & 129.5 (.13) & \textbf{116.0 (.21)} & 128.9 (.13) & 115.2 (.06)\\
& Glow & 100.5 (.52) & \textbf{79.02 (.23)} & 94.61 (.34) & 81.12 (.35)\\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{\scriptsize{STL10}}}
& MMDGAN & 47.20 (.07) & 43.21 (.06) & 46.74 (.05) & \textbf{43.06 (.05)}\\
& OCFGAN-GP & 36.55 (.08) & 31.12 (.13) & 36.05 (.11) & \textbf{30.61 (.14)}\\
& VAE & 150.5 (.09) & \textbf{130.1 (.18)} & 149.9 (.08) & 132.5 (.28)\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table}[htb]
\centering
\caption{\small{Results of \textsc{DG}$f$low{} on a character-level GAN language model.}}
\label{tab:language-results}
\subfloat[JS-4 and JS-6 scores. Lower scores are better.]{
\begin{tabular}{lrr}
\toprule
Model & JS-4 & JS-6\\
\midrule
WGAN-GP & 0.224 (.0009) & 0.574 (.0015)\\
\textsc{DG}$f$low{}\textsubscript{(KL)} & 0.212 (.0008) & 0.512 (.0012)\\
\textsc{DG}$f$low{}\textsubscript{(JS)} & \textbf{0.186} (.0007) & 0.508 (.0011)\\
\textsc{DG}$f$low{}\textsubscript{($\log$ D)} & 0.209 (.0005) & \textbf{0.506} (.0008)\\
\bottomrule
\end{tabular}
}\\
\subfloat[Examples of text samples refined by \textsc{DG}$f$low{}.]{
\footnotesize
\begin{tabular}{ll}
\toprule
Generated by WGAN-GP & Refined by \textsc{DG}$f$low{}\\
\midrule
\texttt{In Ruoduce that fhance would pol} & \texttt{In product that chance could rol}\\
\texttt{I said thowe toot lind talker . } & \texttt{I said this stood line talked 10}\\
\texttt{Now their rarning injurer hows } & \texttt{Now their warning injurer shows }\\
\texttt{Police report in B0sbu does off } & \texttt{Police report inturner will befe}\\
\texttt{We gine jaid 121 , one bub like } & \texttt{We gave wall said left out like }\\
\texttt{In years in 19mbisuch said he h} & \texttt{In years in 1900b such said he h}\\
\bottomrule
\end{tabular}}
\vspace{-2em}
\end{table}
\subsection{Character-Level Language Modeling}
Finally, we conducted an experiment on the character-level language modeling task proposed by~\citet{gulrajani2017improved} to show that \textsc{DG}$f$low{} works on different types of data. We trained a character-level GAN language model on the Billion Words Dataset~\citep{chelba2013one}, which was pre-processed into 32-character long strings. We evaluated the generated samples using the JS-4 and JS-6 scores, which compute the Jensen-Shannon divergence between the 4-gram and 6-gram probabilities of the data generated by the model and the real data. Table \ref{tab:language-results} (a) shows that \textsc{DG}$f$low{} leads to an improvement in the JS-4 and JS-6 scores. Table \ref{tab:language-results} (b) shows example sentences where \textsc{DG}$f$low{} visibly improves the quality of generated text.
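The JS-$n$ score can be sketched as follows (an illustrative implementation: we assume natural logarithms and fixed-length strings, and the exact tokenization may differ from the evaluation code):

```python
import math
from collections import Counter

def ngram_dist(strings, n):
    """Empirical n-gram distribution over a list of strings."""
    counts = Counter(g for s in strings
                     for g in (s[i:i + n] for i in range(len(s) - n + 1)))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def js_ngram(generated, real, n):
    """Jensen-Shannon divergence between the n-gram distributions of
    generated and real corpora (natural log; 0 means identical)."""
    p, q = ngram_dist(generated, n), ngram_dist(real, n)
    m = {g: 0.5 * (p.get(g, 0.0) + q.get(g, 0.0)) for g in set(p) | set(q)}
    kl = lambda a: sum(pa * math.log(pa / m[g]) for g, pa in a.items() if pa > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)
```

Identical corpora give a score of 0, and corpora with disjoint n-grams give the maximum of $\log 2$.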
\section{Background: Gradient Flows}
The following gives a brief introduction to gradient flows; we refer readers to the excellent overview by~\citet{santambrogio2017euclidean} for a more thorough introduction.
Let $({\mathcal{X}},\|\cdot\|_2)$ be a Euclidean space and $F: {\mathcal{X}} \rightarrow {\mathbb{R}}$ be a smooth energy function. The gradient flow of $F$ is the smooth curve $\{{\mathbf{x}}_t\}_{t\in{\mathbb{R}}_+}$ that follows the direction of steepest descent, i.e.,
\begin{align}
{\mathbf{x}}'(t) = -\nabla F({\mathbf{x}}(t)).
\end{align}
The value of the energy $F$ is minimized along this curve.
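For example, the explicit Euler discretization of this ODE with step size $\tau > 0$ recovers the familiar gradient descent iteration,
\begin{align}
{\mathbf{x}}_{k+1} = {\mathbf{x}}_k - \tau \nabla F({\mathbf{x}}_k),
\end{align}
and for the quadratic energy $F({\mathbf{x}}) = \frac{1}{2}\|{\mathbf{x}}\|_2^2$ the flow admits the closed form ${\mathbf{x}}(t) = e^{-t}{\mathbf{x}}(0)$.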
This idea of steepest descent curves can be characterized in arbitrary metric spaces via the \emph{minimizing movement scheme}~\citep{jordan1998variational}. Of particular interest is the metric space of probability measures endowed with the Wasserstein distance (${\mathcal{W}}_p$); the Wasserstein distance is a metric and the ${\mathcal{W}}_p$ topology metrizes weak convergence of probability measures~\citep[Theorem~6.9]{villani2008optimal}. Gradient flows in the 2-Wasserstein space (${\mathcal{P}}_2(\Omega), {\mathcal{W}}_2$) --- i.e., the space of probability measures with finite second moments and the 2-Wasserstein metric --- have been studied extensively. Let $\{\rho_t\}_{t\in{\mathbb{R}}_+}$ be the gradient flow of a functional ${\mathcal{F}}$ in the 2-Wasserstein space, where $\rho_t$ is absolutely continuous with respect to the Lebesgue measure. The curve $\{\rho_t\}_{t\in{\mathbb{R}}_+}$ satisfies the continuity equation~\citep[Theorem~8.3.1]{ambrosio2008gradient},
\begin{align}
\partial_t \rho_t +\nabla\cdot(\rho_t{\mathbf{v}}_t) = 0.\label{eq:cont-eq}
\end{align}
The velocity field ${\mathbf{v}}_t$ in Eq. (\ref{eq:cont-eq}) is given by
\begin{align}
{\mathbf{v}}_t({\mathbf{x}}) = -\nabla_{\mathbf{x}}\frac{\delta {\mathcal{F}}}{\delta \rho}(\rho),
\end{align}
where $\frac{\delta {\mathcal{F}}}{\delta \rho}$ denotes the first variation of the functional ${\mathcal{F}}$.
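As a concrete example (a standard computation, stated here only for illustration), take ${\mathcal{F}}(\rho) = \mathrm{KL}(\rho\,\|\,\mu) = \int \rho({\mathbf{x}})\log\frac{\rho({\mathbf{x}})}{\mu({\mathbf{x}})}\,d{\mathbf{x}}$ for a fixed target density $\mu$. The first variation is $\frac{\delta {\mathcal{F}}}{\delta \rho}(\rho) = \log(\rho/\mu) + 1$, so the velocity field of the corresponding gradient flow is
\begin{align}
{\mathbf{v}}_t({\mathbf{x}}) = -\nabla_{\mathbf{x}} \log\frac{\rho_t({\mathbf{x}})}{\mu({\mathbf{x}})},
\end{align}
which moves samples away from regions where $\rho_t$ overshoots $\mu$ and towards regions where it undershoots it.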
Since the seminal work of \citet{jordan1998variational}, which showed that the Fokker-Planck equation is the gradient flow of a particular functional in the Wasserstein space, gradient flows in the Wasserstein metric have been a popular tool in the analysis of partial differential equations (PDEs). For example, they have been applied to the study of the porous-medium equation~\citep{otto2001geometry}, crowd modeling~\citep{maury2010macroscopic,maury2011handling}, and mean-field games~\citep{almulla2017two}. More recently, gradient flows of various distances used in deep generative modeling literature have been proposed, notably that of the sliced Wasserstein distance~\citep{liutkus2019sliced}, the
maximum mean discrepancy~\citep{arbel2019maximum}, the Stein discrepancy~\citep{liu2017stein}, and the Sobolev discrepancy~\citep{mroueh2019sobolev}. Gradient flows have also been used for learning non-parametric and parametric implicit generative models~\citep{liutkus2019sliced,gao2019deep,gao2020learning}. As an example of the latter, Variational Gradient Flow~\citep{gao2019deep} %
learns a mapping between latent vectors and samples evolved using the gradient flow of $f$-divergences. In this work, we present a method that uses gradient flows of entropy-regularized $f$-divergences to refine samples from deep generative models, employing existing discriminators as density-ratio estimators.
\section{Introduction}
Deep generative models (DGMs) have excelled at numerous tasks, from generating realistic images~\citep{brock2018large} to
learning policies in reinforcement learning~\citep{Ho2016GenerativeAI}. %
Among the variety of proposed DGMs, Generative Adversarial Networks (GANs)~\citep{goodfellow2014generative} have received widespread popularity for their ability to generate high quality samples that resemble real data.
Unlike Variational Autoencoders (VAEs)~\citep{Kingma2014AutoEncodingVB} and Normalizing Flows~\citep{Rezende2015VariationalIW,kingma2018glow}, GANs are likelihood-free methods; training is formulated as a minimax optimization problem involving a generator and a discriminator. The generator seeks to generate samples that are similar to the real data by minimizing a measure of discrepancy (between the generated samples and real samples) furnished by the discriminator. The discriminator is trained to distinguish the generated samples from the real samples. Once trained, the generator is used to simulate samples and the discriminator has traditionally been discarded.
However, recent work has shown that discarding the discriminator is wasteful --- it actually contains useful information about the underlying data distribution. This insight has led to \emph{sample improvement} techniques that use this information to improve the quality of generated samples~\citep{azadi2018discriminator,turner2019metropolis,tanaka2019discriminator,che2020your}. Unfortunately, current methods either rely on wasteful rejection operations in the data space~\citep{azadi2018discriminator,turner2019metropolis}, or require a sensitive diffusion term to ensure sample diversity~\citep{che2020your}. Prior work has also focused on improving GANs with scalar-valued discriminators, which excludes a large family of GANs with vector-valued critics, e.g., MMDGAN \citep{li2017mmd,binkowski2018demystifying} and OCFGAN~\citep{ansari2020characteristic}, and likelihood-based generative models.
\begin{wrapfigure}[25]{r}{0.40\textwidth}
\centering
\includegraphics[width=0.40\textwidth]{IntroGraphic}
\caption{\small An illustration of refinement using \textsc{DG}$f$low{}, with the gradient flow in the 2-Wasserstein space ${\mathcal{P}}_2$ (top) and the corresponding discretized SDE in the latent space ${\mathcal{Z}}$ (bottom). The image samples from the densities along the gradient flow are shown in the middle.}
\label{fig:intro_fig}
\end{wrapfigure}
In this work, we propose Discriminator Gradient $f$low (\textsc{DG}$f$low{}) which formulates sample improvement as refining inferior samples using the \emph{gradient flow} of $f$-divergences between the generator and the real data distributions (Fig. \ref{fig:intro_fig}). \textsc{DG}$f$low{} avoids wasteful rejection operations and can be used in a deterministic setting without a diffusion term. Existing state-of-the-art methods --- specifically, Discriminator Optimal Transport (DOT)~\citep{tanaka2019discriminator} and Discriminator Driven Latent Sampling (DDLS)~\citep{che2020your} --- can be viewed as special cases of \textsc{DG}$f$low{}. Similar to DDLS, \textsc{DG}$f$low{} recovers the real data distribution when the gradient flow is simulated exactly.
We further present a generalized framework that employs existing pre-trained discriminators to refine samples from a \emph{variety} of deep generative models: we demonstrate our method can be applied to GANs with vector-valued critics, and even likelihood-based models such as VAEs and Normalizing Flows. Empirical results on synthetic datasets, and benchmark image (CIFAR10, STL10) and text (Billion Words) datasets demonstrate that our gradient flow-based approach outperforms DOT and DDLS on multiple quantitative evaluation metrics.
In summary, this paper's key contributions are:
\begin{itemize}
\item \textsc{DG}$f$low{}, a method to refine deep generative models using the gradient flow of $f$-divergences;
\item a framework that extends \textsc{DG}$f$low{} to GANs with vector-valued critics, VAEs, and Normalizing Flows;
\item experiments on a variety of generative models trained on synthetic, image (CIFAR10 \& STL10), and text (Billion Words) datasets demonstrating that \textsc{DG}$f$low{} is effective in improving samples from generative models. %
\end{itemize}
\section{Generator Refinement via Discriminator Gradient Flow}
\vspace{-.8em}
This section describes our main contribution: Discriminator Gradient $f$low (\textsc{DG}$f$low{}). As an overview, we begin with the construction of the gradient flow of entropy-regularized $f$-divergences and describe its application to sample refinement. We then discuss how to simulate the gradient flow in the latent space of the generator --- a procedure more suitable for high-dimensional datasets. Finally, we present a simple technique that extends our method to generative models that have not yet been studied in the context of refinement. Due to space constraints, we focus on conveying the key concepts and relegate details (e.g., proofs) to the appendix.
The entropy-regularized $f$-divergence functional is defined as
\begin{align}
{\mathcal{F}}_{\mu}^f(\rho) \triangleq {\mathcal{D}}_f(\mu\|\rho) - \gamma{\mathcal{H}}(\rho),
\end{align}
where the $f$-divergence term ${\mathcal{D}}_f(\mu\|\rho)$ ensures that the ``distance'' between the probability density $\rho$ and the target density $\mu$ decreases along the gradient flow. The differential entropy term ${\mathcal{H}}(\rho)$ improves diversity and expressiveness when the gradient flow is simulated for finite time-steps. %
We now construct the gradient flow of ${\mathcal{F}}_\mu^f$.
\begin{lemma}
\label{thm:gradient-flow}
Define the functional ${\mathcal{F}}_\mu^f: {\mathcal{P}}_2(\Omega) \rightarrow {\mathbb{R}}$ as
\begin{align}
{\mathcal{F}}_\mu^f(\rho) \triangleq \underset{\text{f-divergence}}{\underbrace{\int f\left(\rho({\mathbf{x}})/\mu({\mathbf{x}})\right)\mu({\mathbf{x}})d{\mathbf{x}}}} +\gamma\underset{\text{negative entropy}}{\underbrace{\int \rho({\mathbf{x}}) \log \rho({\mathbf{x}})d{\mathbf{x}}}},
\end{align}
where $f$ is a twice-differentiable convex function with $f(1) = 0$. The gradient flow of the functional ${\mathcal{F}}_\mu^f(\rho)$ in the Wasserstein space $({\mathcal{P}}_2(\Omega), {\mathcal{W}}_2)$ is given by the following PDE,
\begin{align}
\partial_t\rho_t({\mathbf{x}}) - \nabla_{\mathbf{x}}\cdot\left(\rho_t({\mathbf{x}})\nabla_{\mathbf{x}} f'\left(\rho_t({\mathbf{x}})/\mu({\mathbf{x}})\right)\right) - \gamma\Delta_{{\mathbf{x}}\rvx}\rho_t({\mathbf{x}}) = 0,\label{eq:fokker-plankpde}
\end{align}
where $\nabla_{\mathbf{x}}\cdot$ and $\Delta_{{\mathbf{x}}\rvx}$ denote the divergence and the Laplace operators respectively.
\end{lemma}
The proof is given in Appendix \ref{subsec:proof_gradient-flow}.
The PDE in Eq. (\ref{eq:fokker-plankpde}) is a type of Fokker-Plank equation (FPE). FPEs have been studied extensively in the literature of stochastic processes and have a Stochastic Differential Equation (SDE) counterpart~\citep{risken1996fokker}. In the case of Eq. (\ref{eq:fokker-plankpde}), the equivalent SDE is given by
\begin{align}
d{\mathbf{x}}_t = \underset{\textit{drift}}{\underbrace{-\nabla_{\mathbf{x}} f'\left(\rho_t/\mu\right)({\mathbf{x}}_t)dt}}+\underset{\textit{diffusion}}{\underbrace{\sqrt{2\gamma}d{\mathbf{w}}_t}},
\label{eq:mckean-vlasov-process}
\end{align}
where $d{\mathbf{w}}_t$ denotes the standard Wiener process. Eq. (\ref{eq:mckean-vlasov-process}) defines the evolution of a particle ${\mathbf{x}}_t$ under the influence of drift and diffusion. Specifically, it is a McKean-Vlasov process~\citep{braun1977vlasov} which is a type of \emph{non-linear} stochastic process as the drift term at any time $t$ depends on the distribution $\rho_t$ of the particle ${\mathbf{x}}_t$. Eqs. (\ref{eq:fokker-plankpde}) and (\ref{eq:mckean-vlasov-process}) are equivalent in the sense that the distribution of the particle ${\mathbf{x}}_t$ in Eq. (\ref{eq:mckean-vlasov-process}) solves the PDE in Eq. (\ref{eq:fokker-plankpde}). Consequently, samples from the density $\rho_t$ along the gradient flow can be obtained by first drawing samples ${\mathbf{x}}_0 \sim \rho_0$ and then simulating the SDE in Eq. (\ref{eq:mckean-vlasov-process}). The SDE can be approximately simulated via the stochastic Euler scheme (also known as the Euler-Maruyama method)~\citep{beyn2011numerical} given by
\begin{align}
{\mathbf{x}}_{\tau_{n+1}} = {\mathbf{x}}_{\tau_n} -\eta\nabla_{\mathbf{x}} f'\left(\rho_{\tau_n}/\mu\right)({\mathbf{x}}_{\tau_n}) + \sqrt{2\gamma\eta}\bm{\xi}_{\tau_n},
\label{eq:euler-maruyama}
\end{align}
where $\bm{\xi}_{\tau_n} \sim {\mathcal{N}}(\bm{0}, \mathbf{I})$, the time interval $[0, T]$ is partitioned into equal intervals of size $\eta$ and $\tau_0 < \tau_1 < \dots < \tau_N$ denote the discretized time-steps.
Eq. (\ref{eq:euler-maruyama}) provides a non-parametric procedure to refine samples from a generator $g_\theta$ where we let $\mu$ be the density of real samples and $\rho_{\tau_0}$ the density of samples generated from $g_\theta$ obtained by first sampling from the prior latent distribution ${\mathbf{z}} \sim p_Z({\mathbf{z}})$ and then feeding ${\mathbf{z}}$ into $g_\theta$. We first generate particles ${\mathbf{x}}_0 \sim \rho_{\tau_0}$ and then update the particles using Eq. (\ref{eq:euler-maruyama}) for $N$ time steps.
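To make the refinement procedure concrete, the sketch below (our illustration, not the paper's implementation) simulates the Euler scheme of Eq. (\ref{eq:euler-maruyama}) for $f = r\log r$ on a toy 1D problem, where $\nabla_{\mathbf{x}} f'(\rho_t/\mu) = \nabla_{\mathbf{x}}\log\rho_t - \nabla_{\mathbf{x}}\log\mu$, the Gaussian target and initial densities are our assumptions, and a Gaussian fit to the particles serves as a crude running estimate of $\nabla_{\mathbf{x}}\log\rho_t$:

```python
import numpy as np

# Toy simulation of Eq. (4) with f = r*log(r) (KL), where
# grad_x f'(rho_t/mu)(x) = grad_x log rho_t(x) - grad_x log mu(x).
# Target mu = N(2, 1); initial rho_0 = N(0, 1). Both are illustrative
# assumptions, not quantities from the paper.
rng = np.random.default_rng(0)
eta, gamma, steps = 0.05, 0.05, 300

def grad_log_gauss(x, mean, var=1.0):
    return -(x - mean) / var

x = rng.normal(0.0, 1.0, size=20000)      # particles from rho_0
for _ in range(steps):
    # crude running estimate of grad log rho_t: fit a Gaussian to the particles
    m, v = x.mean(), x.var()
    drift = grad_log_gauss(x, m, v) - grad_log_gauss(x, 2.0)
    x = x - eta * drift + np.sqrt(2 * gamma * eta) * rng.normal(size=x.shape)

print(round(float(x.mean()), 1))  # particles drift toward the target mean 2.0
```

The particle mean converges to the target mean, while the entropy term mildly inflates the variance above the target's, as expected for $\gamma > 0$.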
Given a binary classifier (discriminator) $D$ that has been trained to distinguish between samples from $\mu$ and $\rho_{\tau_0}$, the density-ratio $\rho_{\tau_0}({\mathbf{x}})/\mu({\mathbf{x}})$ can be estimated via the well-known \emph{density-ratio trick}~\citep{sugiyama2012density},
\begin{align}
\label{eq:density-ratio-trick}
\rho_{\tau_0}({\mathbf{x}})/\mu({\mathbf{x}}) = \frac{1-D(y=1|{\mathbf{x}})}{D(y=1|{\mathbf{x}})} = \exp(-d({\mathbf{x}})),
\end{align}
where $D(y=1|{\mathbf{x}})$ denotes the conditional probability of the sample ${\mathbf{x}}$ being from $\mu$ and $d({\mathbf{x}})$ denotes the logit output of the classifier $D$. We term this procedure, in which samples are refined via the gradient flow of $f$-divergences, Discriminator Gradient $f$low (\textsc{DG}$f$low{}).
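Eq. (\ref{eq:density-ratio-trick}) is straightforward to verify numerically; the sketch below (with arbitrary stand-in logits rather than a trained discriminator) confirms that the probability form and the logit form of the density-ratio coincide:

```python
import numpy as np

# Density-ratio trick: with D(y=1|x) = sigmoid(d(x)), we have
# (1 - D(y=1|x)) / D(y=1|x) = exp(-d(x)).
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

d = np.array([-2.0, 0.0, 3.0])   # stand-in logits (our illustrative values)
D = sigmoid(d)
ratio_from_probs = (1.0 - D) / D
ratio_from_logits = np.exp(-d)
print(bool(np.allclose(ratio_from_probs, ratio_from_logits)))  # True
```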
\subsection{Refinement in the latent space}
\label{sec:latent-space-refinement}
Eq. (\ref{eq:euler-maruyama}) requires a running estimate of the density-ratio $\rho_{\tau_n}({\mathbf{x}})/\mu({\mathbf{x}})$, which can be approximated using the \emph{stale estimate} $\rho_{\tau_n}({\mathbf{x}})/\mu({\mathbf{x}}) \approx \rho_{\tau_0}({\mathbf{x}})/\mu({\mathbf{x}})$ for $\eta \to 0$ and small $N$, where the density $\rho_{\tau_n}$ will be close to $\rho_{\tau_0}$. However, our initial image experiments showed that refining directly in high-dimensional data-spaces with the stale estimate is problematic; error is accumulated at each time-step leading to a visible degradation in the quality of data samples (e.g., appearance of artifacts in images).
To tackle this problem, we propose refining the latent vectors before mapping them to samples in data-space using $g_\theta$. We describe a procedure analogous to Eq. (\ref{eq:euler-maruyama}) but in the latent space for generators $g_\theta$ that take a latent vector ${\mathbf{z}} \in {\mathcal{Z}}$ as input and generate a sample ${\mathbf{x}} \in {\mathcal{X}}$. We first show in Lemma \ref{lemma:latent-density-ratio} that the density-ratio in the latent space between two distributions can be estimated via the density-ratio of corresponding distributions in the data space.
\begin{lemma}
\label{lemma:latent-density-ratio}
Let $g: {\mathcal{Z}} \rightarrow {\mathcal{X}}$ be a sufficiently well-behaved injective function where ${\mathcal{Z}} \subseteq {\mathbb{R}}^n$ and ${\mathcal{X}} \subset {\mathbb{R}}^m$ with $m > n$.
Let $p_{Z}({\mathbf{z}})$, $p_{\hat{Z}}(\hat{{\mathbf{z}}})$ be probability densities on ${\mathcal{Z}}$ and $q_{X}({\mathbf{x}})$, $q_{\hat{X}}(\hat{{\mathbf{x}}})$ be the densities of the pushforward measures $g \sharp Z$, $g \sharp \hat{Z}$ respectively. Assume that $p_{Z}({\mathbf{z}})$ and $p_{\hat{Z}}(\hat{{\mathbf{z}}})$ have the same support, and that the Jacobian matrix $\mathbf{J}_g$ has full column rank. Then, the density-ratio $p_{\hat{Z}}({\mathbf{u}})/p_{Z}({\mathbf{u}})$ at the point ${\mathbf{u}} \in {\mathcal{Z}}$ is given by
\begin{align}
\frac{p_{\hat{Z}}({\mathbf{u}})}{p_{Z}({\mathbf{u}})} = \frac{q_{\hat{X}}(g({\mathbf{u}}))}{q_{X}(g({\mathbf{u}}))}.
\end{align}
\end{lemma}
\begin{wrapfigure}[12]{r}{0.55\textwidth}
\vspace{-2em}
\begin{minipage}{0.55\textwidth}
\begin{algorithm}[H]
\caption{Refinement in the Latent Space using \textsc{DG}$f$low{}.}
\label{alg:dgflow}
\begin{algorithmic}[1]
\Require First derivative of $f$ ($f'$), generator ($g_\theta$), discriminator ($d_\phi$), number of update steps ($N$), step-size ($\eta$), noise factor ($\gamma$).
\State{${\mathbf{z}}_0 \sim p_Z({\mathbf{z}})$\Comment{Sample from the prior.}}
\For{$i\gets 0,N-1$}
\State $\bm{\xi}_i \sim {\mathcal{N}}(0,I)$
\State ${\mathbf{z}}_{i+1} = {\mathbf{z}}_i - \eta\nabla_{{\mathbf{z}}_i}f'(e^{-d_\phi(g_\theta({\mathbf{z}}_i))}) + \sqrt{2\eta\gamma}\bm{\xi}_i$
\EndFor
\State \textbf{return} $g_\theta({\mathbf{z}}_N)$\Comment{The refined sample.}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{wrapfigure}
The proof is in Appendix \ref{subsec:proof_densityratio}. We let $p_{\hat{Z}}(\hat{{\mathbf{z}}})$ be the density of the ``correct'' latent space distribution induced by a generator $g_\theta$, i.e., $p_{\hat{Z}}(\hat{{\mathbf{z}}})$ is the density of a probability measure whose pushforward under $g_\theta$ approximately equals the target data density $\mu$. The density-ratio of the prior latent distribution $p_Z({\mathbf{z}})$ and $p_{\hat{Z}}(\hat{{\mathbf{z}}})$ can now be computed by combining Lemma \ref{lemma:latent-density-ratio} with Eq. (\ref{eq:density-ratio-trick}),
\begin{align}
\label{eq:latent-density-ratio}
\frac{p_{Z}({\mathbf{u}})}{p_{\hat{Z}}({\mathbf{u}})} = \frac{\rho_{\tau_0}(g_\theta({\mathbf{u}}))}{\mu(g_\theta({\mathbf{u}}))} = \exp(-d(g_\theta({\mathbf{u}}))).
\end{align}
Although a generator $g_\theta$ parameterized by a neural network may not satisfy the conditions of injectivity and full column rank Jacobian matrix $\mathbf{J}_{g_\theta}$, Eq. (\ref{eq:latent-density-ratio}) provides an approximation that works well in practice as shown by our experiments. Combining Eq. (\ref{eq:latent-density-ratio}) with Eq. (\ref{eq:euler-maruyama}) provides us with an update rule for refining samples in the latent space,
\begin{align}
{\mathbf{u}}_{\tau_{n+1}} = {\mathbf{u}}_{\tau_n} -\eta\nabla_{\mathbf{u}} f'\left(p_{{\mathbf{u}}_{\tau_n}}/p_{\hat{Z}}\right)({\mathbf{u}}_{\tau_n}) + \sqrt{2\gamma\eta}\bm{\xi}_{\tau_n},
\label{eq:euler-maruyama-latent}
\end{align}
where ${\mathbf{u}}_{\tau_{0}} \sim p_Z({\mathbf{z}})$ and the density-ratio $p_{{\mathbf{u}}_{\tau_n}}/p_{\hat{Z}}$ is approximated using the stale estimate $p_{{\mathbf{u}}_{\tau_0}}/p_{\hat{Z}} = \exp(-d(g_\theta({\mathbf{u}})))$. We summarize the complete algorithm in Algorithm \ref{alg:dgflow}.
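For $f = r\log r$ we have $f'(t) = \log t + 1$, so the update in Algorithm \ref{alg:dgflow} reduces to gradient ascent on the discriminator logit in the latent space. The toy sketch below illustrates this; the identity generator and the closed-form logit are our assumptions, not the paper's models:

```python
import numpy as np

# Minimal sketch of Algorithm 1 for f = r*log(r), where the update reduces to
# gradient ascent on the discriminator logit. Toy assumptions (ours): the
# generator g is the identity on a 1D latent space, and d is the exact logit
# between rho_0 = N(0,1) and mu = N(2,1), i.e.
# d(x) = log mu(x) - log rho_0(x) = 2x - 2, so grad_z d(g(z)) = 2.
rng = np.random.default_rng(0)
eta, gamma, N = 0.05, 0.0, 20         # gamma = 0: deterministic DGflow variant

z = rng.normal(0.0, 1.0, size=10000)  # z_0 ~ p_Z
for _ in range(N):
    grad_d = 2.0 * np.ones_like(z)    # gradient of the toy logit
    z = z + eta * grad_d + np.sqrt(2 * eta * gamma) * rng.normal(size=z.shape)

print(round(float(z.mean()), 1))      # refined samples center near the data mean
```

Even with the stale density-ratio estimate, a small number of update steps moves the latent samples toward the region favored by the discriminator.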
\subsection{Refinement for All}
\label{sec:refinement-for-all}
Thus far, prior work~\citep{azadi2018discriminator,turner2019metropolis,tanaka2019discriminator,che2020your} has focused on improving samples for GANs with scalar-valued discriminators, which comprises the canonical GAN as well as recent variants, e.g., WGAN~\citep{gulrajani2017improved}, and SNGAN~\citep{miyato2018spectral}. Here, we propose a technique that extends our approach to refine samples from a larger class of DGMs including GANs with vector-valued critics, VAEs, and Normalizing Flows.
Let $p_\theta$ be the density of the samples generated by a generator $g_\theta$ and $\mu$ be the density of the real data distribution. We are interested in refining samples from $g_\theta$; however, a corresponding density-ratio estimator for $p_\theta/\mu$ is unavailable, as is the case with the aforementioned generative models.
Let $D_\phi$ be a discriminator that has been trained on the same dataset but for a different generative model $g_\phi$ (e.g., let $g_\phi$ and $D_\phi$ be the generator and discriminator of SNGAN respectively). $D_\phi$ can be used to compute the density ratio $p_\phi/\mu$. A straightforward technique would be to use the crude approximation $p_\theta/\mu \approx p_\phi/\mu$, which could work provided $p_\theta$ and $p_\phi$ are not too far from each other. Our experiments show that this simple approximation works to a limited extent (see appendix \ref{sec:additional-results}). %
To improve upon the crude approximation above, we propose to correct the density-ratio estimate. Specifically, a discriminator $D_\lambda$ is initialized with the weights from $D_\phi$ and is fine-tuned on samples from $g_\phi$ and $g_\theta$. $D_\phi$ and $D_\lambda$ are then used to approximate the density-ratio $p_\theta/\mu$,
\begin{align}
\frac{p_\theta({\mathbf{x}})}{\mu({\mathbf{x}})} = \frac{p_\phi({\mathbf{x}})}{\mu({\mathbf{x}})}\frac{p_\theta({\mathbf{x}})}{p_\phi({\mathbf{x}})} = \exp(-d_\phi({\mathbf{x}}))\cdot\exp(-d_\lambda({\mathbf{x}})),\label{eq:density-ratio-correction}
\end{align}
where $d_\phi$ and $d_\lambda$ are logits output from $D_\phi$ and $D_\lambda$, respectively. We term the network $D_\lambda$ the \emph{density ratio corrector}, which experiments show produces higher quality samples than using $p_\theta/\mu \approx p_\phi/\mu$. The estimate in Eq. (\ref{eq:density-ratio-correction}) is similar to telescoping density-ratio estimation (TRE), a technique proposed in very recent independent work~\citep{rhodes2020telescoping}. In brief, \citet{rhodes2020telescoping} show that classifier-based density ratio estimators perform poorly when distributions are ``too far apart'';
the classifier can easily distinguish between the distributions, even with a poor estimate of the density ratio. TRE expands the standard density ratio into a telescoping product of more difficult-to-distinguish intermediate density ratios. Likewise, in Eq. (\ref{eq:density-ratio-correction}), we treat $p_\phi$ as an intermediate distribution and estimate the final density-ratio as a product of two density-ratios.
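The correction in Eq. (\ref{eq:density-ratio-correction}) amounts to summing logits. As a sanity check (ours, using closed-form 1D Gaussian stand-ins rather than trained networks), the telescoped product recovers the direct density-ratio exactly:

```python
import numpy as np

# Numerical check of the telescoping estimate in Eq. (8), using exact logits
# for 1D Gaussian stand-ins (our assumption) for p_theta, p_phi, and mu.
def log_gauss(x, mean, var=1.0):
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

x = np.linspace(-3.0, 5.0, 50)
lp_theta, lp_phi, lmu = log_gauss(x, 0.5), log_gauss(x, 1.0), log_gauss(x, 2.0)
d_phi = lmu - lp_phi        # exact logit of a D_phi separating p_phi from mu
d_lam = lp_phi - lp_theta   # exact logit of a D_lambda separating p_theta from p_phi
telescoped = np.exp(-d_phi) * np.exp(-d_lam)
direct = np.exp(lp_theta - lmu)               # p_theta(x) / mu(x)
print(bool(np.allclose(telescoped, direct)))  # True
```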
\section{Related Work}
\vspace{-.8em}
\citet{azadi2018discriminator} first proposed the idea of improving samples from a GAN's generator by discriminator rejection sampling (DRS), making use of the density-ratio provided by the discriminator to estimate the acceptance probability. Metropolis-Hastings GAN (MH-GAN) \citep{turner2019metropolis} improved upon the costly rejection sampling procedure via the Metropolis-Hastings algorithm. Unlike \textsc{DG}$f$low{}, both of these methods reject inferior samples instead of refining them.
Our method is closely related to recent state-of-the-art sample refinement techniques, specifically Discriminator-Driven Latent Sampling (DDLS)~\citep{che2020your} and Discriminator Optimal Transport (DOT)~\citep{tanaka2019discriminator}. In fact, both these methods can be seen as special cases of \textsc{DG}$f$low{}.
DDLS treats a GAN as an energy-based model and uses Langevin dynamics to sample from the energy-based latent distribution $p_{t}({\mathbf{z}}) \propto p_Z({\mathbf{z}})\exp(d(g_\theta({\mathbf{z}})))$ induced by performing rejection sampling in the latent space. This distribution is the same as $p_{\hat{Z}}(\hat{{\mathbf{z}}})$, which can be seen by rearranging terms in Eq. (\ref{eq:latent-density-ratio}). If we use the KL-divergence by setting $f = r\log r$, \textsc{DG}$f$low{} is equivalent to DDLS. However, there are practical differences that make \textsc{DG}$f$low{} more appealing. DDLS requires estimation of the score function $\nabla_{\mathbf{z}}\{\log p_Z({\mathbf{z}}) + d(g_\theta({\mathbf{z}}))\}$ to perform the update, which becomes undefined if ${\mathbf{z}}$ escapes the support of $p_Z({\mathbf{z}})$, e.g., in the case of the uniform prior distribution commonly used in GANs; handling such cases would require techniques such as projected gradient descent. This problem does not arise in the case of \textsc{DG}$f$low{} since it only uses the density-ratio that is implicitly defined by the discriminator. Moreover, DDLS uses Langevin dynamics, which \emph{requires} the sensitive diffusion term to ensure diversity and to prevent points from collapsing to the maximum-likelihood point. In \textsc{DG}$f$low{}, sample diversity is ensured by the density-ratio term and the diffusion term serves as an enhancement. Note that \textsc{DG}$f$low{} performs well even without the diffusion term (i.e., with $\gamma=0$; see Tables \ref{tab:gans-without-noise} \& \ref{tab:cifar-inception-no-noise} in the appendix). This deterministic variant of \textsc{DG}$f$low{} is a practical alternative with one fewer hyperparameter to tune.
DOT refines samples by constructing an Optimal Transport (OT) map induced by the WGAN discriminator. The OT map is realized by means of a deterministic optimization problem in the vicinity of the generated samples.
If we further analyze the case of \textsc{DG}$f$low{} with $\gamma=0$ and solve the resulting ordinary differential equation (ODE) using the backward Euler method,
\begin{align}
{\mathbf{u}}_{\tau_{n+1}} = \argmin_{{\mathbf{u}} \in \mathbb{R}^{n}} \bigg\{f'\left(p_{{\mathbf{u}}_{\tau_n}}/p_{\hat{Z}}\right)({\mathbf{u}}) + \frac{1}{2\lambda}\|{\mathbf{u}} - {\mathbf{u}}_{\tau_n}\|^2\bigg\},
\label{eq:backward-euler-latent}
\end{align}
DOT emerges as a special case when we consider a single update step of Eq. (\ref{eq:backward-euler-latent}) using gradient descent and set $f'(t) = \log(t)$\footnote{This implies that $f(t) = t\log t - t + 1$, which is a twice-differentiable convex function with $f(1)=0$.} with $\lambda = \frac{1}{2}$. This connection of \textsc{DG}$f$low{} to DOT, an optimal transport technique, is perhaps unsurprising given the relationship between gradient flows and the dynamical Benamou-Brenier formulation of optimal transport~\citep{santambrogio2017euclidean}.
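As a quick check (ours, for the reader's convenience) that this choice of $f$ satisfies the hypotheses of Lemma \ref{thm:gradient-flow}:

```latex
% f(t) = t\log t - t + 1, the antiderivative of f'(t) = \log t vanishing at 1:
f'(t) = \log t + 1 - 1 = \log t, \qquad
f''(t) = \tfrac{1}{t} > 0 \;\text{ for } t > 0 \;(\text{convex}), \qquad
f(1) = 1\cdot\log 1 - 1 + 1 = 0 .
```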
Recent work has also sought to improve generative models via sample evolution in the training/generation process. In energy-based generative models~\citep{arbel2020generalized, Deng2020Residual}, the energy functions can be viewed as a component that improves some base generator. For example, the Generalized Energy-Based Model (GEBM) \citep{arbel2020generalized} jointly trains a base generator by minimizing a lower bound of the KL divergence along with an energy function in an alternating fashion. Once trained, the energy function is used to refine samples from the base generator using Langevin dynamics and serves a similar purpose to the discriminator in DDLS and \textsc{DG}$f$low. The Noise Conditional Score Network (NCSN)~\citep{song2019generative,song2020improved} --- a score-based generative model --- can be seen as a gradient flow that refines a sample right from noise to data. Latent Optimization GAN (LOGAN)~\citep{wu2020logan} optimizes a latent vector via natural gradient descent as part of the GAN training process. In contrast to these works, we primarily focus on refining samples from \emph{pretrained} generative models using the gradient flow of $f$-divergences.\footnote{For further discussion about these techniques, please refer to appendix \ref{app:relatedwork}.}
MERA is a tensor network representation of many-body wave-functions that, upon optimization, approximates ground states of local Hamiltonians. Fig.~\ref{fig:global} shows MERA networks for one-dimensional systems. The open indices of the networks correspond to sites of the lattice on which the many-body wave-function is defined.
\textit{Global scale transformations.---} Each layer of tensors in the MERA consists of a row of disentanglers and a row of isometries and defines a coarse-graining transformation of the lattice. This coarse-graining implements a global scale transformation, in that it produces a renormalization group flow (in the space of ground states and, when applied by conjugation, in the space of local Hamiltonians) with the expected structure of RG fixed points, both at criticality and off criticality \cite{meraCFT}. These fixed-points are characterized by an explicitly scale invariant wave-function.
As illustrated in fig.~\ref{fig:global}, a layer of the MERA can be used to map the wave-function on a lattice with $N$ sites into the wave-function of a finer lattice with $2N$ sites, as well as into the wave-function of a coarser lattice with $N/2$ sites. Accordingly, the MERA does not define a single wave-function, but a collection of wave-functions, one for each of the lattices in a sequence of increasingly fine-grained (or coarse-grained) lattices. These wave-functions are all mutually consistent in that for any pair of wave-functions, there exists (by construction) a uniform fine-graining or coarse-graining transformation that maps one wave-function to the other. Notice that we can order all the wave-functions in this collection from most fine-grained to most coarse-grained.
\begin{figure}[!t]
\begin{center}
\includegraphics[trim = 4mm 3mm 4mm 3mm, clip, width=8.5cm]{global.pdf}
\caption{
By adding or subtracting layers of the MERA, we can fine-grain or coarse-grain the lattice, thus implementing a global scale transformation by a discrete factor $1/2$ or $2$, respectively.}
\label{fig:global}
\end{center}
\end{figure}
\textit{Local scale transformations.---} Individual disentanglers and isometries can also be used to coarse-grain a region of the lattice without coarse-graining the rest. We would like to think of such a transformation as a local scale transformation. In order to show that individual disentanglers and isometries properly implement a local scale transformation, we investigate whether they indeed act on the lattice consistently with what we expect of a local scale transformation in the continuum. In a quantum critical system, the ground state in the continuum is often the vacuum of a conformal field theory. Conformal invariance in 1+1 dimensions implies in particular that applying certain local scale transformation to the ground state on the infinite line should result in a thermal state on the infinite line (and on a finite circle after a quotient). By showing that the same is true with the MERA, this paper provides strong evidence that we can use the disentanglers and isometries to properly implement local scale transformations.
Fig.~\ref{fig:local2} shows an example of the result of applying a local scale transformation to a MERA by adding and/or deleting disentanglers and isometries. Accordingly, the MERA defines an even larger collection of wave-functions than mentioned above. Each wave-function in this collection corresponds to a different way in which we locally coarse-grain the most refined lattice under consideration. All these wave-functions are once again mutually consistent, since by construction there is a local scale transformation mapping any pair of such wave-functions. Notice that this time we can define a \textit{partial order} in the set of wave-functions, according to whether one wave-function is more coarse-grained than another. This order is only partial because the map connecting the two wave-functions may require both coarse-graining one region and fine-graining another region, in which case none of the wave-functions is coarser or finer than the other.
\textit{Causal structure.---} The MERA can be understood as a quantum circuit with some auxiliary time running from top to bottom \cite{mera}. The gates in this circuit are the disentanglers and isometries, which are unitary/isometric with respect to this auxiliary time. Given a site of the underlying lattice, we define its causal cone as the subset of gates in the quantum circuit that can influence the reduced density matrix on that site. Fig.~\ref{fig:causal} shows the causal cone of one site. More generally, we can define the causal cone of a set of sites of the lattice, which is seen to be the union of the causal cones for the individual sites inside the region. Notice that the causal cone expands back in time at 45$^\circ$ with respect to the horizontal. We can think of 45$^\circ$ as corresponding to a null or lightlike direction, less than 45$^\circ$ as corresponding to a spacelike direction, and more than 45$^\circ$ as corresponding to a timelike direction. This analogy makes sense, since 45$^\circ$ defines the direction of propagation of information in the quantum circuit.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{causal.pdf}
\caption{
The causal cone of a site of the underlying lattice is the region of the tensor network that can affect the reduced density matrix on that site. Notice that the boundaries of the causal cone are at 45$^\circ$ (null or lightlike).
}
\label{fig:causal}
\end{center}
\end{figure}
It is easily seen that the MERA describes a wave-function for each lattice that can be obtained by a slice of the network that is piecewise lightlike and/or spacelike, but not timelike. Fig.~\ref{fig:local2} shows an example of a slice of the network that is piecewise horizontal and at 45$^\circ$.
\textit{Scale transformations and entanglement renormalization.---} We conclude this Appendix by pointing out that one can also coarse-grain a lattice (both globally and locally) by using instead the isometries of a \textit{tree tensor network} \cite{tree} --- which corresponds to a MERA with trivial disentanglers. In this case, however, at criticality we do not recover explicit invariance under global scale transformations or produce a thermal state under the local scale transformation studied in this paper. This can be traced back to the fact that a coarse-graining transformation based only on isometries fails to properly remove short-range entanglement. We thus conclude that the removal of short-range entanglement, as performed in the MERA by the disentanglers, is key to properly defining both global and local scale transformations on the lattice.
\begin{figure}[!t]
\begin{center}
\includegraphics[trim = 33mm 73mm 33mm 73mm, clip, width=8.5cm]{explicit.pdf}
\caption{
(a) Tensor network for the thermal state of inverse temperature $\beta = \pi/\log 2$.
(b) Tensor network for the thermal state of inverse temperature $4\beta$.
Notice that we can obtain one tensor network from the other by adding (or removing) a layer of tensors that consists of identical unit cells, each of which is made up of three disentanglers and three isometries and maps one site of (a) into four sites of (b).
}
\label{fig:explicit}
\end{center}
\end{figure}
\section{Appendix B: Exact scaling symmetry and quotient}
Under a discrete logarithmic scale transformation, the MERA representing the ground state on a discrete infinite line is taken to a MERA for a thermal state. This is illustrated in fig.~\ref{fig:explicit}. We re-display the two different implementations of the logarithmic transformation discussed in the main text, which correspond to inverse temperatures $\beta$ and $4\beta$, where $\beta = \pi/\log 2$. The figure emphasizes that the two resulting tensor networks are related by a relative global coarse-graining, formed by a uniform repetition of the same unit cell of tensors.
A symmetry of the tensor network for the thermal state with $\beta$, $2\beta$ and $4\beta$ is highlighted in fig.~\ref{fig:symmetryx3}. Each network is invariant under a finite scaling by a factor $1/2$ in the lattice where the MERA represents the ground state. In the logarithmic coordinate on the cut, this symmetry becomes a finite translation, as figs.~\ref{fig:commute} and \ref{fig:quotient1} of Appendix~C make explicit in the case of the $2\beta$ network.
Notice that the MERA is in general an \textit{approximate} representation of the ground state on the lattice, one that can be made systematically more accurate by increasing the bond dimension of the tensors. However, the above symmetry is not approximate, but an \textit{exact} symmetry of the tensor network.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{symmetryx3.pdf}
\caption{
Tensor networks for a thermal state on the discrete infinite line, which result from applying a logarithmic local scale transformation to the MERA representing the ground state on the discrete infinite line. The networks in (a), (b) and (c) describe thermal states on a lattice that has a unit cell with $p$ sites, with $p=1,2,4$, respectively. As emphasized in fig.~\ref{fig:explicit}, these tensor networks are related: from network (b) we can produce network (a) or (c) by removing or adding a uniform row of tensors. By adding even more tensors, thermal states on a lattice with a larger unit cell can also be built. All these networks are invariant under translations by a unit cell.
}
\label{fig:symmetryx3}
\end{center}
\end{figure}
Because this symmetry is exact, we can take a quotient by it by reconnecting lines. In the three networks of fig.~\ref{fig:symmetryx3}, the symmetry is a translation by $p$ sites, where respectively $p = 1, 2, 4$. We may quotient by a $k$-fold multiple of this translation. The result is a periodic tensor network describing a thermal state with the same inverse temperature (respectively, $\beta$, $2\beta$ and $4\beta$) but on a finite discrete circle made of $k\, p$ sites. Consequently, the reduced inverse temperature is independent of the choice of logarithmic map:
\begin{equation}
\tilde{\beta} = \frac{p\, \beta}{k\, p} = \frac{\pi}{k\log 2}.
\end{equation}
The tensor networks obtained from the quotient with $k=1$ and $p=1,2,4$ are shown in fig.~\ref{fig:quotientx3}. The quotient for $k=2$ is shown in fig.~\ref{fig:quotient}(b) of the main text.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=7.5cm]{quotientx3.pdf}
\caption{
Quotient of the tensor networks in fig.~\ref{fig:symmetryx3} by a translation by a single unit cell ($k=1$), representing a thermal state on a lattice made up of $p$ sites, where $p=1,2,4$ for the networks (a), (b), and (c), respectively.
}
\label{fig:quotientx3}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{layers.pdf}
\caption{
(a) Layer of tensors that map the lattice with a unit cell of $p=1$ sites into the lattice with a unit cell of $p=2$ sites, for $k=1$, $k=2$, and $k=3$. Notice that this part of the network is lightlike (45$^\circ$), with the periodic boundary conditions connecting the past and the future of a light ray.
(b) Layer of tensors that map the lattice with a unit cell of $p=2$ sites into the lattice with a unit cell of $p=4$ sites, for $k=1$ and $k=2$. This layer of tensors is an isometry, see fig.~\ref{fig:isometry}.
(c) Layer of tensors that map the lattice with a unit cell of $p=4$ sites into the lattice with a unit cell of $p=8$ sites, for $k=1$.}
\label{fig:layers}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.3cm]{isometry.pdf}
\caption{
Sequence of replacements, which shows that the layer of disentanglers and isometries in fig.~\ref{fig:layers}(b left) is an exact isometry. Only the two equalities in the inset are used.
}
\label{fig:isometry}
\end{center}
\end{figure}
The quotient tensor network can be divided into three parts: a central core where the quotient reconnects tensors that are timelike-separated, a layer of lightlike-identified tensors, and a remainder, where the quotient acts spacelike. This last part consists of tensors organized in layers that implement an isometric fine-graining transformation; see fig.~\ref{fig:quotientx3}. Finally, fig.~\ref{fig:layers} shows more details of some of these layers of tensors while fig.~\ref{fig:isometry} shows with a concrete example that the spacelike layers are exact isometries.
\section{Appendix C: MERA quotient versus thermal MERA}
Here we contrast two different procedures for preparing a thermal state on the infinite, discretized line shown in figs.~\ref{fig:commute}(a) and (b).
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{commute.pdf}
\caption{
The Euclidean path integral on the upper half plane prepares the ground state on the infinite line. We can prepare a thermal state on the discretized infinite line in two ways: (a) By discretizing the Euclidean path integral on the upper half plane, then using TNR to produce a MERA for the ground state, then applying the logarithmic scaling transformation described in this paper. (b) By applying a logarithmic scaling transformation in the continuum to obtain the infinite strip and then discretizing the path integral.
}
\label{fig:commute}
\end{center}
\end{figure}
In the procedure outlined in fig.~\ref{fig:commute}(a), we start by discretizing the Euclidean path integral (on the upper half plane) that prepares the ground state of the continuous theory on the infinite line. This yields a discrete path integral (in the form of a square tensor network on the upper half plane) that prepares the ground state of the theory on an infinite one-dimensional lattice. Then we use TNR to transform the discrete Euclidean path integral into a MERA. The next step is the focus of the present paper. We apply an inhomogeneous change of cutoff by adding and removing tensors. This implements a lattice version of the conformal transformation
$z \rightarrow (\beta/\pi) \log z$.
In the second procedure, shown in fig.~\ref{fig:commute}(b), we start by transforming the Euclidean path integral in the continuum with the conformal map $z \rightarrow (\beta/\pi) \log z$. This produces a Euclidean path integral on an infinite strip of width $\beta$, which prepares the thermal state of inverse temperature $\beta$ on the infinite line. After that, we discretize the Euclidean path integral by writing it as a square tensor network made of $\infty \times m_y$ tensors for some positive integer $m_y$. Each tensor corresponds to an interval $\delta_x$ on the horizontal axis and $\delta_y \equiv \beta/m_y$ on the vertical axis.
Both procedures prepare a thermal state on the infinite, discretized line. In addition, we may now prepare a thermal state on a finite, discretized circle by modding out a discrete translation. This step is applied to the networks obtained from the two procedures in figs.~\ref{fig:quotient1} and \ref{fig:quotient2}, respectively.
\begin{figure}[!t]
\includegraphics[width=8.5cm]{quotient1.pdf}
\caption{
The transformed MERA tensor network on an infinite strip obtained through the procedure of fig.~\ref{fig:commute}(a). It represents a thermal state on the discretized infinite line. Because it is invariant under discrete translations, we may take its quotient to produce a tensor network for a thermal state on a spatial circle.
}
\label{fig:quotient1}
\includegraphics[width=8.5cm]{quotient2.pdf}
\caption{
The discretized Euclidean partition function on an infinite strip obtained through the procedure of fig.~\ref{fig:commute}(b), which also represents a thermal state on the discretized infinite line. Ref.~\cite{tnr2mera} showed how to produce a MERA for this thermal state by applying TNR. The resulting thermal MERA is invariant under discrete translations, which allows us to take the quotient, producing a thermal state on a finite geometry. Alternatively, we can arrive at the same construction by first taking the quotient of the initial tensor network and only then using TNR to produce a MERA.
}
\label{fig:quotient2}
\end{figure}
The quotients may be taken because both networks have an exact symmetry. The network produced in the first procedure is invariant under translations by a discrete amount $k \cdot \beta \log 2/\pi$, which corresponds to discrete scalings of the ground state by $2^{k} = \exp(k\log 2)$ (for any positive integer $k$). Modding out a $k \cdot \beta \log 2/\pi$ translation as in fig.~\ref{fig:quotient1} produces a thermal state of inverse temperature $\beta$ on a circle of length $2\pi L = k \beta \log 2 /\pi $. Its reduced inverse temperature
\begin{equation}
\tilde{\beta} \equiv \frac{\beta}{2\pi L} = \frac{\pi }{k\log 2}
\end{equation}
determines the spectrum of the state according to eq.~(\ref{eq:spectrum}).
On the other hand, at the end of the second procedure the discretized Euclidean path integral is explicitly invariant under translations by $m_x$ tensors (or length $m_x \delta_x$) in the horizontal direction, for any positive integer $m_x$. Taking the quotient by a discrete $m_x \delta_x$ translation as in fig.~\ref{fig:quotient2} produces a network made up of $m_x \times m_y$ tensors, which represents a thermal state on a circle of length $m_x \delta_x$ and inverse temperature $\beta$. The reduced inverse temperature is $\tilde{\beta} = \beta / (m_x\delta_x)$. If we choose $m_x=k m_y$ for a positive integer $k$, then we have a rectangular tensor network for a thermal state with reduced inverse temperature $\tilde{\beta} = \delta_y/(k \delta_x)$. If, in addition, $m_y=2^q$ for some positive integer $q$, then we can apply $q$ rounds of TNR coarse-graining transformations to obtain a tensor network consisting of $q$ layers of MERA, a central row of $k$ coarse-grained tensors, and $q$ conjugated layers of MERA \cite{tnr2mera}.
With the choice $m_x \delta_x = k \cdot \beta \log 2 /\pi$, both constructions produce the thermal state with inverse temperature $\beta$ on a finite circle of length $k \cdot \beta \log 2 /\pi$ whose reduced inverse temperature is $\tilde{\beta}= \pi / k \log 2$.
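This agreement between the two constructions amounts to elementary arithmetic, which can be spelled out numerically. The Python sketch below uses illustrative values of $\beta$, $k$, and $m_y$ chosen by us, and checks that the strip discretization with $m_x \delta_x = k\beta\log 2/\pi$ reproduces $\tilde{\beta} = \pi/(k\log 2)$.

```python
import math

# Illustrative parameters (our choices, not fixed by the construction):
beta, k, m_y = 2.0, 3, 8
delta_y = beta / m_y                             # vertical lattice spacing
m_x = k * m_y                                    # tensors around the circle
delta_x = beta * math.log(2) / (math.pi * m_y)   # so that m_x*delta_x = k*beta*log2/pi

# Procedure (b): reduced inverse temperature of the discretized strip.
tb_strip = beta / (m_x * delta_x)
# Equivalent form in terms of lattice spacings: beta/(m_x delta_x) = delta_y/(k delta_x).
assert abs(tb_strip - delta_y / (k * delta_x)) < 1e-12
# Procedure (a): reduced inverse temperature of the quotiented MERA.
tb_mera = math.pi / (k * math.log(2))
assert abs(tb_strip - tb_mera) < 1e-12
```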
\section{Equations of motion and boundary conditions}
Here we present a more detailed description of the derivation of the amplitude equations describing two-photon scattering off a single atom. The Hamiltonian for a single emitter is given by
\begin{equation}
\begin{split}
\hat{h} = \int {\rm d}x (-i)\hat{c}^{\dagger}(x)\partial_{x} \hat{c}(x) + \left[\omega_{2} - i\frac{\gamma}{2}\right]\hat{\sigma}_{ee} + (\omega_3 + \bar{\omega})\hat{\sigma}_{bb} + \left[\int dx V\delta(x) \hat{c}(x)\hat{\sigma}_{ea} + \Omega \hat{\sigma}_{be} + \textup{h.c.}\right], \label{eq:Hsingle}
\end{split}
\end{equation}
where $V = \sqrt{\Gamma}$ is the atom-waveguide coupling \cite{fks10}. As described in the main text, we use the following ansatz
\begin{equation}
\ket{\phi} = \bigg(\int {\rm d}x{\rm d}x' \psi(x,x')\frac{1}{\sqrt{2}}\hat{c}^{\dagger}(x)\hat{c}^{\dagger}(x') + \int {\rm d}x\left[e(x)\hat{\sigma}_{ea} + b(x)\hat{\sigma}_{ba}\right]\hat{c}^{\dagger}(x) \bigg) \vert 0,a\rangle. \label{eq:scattering-state}
\end{equation}
Here $\vert 0, a\rangle$ is the state with zero photons and the emitter in the ground state. Requiring that $\vert \phi\rangle$ is an eigenstate of the Hamiltonian \eqref{eq:Hsingle} with energy $E$, one obtains the following equations of motion (see Ref. \cite{sf07} for a detailed description in the two-level case)
\begin{subequations}
\label{eq:equations-of-motion}
\begin{align}
\left[-i\partial_{x} - i\partial_{x'} - E\right]\psi(x,x') + \frac{V}{\sqrt{2}}\left[\delta(x)e(x') + \delta(x')e(x)\right] &= 0, \label{eq:equations-of-motion1} \\
\left[-i\partial_{x} + \omega_2 - i\frac{\gamma}{2} - E\right]e(x) + \frac{V}{\sqrt{2}}\left[\psi(0,x) + \psi(x,0)\right] + \Omega b(x) &= 0, \label{eq:equations-of-motion2} \\
\left[-i\partial_{x} + \omega_3 + \bar{\omega} - E\right]b(x) + \Omega e(x) &= 0, \label{eq:equations-of-motion3}
\end{align}
\end{subequations}
for the different amplitudes, where
\begin{equation}
\psi(x,0) = \psi(0,x) = \frac{1}{2}\left[\psi(0^-,x) + \psi(0^+,x)\right].
\end{equation}
From Eqs. \eqref{eq:equations-of-motion} we obtain the boundary conditions for $x < x'$
\begin{subequations}
\label{eq:boundary}
\begin{align}
-i\left[\psi(x,0^+) - \psi(x,0^-)\right] + \frac{V}{\sqrt{2}}e(x) &= 0 \quad (x < 0), \label{eq:boundary1} \\
-i\left[\psi(0^+,x') - \psi(0^-,x')\right] + \frac{V}{\sqrt{2}}e(x') &= 0 \quad (x' > 0), \label{eq:boundary2} \\
e(0^+) &= e(0^-), \label{eq:boundary3} \\
b(0^+) &= b(0^-). \label{eq:boundary4}
\end{align}
\end{subequations}
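For completeness, we sketch how the first two jump conditions arise. Integrating Eq. \eqref{eq:equations-of-motion1} over $x' \in (-\epsilon,\epsilon)$ at fixed $x < 0$, only the $\partial_{x'}$ term and the $\delta(x')$ term survive as $\epsilon \to 0$ (the remaining terms are bounded, and $\delta(x)$ does not contribute for $x < 0$):
\begin{equation*}
-i\int_{-\epsilon}^{\epsilon} {\rm d}x'\, \partial_{x'}\psi(x,x') + \frac{V}{\sqrt{2}}\,e(x) \;\xrightarrow{\;\epsilon\to 0\;}\; -i\left[\psi(x,0^+) - \psi(x,0^-)\right] + \frac{V}{\sqrt{2}}e(x) = 0,
\end{equation*}
which is Eq. \eqref{eq:boundary1}; Eq. \eqref{eq:boundary2} follows analogously by integrating over $x$ at fixed $x' > 0$.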
Apart from the boundary conditions in Eqs. \eqref{eq:boundary} we require that $\psi(x,x')$, $e(x)$, and $b(x)$ are continuous functions when $x,x'\neq 0$ \cite{sf07}.
Now we note that when $x,x'\neq 0$, it follows from Eq. \eqref{eq:equations-of-motion1} that $\psi(x,x') \propto e^{iEr_c}$, where $r_c \equiv (x+x')/2$ is the center of mass coordinate. Therefore $\psi(x,x')$ must have the general form
\begin{equation}
\psi(x,x') = e^{iEr_c} F(r), \label{eq:general-form}
\end{equation}
where $r \equiv x - x'$ is the relative coordinate, and $F(r)$ is continuous when $x,x'\neq 0$. The main features of $\psi(x,x')$ are hidden in $F(r)$ since $e^{iEr_c}$ is only a phase that varies with the center of mass position, but not the distance between the photons. We see that $F(r)$ must have the form
\begin{equation}
F(r) = \begin{cases}
F_{\textup{in}}(r), & x < x' < 0, \\
F_{\textup{0}}(r), & x < 0 < x', \\
F_{\textup{out}}(r) & x' > x > 0,
\end{cases} \label{eq:F-form}
\end{equation}
where the part for $x > x'$ ($r > 0$) is given by bosonic symmetry.
We now derive a set of equations for $F(r)$ when $x < x'$. First we substitute Eqs. \eqref{eq:general-form} and \eqref{eq:F-form} into Eqs. \eqref{eq:boundary1} and \eqref{eq:boundary2} to obtain
\begin{subequations}
\label{eq:e2}
\begin{align}
e(x) &= \frac{\sqrt{2}i}{V} \left[F_{\textup{0}}(x) - F_{\textup{in}}(x)\right] e^{iEx/2} \quad (x < 0), \label{eq:e2_x1}\\
e(x') &= \frac{\sqrt{2}i}{V} \left[F_{\textup{out}}(-x') - F_{\textup{0}}(-x')\right] e^{iEx'/2} \quad (x' > 0).
\end{align}
\end{subequations}
Using Eqs. \eqref{eq:general-form}, \eqref{eq:F-form}, and \eqref{eq:e2_x1} in Eqs. \eqref{eq:equations-of-motion2} and \eqref{eq:equations-of-motion3} yields for $x < 0$
\begin{subequations}
\label{eq:F-e_3-system}
\begin{align}
\left[\partial_{x} + i\Delta + \frac{\gamma+\Gamma}{2}\right]F_{\textup{0}}(x) - \left[\partial_{x} + i\Delta + \frac{\gamma-\Gamma}{2}\right] F_{\textup{in}}(x) + \frac{V\Omega}{\sqrt{2}}b(x)e^{-iEx/2} &= 0, \\
\left[\partial_{x} + i\bar{\Delta}\right] \left[b(x)e^{-iEx/2}\right] - \frac{\sqrt{2}\Omega}{V}\left[F_{\textup{0}}(x) - F_{\textup{in}}(x)\right] &= 0,
\end{align}
\end{subequations}
where $\Delta = \omega_e - \omega$ is the single-photon detuning when two incident photons have the same energies $\omega = E/2$, and $\bar{\Delta} = \omega_b - (\omega - \bar{\omega})$ is the two-photon detuning. Finally, $b(x)$ can be eliminated in Eqs. \eqref{eq:F-e_3-system} to obtain a single second order differential equation
\begin{equation}
\left[\partial_r^2 + \eta\partial_r + \alpha\right] F_{\textup{0}}(r) = \left[\partial_r^2 + \eta\partial_r + \alpha\right]F_{\textup{in}}(r) + \left[\mu - \Gamma\partial_r\right]F_{\textup{in}}(r) \quad (r < 0), \label{eq:F1}
\end{equation}
where
\begin{align}
\alpha &\equiv \Omega^2 + i\bar{\Delta}\left(i\Delta + \frac{\gamma + \Gamma}{2}\right), \label{eq:alpha} \\
\eta &\equiv i(\Delta+\bar{\Delta}) + \frac{\gamma+\Gamma}{2}, \\
\mu &\equiv -i\Gamma\bar{\Delta}. \label{eq:mu}
\end{align}
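The elimination of $b(x)$ can be sketched as follows (a consistency check of the definitions above): acting with $(\partial_{x} + i\bar{\Delta})$ on the first line of Eqs. \eqref{eq:F-e_3-system} and substituting the second line for $(\partial_{x} + i\bar{\Delta})[b(x)e^{-iEx/2}]$ yields
\begin{equation*}
\left[\partial_{x}^2 + \eta\,\partial_{x} + \alpha\right]F_{\textup{0}}(x) = \left[\partial_{x}^2 + (\eta-\Gamma)\,\partial_{x} + \alpha + \mu\right]F_{\textup{in}}(x),
\end{equation*}
where the $\Omega^2$ term generated by the substitution is absorbed into $\alpha$. Since $x'$ is held fixed here, $\partial_{x}$ may be traded for $\partial_r$, which reproduces Eq. \eqref{eq:F1}.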
Similarly, it can be shown that
\begin{equation}
\left[\partial_r^2 - \eta\partial_r + \alpha\right] F_{\textup{out}}(r) = \left[\partial_r^2 - \eta\partial_r + \alpha\right]F_{\textup{0}}(r) + \left[\mu + \Gamma\partial_r\right]F_{\textup{0}}(r) \quad (r < 0). \label{eq:F2}
\end{equation}
Using Eqs. \eqref{eq:equations-of-motion2} and \eqref{eq:e2} one finds that the boundary conditions \eqref{eq:boundary3} and \eqref{eq:boundary4} yield
\begin{subequations}
\label{eq:boundary-F}
\begin{align}
F_{\textup{out}}(0) &= 2F_{\textup{0}}(0)-F_{\textup{in}}(0), \\
\partial_r F_{\textup{out}}(r)\vert_{r=0} &= \partial_r F_{\textup{in}}(r)\vert_{r=0} + \Gamma\left[F_{\textup{0}}(0) - F_{\textup{in}}(0)\right].
\end{align}
\end{subequations}
Eqs. \eqref{eq:F1}, \eqref{eq:F2}, and \eqref{eq:boundary-F} constitute the set of equations we will use to solve the scattering problem in the non-dissipative and dissipative case in the following two sections.
\section{Description of unitary photon propagation}
In this section we show how we find the outgoing two-photon wave function of the chain using the eigenstates of the single-atom scattering matrix ($S$-matrix). As noted in the main text, the two-photon correlation function can be found directly from this outgoing wave function. In short, we find the eigenstates of the single-atom $S$-matrix and decompose our incident state into these eigenstates \cite{sf07,sf07-prl}. With this decomposition at hand, the outgoing state after all $N$ atoms is found by multiplying the eigenstates by their eigenvalues to the $N$th power in the decomposition \cite{Mahmoodian_2018}. Throughout this section we assume that there is no decay to other modes than the guided one ($\gamma =0$).
To find the eigenstates of the $S$-matrix we start by eliminating $F_{\textup{0}}(r)$ in Eqs. \eqref{eq:F1}, and \eqref{eq:F2}, and thereby obtain the fourth order differential equation
\begin{equation}
\left[\partial_r^4 - (\eta^2-2\alpha)\partial_r^2 + \alpha^2\right]F_{\textup{out}}(r)
= \left[\partial_r^4 - \left[(\Gamma - \eta)^2-2(\alpha + \mu)\right]\partial_r^2 + (\alpha+\mu)^2\right]F_{\textup{in}}(r). \label{eq:diff-Fout-Fin}
\end{equation}
To find eigenstates of the $S$-matrix we impose the condition $F_{\textup{out}}(r) = \lambda F_{\textup{in}}(r)$, where $\lambda$ is the eigenvalue of the $S$-matrix, in Eq. \eqref{eq:diff-Fout-Fin}
\begin{equation}
\lambda\left[\partial_r^4 - (\eta^2-2\alpha)\partial_r^2 + \alpha^2\right]F_{\textup{in}}(r)
= \left[\partial_r^4 - \left[(\Gamma - \eta)^2-2(\alpha + \mu)\right]\partial_r^2 + (\alpha+\mu)^2\right]F_{\textup{in}}(r). \label{eq:diff-Fin}
\end{equation}
This equation is solved by the form
\begin{equation}
F_{\textup{in}}(r) = A e^{-i\nu r} + B e^{i\nu r} + C e^{-i\tilde{\nu} r} + De^{i\tilde{\nu} r}, \label{eq:Fin-full}
\end{equation}
where $A$, $B$, $C$, and $D$ are constants to be found. Note that for a two-level atom the form would be \cite{sf07,sf07-prl}
\begin{equation}
F_{\textup{in}}^{(\textup{two-level})}(r) = A' e^{-i\nu r} + B' e^{i\nu r},
\end{equation}
where $\nu$ is half the momentum difference of the two photons. Our interpretation of Eq. \eqref{eq:Fin-full} is therefore that the eigenstates for the three-level system mix two different values ($\nu$ and $\tilde{\nu}$) of relative momenta, which means that states with momenta $k,p$ hybridize with another set $\tilde{k},\tilde{p}$, where the total energy $k + p = \tilde{k} + \tilde{p}$ is the same for both [by Eq. \eqref{eq:general-form}].
Substituting Eq. \eqref{eq:Fin-full} into Eq. \eqref{eq:diff-Fin} shows that the eigenvalue of the $S$-matrix is given by
\begin{equation}
\lambda(E,\nu) = \frac{\nu^4 + \left[(\Gamma - \eta)^2-2(\alpha + \mu)\right]\nu^2 + (\alpha+\mu)^2 }{\nu^4 + (\eta^2-2\alpha)\nu^2 + \alpha^2} = \frac{\tilde{\nu}^4 + \left[(\Gamma - \eta)^2-2(\alpha + \mu)\right]\tilde{\nu}^2 + (\alpha+\mu)^2 }{\tilde{\nu}^4 + (\eta^2-2\alpha)\tilde{\nu}^2 + \alpha^2} = \lambda(E,\tilde{\nu}). \label{eq:a}
\end{equation}
This means that for a given $\nu$ the value of $\tilde{\nu}$ is fixed, and we can find it by solving the equation above with the constraint $\tilde{\nu} \neq \pm \nu$. This gives
\begin{equation}
\tilde{\nu}^2 = \frac{\mu(2\alpha+\mu)\nu^2 + \alpha^2\left[\Gamma(2\eta-\Gamma)+2\mu\right] + (\eta^2 - 2\alpha)\mu(2\alpha+\mu)}{\left[\Gamma(2\eta-\Gamma)+2\mu\right]\nu^2 - \mu(2\alpha+\mu)}. \label{eq:nutilde}
\end{equation}
In the absence of dissipation ($\gamma = 0$) it follows from Eq. \eqref{eq:nutilde} that $\tilde{\nu}^2$ is real when $\nu^2$ is real. Note, moreover, that Eq. \eqref{eq:nutilde} leaves an ambiguity in the sign of $\tilde{\nu}$. To get rid of this ambiguity we define the phase of $\tilde{\nu}$ to be in the interval between $0$ and $\pi$, i.e. $0 \leq \operatorname{arg}[\tilde{\nu}] < \pi$.
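The relation $\lambda(E,\nu) = \lambda(E,\tilde{\nu})$, with $\tilde{\nu}^2$ given by Eq. \eqref{eq:nutilde}, can be checked numerically. The Python sketch below (all parameter values are illustrative choices of ours, in units of $\Gamma$) evaluates the eigenvalue of Eq. \eqref{eq:a} at $\nu^2$ and at the hybridized $\tilde{\nu}^2$.

```python
# Sample parameters (illustrative values, units of Gamma):
Gamma, gamma, Omega, Delta, Dbar = 1.0, 0.3, 0.7, 0.4, -0.2

# alpha, eta, mu as defined in Eqs. (eq:alpha)-(eq:mu):
alpha = Omega**2 + 1j*Dbar*(1j*Delta + (gamma + Gamma)/2)
eta   = 1j*(Delta + Dbar) + (gamma + Gamma)/2
mu    = -1j*Gamma*Dbar

def lam(nu2):
    """S-matrix eigenvalue as a function of nu^2, Eq. (eq:a)."""
    num = nu2**2 + ((Gamma - eta)**2 - 2*(alpha + mu))*nu2 + (alpha + mu)**2
    den = nu2**2 + (eta**2 - 2*alpha)*nu2 + alpha**2
    return num / den

nu2 = 1.3  # nu^2 for some choice of relative momentum nu
# Hybridized momentum squared, Eq. (eq:nutilde):
A = Gamma*(2*eta - Gamma) + 2*mu
B = mu*(2*alpha + mu)
nu2_t = (B*nu2 + alpha**2*A + (eta**2 - 2*alpha)*B) / (A*nu2 - B)

assert abs(nu2_t - nu2) > 1e-6              # the two momenta are distinct ...
assert abs(lam(nu2) - lam(nu2_t)) < 1e-10   # ... but share the S-matrix eigenvalue
```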
It is worth noting that when the system is non-dissipative, we must have $|\lambda(E,\nu)|^2 = 1$ (perfect transmission), which implies $\lambda(E,\nu)^* = \lambda(E,\nu)^{-1}$. Using that $\alpha^* = \alpha + \mu$ and $\eta^* = \Gamma - \eta$ when $\gamma = 0$ together with Eq. \eqref{eq:a}, it follows that
\begin{equation}
{\nu^*}^2 = \nu^2 \quad \textup{or} \quad {\nu^*}^2 = \tilde{\nu}^2.
\end{equation}
In particular, any real value of $\nu$ gives an allowed value of the eigenvalue $\lambda(E,\nu)$.
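The identities $\alpha^* = \alpha + \mu$ and $\eta^* = \Gamma - \eta$ at $\gamma = 0$, and the resulting unimodularity of $\lambda(E,\nu)$ for real $\nu$, are easily verified numerically; the parameter values below are again illustrative.

```python
# Illustrative non-dissipative parameters (gamma = 0, units of Gamma):
Gamma, Omega, Delta, Dbar = 1.0, 0.7, 0.4, -0.2

alpha = Omega**2 + 1j*Dbar*(1j*Delta + Gamma/2)
eta   = 1j*(Delta + Dbar) + Gamma/2
mu    = -1j*Gamma*Dbar

# The two identities used in the text:
assert abs(alpha.conjugate() - (alpha + mu)) < 1e-12
assert abs(eta.conjugate() - (Gamma - eta)) < 1e-12

# They make the numerator of Eq. (eq:a) the conjugate of its denominator
# for real nu, implying perfect transmission |lambda| = 1:
for nu in (0.1, 0.9, 2.5):
    num = nu**4 + ((Gamma - eta)**2 - 2*(alpha + mu))*nu**2 + (alpha + mu)**2
    den = nu**4 + (eta**2 - 2*alpha)*nu**2 + alpha**2
    assert abs(abs(num/den) - 1.0) < 1e-10
```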
The actual eigenstates of the $S$-matrix have [by Eq. \eqref{eq:general-form}] real space representations (or wave functions)
\begin{equation}
\Psi_{E,\nu}(x,x') = \langle x,x' \vert E,\nu\rangle = e^{iE r_c}F_{E,\nu}(r), \label{eq:eig-realspace}
\end{equation}
where $F_{E,\nu}(r)$ is given by Eq. \eqref{eq:Fin-full}. Examples of these states are shown in Fig. 1(c) of the main text. \\
To find the constants $A$, $B$, $C$, and $D$ in the eigenstates given by Eq. \eqref{eq:Fin-full}, we apply the boundary conditions \eqref{eq:boundary-F}. For this purpose we need $F_{\textup{0}}(r)$, which we find by solving Eq. \eqref{eq:F1} to obtain
\begin{equation}
\begin{split}
F_{\textup{0}}(r) &= \frac{\nu^2 - i(\Gamma - \eta)\nu - (\alpha + \mu)}{\nu^2 + i\eta\nu - \alpha}Ae^{-i\nu r} + \frac{\nu^2 + i(\Gamma - \eta)\nu - (\alpha + \mu)}{\nu^2 - i\eta\nu - \alpha}Be^{i\nu r} \\
&\quad+ \frac{\tilde{\nu}^2 - i(\Gamma - \eta)\tilde{\nu} - (\alpha + \mu)}{\tilde{\nu}^2 + i\eta\tilde{\nu} - \alpha}Ce^{-i\tilde{\nu} r} + \frac{\tilde{\nu}^2 + i(\Gamma - \eta)\tilde{\nu} - (\alpha + \mu)}{\tilde{\nu}^2 - i\eta\tilde{\nu} - \alpha}De^{i\tilde{\nu} r}.
\end{split}
\end{equation}
Since there are only two boundary conditions and the constraint of normalization, there is a free parameter in Eq. \eqref{eq:Fin-full}. To obtain an explicit expression we initially set $D = 0$. In this case we find that the eigenstate is
\begin{equation}
F_{E,\nu}^{(D=0)}(r) = \frac{1}{\xi(E,\nu)}\left[u_E(\tilde{\nu},-\nu)h_E(\nu)e^{-i\nu r} + u_E(\nu,\tilde{\nu})h_E(\nu)e^{i\nu r} - u_E(\nu,-\nu)h_E(\tilde{\nu})e^{-i\tilde{\nu} r}\right], \label{eq:F_D=0}
\end{equation}
where
\begin{align}
\begin{split}
u_E(\nu,\nu') &= \left[\Gamma(\Gamma\alpha+\eta\mu) + \mu^2\right](\nu-\nu')\big[2(2\eta-\Gamma)\nu\nu'(\nu+\nu') \\
&\quad + 2i\mu(\nu^2 + \nu\nu' + {\nu'}^2) - i\Gamma(2\eta-\Gamma)\nu\nu' + \Gamma\mu(\nu+\nu') - i\mu(2\alpha+\mu)\big],
\end{split}\\
h_E(\nu) &= \nu^4 + (\eta^2 - 2\alpha)\nu^2 + \alpha^2,
\end{align}
and
\begin{equation}
\begin{split}
\xi(E,\nu) &= 2\pi\left|h_E(\nu)\right| \Big(|u_E(\tilde{\nu},-\nu)|^2 + |u_E(\nu,\tilde{\nu})|^2 + 4\left[\Gamma(\Gamma\alpha + \eta\mu) + \mu^2\right]^2\textup{Re}[\tilde{\nu}] \\
&\quad \times \left|\nu\left[\left[\Gamma(2\eta-\Gamma)+2\mu\right]\tilde{\nu}^2 - \mu(2\alpha+\mu)\right]\left[\left[\Gamma(2\eta-\Gamma)+2\mu\right]\nu^2 - \mu(2\alpha+\mu)\right]\right|\Big)^{1/2}
\end{split}
\end{equation}
is a normalization factor. By setting $C = 0$ instead, we get
\begin{equation}
F_{E,\nu}^{(C=0)}(r) = \frac{1}{\xi(E,\nu)}\left[u_E(-\tilde{\nu},-\nu)h_E(\nu)e^{-i\nu r} + u_E(\nu,-\tilde{\nu})h_E(\nu)e^{i\nu r} - u_E(\nu,-\nu)h_E(\tilde{\nu})e^{i\tilde{\nu} r}\right]. \label{eq:F_C=0}
\end{equation}
In order to find all eigenstates, we go through the different scenarios for $\nu$ and $\tilde{\nu}$. First, we consider the case where $\nu$ is real. If $\tilde{\nu}$ is not real, we have $\operatorname{Im}[\tilde{\nu}] > 0$ by our choice of phase ($0 \leq \operatorname{arg}[\tilde{\nu}] < \pi$), and because $\tilde{\nu}^2$ is real, we must have $\operatorname{Re}[\tilde{\nu}] = 0$ in this case. Then Eq. \eqref{eq:F_D=0} gives the only physical eigenstate since Eq. \eqref{eq:F_C=0} diverges when $r \to -\infty$. These states are the continuum eigenstates with wave functions that are superpositions of plane waves and a localized term, mentioned in the main text, that emerge due to the coupling to the third level $| b \rangle$.
If instead both $\nu$ and $\tilde{\nu}$ are real, Eqs. \eqref{eq:F_D=0} and \eqref{eq:F_C=0} describe two distinct valid states, which are degenerate both in energy and in the eigenvalue of the $S$-matrix, and therefore need not be orthogonal. Another point of view is to note that when both $\nu$ and $\tilde{\nu}$ are real, these quantum numbers can be interchanged in Eq. \eqref{eq:F_D=0}, which gives rise to a new eigenstate $F_{E,\tilde{\nu}}^{(D=0)}(r)$ with the same eigenvalue as $F_{E,\nu}^{(D=0)}(r)$. Instead we can use
\begin{equation}
F_{E,\nu}^{(\textup{I})}(r) = \frac{1-\zeta}{2}F_{E,\nu}^{(D=0)}(r) + \frac{1+\zeta}{2}\left[\theta(\nu^2 - \tilde{\nu}^2)F_{E,\nu}^{(D=0)}(r) + \theta(\tilde{\nu}^2 - \nu^2)F_{E,\nu}^{(C=0)}(r)\right], \label{eq:F_Enu_I}
\end{equation}
or
\begin{equation}
F_{E,\nu}^{(\textup{II})}(r) = \frac{1-\zeta}{2}F_{E,\nu}^{(C=0)}(r) + \frac{1+\zeta}{2}\left[\theta(\nu^2 - \tilde{\nu}^2)F_{E,\nu}^{(C=0)}(r) + \theta(\tilde{\nu}^2 - \nu^2)F_{E,\nu}^{(D=0)}(r)\right], \label{eq:F_Enu_II}
\end{equation}
when $\tilde{\nu}$ is real, and where
\begin{equation}
\zeta = \textup{sgn}\left[\frac{\left[\Gamma(2\eta-\Gamma)+2\mu\right]\tilde{\nu}^2 - \mu(2\alpha+\mu)}{\left[\Gamma(2\eta-\Gamma)+2\mu\right]\nu^2 - \mu(2\alpha+\mu)}\right],
\end{equation}
Here $\textup{sgn}$ denotes the sign function. The states represented by $F_{E,\nu}^{(\textup{I})}(r)$ and $F_{E,\tilde{\nu}}^{(\textup{I})}(r)$ are orthogonal, and similarly the states represented by $F_{E,\nu}^{(\textup{II})}(r)$ and $F_{E,\tilde{\nu}}^{(\textup{II})}(r)$ are orthogonal, so these combinations restore orthogonality. The states \eqref{eq:F_Enu_I} and \eqref{eq:F_Enu_II} are the continuum states consisting only of plane waves, mentioned in the main text.
Note that Eq. \eqref{eq:F_Enu_I} can also be used as the eigenstate for complex values of $\tilde{\nu}$, since we automatically have $\tilde{\nu}^2 < \nu^2$ in that case. The two continuum states shown in Fig. 1(c) of the main text have the form of Eq. \eqref{eq:F_Enu_I} with different real values of $\nu$, such that $\tilde{\nu}$ is real for the eigenstates consisting only of plane waves, and $\tilde{\nu}$ is complex for the eigenstates consisting of plane waves and a localized term.
All the states found so far are continuum states, i.e. they exist for a continuous set of quantum numbers $E$ and $\nu$, and their real space representations do not vanish for $r \to \pm\infty$. It is, however, known that for a two-level system the continuum states do not constitute a complete set; a bound state, whose real space representation vanishes for $r \to \pm\infty$, is needed at every energy \cite{sf07,sf07-prl}. To search for such an eigenstate in the present setting, we use that it must still have the form \eqref{eq:Fin-full}, but with complex $\nu$ and $\tilde{\nu}$. With $0 < \arg[\nu],\arg[\tilde{\nu}] < \pi$ we find the bound state numerically by requiring that $B = D = 0$ (so the state can be normalized) in Eq. \eqref{eq:Fin-full}. We denote the corresponding value of $\nu$ by $\nu_b$. The eigenstate is in this case given by
\begin{equation}
F_{E,\nu_b}(r) = \frac{1}{\xi_b(E,\nu_b)}\left[u_E(\tilde{\nu}_b,-\nu_b)h_E(\nu_b)e^{-i\nu_b r} - u_E(\nu_b,-\nu_b)h_E(\tilde{\nu}_b)e^{-i\tilde{\nu}_b r}\right], \label{eq:F_bound}
\end{equation}
where
\begin{equation}
\xi_b(E,\nu_b) = 2\sqrt{\pi}\sqrt{\frac{\left|u_E(\tilde{\nu}_b,-\nu_b)h_E(\nu_b)\right|^2}{2\operatorname{Im}[\nu_b]} + \frac{\left|u_E(\nu_b,-\nu_b)h_E(\tilde{\nu}_b)\right|^2}{2\operatorname{Im}[\tilde{\nu}_b]} - 2\operatorname{Im}\left[\frac{u_E(\tilde{\nu}_b,-\nu_b)^*u_E(\nu_b,-\nu_b)h_E(\nu_b)^*h_E(\tilde{\nu}_b)}{\nu_b^*-\tilde{\nu}_b}\right] }.
\end{equation}
An example of such a bound state is shown in Fig. 1(c) of the main text.
To find the output after $N$ atoms for a given incident state $\vert \textup{in} \rangle$, we decompose the incident state into the eigenstates of the $S$-matrix, and then apply the $S$-matrix $N$ times, which just amounts to multiplying by the $N$th power of the eigenvalue in the decomposition \cite{Mahmoodian_2018}. In this way we find the real space representation [see also Eq. (3) in the main text]
\begin{equation}
\psi_{\textup{out}}(x,x') = \langle x,x' \vert \textup{out} \rangle = \int dE' \left\{ \lambda_{E'}(\nu_b)^N\langle x,x' \vert E',\nu_b\rangle \langle E',\nu_b \vert \textup{in}\rangle + \int_{0}^{\infty} d\nu \lambda_{E'}(\nu)^N\langle x,x' \vert E',\nu\rangle \langle E',\nu \vert \textup{in}\rangle \right\}, \label{eq:decompose-in}
\end{equation}
where $\vert E',\nu\rangle$ are the eigenstates of the $S$-matrix with real space representations given by Eq. \eqref{eq:eig-realspace}. Note that the integration over $\nu$ in Eq. \eqref{eq:decompose-in} only ranges from $0$ to $\infty$ because changing the sign of $\nu$ gives the same eigenstate.
We take care of the $E'$-integration in Eq. \eqref{eq:decompose-in} using $\langle E',\nu | \textup{in}\rangle \propto \delta(E'-E)$, where $E$ is the total energy of the incoming photons, and handle the $\nu$-integration numerically. We consider only the case where the two incident photons have identical energies $\omega = E/2$, such that the input is $\langle x,x' \vert \textup{in} \rangle = e^{iE r_c}\sqrt{2}/(2\pi)$. The final expression for the wave function of the output is
\begin{equation}
\begin{split}
\psi_{\textup{out}}(x,x') &= 2\sqrt{2}ie^{iEr_c} \bigg\{\frac{\lambda_E(\nu_b)^N}{\xi_b(E,\nu_b)}\left[\frac{u_E(\nu_b,-\nu_b)h_E(\tilde{\nu}_b)}{\tilde{\nu}_b} - \frac{u_E(\tilde{\nu}_b,-\nu_b)h_E(\nu_b)}{\nu_b}\right]^* F_{E,\nu_b}(r) \\
&\quad + \int_{\mathcal{D}_{\tilde{\nu}^2 < 0}} d\nu \frac{\lambda_E(\nu)^N}{\xi(E,\nu)}\left[\frac{u_E(\nu,\tilde{\nu}) - u_E(\tilde{\nu},-\nu)}{\nu}h_E(\nu) + \frac{u_E(\nu,-\nu)}{\tilde{\nu}}h_E(\tilde{\nu})\right]^*F_{E,\nu}^{(D = 0)}(r) \\
&\quad + \frac{1}{2}\int_{\mathcal{D}_{\tilde{\nu}^2 \geq 0}} d\nu \frac{\lambda_E(\nu)^N}{\xi(E,\nu)}\bigg\{\left[\frac{u_E(\nu,\tilde{\nu}) - u_E(\tilde{\nu},-\nu)}{\nu}h_E(\nu) + \frac{u_E(\nu,-\nu)}{\tilde{\nu}}h_E(\tilde{\nu})\right]^*F_{E,\nu}^{(D = 0)}(r) \\
&\quad + \left[\frac{u_E(\nu,-\tilde{\nu}) - u_E(-\tilde{\nu},-\nu)}{\nu}h_E(\nu) - \frac{u_E(\nu,-\nu)}{\tilde{\nu}}h_E(\tilde{\nu})\right]^*F_{E,\nu}^{(C = 0)}(r)\bigg\} \bigg\}, \label{eq:out-final}
\end{split}
\end{equation}
where $\mathcal{D}_{\tilde{\nu}^2 < 0}$ is the set of real-valued $\nu$ such that $\tilde{\nu}^2 < 0$, and $\mathcal{D}_{\tilde{\nu}^2 \geq 0}$ is the set of real-valued $\nu$ such that $\tilde{\nu}^2 \geq 0$. In writing Eq. \eqref{eq:out-final} we partially used Eq. \eqref{eq:F_Enu_I} and partially Eq. \eqref{eq:F_Enu_II} for the decomposition of the input. In Figs. 2(b)-(d) of the main text we plot the wave function of the output, which is given by Eq. \eqref{eq:out-final}. The in-state in Fig. 2(a) of the main text is found by setting $N = 0$ in Eq. \eqref{eq:out-final}, which means that Fig. 2(a) of the main text also serves as a numerical completeness check of the eigenstates, since we get the expected result for the incident wave function.
We note that $|\psi_{\textup{out}}(x,x')|^2$ yields the second-order correlation function of the transmitted photons up to normalization,
\begin{equation}
g^{(2)}(r) = 2\pi^2|\psi_{\textup{out}}(x,x')|^2
\end{equation}
which is discussed in the main text and shown in Fig.~1 therein.
In summary, we analyzed three different classes of $S$-matrix eigenstates: continuum eigenstates that consist only of plane waves, continuum eigenstates consisting of plane waves and a localized term, and a bound state. The general expression for these states is given by Eq. \eqref{eq:eig-realspace}, where $F_{E,\nu}(r)$ is given by Eqs. \eqref{eq:F_Enu_I} and \eqref{eq:F_Enu_II} for the continuum states and Eq. \eqref{eq:F_bound} for the bound state. These expressions are used to plot the eigenstates in Fig. 1(c) of the main text.
\section{Photon propagation in the presence of dissipation}
The addition of dissipative photon loss into non-guided modes ($\gamma > 0$) renders the $S$-matrix non-unitary. It is therefore no longer guaranteed that its eigenstates with different eigenvalues are orthogonal, which poses problems for the approach based on eigenstate decomposition.
In this section, we therefore describe another method that treats the chain as a cascaded quantum system, to derive the effect of $N$ emitters from the underlying single-emitter scattering physics. To this end, we find the eigenstates of the full single-atom Hamiltonian \eqref{eq:Hsingle} to determine the outgoing wave function after the scattering off a single atom for a given incoming boundary condition \cite{zgb12,zgb10}. The outgoing wave function after the $j$th atom can then be used as the input for atom $j+1$ to find the output after $j+1$ atoms. This yields a recursion relation, which can be used to find the output after all $N$ atoms and thereby the two-photon correlation function \cite{zgb12,zgb10}.
We denote the positions of the atoms by $x_j$ for $j = 1,2,\dots,N$, and we, moreover, define $x_0 = -\infty$ and $x_{N+1} = +\infty$. The wave function of the eigenstate of the Hamiltonian is given by Eq. \eqref{eq:general-form}, where $F(r)$ is continuous when $x,x' \neq x_j$ for all $j = 1,2,\dots,N$. So $F(r)$ can be written as
\begin{equation}
F(r) = F_{\ell,j}(r), \quad x_{\ell} < x < x_{\ell+1} \textup{ and } x_j < x' < x_{j+1},
\end{equation}
so $F_{\ell,j}(r)$ [together with Eq. \eqref{eq:general-form}] gives the wave function when one photon has passed $\ell$ atoms, and the other has passed $j$ atoms. Still we only consider the case where $r = x - x' < 0$, which implies that $\ell \leq j$ in what follows. We aim to find $F_{N,N}(r)$, which gives the output of the entire chain through
\begin{equation}
\psi_{\textup{out}}(x,x') = e^{iEr_c}F_{N,N}(r), \label{eq:output-general}
\end{equation}
where $E$ is the energy.
As before, we restrict the input to two photons with identical energies $\omega = E/2$, which formally means that $F_{0,0}(r) = T_0$, where $T_0$ is a constant that defines the normalization. It is assumed that the output after both photons have passed $j$ atoms has the form
\begin{equation}
F_{j,j}(r) = T_j + \sum_{n=0}^{j-1}\left[A_{j,n}e^{\kappa_1 r} + B_{j,n}e^{\kappa_2 r}\right]\frac{r^n}{n!}, \label{eq:Fjj}
\end{equation}
where $T_j$, $A_{j,n}$, and $B_{j,n}$ are constants to be determined, and $\kappa_1$, $\kappa_2$ are the solutions of
\begin{equation}
\kappa^2 - \eta\kappa + \alpha = 0. \label{eq:kappa}
\end{equation}
It can be shown that the real parts of $\kappa_1$ and $\kappa_2$ are positive, so $F_{j,j}(r)$ converges for $r\to -\infty$. The assumption that $F_{j,j}(r)$ is of the form \eqref{eq:Fjj} will be justified by showing that $F_{j+1,j+1}(r)$ is of the form \eqref{eq:Fjj} given that $F_{j,j}(r)$ is. Note that $F_{0,0}(r)$ is of the form \eqref{eq:Fjj}. As mentioned above, we aim to find $F_{N,N}(r)$ because this gives the output after all $N$ atoms. We also note that if we set $T_0 = 1$, then $T_N$ is the transmission coefficient for two uncorrelated (individual) photons, since $F_{N,N}(r\to -\infty) = T_N$. The two-photon correlation function of the output (still with $T_0 = 1$) is $g^{(2)}(r) = |F_{N,N}(r)|^2/|T_N|^2$, so finding $F_{N,N}(r)$ directly gives the two-photon correlation function, which we plot in Fig. 3(a) of the main text.
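The positivity of the real parts can be illustrated numerically. In the sketch below, $\eta$ and $\alpha$ are the complex parameters defined earlier in this supplement; the numerical values used here are placeholders chosen only for illustration, not the actual expressions:

```python
import numpy as np

# Roots kappa_1, kappa_2 of kappa^2 - eta*kappa + alpha = 0 [Eq. (eq:kappa)].
# eta and alpha are complex parameters defined earlier in the supplement;
# the numerical values below are placeholders for illustration only.
eta = complex(2.0, -0.5)
alpha = complex(1.5, 0.3)

kappa1, kappa2 = np.roots([1.0, -eta, alpha])

# Both roots satisfy the quadratic ...
for k in (kappa1, kappa2):
    assert abs(k**2 - eta * k + alpha) < 1e-10

# ... and here both real parts are positive, so the terms e^{kappa r}
# in F_{j,j}(r) decay for r -> -infty.
print(kappa1.real > 0 and kappa2.real > 0)
```

For the physical parameters of the main text the same check applies root by root.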
Now that it has been established that $F_{N,N}(r)$ contains the information we search for, we derive a recursive relation for $F_{j,j}(r)$. To this end, we use Eqs. \eqref{eq:F1} and \eqref{eq:F2}, which now read
\begin{align}
\left(\partial_r^2 + \eta\partial_r + \alpha\right)F_{j,j+1}(r) &= \left(\partial_r^2 + \eta\partial_r + \alpha\right)F_{j,j}(r) + \left(\mu - \Gamma\partial_r\right)F_{j,j}(r), \label{eq:pass-first} \\
\left(\partial_r^2 - \eta\partial_r + \alpha\right)F_{j+1,j+1}(r) &= \left(\partial_r^2 - \eta\partial_r + \alpha\right)F_{j,j+1}(r) + \left(\mu + \Gamma\partial_r\right)F_{j,j+1}(r). \label{eq:pass-second}
\end{align}
To find $F_{j,j+1}$ from $F_{j,j}$ we start by substituting Eq. \eqref{eq:Fjj} into Eq. \eqref{eq:pass-first} using Eq. \eqref{eq:kappa}
\begin{align}
\MoveEqLeft
\left(\partial_r^2 + \eta\partial_r + \alpha\right)F_{j,j+1}(r) \nonumber \\
&= (\alpha+\mu)T_j + \sum_{n=0}^{j-1}\left[ ((2\eta-\Gamma)\kappa_1 + \mu)A_{j,n} e^{\kappa_1 r} + ((2\eta-\Gamma)\kappa_2 + \mu)B_{j,n} e^{\kappa_2 r} \right]\frac{r^n}{n!} \nonumber \\
&\quad + \sum_{n=1}^{j-1}\left[(2\kappa_1 + \eta - \Gamma)A_{j,n}e^{\kappa_1 r} + (2\kappa_2 + \eta - \Gamma)B_{j,n}e^{\kappa_2 r}\right]\frac{r^{n-1}}{(n-1)!} + \sum_{n=2}^{j-1}\left[A_{j,n}e^{\kappa_1 r} + B_{j,n}e^{\kappa_2 r}\right]\frac{r^{n-2}}{(n-2)!}. \label{eq:pass-first-use-ansatz}
\end{align}
To find a solution of this equation we use the ansatz
\begin{equation}
F_{j,j+1}(r) = \bar{T}_j + \sum_{n=0}^{j-1}\left[\bar{A}_{j,n}e^{\kappa_1 r} + \bar{B}_{j,n}e^{\kappa_2 r}\right]\frac{r^n}{n!}. \label{eq:Fjj+1}
\end{equation}
By substituting this ansatz into Eq. \eqref{eq:pass-first-use-ansatz} and using Eq. \eqref{eq:kappa} we obtain
\begin{equation}
\bar{T}_j = \frac{\alpha + \mu}{\alpha}T_j, \label{eq:cjj+1}
\end{equation}
and
\begin{subequations}
\label{eq:ABbar}
\begin{align}
2\eta\kappa_1\bar{A}_{j,j-1} &= [(2\eta-\Gamma)\kappa_1 + \mu]A_{j,j-1}, \\
2\eta\kappa_1\bar{A}_{j,j-2} + (2\kappa_1 + \eta)\bar{A}_{j,j-1} &= [(2\eta-\Gamma)\kappa_1 + \mu]A_{j,j-2} + (2\kappa_1 + \eta - \Gamma)A_{j,j-1}, \\
2\eta\kappa_1\bar{A}_{j,n} + (2\kappa_1 + \eta)\bar{A}_{j,n+1} + \bar{A}_{j,n+2} &= [(2\eta-\Gamma)\kappa_1 + \mu]A_{j,n} + (2\kappa_1 + \eta - \Gamma)A_{j,n+1} + A_{j,n+2}, \quad 0\leq n \leq j-3, \\
%
2\eta\kappa_2\bar{B}_{j,j-1} &= [(2\eta-\Gamma)\kappa_2 + \mu]B_{j,j-1}, \\
2\eta\kappa_2\bar{B}_{j,j-2} + (2\kappa_2 + \eta)\bar{B}_{j,j-1} &= [(2\eta-\Gamma)\kappa_2 + \mu]B_{j,j-2} + (2\kappa_2 + \eta - \Gamma)B_{j,j-1}, \\
2\eta\kappa_2\bar{B}_{j,n} + (2\kappa_2 + \eta)\bar{B}_{j,n+1} + \bar{B}_{j,n+2} &= [(2\eta-\Gamma)\kappa_2 + \mu]B_{j,n} + (2\kappa_2 + \eta - \Gamma)B_{j,n+1} + B_{j,n+2},\quad 0\leq n \leq j-3,
\end{align}
\end{subequations}
which specify all the constants in Eq. \eqref{eq:Fjj+1}. A general solution of Eq. \eqref{eq:pass-first-use-ansatz} can be found by adding a solution of the homogeneous system
\begin{equation}
\left(\partial_r^2 + \eta\partial_r + \alpha\right)f_{j,j}(r) = 0 \label{eq:pass-first-homogeneous}
\end{equation}
to Eq. \eqref{eq:Fjj+1}, but as all non-vanishing solutions of Eq. \eqref{eq:pass-first-homogeneous} diverge for $r \to -\infty$, we find that Eq. \eqref{eq:Fjj+1} is the physical solution.
To find $F_{j+1,j+1}(r)$ we now substitute Eq. \eqref{eq:Fjj+1} into Eq. \eqref{eq:pass-second} using Eq. \eqref{eq:kappa}
\begin{align}
\MoveEqLeft
\left(\partial_r^2 - \eta\partial_r + \alpha\right)F_{j+1,j+1}(r) \nonumber\\
&= (\alpha + \mu)\bar{T}_j + \sum_{n=0}^{j-1}\left[(\mu+\Gamma\kappa_1)\bar{A}_{j,n}e^{\kappa_1 r} + (\mu + \Gamma\kappa_2)\bar{B}_{j,n}e^{\kappa_2 r} \right]\frac{r^n}{n!} \nonumber\\
&\quad+ \sum_{n=1}^{j-1}\left[(2\kappa_1 - \eta + \Gamma)\bar{A}_{j,n}e^{\kappa_1 r} + (2\kappa_2-\eta+\Gamma)\bar{B}_{j,n}e^{\kappa_2 r} \right]\frac{r^{n-1}}{(n-1)!} + \sum_{n=2}^{j-1}\left[\bar{A}_{j,n}e^{\kappa_1 r} + \bar{B}_{j,n}e^{\kappa_2 r} \right]\frac{r^{n-2}}{(n-2)!}.
\end{align}
We solve this differential equation using the ansatz
\begin{equation}
F_{j+1,j+1}(r) = T_{j+1} + \sum_{n=0}^{j}\left[A_{j+1,n}e^{\kappa_1 r} + B_{j+1,n}e^{\kappa_2 r}\right]\frac{r^n}{n!} \label{eq:Fj+1j+1}
\end{equation}
and Eqs. \eqref{eq:kappa} and \eqref{eq:cjj+1}. We find
\begin{equation}
T_{j+1} = \frac{(\alpha + \mu)^2}{\alpha^2}T_j \label{eq:cj+1j+1},
\end{equation}
and
\begin{subequations}
\label{eq:AB}
\begin{align}
(2\kappa_1 - \eta)A_{j+1,j} &= (\mu + \Gamma\kappa_1)\bar{A}_{j,j-1}, \\
(2\kappa_1 - \eta)A_{j+1,j-1} + A_{j+1,j} &= (\mu + \Gamma\kappa_1)\bar{A}_{j,j-2} + (2\kappa_1 - \eta + \Gamma)\bar{A}_{j,j-1}, \\
(2\kappa_1 - \eta)A_{j+1,n} + A_{j+1,n+1} &= (\mu + \Gamma\kappa_1)\bar{A}_{j,n-1} + (2\kappa_1 - \eta + \Gamma)\bar{A}_{j,n} + \bar{A}_{j,n+1}, \quad 1\leq n \leq j-2, \\
(2\kappa_2 - \eta)B_{j+1,j} &= (\mu + \Gamma\kappa_2)\bar{B}_{j,j-1}, \\
(2\kappa_2 - \eta)B_{j+1,j-1} + B_{j+1,j} &= (\mu + \Gamma\kappa_2)\bar{B}_{j,j-2} + (2\kappa_2 - \eta + \Gamma)\bar{B}_{j,j-1}, \\
(2\kappa_2 - \eta)B_{j+1,n} + B_{j+1,n+1} &= (\mu + \Gamma\kappa_2)\bar{B}_{j,n-1} + (2\kappa_2 - \eta + \Gamma)\bar{B}_{j,n} + \bar{B}_{j,n+1}, \quad 1\leq n \leq j-2.
\end{align}
\end{subequations}
These equations specify all the constants in Eq. \eqref{eq:Fj+1j+1} except $A_{j+1,0}$ and $B_{j+1,0}$. This is because $f_{j+1,j+1}(r) = A_{j+1,0}e^{\kappa_1 r} + B_{j+1,0}e^{\kappa_2 r}$ is [by Eq. \eqref{eq:kappa}] the full solution of the homogeneous system
\begin{equation}
\left(\partial_r^2 - \eta\partial_r + \alpha\right)f_{j+1,j+1}(r) = 0.
\end{equation}
Hence the constants $A_{j+1,0}$ and $B_{j+1,0}$ must be determined by the boundary conditions, i.e. by Eqs. \eqref{eq:boundary-F}, which yield
\begin{subequations}
\label{eq:ABj+1,0}
\begin{align}
A_{j+1,0} + B_{j+1,0} &= -\frac{\mu^2}{\alpha^2}T_j + 2(\bar{A}_{j,0} + \bar{B}_{j,0}) - A_{j,0} - B_{j,0}, \\
\kappa_1 A_{j+1,0} + \kappa_2 B_{j+1,0} &= A_{j,1} + B_{j,1} - A_{j+1,1} - B_{j+1,1} + (\kappa_1-\Gamma)A_{j,0} + (\kappa_2 - \Gamma)B_{j,0} + \Gamma(\bar{A}_{j,0} + \bar{B}_{j,0}) + \Gamma\frac{\mu}{\alpha}T_j.
\end{align}
\end{subequations}
Since $F_{j+1,j+1}(r)$, given by Eq. \eqref{eq:Fj+1j+1}, is indeed of the form \eqref{eq:Fjj}, Eqs. \eqref{eq:Fjj}, \eqref{eq:ABbar}, \eqref{eq:cj+1j+1}, \eqref{eq:AB}, and \eqref{eq:ABj+1,0} give a recursive algorithm for $F_{j,j}(r)$. Applying this algorithm successively gives $F_{N,N}(r)$, which gives the output after the full chain.
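The recursion can be transcribed directly into numerical code. The sketch below implements Eqs. \eqref{eq:Fjj}, \eqref{eq:ABbar}, \eqref{eq:cj+1j+1}, \eqref{eq:AB}, and \eqref{eq:ABj+1,0} for $T_0=1$; the parameters $\eta$, $\alpha$, $\mu$, and $\Gamma$ are the complex quantities defined earlier in this supplement, and the numerical values used here are placeholders chosen only for illustration:

```python
import numpy as np
from math import factorial

# Placeholder parameter values; the actual eta, alpha, mu, Gamma are
# defined earlier in the supplement.
eta, alpha, mu, Gamma = 2.0 - 0.5j, 1.5 + 0.3j, -0.4 + 0.2j, 1.0
kappa1, kappa2 = np.roots([1.0, -eta, alpha])  # roots of Eq. (eq:kappa)

def get(seq, n):
    """seq[n] if n is a valid index, else 0 (absent coefficients vanish)."""
    return seq[n] if 0 <= n < len(seq) else 0.0j

def step(T, A, B):
    """One recursion step: coefficients after j atoms -> after j+1 atoms."""
    j = len(A)
    # Intermediate coefficients [Eq. (ABbar)], solved from n = j-1 downwards.
    def bars(k, C):
        Cb = [0.0j] * j
        for n in range(j - 1, -1, -1):
            rhs = (((2*eta - Gamma)*k + mu) * C[n]
                   + (2*k + eta - Gamma) * get(C, n + 1) + get(C, n + 2))
            Cb[n] = (rhs - (2*k + eta) * get(Cb, n + 1) - get(Cb, n + 2)) / (2*eta*k)
        return Cb
    Ab, Bb = bars(kappa1, A), bars(kappa2, B)
    # Coefficients for n >= 1 after atom j+1 [Eq. (AB)], solved from n = j downwards.
    def nxt(k, Cb):
        Cn = [0.0j] * (j + 1)
        for n in range(j, 0, -1):
            rhs = ((mu + Gamma*k) * get(Cb, n - 1)
                   + (2*k - eta + Gamma) * get(Cb, n) + get(Cb, n + 1))
            Cn[n] = (rhs - get(Cn, n + 1)) / (2*k - eta)
        return Cn
    An, Bn = nxt(kappa1, Ab), nxt(kappa2, Bb)
    # n = 0 coefficients from the boundary conditions [Eq. (ABj+1,0)].
    S1 = -mu**2/alpha**2 * T + 2*(get(Ab, 0) + get(Bb, 0)) - get(A, 0) - get(B, 0)
    S2 = (get(A, 1) + get(B, 1) - get(An, 1) - get(Bn, 1)
          + (kappa1 - Gamma)*get(A, 0) + (kappa2 - Gamma)*get(B, 0)
          + Gamma*(get(Ab, 0) + get(Bb, 0)) + Gamma*mu/alpha * T)
    B0 = (S2 - kappa1*S1) / (kappa2 - kappa1)
    An[0], Bn[0] = S1 - B0, B0
    return (alpha + mu)**2 / alpha**2 * T, An, Bn  # Eq. (cj+1j+1)

N = 3
T, A, B = 1.0 + 0.0j, [], []  # F_{0,0}(r) = T_0 = 1
for _ in range(N):
    T, A, B = step(T, A, B)

def F(r):
    """F_{N,N}(r) for r <= 0 [Eq. (eq:Fjj)]."""
    return T + sum((A[n]*np.exp(kappa1*r) + B[n]*np.exp(kappa2*r)) * r**n / factorial(n)
                   for n in range(N))

print(abs(F(0.0))**2 / abs(T)**2)  # g^(2)(0) after N atoms
```

With the actual parameters of the main text, the same routine can be used to evaluate $g^{(2)}(r)$ as in Fig. 3(a).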
As noted above, $T_N$ is the transmission coefficient for two uncorrelated photons if we set $T_0 = 1$. The probability that two uncorrelated photons will be transmitted through the chain is then given by $|T_N|^2$, and by Eq. \eqref{eq:cj+1j+1} we have
\begin{equation}
|T_N|^2 = \left|\frac{\alpha + \mu}{\alpha}\right|^{4N}. \label{eq:TNsq}
\end{equation}
Inserting the definitions of $\alpha$ and $\mu$ [Eqs. \eqref{eq:alpha} and \eqref{eq:mu}] into Eq. \eqref{eq:TNsq} gives Eq. (4) of the main text. The corresponding two-photon correlation function is given by $g^{(2)}(r) = |F_{N,N}(r)|^2/|T_N|^2$, which we show in Fig. 3(a) of the main text.
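As a minimal numerical sketch of Eq. \eqref{eq:TNsq}, the uncorrelated two-photon transmission decays as a simple power law in the atom number; the values of $\alpha$ and $\mu$ below are placeholders, not the expressions of Eqs. \eqref{eq:alpha} and \eqref{eq:mu}:

```python
# Uncorrelated two-photon transmission |T_N|^2 = |(alpha + mu)/alpha|^(4N),
# cf. Eq. (eq:TNsq), assuming T_0 = 1. alpha and mu are the complex parameters
# of Eqs. (eq:alpha) and (eq:mu); placeholder values are used here.
alpha = 1.0 - 0.3j
mu = -0.2 + 0.1j

def two_photon_transmission(N):
    return abs((alpha + mu) / alpha) ** (4 * N)

for N in (1, 10, 100):
    print(N, two_photon_transmission(N))
```

For these placeholder values $|(\alpha+\mu)/\alpha|<1$, so the transmission decreases monotonically with $N$.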
\section{Optimal atom number}
In the main text we describe that the optimal atom number $N_{\textup{opt}}$ needed to obtain $g^{(2)}(0) \leq 0.1$ increases as a function of the Rabi frequency $\Omega$ in the regime of large $\Omega$. This can be seen in Fig. \ref{fig:Nmin}. The values of $N_{\textup{opt}}$ shown in Fig. \ref{fig:Nmin} are the values underlying the transmission displayed in Fig. 3(c) of the main text.
\begin{figure}
\includegraphics{WNmin_large}
\caption{Minimal atom number $N_{\textup{opt}}$ needed to obtain $g^{(2)}(0) \leq 0.1$ as a function of the Rabi frequency $\Omega$ of the coupling field. The detunings are $\Delta = 0.4\Gamma$ and $\bar{\Delta} = -0.2\Gamma$. \label{fig:Nmin}}
\end{figure}
\twocolumngrid
\section{Introduction}
Einstein manifolds are related to many questions in geometry and physics, for instance: Riemannian functionals and their critical points, Yang-Mills theory, self-dual manifolds of dimension four, and exact solutions of the Einstein field equation. Many examples of Einstein manifolds, even Ricci-flat ones, are known today (see \cite{besse,Oneil,LeandroPina,Romildo}). However, finding new examples of Einstein metrics is not an easy task. A common tool for constructing new examples of Einstein spaces is to consider warped product metrics (see \cite{LeandroPina,Romildo}).
In \cite{besse}, the following question about Einstein warped products was posed:
\begin{eqnarray}\label{question}
\mbox{``Does there exist a compact Einstein warped}\nonumber\\
\mbox{product with nonconstant warping
function?"}
\end{eqnarray}
Inspired by problem (\ref{question}), several authors have explored this subject in an attempt to obtain examples of such manifolds. Kim and Kim \cite{kimkim} considered a compact Riemannian Einstein warped product with nonpositive scalar curvature, and proved that such a manifold is simply a product manifold. Moreover, in \cite{BRS,Case1}, question (\ref{question}) was considered without the compactness assumption. Barros, Batista and Ribeiro Jr \cite{BarrosBatistaRibeiro} also studied (\ref{question}) when the Einstein product manifold is complete and noncompact with nonpositive scalar curvature. It is worth noting that Case, Shu and Wei \cite{Case} proved that a shrinking quasi-Einstein metric has positive scalar curvature. Further, Sousa and Pina \cite{Romildo} classified some structures of Einstein warped products on semi-Riemannian manifolds; they considered, for instance, the case in which the base and the fiber are Ricci-flat semi-Riemannian manifolds. Furthermore, they provided a classification of noncompact Ricci-flat warped product semi-Riemannian manifolds with $1$-dimensional fiber in which the base is not necessarily Ricci-flat. More recently, Leandro and Pina \cite{LeandroPina} classified the static solutions of the vacuum Einstein field equation with cosmological constant not necessarily identically zero, when the base is invariant under the action of a translation group. In particular, they provided a necessary condition for the integrability of the system of differential equations given by the invariance of the base for the static metric.
When the base of an Einstein warped product is a compact Riemannian manifold and the fiber is a Ricci-flat semi-Riemannian manifold, we get a partial answer to (\ref{question}). Furthermore, when the base is not compact, we obtain new examples of Einstein warped products.
Now, we state our main results.
\begin{theorem}\label{teo1}
Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (not Ricci-flat), where $M$ is a compact Riemannian manifold and $N$ is a Ricci-flat semi-Riemannian manifold. Then $\widehat{M}$ is a product manifold, i.e., $f$ is trivial.
\end{theorem}
It is very natural to consider the next case (see Section \ref{SB}).
\begin{theorem}\label{teo2}
Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$; $\lambda\neq0$), where $M$ is a compact Riemannian manifold with scalar curvature $R\leq\lambda(n-m)$, and $N$ is a semi-Riemannian manifold. Then $\widehat{M}$ is a product manifold, i.e., $f$ is trivial. Moreover, if the equality holds, then $N$ is Ricci-flat.
\end{theorem}
Now, we consider the case in which the base is a noncompact Riemannian manifold. The next result was inspired mainly by Theorem \ref{teo2} and \cite{LeandroPina}, and gives the relationship between the Ricci tensor $\widehat{R}ic$ of the warped metric $\hat{g}$ and the Ricci tensor $Ric$ of the base metric $g$.
\begin{theorem}\label{teo3b}
Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$), where $M$ is a noncompact Riemannian manifold with constant scalar curvature $\lambda=\frac{R}{n-1}$, and $N$ is a semi-Riemannian manifold. Then $M$ is Ricci-flat if and only if the scalar curvature $R$ is zero.
\end{theorem}
Considering a conformal structure on the base of an Einstein warped product semi-Riemannian manifold, we have the next results. The following theorem is rather technical: we consider a base that is conformal to a pseudo-Euclidean space invariant under the action of an $(n-1)$-dimensional translation group, and a fiber that is a Ricci-flat space. To give the reader a clearer view of the next results, we recommend reading Section \ref{CFSI} first.
\begin{theorem}\label{teo3a}
Let $(\widehat{M}^{n+m}, \hat{g})=(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be a warped product semi-Riemannian manifold such that $N$ is a Ricci-flat semi-Riemannian manifold. Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space with coordinates $x =(x_{1},\ldots , x_{n})$ and $g_{ij} = \delta_{ij}\varepsilon_{i}$, $1\leq i,j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Consider smooth functions $\varphi(\xi)$ and $f(\xi)$, where $\xi=\displaystyle\sum_{k=1}^{n}\alpha_{k}x_{k}$, $\alpha_{k}\in\mathbb{R}$, and $\displaystyle\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\kappa$. Then $(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$, where $\bar{g}=\frac{1}{\varphi^{2}}g$, is an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$) such that $f$ and $\varphi$ are given by:
\begin{eqnarray}\label{system2}
\left\{
\begin{array}{lcc}
(n-2)\varphi\varphi''-m\left(G\varphi\right)'=mG^{2}\\\\
\varphi\varphi''-(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda \\\\
nG\varphi'-(G\varphi)'-mG^{2}=\kappa\lambda,
\end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray}\label{sera3}
f=\Theta\exp\left(\int\frac{G}{\varphi}d\xi\right),
\end{eqnarray}
where $\Theta\in\mathbb{R}_{+}\backslash\{0\}$, $G(\xi)=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$ and $\kappa=\pm1$. Here $\bar{R}$ is the scalar curvature for $\bar{g}$.
\end{theorem}
The next result is a consequence of Theorem \ref{teo3a}.
\begin{theorem}\label{teo3}
Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold, where $M$ is conformal to a pseudo-Euclidean space invariant under the action of a $(n-1)$-dimensional translation group with constant scalar curvature (possibly zero), and $N$ is a Ricci-flat semi-Riemannian manifold. Then, $\widehat{M}$ is either
\begin{enumerate}
\item[(1)] a Ricci-flat semi-Riemannian manifold $(\mathbb{R}^{n},g)\times_{f}(N^{m},\tilde{g})$, such that $(\mathbb{R}^{n},g)$ is the pseudo-Euclidean space with warping function $f(\xi)=\Theta\exp{(A\xi)}$, where $\Theta>0$ and $A\neq0$ are constants, or\\
\item[(2)] conformal to $(\mathbb{R}^{n},g)\times(N^{m},\tilde{g})$, where $(\mathbb{R}^{n},g)$ is the pseudo-Euclidean space, and the conformal factor is given in terms of $\varphi(\xi)=-G\xi+C$ by
\begin{eqnarray*}
\frac{1}{\varphi(\xi)^{2}}= \frac{1}{(-G\xi+C)^{2}};\quad\mbox{where}\quad G\neq0, C\in\mathbb{R}.
\end{eqnarray*}
\end{enumerate}
Moreover, the conformal factor is defined for $\xi\neq\frac{C}{G}$.
\end{theorem}
It is worth mentioning that the first item of Theorem \ref{teo3} was not considered in \cite{Romildo}.
From Theorem \ref{teo3} we can construct examples of complete Einstein warped product Riemannian manifolds.
\begin{corollary}\label{coro1}
Let $(N^{m},\tilde{g})$ be a complete Ricci-flat Riemannian manifold and $f(\xi)=\Theta\exp{(A\xi)}$, where $\Theta>0$ and $A\neq0$ are constants. Therefore, $(\mathbb{R}^{n},g_{can})\times_{f}(N^{m},\tilde{g})$ is a complete Ricci-flat warped product Riemannian manifold.
\end{corollary}
\begin{corollary}\label{coro2}
Let $(N^{m},\tilde{g})$ be a complete Ricci-flat Riemannian manifold and $f(x)= \frac{1}{x_{n}}$ with $x_{n}>0$. Therefore, $(\widehat{M},\hat{g})=(\mathbb{H}^{n},g_{can})\times_{f}(N^{m},\tilde{g})$ is a complete Riemannian Einstein warped product such that $$\widehat{R}ic=-(m+n-1)\hat{g}.$$
\end{corollary}
The paper is organized as follows. Section \ref{SB} is divided into two subsections, namely, {\it General formulas} and {\it A conformal structure for the warped product with Ricci-flat fiber}, in which the preliminary results are provided. Further, in Section \ref{provas}, we prove our main results.
\section{Preliminaries}\label{SB}
Consider two semi-Riemannian manifolds $(M^{n}, g)$ and $(N^{m},\tilde{g})$, with $n\geq3$ and $m\geq2$, and let $f:M^{n}\rightarrow(0,+\infty)$ be a smooth function. The warped product $(\widehat{M}^{n+m},\hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$ is the product manifold $M\times N$ endowed with the metric
\begin{eqnarray*}
\hat{g}=g+f^{2}\tilde{g}.
\end{eqnarray*}
From Corollary 43 in \cite{Oneil}, we have that (see also \cite{kimkim})
\begin{eqnarray}\label{test1}
\widehat{R}ic=\lambda\hat{g}\Longleftrightarrow\left\{
\begin{array}{lcc}
Ric-\frac{m}{f}\nabla^{2}f=\lambda g\\
\widetilde{R}ic=\mu\tilde{g}\\
f\Delta f+(m-1)|\nabla f|^{2}+\lambda f^{2}=\mu
\end{array}
,\right.
\end{eqnarray}
where $\lambda$ and $\mu$ are constants. This means that $\widehat{M}$ is an Einstein warped product if and only if (\ref{test1}) is satisfied. Here $\widehat{R}ic$, $\widetilde{R}ic$ and $Ric$ are, respectively, the Ricci tensors of $\hat{g}$, $\tilde{g}$ and $g$. Moreover, $\nabla^{2}f$, $\Delta f$ and $\nabla f$ are, respectively, the Hessian, the Laplacian and the gradient of $f$ with respect to $g$.
\subsection{General formulas}\label{GF}
We derive some useful formulae from system (\ref{test1}). Contracting the first equation of (\ref{test1}) we get
\begin{eqnarray}\label{01}
Rf^{2}-mf\Delta f=nf^{2}\lambda,
\end{eqnarray}
where $R$ is the scalar curvature for $g$. From the third equation in (\ref{test1}) we have
\begin{eqnarray}\label{02}
mf\Delta f+m(m-1)|\nabla f|^{2}+m\lambda f^{2}=m\mu.
\end{eqnarray}
Then, from (\ref{01}) and (\ref{02}) we obtain
\begin{eqnarray}\label{oi}
|\nabla f|^{2}+\left[\frac{\lambda(m-n)+R}{m(m-1)}\right]f^{2}=\frac{\mu}{(m-1)}.
\end{eqnarray}
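For completeness, here is the elimination step behind Eq. (\ref{oi}): from (\ref{01}) we have $mf\Delta f=(R-n\lambda)f^{2}$, and substituting this into (\ref{02}) gives

```latex
\begin{eqnarray*}
m(m-1)|\nabla f|^{2}+\left[R+\lambda(m-n)\right]f^{2}=m\mu,
\end{eqnarray*}
```

which, divided by $m(m-1)$, is exactly (\ref{oi}).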
When the base is a Riemannian manifold and the fiber is a Ricci-flat semi-Riemannian manifold (i.e., $\mu=0$), from (\ref{oi}) we obtain
\begin{eqnarray}\label{eqtop}
|\nabla f|^{2}+\left[\frac{\lambda(m-n)+R}{m(m-1)}\right]f^{2}=0.
\end{eqnarray}
Since $|\nabla f|^{2}\geq0$ and $f>0$, equation (\ref{eqtop}) implies that either
\begin{eqnarray*}
R\leq\lambda(n-m)
\end{eqnarray*}
or $f$ is trivial, i.e., $\widehat{M}$ is a product manifold.
\subsection{A conformal structure for the Warped product with Ricci-flat fiber}\label{CFSI}
In what follows, consider semi-Riemannian manifolds $(\mathbb{R}^{n}, g)$ and $(N^{m},\tilde{g})$, and let $f:\mathbb{R}^{n}\rightarrow(0,+\infty)$ be a smooth function. The warped product $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^{n},g)\times_{f}(N^{m},\tilde{g})$ is the product manifold $\mathbb{R}^{n}\times N$ endowed with the metric
\begin{eqnarray*}
\hat{g}=g+f^{2}\tilde{g}.
\end{eqnarray*}
Let $(\mathbb{R}^{n}, g)$, $n\geq3$, be the standard pseudo-Euclidean space with metric $g$ and coordinates $(x_{1},\ldots,x_{n})$ with $g_{ij}=\delta_{ij}\varepsilon_{i}$, $1\leq i,j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Consider $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$ a warped product, where $\varphi:\mathbb{R}^{n}\rightarrow\mathbb{R}\backslash\{0\}$ is a smooth function such that $\bar{g}=\frac{g}{\varphi^{2}}$. Furthermore, we consider that $\widehat{M}$ is an Einstein semi-Riemannian manifold, i.e., $$\widehat{R}ic=\lambda\hat{g},$$
where $\widehat{R}ic$ is the Ricci tensor for the metric $\hat{g}$ and $\lambda\in\mathbb{R}$.
We use invariants of the action of a translation group (or of a subgroup) to reduce a partial differential equation to a system of ordinary differential equations \cite{olver}. More precisely, we consider $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^n,\bar{g})\times_{f}(N^{m},\tilde{g})$ such that the base is invariant under the action of a $(n-1)$-dimensional translation group (\cite{BarbosaPinaKeti,olver,LeandroPina,Romildo,Tenenblat}). Let $(\mathbb{R}^{n}, g)$ be the standard pseudo-Euclidean space with metric $g$ and coordinates $(x_{1}, \cdots, x_{n})$, with $g_{ij} = \delta_{ij}\varepsilon_{i}$, $1\leq i, j\leq n$, where $\delta_{ij}$ is the Kronecker delta and
$\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Let $\xi=\displaystyle\sum_{i}\alpha_{i}x_{i}$, $\alpha_{i}\in\mathbb{R}$, be a basic invariant for a $(n-1)$-dimensional translation group, where $\alpha=\displaystyle\sum_{i}\alpha_{i}\frac{\partial}{\partial x_{i}}$ is a timelike, lightlike or spacelike vector, i.e., $\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}=-1,0,$ or $1$, respectively. Then we consider non-trivial differentiable functions $\varphi(\xi)$ and $f(\xi)$ such that
\begin{eqnarray*}
\varphi_{x_{i}}=\varphi'\alpha_{i}\quad\mbox{and}\quad f_{x_{i}}=f'\alpha_{i}.
\end{eqnarray*}
Moreover, it is well known (see \cite{BarbosaPinaKeti,LeandroPina,Romildo}) that if $\bar{g}=\frac{1}{\varphi^{2}}g$, then the Ricci tensor $\bar{R}ic$ for $\bar{g}$ is given by
$$\bar{R}ic=\frac{1}{\varphi^{2}}\{(n-2)\varphi\nabla^{2}\varphi + [\varphi\Delta\varphi - (n-1)|\nabla\varphi|^{2}]g\},$$ where $\nabla^{2}\varphi$, $\Delta\varphi$ and $\nabla\varphi$ are, respectively, the Hessian, the Laplacian and the gradient of $\varphi$ for the metric $g$.
Hence, the scalar curvature of $\bar{g}$ is given by
\begin{eqnarray}\label{scalarcurvature}
\bar{R}&=&\displaystyle\sum_{k=1}^{n}\varepsilon_{k}\varphi^{2}\left(\bar{R}ic\right)_{kk}=(n-1)(2\varphi\Delta\varphi - n|\nabla\varphi|^{2})\nonumber\\
&=&(n-1)[2\varphi\varphi''-n(\varphi')^{2}]\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}.
\end{eqnarray}
In what follows, we denote $\kappa=\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}$.
When the fiber $N$ is a Ricci-flat semi-Riemannian manifold, we already know from Theorem 1.2 in \cite{Romildo} that $\varphi(\xi)$ and $f(\xi)$ satisfy the following system of differential equations
\begin{eqnarray}\label{system}
\left\{
\begin{array}{lcc}
(n-2)f\varphi''-mf''\varphi-2m\varphi'f'=0;\\\\
f\varphi\varphi''-(n-1)f(\varphi')^{2}+m\varphi\varphi'f'=\kappa\lambda f;\\\\
(n-2)f\varphi\varphi'f'-(m-1)\varphi^{2}(f')^{2}-ff''\varphi^{2}=\kappa\lambda f^{2}.
\end{array}
\right.
\end{eqnarray}
Note that the case where $\kappa=0$ was proved in \cite{Romildo}. Therefore, we only consider the case $\kappa=\pm1$.
\
\section{Proof of the main results}\label{provas}
\
\noindent {\bf Proof of Theorem \ref{teo1}:}
In fact, from the third equation of the system (\ref{test1}) we get that
\begin{eqnarray}\label{kimkimeq}
div\left(f\nabla f\right)+(m-2)|\nabla f|^{2}+\lambda f^{2}=\mu.
\end{eqnarray}
Moreover, if $N$ is Ricci-flat, from (\ref{kimkimeq}) we obtain
\begin{eqnarray}\label{kimkimeq1}
div\left(f\nabla f\right)+\lambda f^{2}\leq div\left(f\nabla f\right)+(m-2)|\nabla f|^{2}+\lambda f^{2}=0.
\end{eqnarray}
Considering $M$ a compact Riemannian manifold, integrating (\ref{kimkimeq1}) we have
\begin{eqnarray}\label{kimkimeq2}
\int_{M}\lambda f^{2}dv=\int_{M}\left(div\left(f\nabla f\right)+\lambda f^{2}\right)dv\leq 0.
\end{eqnarray}
Therefore, from (\ref{kimkimeq2}) we can infer that
\begin{eqnarray}\label{kimkimeq3}
\lambda\int_{M}f^{2}dv\leq 0.
\end{eqnarray}
This implies that either $\lambda\leq0$ or $f$ is trivial.
It is worth pointing out that quasi-Einstein metrics on compact manifolds with $\lambda\leq0$ are trivial (see Remark 6 in \cite{kimkim}), which concludes the proof.
\hfill $\Box$
\
\noindent {\bf Proof of Theorem \ref{teo2}:}
Let $p$ be a minimum point of $f$ on $M$; therefore, $f(p)>0$, $(\nabla f)(p)=0$ and $(\Delta f)(p)\geq0$. Since $R+\lambda(m-n)\leq0$ by hypothesis, from (\ref{oi}) we get
\begin{eqnarray*}
|\nabla f|^{2}\geq\frac{\mu}{m-1}.
\end{eqnarray*}
Whence, at $p\in M$ we obtain
\begin{eqnarray*}
0=|\nabla f|^{2}(p)\geq\frac{\mu}{m-1}.
\end{eqnarray*}
Since $\mu$ is constant, we have that $\mu\leq0$. Moreover, from the third equation in (\ref{test1}) we have
\begin{eqnarray*}
\lambda f^{2}(p)\leq (f\Delta f)(p)+(m-1)|\nabla f|^{2}(p)+\lambda f^{2}(p)=\mu\leq0.
\end{eqnarray*}
This implies that $\lambda\leq0$. Then the result follows from \cite{kimkim}.
Now, if $R+\lambda(m-n)=0$, then from (\ref{oi}) we have that
\begin{eqnarray*}
|\nabla f|^{2}=\frac{\mu}{m-1}.
\end{eqnarray*}
Then, at $p\in M$ we obtain
\begin{eqnarray*}
0=|\nabla f|^{2}(p)=\frac{\mu}{m-1}.
\end{eqnarray*}
Therefore, since $\mu$ is a constant we get that $\mu=0$, i.e., $N$ is Ricci-flat.
\hfill $\Box$
It is worth noting that if $M$ is a compact Riemannian manifold and the scalar curvature $R$ is constant, then $f$ is trivial (see \cite{Case}).
\
\noindent {\bf Proof of Theorem \ref{teo3b}:}
Considering $\lambda=\frac{R}{n-1}$ in equation (\ref{oi}) we obtain
\begin{eqnarray}\label{ooi}
|\nabla f|^{2}+\frac{R}{m(n-1)}f^{2}=\frac{\mu}{m-1}.
\end{eqnarray}
Then, taking the Laplacian we get
\begin{eqnarray}\label{3b1}
\frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}\left(|\nabla f|^{2}+f\Delta f\right)=0.
\end{eqnarray}
Moreover, considering $\lambda=\frac{R}{n-1}$ in (\ref{test1}) and contracting the first equation of the system, we have that
\begin{eqnarray}\label{3b2}
-\Delta f=\frac{Rf}{m(n-1)}.
\end{eqnarray}
Using (\ref{3b2}), equation (\ref{3b1}) becomes
\begin{eqnarray}\label{3b3}
\frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}|\nabla f|^{2}=\frac{R^{2}f^{2}}{m^{2}(n-1)^{2}}.
\end{eqnarray}
The first equation of (\ref{test1}) and (\ref{ooi}) allow us to infer that
\begin{eqnarray*}
\frac{2f}{m}Ric(\nabla f)&=&\frac{2Rf}{m(n-1)}\nabla f+2\nabla^{2}f(\nabla f)\nonumber\\
&=&\nabla\left(|\nabla f|^{2}+\frac{Rf^{2}}{m(n-1)}\right)=\nabla\left(\frac{\mu}{m-1}\right)=0.
\end{eqnarray*}
Since $f>0$, we get
\begin{eqnarray}\label{3b4}
Ric(\nabla f, \nabla f)=0.
\end{eqnarray}
Recall the Bochner formula
\begin{eqnarray}\label{bochner}
\frac{1}{2}\Delta|\nabla f|^{2}=|\nabla^{2}f|^{2}+Ric(\nabla f,\nabla f)+g(\nabla f,\nabla\Delta f).
\end{eqnarray}
Whence, from (\ref{3b2}), (\ref{3b4}) and (\ref{bochner}) we obtain
\begin{eqnarray}\label{bochner1}
\frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}{|\nabla f|}^{2}=|\nabla^{2}f|^{2}.
\end{eqnarray}
Substituting (\ref{3b3}) in (\ref{bochner1}) we get
\begin{eqnarray}\label{hessiannorm}
|\nabla^{2}f|^{2}=\frac{R^{2}f^{2}}{m^{2}(n-1)^{2}}.
\end{eqnarray}
From the first equation of (\ref{test1}), a straightforward computation gives us
\begin{eqnarray}\label{ricnorm}
|Ric|^{2}=\frac{m^{2}}{f^{2}}|\nabla^{2}f|^{2}+\frac{2mR\Delta f}{(n-1)f}+\frac{nR^{2}}{(n-1)^{2}}.
\end{eqnarray}
Finally, from (\ref{hessiannorm}), (\ref{3b2}) and (\ref{ricnorm}) we have that
\begin{eqnarray*}
|Ric|^{2}=\frac{R^{2}}{(n-1)^{2}}-\frac{2R^{2}}{(n-1)^{2}}+\frac{nR^{2}}{(n-1)^{2}}=\frac{R^{2}}{n-1}.
\end{eqnarray*}
Hence $Ric\equiv0$ if and only if $R=0$, which proves the result.
\hfill $\Box$
\
In what follows, we consider the conformal structure given in Section \ref{CFSI} to prove Theorem \ref{teo3a} and Theorem \ref{teo3}.
\
\noindent {\bf Proof of Theorem \ref{teo3a}:}
By definition,
\begin{eqnarray}\label{grad}
|\bar{\nabla}f|^{2}=\displaystyle\sum_{i,j}\varphi^{2}\varepsilon_{i}\delta_{ij}f_{x_{i}}f_{x_{j}}=\left(\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}\right)\varphi^{2}(f')^{2}=\kappa\varphi^{2}(f')^{2},
\end{eqnarray}
where $\bar{\nabla}f$ is the gradient of $f$ for $\bar{g}$, and $\kappa\neq0$.
Then, from (\ref{eqtop}) and (\ref{grad}) we have
\begin{eqnarray}\label{sera}
\kappa\varphi^{2}(f')^{2}+\left[\frac{\lambda(m-n)+\bar{R}}{m(m-1)}\right]f^{2}=\frac{\mu}{m-1}.
\end{eqnarray}
Considering that $N$ is a Ricci-flat semi-Riemannian manifold, i.e., $\mu=0$, from (\ref{sera}) we get
\begin{eqnarray}\label{sera1}
\frac{f'}{f}=\frac{G(\bar{R})}{\varphi},
\end{eqnarray}
where $G(\xi)=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$, which gives us (\ref{sera3}).
Now, from (\ref{sera1}) we have
\begin{eqnarray}\label{sera2}
\frac{f''}{f}=\left(\frac{G}{\varphi}\right)'+\left(\frac{G}{\varphi}\right)^{2}=\left(\frac{G}{\varphi}\right)^{2}+\frac{G'}{\varphi}-\frac{G\varphi'}{\varphi^{2}}.
\end{eqnarray}
Therefore, from (\ref{system}), (\ref{sera1}) and (\ref{sera2}) we get (\ref{system2}).
\hfill $\Box$
\
\noindent {\bf Proof of Theorem \ref{teo3}:}
Considering that $\bar{R}$ is constant, from (\ref{system2}) we obtain
\begin{eqnarray}\label{system12}
\left\{
\begin{array}{lcc}
(n-2)\varphi\varphi''-mG\varphi'=mG^{2}\\\\
\varphi\varphi''-(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda \\\\
(n-1)G\varphi'-mG^{2}=\kappa\lambda
\end{array}
.\right.
\end{eqnarray}
The third equation in (\ref{system12}) gives us that $\varphi$ is an affine function. Moreover, since
\begin{eqnarray}\label{hum}
\varphi'(\xi)=\frac{\kappa\lambda+mG^{2}}{(n-1)G},
\end{eqnarray}
we get $\varphi''=0$. Then, from the first and second equations in (\ref{system12}) we have, respectively,
\begin{eqnarray*}
-mG\varphi'=mG^{2}\quad\mbox{and}\quad -(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda.
\end{eqnarray*}
This implies that
\begin{eqnarray}\label{hdois}
-(\varphi')^{2}=\frac{\kappa\lambda+mG^{2}}{(n-1)}.
\end{eqnarray}
Then, from (\ref{hum}) and (\ref{hdois}) we get
\begin{eqnarray*}
(\varphi')^{2}+G\varphi'=0.
\end{eqnarray*}
That is, $\varphi'=0$ or $\varphi'=-G$.
First consider that $\varphi'=0$. From (\ref{scalarcurvature}) and (\ref{system12}), it is easy to see that $\lambda=\bar{R}=0$. Then, we get the first item of the theorem since, as mentioned, the case $\varphi' = 0$ was not considered in \cite{Romildo}.
Now, we take $\varphi'=-G$. Integrating over $\xi$ we have
\begin{eqnarray}\label{phii}
\varphi(\xi)=-G\xi+C;\quad\mbox{where}\quad G\neq0, C\in\mathbb{R}.
\end{eqnarray}
Then, from (\ref{hum}) we obtain
\begin{eqnarray}\label{htres}
\frac{\kappa\lambda+mG^{2}}{(n-1)G}=-G.
\end{eqnarray}
Since $G^{2}=\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}$, from (\ref{htres}) we obtain
\begin{eqnarray}\label{scalarcurvature1}
\bar{R}=\frac{n(n-1)\lambda}{(m+n-1)}.
\end{eqnarray}
Considering that $\lambda\neq0$, we can see that $\bar{R}$ is a nonzero constant. On the other hand, since $\varphi'=-G$, from (\ref{scalarcurvature}) we get
\begin{eqnarray}\label{anem}
\bar{R}=-n(n-1)\kappa G^{2},
\end{eqnarray}
where $G^{2}=\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}$. Observe that (\ref{scalarcurvature1}) and (\ref{anem}) are equivalent.
Furthermore, from (\ref{sera3}) and (\ref{phii}) we get
\begin{eqnarray*}
f(\xi)=\frac{\Theta}{-G\xi+C}.
\end{eqnarray*}
This completes the proof.
\hfill $\Box$
\
\noindent {\bf Proof of Corollary \ref{coro1}:}
It is a direct consequence of Theorem \ref{teo3}-(1).
\hfill $\Box$
\
\noindent {\bf Proof of Corollary \ref{coro2}:}
Recall that $\xi=\displaystyle\sum_{i}\alpha_{i}x_{i}$, where $\alpha_{i}\in\mathbb{R}$ (cf. Section \ref{CFSI}). In Theorem \ref{teo3}-(2), take $\alpha_{n}=\frac{1}{G}$ and $\alpha_{i}=0$ for all $i\neq n$. Moreover, taking $C=0$ we get
\begin{eqnarray}
f(\xi)=\frac{1}{x_{n}^{2}}.
\end{eqnarray}
Now take $\mathbb{R}^{n^{\ast}}_{+}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}; x_{n}>0\}$. Then, $\left(\mathbb{R}^{n^{\ast}}_{+},g_{can}=\frac{\delta_{ij}}{x_{n}^{2}}\right)=(\mathbb{H}^{n},g_{can})$ is the hyperbolic space. We point out that $\mathbb{H}^{n}$ with this metric has constant sectional curvature equal to $-1$. Then, from (\ref{scalarcurvature1}) we obtain $\lambda=-\frac{m+n-1}{n(n-1)}$, and the result follows.
\hfill $\Box$
\iffalse
\noindent {\bf Proof of Theorem \ref{teo4}:}
It is a straightforward computation from (\ref{system2}) that
\begin{eqnarray*}\label{eqseggrau}
m(m-1)G^{2}-\left[2m(n-2)\varphi'\right]G+[\lambda(m+n-2)+(n-2)(n-1)(\varphi')^{2}]=0,
\end{eqnarray*}
where $G=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$ and $\kappa=\pm1$. Therefore, from the second order equation we have
\begin{eqnarray*}\label{G}
G=\frac{m(n-2)\varphi'\pm \sqrt{\Delta}}{m(m-1)},
\end{eqnarray*}
where $\Delta=[m^{2}(n-2)^{2}-m(m-1)(n-2)(n-1)](\varphi')^{2}-\lambda m(m-1)(m+n-2)$. Observe that, by hypothesis $m=n-1$, then $\Delta=-\lambda m(m-1)(m+n-2)$. Whence, (\ref{G}) became
\begin{eqnarray*}
G=\varphi'\pm \sqrt{-\lambda\frac{(2n-3)}{(n-1)(n-2)}}.
\end{eqnarray*}
This implies that $\lambda<0$. Then, taking $\beta=\pm \sqrt{-\lambda\frac{(2n-3)}{(n-1)(n-2)}}$
\begin{eqnarray}\label{hquatro}
G^{2}=(\varphi')^{2}+2\varphi'\beta+\beta^{2}.
\end{eqnarray}
Since $G^{2}=\frac{\kappa(\lambda-\bar{R})}{(n-1)(n-2)}$ and $\bar{R}=\kappa(n-1)[2\varphi\varphi''-n(\varphi')^{2}]$ from (\ref{hquatro}) we get
\begin{eqnarray}\label{edoboa}
\varphi\varphi''-(\varphi')^{2}+\tilde{\beta}\varphi'+\theta=0,
\end{eqnarray}
where $\tilde{\beta}=\pm\sqrt{\frac{-\lambda(2n-3)(n-2)}{(n-1)}}$ and $\theta=-\lambda\frac{2n+\kappa-3}{2(n-1)}$. Then, from (\ref{edoboa}) we obtain
\begin{eqnarray*}
\varphi(\xi) = \frac{1}{2}\xi(\sqrt{\tilde{\beta}^{2}+4\theta}+\tilde{\beta})+\ln\left(\frac{\sqrt{\tilde{\beta}^{2}+4\theta}}{\theta_{1}\exp(\xi\sqrt{\tilde{\beta}^{2}+4\theta})-\theta_{2}}\right),
\end{eqnarray*}
where $\tilde{\beta}^{2}+4\theta=-\lambda\frac{(2n-3)(n-4)+2\kappa}{(n-1)}$ and $\theta_{1}\neq0$. Observe that, if $n=4$ then $\kappa=1$.
\hfill $\Box$
\fi
\
\begin{acknowledgement}
The authors would like to express their deep thanks to Professor Ernani Ribeiro Jr for valuable suggestions.
\end{acknowledgement}
\section{Introduction}
Differential equations and their solutions in the complex plane have been studied extensively since the 19th century. A main motivation then was that new transcendental functions could be defined and studied as the solutions of certain differential equations. For example, Airy's equation, Bessel's equation, the Weber-Hermite equation etc., all of which are important in mathematical physics, have solutions which cannot be expressed in terms of elementary functions. Rather, their solutions can be given e.g.\ in terms of power series expansions around a point, convergent in certain domains, defining analytic functions there, or by asymptotic series. Briot and Bouquet \cite{BriotBouquet} noted that cases where a differential equation can be integrated directly are extremely rare, and one should therefore study the properties of the solutions of a differential equation through the equation itself, as they demonstrated for elliptic functions. All the equations mentioned above are linear differential equations with non-constant coefficients. The singularities of their solutions are {\it fixed}, i.e.\ they can occur only at those points where either one of the coefficients of the equation becomes singular or where the coefficient multiplying the highest derivative term vanishes. The fixed singularities can essentially be read off from the equation itself, and the nature of these singularities can be determined. The situation is more involved for non-linear differential equations, for which singularities can develop somewhat spontaneously, depending on the initial data, and a priori the nature of the singularities cannot be determined by inspecting the differential equation. In particular, the positions of these singularities depend on the initial data prescribed for the equation. Roughly speaking, going from one solution of the equation to a different solution under a small variation in the initial data, the positions of the singularities change in a continuous fashion.
Such singularities are thus called {\it movable}. For a detailed discussion and a more exact definition of movable singularities we refer to the article \cite{Murata1988} by Murata.
A main motivation for complex analysts studying differential equations is to find new mathematical functions with properties of interest to solve problems in physics and other areas of mathematics. ``Interesting'' or ``good'' mathematical functions for function theorists were considered to have no movable critical points. In other words, apart from a finite number of fixed singularities, all other (movable) singularities of any solution are poles. An equation of this kind is said to have the {\it Painlev\'e property}. For example, S. Kovalevskaya \cite{kovalevskaya} identified all the integrable cases of the equations of motion of a heavy top by demanding that their complex solutions can be expressed by Laurent series expansions, i.e.\ solutions with singularities no worse than poles. Apart from the already known cases, namely the Lagrange and Euler tops, she identified one further integrable case, given by certain ratios of the principal moments of inertia of the top, which is now known as the Kovalevskaya top.
P. Painlev\'e \cite{painleve1} and his pupil B. Gambier \cite{gambier1} took on the challenge of classifying second-order ordinary differential equations of the form
\begin{equation}
\label{2ndorderrational}
y'' = R(z,y,y'),
\end{equation}
$R$ a function rational in $y,y'$ with analytic coefficients, with the property now named after Painlev\'e. The result of this classification was a list of $50$ canonical types of equations, in the sense that any equation in the class can be obtained from an equation in the list of $50$ by applying a M\"obius type transformation
\begin{equation*}
Y(Z) = \frac{a(z) y(z) + b(z)}{c(z)y(z) + d(z)}, \quad Z = \phi(z),
\end{equation*}
where $a,b,c,d$ and $\phi$ are analytic functions. Most of the equations in the list were found to be integrable in terms of formerly (at the time of Painlev\'e) known, classical functions, such as Airy functions, Hermite functions, Bessel functions, or other special functions (solutions of certain linear second-order differential equations with non-constant coefficients), elliptic functions, or by quadrature. Only six equations in the list turned out to produce essentially new analytic functions. These nonlinear equations are now known as the six Painlev\'e equations and their non-classical solutions are commonly called Painlev\'e transcendents. (For particular values of the parameters in the Painlev\'e equations these also have classical solutions, but not for generic parameters.)
One way to detect equations with the Painlev\'e property, within a given class, is to first check whether they satisfy certain necessary criteria. Of such criteria, although not the one originally pursued by Painlev\'e himself (who used the so-called $\alpha$-method), a very common one is to perform the {\it Painlev\'e test}, which checks whether the equation admits, at every point in the complex plane, certain formal Laurent series expansions. It is then still a much more difficult task to prove whether an equation which passes the Painlev\'e test actually possesses the Painlev\'e property. Proofs for the Painlev\'e property of all six Painlev\'e equations were given in \cite{Shimomura2003}, although earlier proofs exist in the literature, e.g.\ \cite{Hinkkanen1999}, \cite{Okamoto2001}, \cite{Steinmetz2000} or \cite{YanHu2003}, see also the book \cite{gromak}. There also exists a completely different approach to proving the Painlev\'e property making use of the so-called isomonodromy method \cite{Fokas}.
In \cite{filipukhalburd1,filipukhalburd2,filipukhalburd3}, Filipuk and Halburd apply a similar test to certain classes of second-order differential equations, but with algebraic series expansions in a fractional power of $z-z_0$,
\begin{equation}
\label{algebraicexpansion}
y(z) = \sum_{j=0}^\infty c_j (z-z_0)^{(j-j_0)/n}, \quad j_0, n \in \mathbb{N},
\end{equation}
instead of Laurent series. The test, which again relies on recursively computing the coefficients of the series expansion, gives rise to certain resonance conditions, which need to be satisfied in order for there to be no obstruction in the recurrence. Furthermore, in the papers cited above, Filipuk and Halburd prove that the conditions, within the given classes of equations, are sufficient for all movable singularities to be algebraic poles of the form (\ref{algebraicexpansion}), with the proviso that these are reachable by analytic continuation along a path of {\it finite length}. The study of this property was continued by one of the authors for other classes of second-order equations \cite{Kecker2012} and certain Hamiltonian systems \cite{Kecker2016}.
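A simple model equation illustrating such expansions (a toy example of our own, not taken from the classifications just cited) is $y''=3y^{5}$. Substituting the leading behaviour $y\sim c(z-z_{0})^{-1/2}$ gives
\begin{equation*}
y''=\frac{3}{4}c(z-z_{0})^{-5/2}+\cdots,\qquad 3y^{5}=3c^{5}(z-z_{0})^{-5/2}+\cdots,
\end{equation*}
so that $c^{4}=\frac{1}{4}$, and the movable singularity at $z_{0}$ is an algebraic pole with an expansion of the form (\ref{algebraicexpansion}) with $n=2$. Indeed, in this case one finds the exact solutions $y(z)=(C-2z)^{-1/2}$, $C\in\mathbb{C}$.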
Thus, just by inspecting a non-linear differential equation, it is far from obvious to see whether it has the Painlev\'e property, or, more generally, what types of movable singularities its solutions can develop. In this article we are concerned with a method of determining, from a given equation or system of equations, what types of singularities the equation can develop. Although our point of departure is the Painlev\'e equations, we are studying classes of differential equations and Hamiltonian systems which admit different types of movable singularities other than poles, such as algebraic poles and logarithmic singularities. Employing a method originating in algebraic geometry, called a {\it blow-up}, we will resolve certain points of indeterminacy, or {\it base points}, which an equivalent system of equations acquires in an augmented (compact) phase space that includes the points at infinity in the space of dependent variables. We will see that this method essentially gives an algorithmic procedure for determining the possible types of singularities an equation can develop and for giving conditions under which certain types of singularities, in particular logarithmic singularities, cannot occur. It can therefore be seen as an alternative to the Painlev\'e test and its generalisation to algebraic series expansions.
In a 1979 paper \cite{Okamoto1979}, K. Okamoto introduced the {\it space of initial values} for each of the Painlev\'e equations. These are extended phase spaces, every point of which defines a regular initial value problem in some coordinate chart of the space for one of the Painlev\'e equations. The space of initial values is obtained by first compactifying the phase space $\mathbb{C}^2$ of $(y,y')$ to some rational surface, such as e.g. $\mathbb{P}^2$ or $\mathbb{P}^1 \times \mathbb{P}^1$ (Okamoto himself started from so-called Hirzebruch surfaces), and then applying a number of blow-ups to resolve certain points of indeterminacy the equivalent system acquires in this augmented space. A blow-up is one of the most fundamental types of birational transformations. In this way, we obtain birational coordinate transformations between the original dependent variable $y$ and its derivative and coordinates covering the points at infinity in which the equation is regular. The space of initial values for a Painlev\'e equation is uniformly foliated by its solutions.
Through the space of initial values, every Painlev\'e equation is thus assigned a geometric meaning. Sakai \cite{Sakai2001} classified rational elliptic surfaces by $9$-point configurations in $\mathbb{P}^2$, which correspond to the geometries of the spaces of initial values of all known discrete and differential Painlev\'e equations. In this picture, it is more appropriate to divide the Painlev\'e differential equations into $8$ different types, as the geometry of some of the Painlev\'e equations is different for certain choices of parameters. This work has also been elaborated on in the extensive article \cite{Kajiwara2017}.
In the present article, we are mainly concerned with equations and systems of equations that are not of Painlev\'e type, but for which it can be shown that all movable singularities of their solutions are algebraic poles, such as studied by Shimomura \cite{Shimomura2007, Shimomura2008}, Filipuk and Halburd \cite{filipukhalburd1,filipukhalburd2, filipukhalburd3}, and one of the authors \cite{Kecker2012,Kecker2016}. Although such equations are in general not integrable, the condition on the singularities to be algebraic, rather than containing e.g.\ logarithmic branch points, guarantees some degree of regularity. After reviewing the construction of the space of initial values for the second Painlev\'e equation in the next section, we will mainly be concerned with equations of the form
\begin{equation*}
y'' = P(z,y),
\end{equation*}
$P$ a polynomial in $y$ with analytic coefficients and, more generally, Hamiltonian systems,
\begin{equation*}
H = H(z,x(z),y(z)), \quad x'(z) = \frac{\partial H}{\partial y}, \quad y'(z) = -\frac{\partial H}{\partial x},
\end{equation*}
where $H(z,x,y)$ is polynomial in the last two arguments, with analytic coefficients in $z$. Extending the phase space to complex projective space $\mathbb{P}^2$, certain points at infinity, where the flow of the Hamiltonian vector field becomes indeterminate, are resolved using the method of blowing up these so-called {\it base points}. This process has to be repeated a number of times, until the indeterminacy disappears, leading to an analogue of the space of initial values in which each point defines a regular initial value problem, but possibly only after a change in the dependent and independent variables. For the equations considered in this article, the method described is a finite procedure, resulting in differential systems which allow us to determine directly the local singularity structure of an equation, i.e.\ what types of movable singularities its solutions can exhibit, without having to explicitly construct the solutions. In particular, it is possible to determine when an equation has logarithmic branch points and to give conditions under which these logarithmic singularities are absent. These are the same conditions as the resonances found by applying a Painlev\'e test, testing the system for the existence of formal Laurent series solutions in $z-z_0$, or its generalisation to multi-valued singularities, testing for formal series solutions in fractional powers of $z-z_0$. Moreover, in the case where logarithmic singularities are absent, the procedure allows us to conclude that the algebraic series obtained are the only possible type of movable singularities. Namely, employing certain approximate first integrals, we can show that the exceptional lines arising from all except the last blow-up are {\it inaccessible} for the flow of the vector field, so that, at a singularity, the solution must pass through the exceptional curve from the last blow-up, where its behaviour is completely determined.
\section{Okamoto's space of initial values for the Painlev\'e equations}
\label{sec:InitialValueSpace}
The space of initial values was originally constructed by Okamoto \cite{Okamoto1979} for each of the six Painlev\'e equations. The idea in that paper is to consider the respective Hamiltonian systems in an extended phase space that includes all points at infinity, in order to study the behaviour at their singularities. In the case of the Painlev\'e equations, the extended phase space (with a certain exceptional divisor removed) covers all possible points, including points at infinity, at which the system defines a regular initial value problem. One of the main aims of this paper is to show that this construction is also meaningful for a wider class of ordinary differential equations with singularities other than poles, in particular for equations with algebraic poles. The other main point we wish to make is that the process of constructing the space of initial values also serves as an algorithm to single out, from a given class of equations, those equations for which the solutions are free from logarithmic singularities. We will first review the process of constructing the space of initial values here for the case of the second Painlev\'e equation,
\begin{equation}
\label{P2}
P_{\!I\!I}: \quad y''(z) = 2y^3 + z y + \alpha, \quad \alpha \in \mathbb{C}.
\end{equation}
Note that this equation can be written as a non-autonomous ($z$-dependent) Hamiltonian system, letting $x=y'$ and defining the Hamiltonian,
\begin{equation}
\label{P2_ham}
H = x^2 - y^4 - zy^2 - 2\alpha y.
\end{equation}
Okamoto \cite{Okamoto1979} also considered a different Hamiltonian, $H_{\text{Ok}} = \frac{1}{2} x^2 - \left( y^2 + \frac{z}{2} \right) x - \left( \alpha + \frac{1}{2} \right) y$, which, by eliminating $x$, leads to the same equation.
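Before proceeding, we record an elementary computation behind the auxiliary functions used later on in Remark \ref{sequence_remark}: multiplying (\ref{P2}) by $2y'$ and collecting exact derivatives yields, along any solution,
\begin{equation*}
\frac{d}{dz}\left((y')^{2}-y^{4}-zy^{2}-2\alpha y\right)=2y'\left(y''-2y^{3}-zy-\alpha\right)-y^{2}=-y^{2}.
\end{equation*}
The derivative of this energy-type function thus grows only quadratically in $y$, whereas the function itself is quartic in $y$; this imbalance is what makes such functions useful as approximate first integrals near a singularity.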
Here, instead of equation (\ref{P2}), we start in fact from the more general class of equations
\begin{equation}
\label{P2general}
y''(z) = 2y^3 + \beta(z) y + \alpha(z),
\end{equation}
where $\alpha$ and $\beta$ are analytic functions. One can easily find necessary conditions for equation (\ref{P2general}) to have the Painlev\'e property. This is the so-called {\it Painlev\'e test}, which is performed by inserting formal Laurent series solutions of the form
\begin{equation*}
y(z) = \frac{c_{-1}}{z-z_0} + c_0 + c_1(z-z_0) + c_2(z-z_0)^2 + \cdots
\end{equation*}
into the equation, with $c_{-1} = \pm 1$ being the two possible types of leading-order behaviour in this case. Computing the coefficients $c_0,c_1,c_2,\dots$ recursively leads to certain obstructions for the formal Laurent series to exist. The case when these obstructions are absent is equivalent to the {\it resonance conditions} $\beta''(z) \equiv \alpha'(z) \equiv 0$. Thus, $\beta$ is at most a linear function in $z$ whereas $\alpha$ is a constant. The case when $\beta'(z) \neq 0$ essentially reduces equation (\ref{P2general}) to equation (\ref{P2}), up to a rescaling of $z$. When $\beta'(z) \equiv 0$, this is an equation with constant coefficients which can be integrated directly in terms of elliptic functions. We will see below how we can re-discover the resonance conditions using the method of {\it blowing up the base points}.
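To illustrate the first step of the test, insert the leading term $y\sim c_{-1}(z-z_{0})^{-1}$ into (\ref{P2general}); the dominant contributions are
\begin{equation*}
y''=2c_{-1}(z-z_{0})^{-3}+\cdots,\qquad 2y^{3}=2c_{-1}^{3}(z-z_{0})^{-3}+\cdots,
\end{equation*}
while $\beta(z)y$ and $\alpha(z)$ are of lower order, so that $c_{-1}^{3}=c_{-1}$, i.e.\ $c_{-1}=\pm1$, as stated above. The conditions on $\alpha$ and $\beta$ then arise as compatibility conditions further along the recursion.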
At a singularity $z_\ast$ of a solution of equation (\ref{P2general}), where $\alpha(z)$ and $\beta(z)$ are analytic, we have
\begin{equation*}
\lim_{z \to z_\ast} \max\{|y(z)|,|y'(z)|\} = \infty.
\end{equation*}
This is a consequence of the following lemma by Painlev\'e, which in turn follows from Cauchy's local existence and uniqueness theorem for analytic solutions of differential equations, see e.g. \cite{hille}.
\begin{lem}
\label{Painlemma}
Given a system of differential equations,
\begin{equation*}
\mathbf{y}' = \mathbf{F}(z,\mathbf{y}), \quad \mathbf{y} = (y_1,\dots,y_n),
\end{equation*}
suppose that $\mathbf{F}$ is analytic in a neighbourhood of a point $(z_\ast,\boldsymbol{\eta})$, $\boldsymbol{\eta} = (\eta_1,\dots,\eta_n) \in \mathbb{C}^n$. If there exists a sequence $(z_i)_{i \in \mathbb{N}}$, $z_i \to z_\ast$ as $i \to \infty$, so that $y_j(z_i) \to \eta_j$ for all $j=1,\dots,n$, then $\mathbf{y}$ is analytic at $z_\ast$.
\end{lem}
Therefore, to analyse the behaviour of the solution at a singularity, it is natural to include the points at infinity of the phase space, i.e.\ the line at infinity in our case. We thus start constructing the space of initial values for equation (\ref{P2general}) by extending the phase space of the differential equation to the compact surface $\mathbb{P}^2$. We introduce coordinates on the three standard charts of $\mathbb{P}^2$,
\begin{equation}
\label{homcoords}
[1:y:x] = [u:v:1] = [V:1:U],
\end{equation}
where $y$ and $x=y'$ denote the original phase space variables, and the other two coordinate charts covering $\mathbb{P}^2$ are given by $u=\frac{1}{x}, v= \frac{y}{x}$ and $U=\frac{x}{y}, V=\frac{1}{y}$, respectively. In these coordinates, equation (\ref{P2general}) is expressed as follows:
\begin{equation}
\label{extendedP2}
\begin{aligned}
u'(z) & =-\frac{u^2 v \beta (z)+u^3 \alpha (z)+2 v^3}{u}, & \quad v'(z) & =-\frac{u^2 v^2 \beta (z)+u^3 v \alpha (z)-u^2+2 v^4}{u^2}, \\
U'(z) & =\frac{-U^2 V^2+V^3 \alpha (z)+V^2 \beta (z)+2}{V^2}, & \quad V'(z) & =-U V.
\end{aligned}
\end{equation}
The line at infinity of $\mathbb{P}^2$ is given by the set $I=\{u=0\} \cup \{V=0\}$ in these coordinates. On this line, the vector field defined by (\ref{extendedP2}) is infinite, apart from the point $\mathcal{P}_1: (u,v)=(0,0)$, where it is of the indeterminate form $\frac{0}{0}$. In the vicinity of any point of $I \setminus \{\mathcal{P}_1\}$, the vector field is also ``tangential'' (having zero vertical component) to the line $I$, so that, intuitively, the flow can never reach $I \setminus \{\mathcal{P}_1\}$. Below, we will give a formal argument to show that the line at infinity, and subsequently the exceptional lines introduced by various blow-ups, are {\it inaccessible} for the flow of the vector field away from the base points. Therefore, approaching a singularity $z_\ast$ along a curve $\gamma$, there exists at least a sequence $(z_n)_{n \in \mathbb{N}} \subset \gamma$, $z_n \to z_\ast$, such that the corresponding sequence of points in $\mathbb{P}^2$, with coordinates $(u(z_n),v(z_n))$, $(U(z_n),V(z_n))$ in the respective charts, tends to the point $\mathcal{P}_1$. A point $P \in \mathbb{P}^2 \setminus I = \mathbb{C}^2$ cannot be a limit point of the sequence, since by Lemma \ref{Painlemma} the solution would be analytic at $z_\ast$ after all.
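This behaviour can be read off directly from (\ref{extendedP2}). In the chart $(U,V)$, in which the line at infinity is $\{V=0\}$, we have
\begin{equation*}
V'=-UV=0,\qquad U'=\frac{2}{V^{2}}+\beta(z)-U^{2}+V\alpha(z)\longrightarrow\infty\quad\mbox{as}\quad V\to0,
\end{equation*}
and similarly both $u'$ and $v'$ diverge on $\{u=0\}$ for $v\neq0$. Numerator and denominator vanish simultaneously only at $(u,v)=(0,0)$, i.e.\ at the point $\mathcal{P}_1$.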
\subsection{Resolution of base points}
A dynamical system can be interpreted as the flow of a vector field, assigning an arrow to each point of the space of possible initial values for the system. A solution of the system is visualised by drawing a curve which follows the direction of the arrows in a smooth way. However, there may exist points in the phase space from which vectors emerge, or into which they sink, from all possible directions, such as the point $\mathcal{P}_1$ in the preceding paragraph, at which the vector field is a priori ill-defined. In general, we start from a rational system of equations, defined in some coordinates $(u_i,v_i)$,
\begin{equation*}
u_i'(z) = \frac{p_{i,1}(z,u_i,v_i)}{q_{i,1}(z,u_i,v_i)}, \quad v_i'(z) = \frac{p_{i,2}(z,u_i,v_i)}{q_{i,2}(z,u_i,v_i)},
\end{equation*}
where we assume that the polynomials $p_{i,1}$, $q_{i,1}$ and $p_{i,2},q_{i,2}$ are in reduced terms, respectively. (We will let the index $i$ start counting from $0,1,2,\dots$ in the following.) The points of indeterminacy of the vector field are the common zeros $(s,t)$ of either pair of polynomials, $p_{i,1}(z,s,t) = 0 = q_{i,1}(z,s,t)$, or $p_{i,2}(z,s,t) = 0 = q_{i,2}(z,s,t)$. These {\it base points} (which may also depend on $z$), at which the behaviour of the system is a priori unknown, can be resolved using the method of {\it blowing up}, a process familiar from algebraic geometry, used to resolve singularities of algebraic varieties, see e.g. \cite{Hartshorne} and the work by Hironaka \cite{Hironaka1964}. By a blow-up of a point $\mathcal{P}_{i+1}: (u_i,v_i)=(s,t) \in \mathbb{C}^2$, the phase space is extended by introducing a new projective line $\mathcal{L}_{i+1}$, the points of which are in one-to-one correspondence with the various directions emanating from the base point. The extended space after blowing up $\mathcal{P}_{i+1}$, the {\it centre of the blow-up}, is given by
\begin{equation}
\label{blowupspace}
\text{Bl}_{\mathcal{P}_{i+1}}(\mathbb{C}^2) = \left\{ ((u_i,v_i),[w_0:w_1]) \in \mathbb{C}^2 \times \mathbb{P}^1 : (u_i - s)\cdot w_1 = (v_i - t) \cdot w_0 \right\}.
\end{equation}
To express the differential system in the space obtained after the blow-up, two new coordinate charts are introduced, covering the portions of the space (\ref{blowupspace}) where $w_0=0$ and $w_1=0$, respectively. We denote these coordinates by
\begin{equation*}
\begin{aligned}
u_{i+1} &= u_i - s, \quad v_{i+1} &= \frac{v_i - t}{u_i - s}, \\
U_{i+1} &= \frac{u_i - s}{v_i - t}, \quad V_{i+1} &= v_i - t.
\end{aligned}
\end{equation*}
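To illustrate the effect of this change of coordinates on a vector field, consider the toy system (not one of the systems studied in this article)
\begin{equation*}
u'=1,\qquad v'=\frac{v}{u},
\end{equation*}
which has a base point at the origin $(u,v)=(0,0)$. In the first chart of the blow-up of the origin, $u_{1}=u$, $v_{1}=\frac{v}{u}$, one computes
\begin{equation*}
v_{1}'=\frac{v'}{u}-\frac{vu'}{u^{2}}=\frac{v}{u^{2}}-\frac{v}{u^{2}}=0,
\end{equation*}
so the lifted system $u_{1}'=1$, $v_{1}'=0$ is regular on the exceptional line $\{u_{1}=0\}$: the solutions $v=v_{1}u$, all of which pass through the base point, are separated on the exceptional line according to their slopes $v_{1}$.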
After each blow-up, we therefore obtain two new rational systems
\begin{equation}
\label{ithbl}
\begin{aligned}
u_{i+1}' &= \frac{p_{i+1,1}(z,u_{i+1},v_{i+1})}{q_{i+1,1}(z,u_{i+1},v_{i+1})} & \quad v_{i+1}' &= \frac{p_{i+1,2}(z,u_{i+1},v_{i+1})}{q_{i+1,2}(z,u_{i+1},v_{i+1})} \\
U_{i+1}' &= \frac{P_{i+1,1}(z,U_{i+1},V_{i+1})}{Q_{i+1,1}(z,U_{i+1},V_{i+1})} & \quad V_{i+1}' &= \frac{P_{i+1,2}(z,U_{i+1},V_{i+1})}{Q_{i+1,2}(z,U_{i+1},V_{i+1})}
\end{aligned}
\end{equation}
where we assume again that the polynomials $p_{i+1},q_{i+1}$ and $P_{i+1},Q_{i+1}$ are already in reduced terms. Here, the relation $U_{i+1} = v_{i+1}^{-1}$ holds on the overlap of the two charts, where both coordinates are non-zero, since $[v_{i+1} : 1] = [1: U_{i+1}] = [w_0:w_1]$ are homogeneous coordinates on the complex projective line $\mathbb{P}^1$ introduced by the blow-up. This line, $\mathcal{L}_{i+1}$, is also called the exceptional line of the blown-up space $\text{Bl}_{\mathcal{P}_{i+1}}$. The points on $\mathcal{L}_{i+1}$ are said to be {\it infinitely near} to the point $\mathcal{P}_{i+1}$. The canonical projection to the first component
\begin{equation*}
\pi_{i+1} : \text{Bl}_{\mathcal{P}_{i+1}}(\mathbb{C}^2) \to \mathbb{C}^2, \quad ((u_i,v_i),[w_0:w_1]) \mapsto (u_i,v_i),
\end{equation*}
defines a homeomorphism
\begin{equation*}
\pi_{i+1}: \text{Bl}_{\mathcal{P}_{i+1}}(\mathbb{C}^2) \setminus \mathcal{L}_{i+1} \to \mathbb{C}^2 \setminus \{\mathcal{P}_{i+1}\},
\end{equation*}
that is, away from the centre $\mathcal{P}_{i+1}$ and its pre-image $\mathcal{L}_{i+1} = \pi_{i+1}^{-1} (\mathcal{P}_{i+1})$, points in $\mathbb{C}^2$ are in one-to-one correspondence with points in $\text{Bl}_{\mathcal{P}_{i+1}}$.
In the coordinates $(u_{i+1},v_{i+1})$, resp. $(U_{i+1},V_{i+1})$, the exceptional line $\mathcal{L}_{i+1}$ is parametrised by
\begin{equation*}
(u_{i+1},v_{i+1}) = (0,c), \quad c \in \mathbb{C} \quad \text{or} \quad (U_{i+1},V_{i+1}) = (C,0), \quad C \in \mathbb{C},
\end{equation*}
with $C = c^{-1}$ for $c \neq 0$.
After each blow-up, we denote by $\mathcal{S}_{i+1} = \text{Bl}_{\mathcal{P}_{i+1}}(\mathcal{S}_i)$ the space obtained by blowing up $\mathcal{S}_i$ at $\mathcal{P}_{i+1}$, where $\mathcal{S}_0 = \mathbb{P}^2$. Later, the location of the points $\mathcal{P}_i$ becomes $z$-dependent, and we therefore denote the blown-up spaces by $\mathcal{S}_i(z)$. Furthermore, we define the {\it infinity set} $\mathcal{I}_i(z) \subset \mathcal{S}_i(z)$ as the union of the proper transform of $\mathcal{I}_{i-1}(z)$ under the blow-up with the exceptional line $\mathcal{L}_i$, that is, $\mathcal{I}_i(z) = \mathcal{I}_{i-1}'(z) \cup \mathcal{L}_i$, where $\mathcal{I}'$ denotes the {\it proper transform} of the set $\mathcal{I}$ under the blow-up. We define $\mathcal{L}_0 = I \setminus \{\mathcal{P}_1 \} \subset \mathbb{P}^2$ to be the line at infinity with the initial base point removed.
\subsection{Sequence of blow-ups}
For system (\ref{extendedP2}) we found the initial base point $\mathcal{P}_1: (u_0,v_0) := (u,v) = (0,0)$. This base point can be resolved by a sequence of blow-ups as described in the following. After each blow-up, we have to examine the two resulting systems of equations (\ref{ithbl}) for new base points arising on the exceptional line. The indeterminacies of the system after the blow-up of $\mathcal{P}_{i+1}$ arise as common zeros of either pair of equations,
\begin{equation*}
\begin{aligned}
p_{i+1,1}(z,0,v_{i+1}) &= 0 = q_{i+1,1}(z,0,v_{i+1}), \\
p_{i+1,2}(z,0,v_{i+1}) &= 0 = q_{i+1,2}(z,0,v_{i+1}),
\end{aligned}
\end{equation*}
for the first system, and
\begin{equation*}
\begin{aligned}
P_{i+1,1}(z,U_{i+1},0) &= 0 = Q_{i+1,1}(z,U_{i+1},0), \\
P_{i+1,2}(z,U_{i+1},0) &= 0 = Q_{i+1,2}(z,U_{i+1},0),
\end{aligned}
\end{equation*}
for the second system. However, any indeterminacy at $(u_{i+1},v_{i+1}) = (0,c)$, $c \neq 0$, of the first system is a base point if and only if this indeterminacy also presents itself at $(U_{i+1},V_{i+1}) = (c^{-1},0)$ in the other system, and vice versa, as otherwise the behaviour of the solution is determined. In addition, we can have base points at $(u_{i+1},v_{i+1})=(0,0)$ or $(U_{i+1},V_{i+1})=(0,0)$, which are only visible in one of the charts.
We now give the sequence of blow-ups for equation (\ref{P2general}) which resolves the base point, thus leading to the space of initial values. We do not write out the system of equations after each blow-up, as these expressions soon become very lengthy, and one is advised to use an appropriate computer algebra system to identify the base points in these systems and perform the blow-ups. Here, after the second blow-up, two new base points arise, thus the sequence branches into two cascades, after which we denote the subsequent coordinates with superscripts $\pm$.
\begin{equation*}
\begin{aligned}
{} & \mathcal{P}_1: (u_0,v_0) = \left(\frac{1}{x},\frac{y}{x}\right) = (0,0) \quad \leftarrow \quad \mathcal{P}_2: (U_1,V_1) = \left( \frac{1}{y}, \frac{y}{x} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}^{\pm}_3: (u_2,v_2) = \left( \frac{1}{y}, \frac{y^2}{x} \right) = (0,\pm 1) \quad \leftarrow \quad \mathcal{P}^{\pm}_4: (u_3^\pm,v_3^\pm) = \left( \frac{1}{y}, \frac{y \left(y^2 \mp x \right)}{x} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}^{\pm}_5: (u_4^\pm,v_4^\pm) = \left( \frac{1}{y}, \frac{y^2 \left(y^2 \mp x\right)}{x} \right) = \left( 0, \mp \frac{1}{2} \beta(z) \right) \\
\leftarrow \quad & \mathcal{P}^{\pm}_6: (u_5^\pm,v_5^\pm) = \left( \frac{1}{y}, \frac{y \left(2 y^4 \pm \left( x \beta (z) -2 x y^2 \right) \right)}{2x} \right) = \left( 0, \frac{1}{2} \beta'(z) \mp \alpha(z) \right) \\
\leftarrow \quad & \mathcal{P}^{\pm}_7: (U_6^\pm,V_6^\pm) = \left( \frac{2 x/y}{2 y^5 -x \beta' \pm (2 x \alpha + x y \beta -2 x y^3)}, \frac{2 y^5 -x \beta' \pm (2 x \alpha + x y \beta -2 x y^3)}{2 x} \right) = (0,0).
\end{aligned}
\end{equation*}
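As a quick consistency check with the Painlev\'e test above, note that along a solution with a simple pole, $y=c(z-z_{*})^{-1}+\cdots$, $c=\pm1$, we have $x=y'=-c(z-z_{*})^{-2}+\cdots$, and hence
\begin{equation*}
v_{2}=\frac{y^{2}}{x}\longrightarrow\frac{c^{2}}{-c}=-c\quad\mbox{as}\quad z\to z_{*}.
\end{equation*}
The two base points $\mathcal{P}_{3}^{\pm}$ at $v_{2}=\pm1$ thus correspond precisely to the two admissible residues $c=\mp1$.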
After the blow-up of $\mathcal{P}_6^\pm$, the differential system is of the form
\begin{equation}
\label{system6}
\begin{aligned}
u_6^{\pm}{}' &= -\frac{2}{d^\pm(z,u^\pm_6,v^\pm_6)} \\
v_6^{\pm}{}' &= \frac{2\alpha'(z) \mp \beta''(z) + p_{6,2}(z,u_6^\pm,v_6^\pm)}{u_6^\pm \cdot d^\pm(z,u^\pm_6,v^\pm_6)} \\
U_6^{\pm}{}' &= \frac{U_6^\pm (\pm 2\alpha'(z) - \beta''(z)) + P_{6,1}(z,U_6^\pm,V_6^\pm)}{V_6^\pm \cdot D^\pm(z,U^\pm_6,V^\pm_6)} \\
V_6^{\pm}{}' &= \frac{-2 + U_6^\pm (\pm 2\alpha'(z) - \beta''(z)) + P_{6,2}(z,U_6^\pm,V_6^\pm)}{V_6^\pm \cdot D^\pm(z,U^\pm_6,V^\pm_6)}
\end{aligned}
\end{equation}
where $p_{6,2}$ and $P_{6,i}$, $i=1,2$, are polynomials in their second and third arguments. Incidentally, the zero set $d^\pm(z,u_6^\pm,v_6^\pm) = 0 = D^\pm(z,U_6^\pm,V_6^\pm)$ is the set $\mathcal{I}^\pm_5{}'(z)$, the proper transform of the exceptional curves arising from the cascades of blow-ups $\mathcal{P}_1 \leftarrow \mathcal{P}_2 \leftarrow \cdots \leftarrow \mathcal{P}_5^+$ resp. $\mathcal{P}_1 \leftarrow \mathcal{P}_2 \leftarrow \cdots \leftarrow \mathcal{P}_5^-$, as well as the line $\mathcal{L}_0$,
\begin{equation*}
\begin{aligned}
d^\pm(z,u^\pm_6,v^\pm_6) & = \pm (2-2 (u^\pm_6)^3 \alpha (z)- (u^\pm_6)^2 \beta (z)) + (u^\pm_6)^3 \beta '(z) +2 (u^\pm_6)^4 v^\pm_6, \\
D^\pm(z,U^\pm_6,V^\pm_6) &= \pm (2-2 (U^\pm_6 V^\pm_6)^3 \alpha (z)- (U^\pm_6 V^\pm_6)^2 \beta (z)) + (U^\pm_6 V^\pm_6)^3 \beta '(z) +2 (U^\pm_6)^3 (V^\pm_6)^4.
\end{aligned}
\end{equation*}
\begin{rem}
\label{sequence_remark}
After each blow-up we have performed, one can check that, for the resulting vector field $(u_i',v_i')$ on the exceptional curve, the $u_i'$-component is zero, whereas the $v_i'$-component becomes infinite at each point on this curve except for the base points, i.e. $\mathcal{L}_1 \setminus \{\mathcal{P}_2\}$, $\mathcal{L}_2 \setminus \{\mathcal{P}_3^+,\mathcal{P}_3^-\}$ and $\mathcal{L}^+_i \setminus \{\mathcal{P}^+_{i+1}\}$, respectively $\mathcal{L}^-_i \setminus \{\mathcal{P}^-_{i+1}\}$, for $i=3,4,5$. In a real picture this would be understood as the vector field becoming tangent to the exceptional curve. Here, we will show through a more formal argument that the flow of the vector field cannot pass through the exceptional curve except at the base points. Namely, there exists an auxiliary function, or approximate first integral, which remains bounded at any movable singularity. For the second Painlev\'e equation, this function is known to be
\begin{equation}
\label{W_P2}
W = H + \frac{x}{y},
\end{equation}
where $H$ is the Hamiltonian (\ref{P2_ham}). In certain proofs of the Painlev\'e property for the equation this function is needed to show that actually $y \to \infty$ at a movable singularity. In the context of the space of initial values, we can use $W$ to show that the line at infinity of $\mathbb{P}^2$, and, subsequently, the exceptional curves introduced by the blow-ups, are {\it inaccessible} for the flow of the vector field, except at the base points. Namely, one can check that the logarithmic derivative $\frac{W'}{W}$ remains finite on the line at infinity and the subsequent exceptional lines introduced by the cascade of blow-ups, except at the base points, whereas $W$ itself is infinite on these lines away from the base points. We do not write out the expressions for the function $W$ in all the coordinate charts as these become rather lengthy, but we note that this can be done routinely using computer algebra. In Section \ref{sec:cubicHam}, we demonstrate this process for the Hamiltonian system given there by explicitly writing out the respective functions $W$ where this is feasible. The following lemma, using a standard integral estimate, then shows that a solution cannot pass through any of the exceptional lines on which $W$ is infinite, i.e.\ they cannot be reached by analytic continuation of a solution along a finite-length curve.
\begin{lem}
\label{log_bounded}
Suppose a function $W(z)$ is defined in the neighbourhood $U$ of a point $z_\ast$ such that the logarithmic derivative $\frac{d}{d z} \log W = \frac{W'}{W}$ is bounded, say by $K$, on $U$. Let $\gamma \subset U$ be a finite-length curve from some point $z_0$ where $W(z_0)$ is finite and non-zero, ending in $z_\ast$. By the estimate,
\begin{equation*}
|\log W(z_\ast)| \leq |\log W(z_0)| + \int_{\gamma} \left| \frac{W'}{W} \right| ds \leq |\log W(z_0)| + K \cdot \text{len}(\gamma),
\end{equation*}
$\log W(z_\ast)$, and hence $W(z_\ast)$, is bounded.
\end{lem}
In other words, a solution continued along a curve $\gamma \subset \mathbb{C}$, ending in a movable singularity $z_\ast$, has to approach a base point, i.e.\ there exists at least one sequence $(z_n)_{n \in \mathbb{N}} \subset \gamma$, $z_n \to z_\ast$, such that the sequence of points $(u_i(z_n),v_i(z_n))$ or $(U_i(z_n),V_i(z_n))$ tends towards one of the base points. Otherwise we would be in the situation where the solution remains entirely in the region of the phase space where the equations define a regular initial value problem, i.e.\ no singularity can develop.
\end{rem}
The base point $\mathcal{P}^\pm_7: (U_6^\pm,V_6^\pm)=(0,0)$ in the second chart of system (\ref{system6}) is only present if the condition
\begin{equation}
\label{P2cond}
2\alpha'(z) \mp \beta''(z) \equiv 0,
\end{equation}
is {\it not} satisfied. This point can be blown up once further, resulting in a system with no further base points. However, the solutions of the resulting system give rise to logarithmic singularities. This behaviour is already visible in the systems $(u_6^{\pm},v_6^{\pm})$: integrating the first equation of system (\ref{system6}) with initial data on the exceptional curve after the last blow-up, $u_6^\pm=0$, and inserting this into the second equation, one obtains
\begin{equation*}
u_6^\pm = \pm (z-z_0) + O((z-z_0)^2), \quad v_6^\pm = (\pm 2\alpha'(z_0) - \beta''(z_0))\log(z-z_0) + O(z-z_0).
\end{equation*}
If the conditions (\ref{P2cond}) are satisfied, an additional cancellation of a factor of $u_6^\pm$, respectively $V_6^\pm$, occurs in the second, respectively third, equation of system (\ref{system6}), rendering this system a regular initial value problem on the exceptional curves $\mathcal{L}^\pm_6$. In this case the vector field is also transversal to these lines. With initial data $(u_6^\pm(z_0),v_6^\pm(z_0)) = (0,h)$, one obtains an analytic solution
\begin{equation*}
u_6^\pm(z) = \pm (z-z_0) + O((z-z_0)^2), \quad v_6^\pm = h + O(z-z_0),
\end{equation*}
translating into a simple pole for the original variable $y$.
The conditions (\ref{P2cond}) are exactly the resonance conditions obtained by the Painlev\'e test, which, combined, give $\beta''(z)=\alpha'(z)=0$. This is the case in which equation (\ref{P2general}) essentially reduces to the second Painlev\'e equation, up to a re-scaling of $z$. We denote by $\mathcal{I}_5(z) = \mathcal{I}_5^+(z) \cup \mathcal{I}_5^-(z) \subset \mathcal{S}_5(z)$ the {\it infinity set}, that is the proper transforms of the line $\mathcal{L}_0 \subset \mathbb{P}^2$ and the exceptional curves $\mathcal{L}_1$, $\mathcal{L}_2$, $\mathcal{L}_3^+$, $\mathcal{L}_3^-$, $\mathcal{L}_4^+$, $\mathcal{L}_4^-$, $\mathcal{L}_5^+$ and $\mathcal{L}_5^-$ from the first $5$ blow-ups of both cascades of base points. Then, at any point of the set $\mathcal{S}_6(z) \setminus \mathcal{I}'_5(z)$, the system (\ref{system6}) defines a regular initial value problem, which justifies the name `space of initial values' for this set.
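The resonance conditions (\ref{P2cond}) can also be cross-checked by a short symbolic Painlev\'e test. The sketch below assumes that (\ref{P2general}) is normalised as $y'' = 2y^3 + \beta(z)\,y + \alpha(z)$ (our reading of the normalisation; adjust the right-hand side otherwise), with the symbols `beta0, beta1, beta2` and `alpha0, alpha1` standing for the Taylor data of $\beta$ and $\alpha$ at $z_0$:

```python
import sympy as sp

t = sp.symbols('t')                           # t = z - z_0
b0, b1, b2 = sp.symbols('beta0 beta1 beta2')  # beta(z_0), beta'(z_0), beta''(z_0)
a0, a1 = sp.symbols('alpha0 alpha1')          # alpha(z_0), alpha'(z_0)
beta = b0 + b1*t + sp.Rational(1, 2)*b2*t**2
alpha = a0 + a1*t

obstructions = {}
for s in (1, -1):                             # the two leading-order branches y ~ s/t
    c1, c2, c3 = sp.symbols('c1 c2 c3')
    y = s/t + c1*t + c2*t**2 + c3*t**3        # the constant term is forced to vanish
    res = sp.expand(y.diff(t, 2) - 2*y**3 - beta*y - alpha)
    sol = sp.solve([res.coeff(t, -1), res.coeff(t, 0)], [c1, c2], dict=True)[0]
    # at the resonance j = 4 the coefficient c3 drops out; what remains is
    # the obstruction to a pole-type series:
    obstructions[s] = sp.expand(res.coeff(t, 1).subs(sol))
print(obstructions)
```

For each branch the printed obstruction vanishes precisely when $2\alpha'(z_0) \pm \beta''(z_0) = 0$, i.e.\ the conditions (\ref{P2cond}) up to the labelling of the two branches.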
Suppose now that a solution $y(z)$ of the dynamical system, defined in $\bigcup_{z \in \mathbb{C}} \mathcal{S}_6(z)$, has a movable singularity at some point $z_\ast$ and consider a finite-length path $\gamma \subset \Omega$ with endpoint $z_\ast$, where $\Omega \subset \mathbb{C}$ is a closed neighbourhood of $z_\ast$. We denote the lifted path, i.e.\ the path that the solution along this path traces out in the (extended) phase space, by $\Gamma \subset \bigcup_{z \in \Omega} \mathcal{S}_6(z)$. A priori $\Gamma$ can be of finite or infinite length. Let $(z_n)_{n \in \mathbb{N}} \subset \gamma$ be a sequence of points with $z_n \to z_\ast$. Since the phase space (including all the exceptional curves) is compact, there exists a subsequence $(z_{n_k})$ such that $\Gamma(z_{n_k})$ tends to a point $P_\ast \in \mathcal{S}_6(z_\ast)$. By Remark \ref{sequence_remark} and Lemma \ref{log_bounded}, we actually have $P_\ast \in \mathcal{S}_6(z_\ast) \setminus \mathcal{I}'_5(z_\ast)$. Then, by Lemma \ref{Painlemma} we can conclude that the solution, expressed in coordinates of some chart containing $P_\ast$, is analytic at the point $z_\ast$, and therefore in a neighbourhood of $z_\ast$. Thus, the solution converges to the point $P_\ast$ in this chart as $z \to z_\ast$, which corresponds to either an analytic point or a simple pole in the original variable $y$. This also excludes the possibility that $\Gamma$ has infinite length, as the curve $\Gamma$ is the analytic image of the finite-length curve $\gamma$ in this chart.
In summary, the procedure of blowing up the base points allows us to single out, from the class of equations (\ref{P2general}) with general coefficients, those equations for which the solutions are free from movable logarithmic singularities. Furthermore, in the absence of logarithmic singularities, the argument in the preceding paragraph essentially establishes an alternative method of proof for the Painlev\'e property of equation (\ref{P2}).
We mention that for the alternative (Okamoto) Hamiltonian $H_{\text{Ok}}$ for equation (\ref{P2}), a different sequence of base points leads to a related space of initial values. Here, there are originally two base points in $\mathbb{P}^2$, one at $(u,v)=(0,0)$, the other at $(U,V) = (0,0)$. One of them can be resolved by $3$ successive blow-ups, the other by blowing up $6$ times, the resulting resonance conditions being equivalent to the ones obtained above.
The procedure also works for the other Painlev\'e equations. For the equation $y''= 6y^2 + \alpha(z)$, $\alpha$ analytic in $z$, one finds, after compactifying the equation on $\mathbb{P}^2$ and blowing up a sequence of $9$ base points, the condition $\alpha''(z) \equiv 0$. If this condition is satisfied, the system defines a regular initial value problem on the exceptional curve from the $9$th blow-up, and the equation essentially reduces to the first Painlev\'e equation $P_I$. Moreover, in this case there is an analytic solution around each point of the space of initial values, which, in the original variable $y(z)$, corresponds to a point where the solution is either analytic or has a double pole. For detailed blow-up calculations see also the work by Duistermaat and Joshi \cite{joshi1} for the first Painlev\'e equation and Howes and Joshi \cite{joshi2} for the second Painlev\'e equation, both performed in so-called Boutroux coordinates.
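For the equation $y'' = 6y^2 + \alpha(z)$ just mentioned, the condition $\alpha''(z) \equiv 0$ is quickly reproduced by the Painlev\'e test; the following sketch (with `alpha0, alpha1, alpha2` denoting the Taylor data of $\alpha$ at $z_0$) computes the obstruction at the resonance:

```python
import sympy as sp

t = sp.symbols('t')                              # t = z - z_0
a0, a1, a2 = sp.symbols('alpha0 alpha1 alpha2')  # alpha(z_0), alpha'(z_0), alpha''(z_0)
alpha = a0 + a1*t + sp.Rational(1, 2)*a2*t**2
c2, c3, c4 = sp.symbols('c2 c3 c4')
y = t**-2 + c2*t**2 + c3*t**3 + c4*t**4          # the t^{-1}, t^0, t^1 coefficients vanish
res = sp.expand(y.diff(t, 2) - 6*y**2 - alpha)
sol = sp.solve([res.coeff(t, 0), res.coeff(t, 1)], [c2, c3], dict=True)[0]
# at the resonance the coefficient c4 drops out; the remainder must vanish:
obstruction = sp.expand(res.coeff(t, 2).subs(sol))
print(obstruction)                               # prints -alpha2/2
```

The obstruction is proportional to $\alpha''(z_0)$, so the series with a free double-pole parameter exists exactly when $\alpha''(z) \equiv 0$.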
\section{Differential equations with movable algebraic singularities}
In the papers \cite{Shimomura2007,Shimomura2008}, Shimomura studied certain classes of differential equations with what he called the {\it quasi-Painlev\'e property}. This is a generalisation of the Painlev\'e property in the sense that the solutions of the equations considered may have at worst {\it algebraic poles} as movable singularities.
\begin{defn}
By an algebraic pole we denote a singularity $z_\ast$ of $y(z)$, which, in a cut neighbourhood of $z_\ast$, can be represented by a convergent Puiseux series,
\begin{equation}
\label{algebraicpole}
y(z) = \sum_{j=0}^\infty c_j (z-z_\ast)^{(j-j_0)/n}, \quad j_0,n \in \mathbb{N}.
\end{equation}
For $n=1$ this includes the notion of an ordinary pole. If the number $n$ is chosen minimal and $n>1$, we say that $y$ has an $n$th-root type algebraic pole at $z_\ast$.
\end{defn}
Shimomura proved that, for the families of equations,
\begin{equation}
\label{ShimomuraEqns}
\begin{aligned}
P_I^{(k)}:& \quad y'' = \frac{2(2k+1)}{(2k-1)^2} y^{2k} +z \quad (k \in \mathbb{N}), \\
\qquad P_{\,I\!I}^{(k)}: & \quad y'' = \frac{k+1}{k^2} y^{2k+1} + zy + \alpha \quad (k \in \mathbb{N} \setminus \{2\}, \quad \alpha \in \mathbb{C}),
\end{aligned}
\end{equation}
the only types of movable singularities that can occur, by analytic continuation of a local solution along {\it finite-length paths}, are of the algebraic form (\ref{algebraicpole}). For $P_I^{(k)}$,
\begin{equation}
\label{PIk}
y(z) = (z-z_\ast)^{-\frac{2}{2k-1}} - \frac{(2k-1)^2}{2(6k-1)} z_\ast (z-z_\ast)^2 + h (z-z_\ast)^{\frac{4k}{2k-1}} + \sum_{j} c_j (z-z_\ast)^{\frac{j}{2k-1}},
\end{equation}
where $h \in \mathbb{C}$ is an integration constant, and for $P_{\,I\!I}^{(k)}$,
\begin{equation}
\label{PIIk}
y(z) = \omega_k (z-z_\ast)^{-\frac{1}{k}} - \frac{k \omega_k z_\ast}{6} (z-z_\ast)^{2-\frac{1}{k}} - \frac{k^2 \alpha}{3k+1} (z-z_\ast)^2 + h (z-z_\ast)^{2+\frac{1}{k}} + \sum_{j} c_j (z-z_\ast)^{\frac{j}{k}},
\end{equation}
where again $h$ is an integration constant and $\omega_k \in \{1,e^{i\pi / k}\}$, i.e.\ in this case there are two essentially different types of leading-order behaviour at the singularities. The proofs in \cite{Shimomura2007,Shimomura2008} for the quasi-Painlev\'e property of these equations rely on methods similar to those used in the proofs of the Painlev\'e property for the Painlev\'e equations in \cite{Shimomura2003}. In fact, for $k=1$, the equations $P_I^{(k)}$ and $P_{\,I\!I}^{(k)}$ reduce to the first and second Painlev\'e equations, respectively.
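The expansions (\ref{PIk}) and (\ref{PIIk}) can be checked order by order with computer algebra. As an illustration for (\ref{PIk}) with $k=2$, i.e.\ $y'' = \frac{10}{9} y^4 + z$, we substitute the displayed terms and verify that all low orders of the residual cancel; working with $w = (z-z_\ast)^{1/3}$ keeps all exponents integral:

```python
import sympy as sp

w = sp.symbols('w', positive=True)               # w = (z - z_ast)^{1/3} on a branch cut
zs, h = sp.symbols('z_ast h')
k = 2                                            # test case P_I^{(2)}: y'' = (10/9) y^4 + z
t = w**3                                         # t = z - z_ast
aN = sp.Rational(2*(2*k + 1), (2*k - 1)**2)      # = 10/9
c2 = -sp.Rational((2*k - 1)**2, 2*(6*k - 1))*zs  # = -(9/22) z_ast
# first displayed terms of the expansion for k = 2:
y = w**(-2) + c2*w**6 + h*w**8
dy = y.diff(w)/t.diff(w)                         # d/dz via the chain rule
d2y = dy.diff(w)/t.diff(w)
res = sp.expand(d2y - aN*y**4 - (zs + t))
# every order of the residual up to and including w^2 cancels; in particular
# the terms linear in h cancel at order w^2, so h is a genuine free constant:
print([res.coeff(w, n) for n in range(-8, 3)])
```

The first non-vanishing order of the residual is $w^3$, which is accounted for by the next term of the series tail.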
Already in an earlier (1953) paper, R.A. Smith considered the class of equations
\begin{equation}
\label{SmithEqn}
y''(z) + f(y) y'(z) + g(y) = h(z),
\end{equation}
where $f$ and $g$ are polynomials in $y$. He showed that, under the condition $\deg(g) < \deg(f)$, the only types of movable singularities that can occur by analytic continuation along finite-length paths are algebraic poles of the form
\begin{equation}
\label{smithexpansion}
y(z) = \sum_{j=0}^\infty c_j (z-z_0)^{(j-1)/n}, \quad n = \deg(f).
\end{equation}
Here, as in the cases of equations $P_I^{(k)}$ and $P_{\,I\!I}^{(k)}$, it is easy to verify the existence of formal series solutions of the form (\ref{smithexpansion}), (\ref{PIk}) or (\ref{PIIk}), respectively. Namely, inserting a formal series into the respective equation, one can determine the coefficients recursively without obstruction. A harder problem is to show that {\it all} movable singularities are of this form. As mentioned above, this is similar to the difference in difficulty of showing that an equation passes the Painlev\'e test and showing that the equation has the Painlev\'e property (if it does). The problem we pose is, for a given differential equation, to determine a list of possible types of movable singularities that can occur in solutions of the equation and show that these are the only ones. In the cases of the equations by Smith (\ref{SmithEqn}) and Shimomura (\ref{ShimomuraEqns}), this was shown under the proviso that the paths along which a singularity is reached by analytic continuation are of finite length. In \cite{smith1}, Smith gave an example of a solution with a singularity not of the form (\ref{smithexpansion}), which can be obtained only by analytic continuation of a certain solution along a path of infinite length. This singularity, at which the solution behaves very differently, is an accumulation point of algebraic singularities of the form (\ref{smithexpansion}).
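The first, easier step of verifying the formal series is mechanical. For a concrete instance of (\ref{SmithEqn}) that we choose purely for illustration, $y'' + 3y^2 y' + y = 0$ (so $f(y) = 3y^2$, $g(y) = y$, $\deg(g) < \deg(f)$, and $h \equiv 0$, which does not affect the leading balance), the leading coefficient of the expansion (\ref{smithexpansion}) with $n = 2$ is fixed as follows:

```python
import sympy as sp

c = sp.symbols('c')
w = sp.symbols('w', positive=True)   # w = (z - z_0)^{1/2} on a branch
t = w**2                             # t = z - z_0
y = c/w                              # leading term c (z - z_0)^{-1/2}: n = deg(f) = 2
dy = y.diff(w)/t.diff(w)             # d/dz via the chain rule
d2y = dy.diff(w)/t.diff(w)
# residual of y'' + f(y) y' + g(y) = 0 with f(y) = 3y^2, g(y) = y:
res = sp.expand(d2y + 3*y**2*dy + y)
# the dominant balance at order w^{-5}, i.e. (z - z_0)^{-5/2}, fixes c:
roots = sp.solve(res.coeff(w, -5), c)
print(roots)
```

The nonzero roots $c = \pm 1/\sqrt{2}$ give the two branches of square-root type algebraic poles; the root $c = 0$ is spurious.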
Departing from the works by Smith and Shimomura, Filipuk and Halburd \cite{filipukhalburd1,filipukhalburd2,filipukhalburd3} studied more general classes of differential equations with movable algebraic poles. In \cite{filipukhalburd1}, the following class of second-order equations is studied,
\begin{equation}
\label{2ndorderPoly}
y''(z) = \sum_{n=0}^N a_n(z) y^n,
\end{equation}
where the right-hand side is a polynomial in $y$ with analytic coefficients in some domain $\Omega \subset \mathbb{C}$. After a simple transformation, this equation can be brought into the normalised form
\begin{equation}
\label{polyNormal}
y''(z) = \tilde{a}_N y^N + \sum_{n=0}^{N-2} \tilde{a}_n(z) y^n,
\end{equation}
with a conveniently chosen constant $\tilde{a}_N \in \mathbb{C}$, and where the $y^{N-1}$ term is now absent. By inserting into equation (\ref{polyNormal}) a formal series expansion of the form
\begin{equation}
\label{formalseries}
y(z) = \sum_{j=0}^\infty c_j (z-z_0)^{(j-2)/(N-1)},
\end{equation}
and recursively computing the coefficients $c_j$, one finds a necessary condition for the singularities of the solution to be algebraic. Namely, the recurrence relation is of the form
\begin{equation}
\label{Nrecurrence}
(j+N-1)(j-2N-2) c_j = P_j(c_{0},c_{1},\dots ,c_{j-1}), \quad j=1,2,\dots,
\end{equation}
where each $P_j$, $j=1,2,\dots$, is a polynomial in all the previous coefficients $c_0,\dots,c_{j-1}$. The coefficient $c_{2N+2}$ cannot be determined in this way and the recurrence relation (\ref{Nrecurrence}) is satisfied if and only if $P_{2N+2}$ is identically zero, in which case $c_{2N+2}$ is a free parameter. This {\it resonance condition}, $P_{2N+2} \equiv 0$, is necessary for the existence of the formal algebraic series solutions (\ref{formalseries}). Note that each formal series solution (\ref{formalseries}), with distinct leading-order behaviour, gives rise to one resonance condition. By expanding the coefficients $\tilde{a}_n(z)$, $n=0,\dots,N-2$, in Taylor series, one can show that the resonance conditions are equivalent to $\tilde{a}_{N-2}''(z)=0$ for even $N$, plus an additional differential relation between the coefficient functions in the case when $N$ is odd. The main result in \cite{filipukhalburd1} is that all resonance conditions being satisfied is also {\it sufficient} for all movable singularities of any solution of the equation, reachable by analytic continuation along finite length curves, to be algebraic poles of the form (\ref{formalseries}). This is essentially achieved in two steps. First, by constructing a certain auxiliary function, or approximate first integral for the equation, similar to the function $W$ in (\ref{W_P2}), which remains bounded in the vicinity of any movable singularity. Secondly, by formally constructing regular initial value problems from these bounded quantities in certain transformed variables. Regarding the second step, we show in this article how these regular initial value problems can be obtained directly by constructing the space of initial values for the equation. Although resulting in lengthy expressions, best dealt with using computer algebra, this process yields explicit equations, thus almost automating the search for the regular initial value problems. 
Although the auxiliary functions from the first step above are not required to compute the space of initial values, we will still need them to show that certain lines in this space cannot be reached by any solution.
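The quadratic factor on the left-hand side of (\ref{Nrecurrence}) can itself be recovered in a few lines of computer algebra: one balances the leading orders of (\ref{polyNormal}) and linearises about the leading term, the $j$th Puiseux exponent being $(j-2)/(N-1)$. A sketch with $N$ kept symbolic:

```python
import sympy as sp

j, N = sp.symbols('j N', positive=True)
p = -2/(N - 1)            # leading exponent: y ~ c (z-z_0)^p balances y'' = a_N y^N
aNcN = p*(p - 1)          # the balance gives a_N c^{N-1} = p (p - 1)
sigma = (j - 2)/(N - 1)   # exponent of the j-th term of the Puiseux series
# indicial polynomial of the linearised equation w'' = N a_N y^{N-1} w,
# cleared of the denominator (N-1)^2:
indicial = sp.simplify((sigma*(sigma - 1) - N*aNcN)*(N - 1)**2)
print(sp.factor(indicial))
```

The factored output is $(j+N-1)(j-2N-2)$, vanishing exactly at the resonance $j = 2N+2$.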
In the following sections, we will construct the analogue of the space of initial values for some of the equations in the class (\ref{polyNormal}), namely the cases $N=4$ and $N=5$, explicitly computing the regular initial value problem at each point of this compact space, away from the exceptional divisors introduced by the blow-ups. We will need the auxiliary functions mentioned above to show that the exceptional divisors are inaccessible for the solution, using Lemma \ref{log_bounded}. To obtain a regular initial value problem, an additional change of the dependent and independent variables is necessary after the ultimate blow-up. Furthermore, with the approach in this article we can show that, for these equations and also for the Hamiltonian systems considered in Section \ref{sec:HamiltonianSystems}, all finitely reachable movable singularities are algebraic poles, i.e.\ these equations indeed have the quasi-Painlev\'e property. This is due to the fact that for these equations, blowing up the base points is a finite procedure, i.e.\ the sequence of base points terminates and the indeterminacies can be resolved completely. We will see that, in the resulting compact space, a solution approaching the singularity has a limit point somewhere on the exceptional curve after the last blow-up, where the system defines a regular initial value problem, after a change in dependent and independent variable. By Lemma \ref{Painlemma} we can conclude that there exists an analytic solution near this point, which, transformed back into the original variables, results in an algebraic pole.
The class of second-order equations (\ref{2ndorderPoly}) is contained in a wider class of polynomial Hamiltonian systems studied by one of the authors \cite{Kecker2016},
\begin{equation*}
H(z,x,y) = x^M + y^N + \sum_{0 < i N + j M < MN} \alpha_{ij}(z) x^i y^j,
\end{equation*}
where the coefficient functions $\alpha_{ij}(z)$ are analytic in some common domain $\Omega \subset \mathbb{C}$. Also here, under a number of resonance conditions, which can be obtained either through a Painlev\'e test involving algebraic series, or through constructing the analogue of the space of initial values, the solutions of a system in this class can be shown to have only certain algebraic poles as movable singularities.
In the case where some of the resonance conditions are not satisfied, a formal algebraic series expansion with the corresponding leading-order behaviour does not exist. This can be remedied only by the introduction of logarithmic terms $\log(z-z_0)$ in the series expansions of the solutions. In this case, performing the sequence of blow-ups leads to a space in which, although the indeterminacies of the vector field defined by the equation have been resolved, the system in general does not define regular initial value problems at any point of the infinity set. With the procedure described in this article we can recover the conditions under which the respective classes of equations are free from logarithmic branch points.
\section{Second-order equation with polynomial right-hand side of degree $4$}
\label{sec:degree4}
We will now apply the procedure outlined in Section \ref{sec:InitialValueSpace} to the class of equations
\begin{equation}
\label{degree4eqn}
y''(z) = \frac{5}{2} y^4 + \alpha(z) y^2 + \beta(z) y + \gamma(z),
\end{equation}
extending the phase space of $(y,y')=(y,x)$ from $\mathbb{C}^2$ to $\mathbb{P}^2$ and resolving the base points by successive blow-ups.
The factor of $\frac{5}{2}$ is chosen for convenience here to avoid large numerical constants in the calculations, and any $y^3$ term has been transformed away. As shown in \cite{filipukhalburd1}, a necessary and sufficient condition for all movable singularities of this equation to be algebraic poles is $\alpha''(z) \equiv 0$, i.e.\ $\alpha$ is either a linear function in $z$ or constant. This result was obtained by introducing an auxiliary function, which in our normalisation of the equation is given by
\begin{equation}
\label{deg4_aux}
W = \frac{1}{2} (y')^2 - \frac{1}{2} y^5 - \frac{\alpha(z)}{3} y^3 - \frac{\beta(z)}{2} y^2 - \gamma(z) y + \left( \sum_{k=1}^3 \frac{\xi_k(z)}{y^k} \right) y',
\end{equation}
which is essentially the Hamiltonian of the equation plus corrections given in terms of the functions $\xi_1,\xi_2,\xi_3$. By the process described in \cite{filipukhalburd1}, these can be computed as
\begin{equation*}
\xi_1(z) = \frac{2}{9} \alpha'(z), \quad \xi_2(z) = \beta'(z), \quad \xi_3(z) = \frac{4}{27} \alpha(z) \alpha'(z) - 2\gamma'(z),
\end{equation*}
in which case $W$ remains bounded at any movable singularity. This is established by showing that $W$ satisfies a first-order differential equation of the form
\begin{equation*}
W' = P(z,1/y) W + Q(z,1/y)y' + R(z,1/y),
\end{equation*}
where $P$, $Q$ and $R$ are polynomials in their last argument.
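The stated values of $\xi_1, \xi_2, \xi_3$ and the form of this first-order equation can be verified symbolically: differentiate $W$ along the flow of (\ref{degree4eqn}), eliminate $(y')^2$ using the definition of $W$ itself, and check that no positive powers of $y$ survive. A sketch (we write `x` for $y'$ and `W` for the value of the auxiliary function):

```python
import sympy as sp

z, y, x, W = sp.symbols('z y x W')       # x stands for y'
al, be, ga = (sp.Function(s)(z) for s in ('alpha', 'beta', 'gamma'))
F = sp.Rational(5, 2)*y**4 + al*y**2 + be*y + ga          # y'' from the equation
xi = [sp.Rational(2, 9)*al.diff(z),
      be.diff(z),
      sp.Rational(4, 27)*al*al.diff(z) - 2*ga.diff(z)]    # xi_1, xi_2, xi_3
corr = sum(xi[k]/y**(k + 1) for k in range(3))            # sum_k xi_k y^{-k}
Wdef = x**2/2 - y**5/2 - al/3*y**3 - be/2*y**2 - ga*y + corr*x

# total z-derivative along the flow y' = x, x' = F:
dW = sp.expand(Wdef.diff(z) + Wdef.diff(y)*x + Wdef.diff(x)*F)
# eliminate x^2 using the definition W = Wdef itself:
x2 = 2*W + y**5 + 2*al/3*y**3 + be*y**2 + 2*ga*y - 2*corr*x
dW = sp.expand(dW.subs(x**2, x2))
# no positive power of y may survive in W' = P(z,1/y) W + Q(z,1/y) x + R(z,1/y):
bad = [n for n in range(1, 6) if sp.simplify(dW.coeff(y, n)) != 0]
print(bad)
```

The printed list is empty exactly because of the stated choice of $\xi_1,\xi_2,\xi_3$; any other choice leaves growing terms in $W'$.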
We will now recover the condition $\alpha''(z)=0$ for the existence of algebraic singularities using an appropriate cascade of blow-ups. After that, we will use the function $W$ defined in (\ref{deg4_aux}), in conjunction with Lemma \ref{log_bounded}, to show that certain exceptional curves arising from the blow-ups cannot be reached by the solution. This will allow us to conclude that the algebraic singularities are the only ones that can occur in the solutions of the equation.
To perform the blow-ups, the equation is first extended to complex projective space $\mathbb{P}^2$ by introducing homogeneous coordinates as in (\ref{homcoords}) above. The system of equations in the new coordinates is as follows:
\begin{align*}
u'(z) &=-\frac{2 u^2 v^2 \alpha (z)+2 u^3 v \beta (z)+2 u^4 \gamma (z)+5 v^4}{2 u^2}, \\
v'(z) &=-\frac{2 u^2 v^3 \alpha(z)+2 u^3 v^2 \beta (z)+2 u^4 v \gamma (z)-2 u^3+5 v^5}{2 u^3}, \\
U'(z) &=-\frac{2 U^2 V^3-2 V^2 \alpha (z)-2 V^3 \beta (z)-2 V^4 \gamma (z)-5}{2 V^3}, \\
V'(z) &=-U V.
\end{align*}
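The first two equations of this system can be reproduced mechanically from the chart map $u = 1/x$, $v = y/x$ and the flow $(y, x)' = (x, F)$, with $F$ the right-hand side of (\ref{degree4eqn}). A sketch (the coefficients are frozen as plain symbols, since no explicit $z$-derivatives enter these two equations):

```python
import sympy as sp

Y, X, u, v = sp.symbols('Y X u v')        # Y = y, X = y' = x
a, b, g = sp.symbols('alpha beta gamma')  # coefficient functions, frozen at a point
F = sp.Rational(5, 2)*Y**4 + a*Y**2 + b*Y + g   # right-hand side of the equation

# first affine chart of P^2: u = 1/x, v = y/x; chain rule with (Y, X)' = (X, F):
du = -F/X**2                  # d/dz (1/X)
dv = 1 - Y*F/X**2             # d/dz (Y/X) = X/X - Y X'/X^2
du_uv = sp.cancel(du.subs([(Y, v/u), (X, 1/u)]))
dv_uv = sp.cancel(dv.subs([(Y, v/u), (X, 1/u)]))
print(du_uv)
print(dv_uv)
```

The printed rational functions agree with the $u'$ and $v'$ equations above; the second chart $(U, V)$ is treated in the same way.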
We see that there is an initial base point in the first chart at $(u,v) = (0,0)$. This indeterminacy can be resolved by a cascade of $14$ blow-ups, after which one finds regular initial value problems on the exceptional curve introduced by the last blow-up, but only after an additional change of dependent and independent variables. The cascade of base points is as follows:
\begin{align*}
{} & \mathcal{P}_1: (u,v) = \left(\frac{1}{x},\frac{y}{x} \right) = (0,0) \quad \leftarrow \quad \mathcal{P}_2: (U_1,V_1) = \left( \frac{1}{y}, \frac{y}{x} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}_3: (u_2,v_2) = \left( \frac{1}{y}, \frac{y^2}{x}\right) = (0,0) \quad
\leftarrow \quad \mathcal{P}_4: (U_3,V_3) = \left( \frac{x}{y^3}, \frac{y^2}{x} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}_5: (u_4,v_4) = \left( \frac{x}{y^3} , \frac{y^5}{x^2} \right) = (0,1) \quad \leftarrow \quad \mathcal{P}_6: (u_5,v_5) = \left( \frac{x}{y^3}, \frac{y^3 \left(y^5 - x^2 \right)}{x^3} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}_7: (u_6,v_6) = \left( \frac{x}{y^3}, \frac{y^6 \left(y^5- x^2\right)}{x^4} \right) = (0,0) \quad \leftarrow \quad \mathcal{P}_8: (u_7,v_7) = \left( \frac{x}{y^3}, \frac{y^9 \left( y^5 - x^2 \right)}{x^5} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}_9: (u_8,v_8) = \left( \frac{x}{y^3}, \frac{y^{12} \left(y^5-x^2\right)}{x^6} \right) = \left( 0, -\frac{2}{3}\alpha(z) \right) \\
\leftarrow \quad & \mathcal{P}_{10}: (u_9,v_9) = \left( \frac{x}{y^3} , \frac{y^3 \left(2 x^6 \alpha (z)-3 x^2 y^{12}+3 y^{17}\right)}{3 x^7} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}_{11}: (u_{10},v_{10}) = \left( \frac{x}{y^3}, \frac{y^6 \left(2 x^6 \alpha (z)-3 x^2 y^{12}+3 y^{17}\right)}{3 x^8} \right) = (0,-\beta(z)) \\
\leftarrow \quad & \mathcal{P}_{12}: (u_{11},v_{11}) = \left( \frac{x}{y^3}, \frac{y^3 \left(3 x^8 \beta (z)+2 x^6 y^6 \alpha (z)-3 x^2 y^{18}+3 y^{23}\right)}{3 x^9} \right) = \left( 0, \frac{4}{9}\alpha'(z) \right) \\
\leftarrow \quad & \mathcal{P}_{13}: (u_{12},v_{12}) = \left( 0, \frac{4}{3}\alpha(z)^2 - 2 \gamma(z) \right) \\ & = \left( \frac{x}{y^3}, \frac{y^3 \left(6 x^6 y^9 \alpha (z)+9 x^8 y^3 \beta (z)-4 x^9 \alpha '(z)-9 x^2 y^{21}+9 y^{26}\right)}{9 x^{10}} \right) \\
\leftarrow \quad & \mathcal{P}_{14}: (u_{13},v_{13}) = (0,2\beta'(z)) \\ & = \left( \frac{x}{y^3}, -\frac{y^3 \left(12 x^{10} \alpha (z)^2-18 x^{10} \gamma (z)+4 x^9 y^3 \alpha '(z)-6 x^6 y^{12} \alpha (z)-9 x^8 y^6 \beta
(z)+9 x^2 y^{24}-9 y^{29}\right)}{9 x^{11}} \right).
\end{align*}
After blowing up $\mathcal{P}_{14}$, the differential system is of the form
\begin{equation}
\label{system14}
\begin{aligned}
u_{14}' &= \frac{-81 + p_{14,1}(z,u_{14},v_{14})}{2 u_{14}^2 \cdot d(z,u_{14},v_{14})^2}, \\
v_{14}' &= \frac{-36 \alpha''(z) + p_{14,2}(z,u_{14},v_{14})}{u_{14}^3 \cdot d(z,u_{14},v_{14})^2}, \\
U_{14}' &= \frac{36 \alpha''(z) + P_{14,1}(z,U_{14},V_{14})}{U_{14} V_{14}^3 \cdot D(z,U_{14},V_{14})^2}, \\
V_{14}' &= \frac{-81 + P_{14,2}(z,U_{14},V_{14})}{2 U_{14}^3 V_{14}^2 \cdot D(z,U_{14},V_{14})^2},
\end{aligned}
\end{equation}
where $p_{14,i}$ and $P_{14,i}$, $i=1,2$, are polynomials in the variables $u_{14},v_{14}$ and $U_{14},V_{14}$, respectively, so that on the exceptional curve $L_{14}: \{u_{14}=0\} \cup \{V_{14}=0\}$, introduced by the last blow-up, we have $p_{14,i}(z,0,v_{14}) = 0 = P_{14,i}(z,U_{14},0)$.
The zero set of the denominators $d(z,u_{14},v_{14})$ and $D(z,U_{14},V_{14})$ of (\ref{system14}) is also called the {\it exceptional divisor}, representing the set $\mathcal{I}_{13}'(z)$ in these coordinates, that is the union of the proper transforms of the exceptional curves $\mathcal{L}_1,\dots,\mathcal{L}_{13}$ introduced by the first $13$ blow-ups together with the line at infinity $\mathcal{L}_0 = I \setminus \{\mathcal{P}_1\} \subset \mathbb{P}^2$,
\begin{equation*}
\begin{aligned}
d &= 9+9 u_{14}^{10} v_{14}-6 u_{14}^4 \alpha +12 u_{14}^8 \alpha ^2-9 u_{14}^6 \beta -18 u_{14}^8 \gamma +4
u_{14}^7 \alpha '+18 u_{14}^9 \beta', \\
D &= 9+9 U_{14}^9 V_{14}^{10}-6 U_{14}^4 V_{14}^4 \alpha +12 U_{14}^8 V_{14}^8 \alpha ^2-9 U_{14}^6 V_{14}^6 \beta -18 U_{14}^8 V_{14}^8 \gamma +4 U_{14}^7 V_{14}^7 \alpha '+18 U_{14}^9 V_{14}^9 \beta'.
\end{aligned}
\end{equation*}
Since all the blow-ups are birational transformations, one can always solve for the original coordinates, so we can give the dependence of $y$ on $u_{14},v_{14}$ as follows:
\begin{equation}
\label{coord14}
y = u_{14}^{-2} \left(1-\frac{2}{3} u_{14}^4 \alpha +u_{14}^6 \left(-\beta +u_{14} \left(\frac{4 \alpha'}{9}+\frac{1}{3} u_{14} \left(4 \alpha^2-6 \gamma +3 u_{14} \left(u_{14} v_{14}+2 \beta'\right)\right)\right)\right)\right)^{-1}.
\end{equation}
Integrating the system (\ref{system14}) when $\alpha''(z) \neq 0$ would result in logarithmic behaviour for $v_{14}$, since, to leading order,
\begin{equation*}
u_{14} = \sqrt[3]{-\frac{3}{2}}(z-z_0)^{1/3} + O \left( (z-z_0)^{2/3} \right),
\end{equation*}
and inserting this into the second equation of (\ref{system14}) would result in
\begin{equation*}
v_{14}' = \frac{8}{27} \frac{\alpha''(z)}{z-z_0} + O\left( (z-z_0)^{-2/3} \right),
\end{equation*}
from which the logarithmic behaviour $v_{14} = \frac{8}{27} \alpha''(z_0) \log(z-z_0) + O(z-z_0)$ follows. As discussed above, $\alpha''(z) \equiv 0$ is the resonance condition under which the system admits algebraic series expansions. In this case, a cancellation of one factor of $u_{14}$, resp. $V_{14}$, occurs in the second and third equations of system (\ref{system14}), which becomes
\begin{equation}
\label{afterP14blowup_res}
\begin{aligned}
u_{14}' &= \frac{-81 + p_{14,1}(z,u_{14},v_{14})}{2 u_{14}^2 \cdot d(z,u_{14},v_{14})^2}, \\
v_{14}' &= \frac{72 \alpha(z) \alpha'(z) + 162 \gamma'(z) + \tilde{p}_{14,2}(z,u_{14},v_{14})}{u_{14}^2 \cdot d(z,u_{14},v_{14})^2},\\
U_{14}' &= \frac{72 \alpha(z) \alpha'(z) + 162 \gamma'(z) + \tilde{P}_{14,1}(z,U_{14},V_{14})}{U_{14} V_{14}^2 \cdot D(z,U_{14},V_{14})^2}, \\
V_{14}' &= \frac{-81 + P_{14,2}(z,U_{14},V_{14})}{2 U_{14}^3 V_{14}^2 \cdot D(z,U_{14},V_{14})^2},
\end{aligned}
\end{equation}
where $\tilde{p}_{14,2}$ and $\tilde{P}_{14,1}$ are polynomials in $u_{14},v_{14}$ resp. $U_{14},V_{14}$ with $\tilde{p}_{14,2}(z,0,v_{14})=0=\tilde{P}_{14,1}(z,U_{14},0)$. In this case, the vector field becomes transversal to the exceptional line $\mathcal{L}_{14} : \{u_{14}=0\} \cup \{V_{14}=0\}$, and the system can be integrated, to leading order, e.g.\ in the coordinates $u_{14},v_{14}$ as follows,
\begin{equation*}
\begin{aligned}
u_{14} &= \sqrt[3]{-\frac{3}{2}}(z-z_0)^{1/3} + O\left( (z-z_0)^{2/3} \right), \\
v_{14} &= h + \sqrt[3]{12} \left(\frac{8}{9} \alpha(z_0)\alpha'(z_0) + 2\gamma'(z_0)\right) (z-z_0)^{1/3} + O((z-z_0)^{2/3}),
\end{aligned}
\end{equation*}
where $h$ is the second integration constant (besides $z_0$). In this way, every point on the line $\mathcal{L}_{14}$ introduced by the last blow-up, parametrised by $(u_{14},v_{14})=(0,h)$, gives rise to an algebraic series solution. Denoting by $\mathcal{S}_{14}(z)$ the space obtained by blowing up the sequence of $14$ base points, which are themselves $z$-dependent, and by $\mathcal{I}'_{13}(z)$ the set defined above, the analogue of the space of initial values can be defined as $\mathcal{S}_{14}(z) \setminus \mathcal{I}'_{13}(z)$.
Thus, away from the set $\mathcal{I}_{13}'(z)$, every point in the space we have constructed gives rise to an initial value problem with either analytic solutions or power series solutions in $(z-z_0)^{1/3}$. The latter solutions are transversal to the exceptional curve $\mathcal{L}_{14}$ from the last blow-up.
\begin{rem}
\label{system14remark}
In addition to the blow-up calculations for the vector field, it is important to show that, in each step, the solution cannot pass through the exceptional curve $\mathcal{L}_i$ at any point other than the base point $\mathcal{P}_{i+1}$. This is achieved by re-expressing the auxiliary function $W$ from (\ref{deg4_aux}) in the blown-up coordinates and verifying that the logarithmic derivative $\frac{W'}{W}$ is bounded in the neighbourhood of any point on the exceptional curve $\mathcal{L}_i \setminus \{ \mathcal{P}_{i+1} \}$, whereas $W$ itself is infinite there. Lemma \ref{log_bounded} then shows that the exceptional curve is inaccessible for the flow of the vector field other than at the base point. This makes precise the intuitive notion that after each blow-up, the resulting vector field is infinite on the exceptional curve $\mathcal{L}_i$ and becomes tangent to this curve away from the base point $\mathcal{P}_{i+1}$. Although we do not give the detailed (and lengthy) expressions for $\frac{d}{dz} \log(W)$ here, we note that the above-mentioned properties can be checked routinely using computer algebra.
\end{rem}
We can now conclude that, in the case where the condition $\alpha''(z)=0$ is satisfied, the only movable singularities are algebraic.
\begin{prop}
\label{degree4prop}
The class of equations
\begin{equation*}
y'' = y^4 + (az+b) y^2 + \beta(z) y + \gamma(z),
\end{equation*}
where $\beta$ and $\gamma$ are analytic functions and $a,b \in \mathbb{C}$, has the quasi-Painlev\'e property, with cubic-root type algebraic poles.
\end{prop}
\begin{proof}
Exchanging the roles of the dependent and independent variables, the system (\ref{afterP14blowup_res}) becomes
\begin{equation}
\label{system14_z}
\begin{aligned}
\frac{d z}{d u_{14}} &= \frac{2 u_{14}^2 \cdot d(z,u_{14},v_{14})^2}{-81 + p_{14,1}(z,u_{14},v_{14})}, \\
\frac{d v_{14}}{d u_{14}} &= 2 \cdot \frac{72 \alpha(z) \alpha'(z) + 162 \gamma'(z) + \tilde{p}_{14,2}(z,u_{14},v_{14})}{-81 + p_{14,1}(z,u_{14},v_{14})}, \\
\frac{d z}{d V_{14}} &= \frac{2 U_{14}^3 V_{14}^2 \cdot D(z,U_{14},V_{14})^2}{-81 + P_{14,2}(z,U_{14},V_{14})}, \\
\frac{d U_{14}}{d V_{14}} &= 2U_{14}^2 \cdot \frac{72 \alpha(z) \alpha'(z) + 162 \gamma'(z) + \tilde{P}_{14,1}(z,U_{14},V_{14})}{-81 + P_{14,2}(z,U_{14},V_{14})},
\end{aligned}
\end{equation}
which, for initial data $(z,u_{14},v_{14})=(z_0,0,h)$, resp.\ $(z,U_{14},V_{14}) = (z_0,H,0)$, on the exceptional curve $\mathcal{L}_{14}$, defines a regular initial value problem for $(z,v_{14})$ in $u_{14}$ and for $(z,U_{14})$ in $V_{14}$, respectively. Let $\gamma \subset \mathbb{C}$ be a finite-length curve ending in a movable singularity $z_\ast$, and denote by $\Gamma(z)$ the lifted curve in the phase space. Let $(z_n) \subset \gamma$, $z_n \to z_\ast$, be a sequence along the curve $\gamma$. Since the extended phase space (with all the exceptional curves) is compact, there exists a subsequence $(z_{n_k})$ such that the lifted sequence $\Gamma(z_{n_k})$ converges to a point $P_\ast \in \mathcal{S}_{14}(z_\ast)$. By Remark \ref{system14remark}, we actually have $P_\ast \in \mathcal{S}_{14}(z_\ast) \setminus \mathcal{I}_{13}'(z_\ast)$. If $P_\ast \notin \mathcal{L}_{14}$, the original system would define a regular initial value problem at $P_\ast$, so the solution would be analytic at $z_\ast$, contradicting the assumption of a singularity there. Hence, we must have $P_\ast \in \mathcal{L}_{14}$. But here system (\ref{system14_z}) has an analytic solution $(z,v_{14})$ of the form
\begin{equation*}
z(u_{14}) = z_\ast - \frac{2}{3} u_{14}^3 + O(u_{14}^4), \quad v_{14}(u_{14}) = h + 2\left(\frac{8}{9} \alpha(z_\ast)\alpha'(z_\ast) + 2\gamma'(z_\ast)\right) u_{14} + O(u_{14}^2),
\end{equation*}
or similar for $(z,U_{14})$. Inverting these power series one obtains an algebraic series expansion for $(u_{14},v_{14})$ in terms of $(z-z_\ast)^{1/3}$, which by (\ref{coord14}) corresponds to a cubic-root type algebraic pole in the original variable $y(z)$.
\end{proof}
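The series inversion at the end of the proof can be sanity-checked at leading order with a short computer-algebra sketch (an illustration, not part of the proof; only the leading term of $z(u_{14})$ is used):

```python
import sympy as sp

u, z, zs = sp.symbols('u z z_ast')

# Leading-order relation from the regular initial value problem:
# z(u) = z_* - (2/3) u^3
z_of_u = zs - sp.Rational(2, 3) * u**3

# Inverting the leading term: u^3 = (3/2)(z_* - z),
# i.e. u = ((3/2)(z_* - z))^(1/3), a cubic-root branch point
u_of_z = (sp.Rational(3, 2) * (zs - z))**sp.Rational(1, 3)

# Substituting back must return z exactly at this order
check = sp.simplify(z_of_u.subs(u, u_of_z) - z)
print(check)  # 0
```

Since $y = 1/u_{14} \cdot (\dots)$ by (\ref{coord14}), this cubic-root branch point for $u_{14}$ is what produces the cubic-root type algebraic pole of $y(z)$.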
\section{Second-order equation with polynomial right-hand side of degree $5$}
\label{sec:degree5}
As the lowest degree example of the equation $y'' = a_N y^N + \sum_{n=0}^{N-1} a_n(z) y^n$ with odd $N>3$ we consider the case $N=5$,
\begin{equation}
\label{degree5eqn}
y''(z) = 3y^5 + \alpha(z)y^3 + \beta(z)y^2 + \gamma(z)y + \delta(z),
\end{equation}
where the coefficient $a_5=3$ is chosen for computational convenience. As was shown in \cite{filipukhalburd1}, in the odd $N$ case, two resonance conditions are necessary and sufficient for the solutions of the equation to have algebraic poles as movable singularities. These can be found by inserting the formal series expansion
\begin{equation}
\label{degree5sol}
y(z) = \sum_{j=0}^\infty c_j (z-z_0)^{(j-1)/2}
\end{equation}
into equation (\ref{degree5eqn}) and computing, for each possible leading coefficient $c_0$, the obstruction in the recurrence relation (\ref{Nrecurrence}) to determine the coefficients $c_j$, $j=1,2,\dots$. In the odd $N$ case, there are two essentially different leading-order behaviours, corresponding to the initial coefficients $c_{0} \in \{1,-1\}$, yielding two distinct resonances. In this case, the resonance conditions are equivalent to $\alpha''(z) \equiv 0$ and $\left( \alpha(z)^2 - 16\gamma(z) \right)'\equiv 0$. We will now show that we can recover these conditions through the construction of the analogue of the space of initial values for equation (\ref{degree5eqn}), and moreover, that the singularities of the form (\ref{degree5sol}) are the only type of movable singularity for equation (\ref{degree5eqn}).
Extending the phase space of the equation in the variables $(y,x)=(y,y')$ to $\mathbb{P}^2$ via the relations $[1:y:x] = [u:v:1] = [V:1:U]$, one finds the following systems of equations:
\begin{equation*}
\begin{aligned}
u'(z)& =-\frac{u^2 v^3 \alpha (z)+u^3 v^2 \beta (z)+u^4 v \gamma (z)+u^5 \delta (z)+3 v^5}{u^3}, \\
v'(z) &=-\frac{u^2v^4 \alpha (z)+u^3 v^3 \beta (z)+u^4 v^2 \gamma (z)+u^5 v \delta (z)-u^4+3 v^6}{u^4}, \\
U'(z) &= \frac{3-U^2 V^4+V^2 \alpha (z)+V^3 \beta (z)+V^4 \gamma (z)+V^5 \delta (z)}{V^4}, \\
V'(z) &= -U V.
\end{aligned}
\end{equation*}
There is a single base point in the chart $(u,v)$, at $(u,v)=(0,0)$. Here we describe the sequence of blow-ups needed to completely resolve this base point. Similarly to the case of the second Painlev\'e equation in Section \ref{sec:InitialValueSpace}, the sequence of base points branches into two cascades after the third blow-up, so that a total of $15$ blow-ups is required. We denote coordinates and points of the subsequent blow-ups with superscripts $\pm$, the complete cascade being as follows:
\begin{equation*}
\begin{aligned}
{} & \mathcal{P}_1: (u,v) = \left(\frac{1}{x},\frac{y}{x} \right) = (0,0) \quad \leftarrow \quad \mathcal{P}_2: (U_1,V_1) = \left( \frac{1}{y}, \frac{y}{x} \right) = (0,0) \\
\leftarrow \quad & \mathcal{P}_3: (u_2,v_2) = \left( \frac{1}{y}, \frac{y^2}{x} \right) = (0,0) \quad \leftarrow \quad \mathcal{P}^{\pm}_4: (u_3,v_3) = \left( \frac{1}{y}, \frac{y^3}{x} \right) = (0, \pm 1) \\
\leftarrow \quad & \mathcal{P}^{\pm}_5: (u_4^\pm,v_4^\pm) = \left( \frac{1}{y} , \frac{y \left( y^3 \mp x \right)}{x} \right) = (0,0) \quad \leftarrow \quad \mathcal{P}^{\pm}_6: (u_5^\pm,v_5^\pm) = \left( \frac{1}{y} , \frac{y^2 \left( y^3 \mp x \right)}{x} \right) = \left(0, \mp \frac{\alpha}{4} \right) \\
\leftarrow \quad & \mathcal{P}^{\pm}_7: (u_6^\pm,v_6^\pm) = \left( \frac{1}{y} , \frac{4 y^6 \pm y (\alpha x - 4 y^2 x)}{4 x} \right) = \left( 0, \mp \frac{\beta}{3} \right) \\
\leftarrow \quad & \mathcal{P}^{\pm}_8: (u_7^\pm,v_7^\pm) = \left( \frac{1}{y}, \frac{12 y^7 \pm y \left( 3 x y \alpha + 4 x \beta - 12 x y^3 \right)}{12 x} \right) = \left( 0, \frac{1}{32} \left(4 \alpha' \pm \left( 3 \alpha^2-16 \gamma \right)\right) \right) \\
\leftarrow \quad & \mathcal{P}^{\pm}_9: (u_8^\pm,v_8^\pm) = \left( \frac{1}{y} , \frac{y \left(96 y^7 -12 x\alpha' \mp \left( 96 x y^4 -24 x y^2 \alpha +9 x \alpha^2-32 x y \beta -48 x \gamma \right) \right)}{96 x} \right) \\
& \qquad \qquad \qquad = \left(0, \frac{1}{3} \beta' \pm \left( \frac{1}{4} \alpha \beta - \delta \right) \right).
\end{aligned}
\end{equation*}
Due to the birational nature of the blow-ups, the collected coordinate transformations in each cascade of blow-ups can be inverted, resulting in
\begin{equation}
\label{coord9inv}
\begin{aligned}
y &= \frac{1}{u^\pm_9}, \qquad x = y' = (u^\pm_9)^{-3} \left(1+ (u^\pm_9)^2 \left(-\frac{\alpha}{4} +u^\pm_9 \left(-\frac{\beta}{3} + u^\pm_9 \left(\frac{1}{32} \left(3 \alpha^2-16 \gamma +4 \alpha'\right) \right. \right. \right. \right. \\ & \qquad \qquad \qquad \left. \left. \left. \left. +u^\pm_9 \left(u^\pm_9 v^\pm_9+\frac{1}{12} \left(3 \alpha \beta-12 \delta+4\beta'\right)\right)\right)\right)\right)\right)^{-1}.
\end{aligned}
\end{equation}
In the coordinates after blowing up $\mathcal{P}^\pm_9$, the system is of the following form:
\begin{equation}
\label{system9}
\begin{aligned}
u_9^\pm{}' & = \frac{-96}{u_9^\pm \cdot d^\pm(z,u_9^\pm,v_9^\pm)}, \\
v_9^\pm{}' & = \frac{\mp 12\alpha''(z) -6\alpha(z)\alpha'(z) + 48\gamma'(z) + p^\pm_{9,2}(z,u_9^\pm,v_9^\pm) }{(u_9^\pm)^2 \cdot d^\pm(z,u_9^\pm,v_9^\pm)}, \\
U_9^\pm{}' & = \frac{\pm 12\alpha''(z) + 6\alpha(z)\alpha'(z) -48 \gamma'(z) + P^\pm_{9,1}(z,U_9^\pm,V_9^\pm)}{(V_9^\pm)^2 \cdot D^\pm(z,U_9^\pm,V_9^\pm)}, \\
V_9^\pm{}' & = \frac{-96-6 U_9 \left( \alpha (z) \alpha '(z)+8 \gamma '(z)-2 \alpha''(z) \right) + P^\pm_{9,2}(z,U_9^\pm,V_9^\pm)}{(U_9^\pm)^2 V_9^\pm \cdot D^\pm(z,U_9^\pm,V_9^\pm)},
\end{aligned}
\end{equation}
where $p^\pm_{9,2}$ and $P^\pm_{9,i}$, $i=1,2$, are polynomials that are zero on the exceptional curve from the last blow-up, that is, $p^\pm_{9,2}(z,0,c) = 0$ and $P^\pm_{9,i}(z,C,0) = 0$. The zero set of $d^\pm$, resp.\ $D^\pm$, is the exceptional divisor, i.e.\ the union of the proper transforms of the line at infinity $\mathcal{L}_0 = I \setminus \{ \mathcal{P}_1 \} \subset \mathbb{P}^2$ and of the exceptional curves $\mathcal{L}_1,\dots,\mathcal{L}_8^\pm$ from the blow-ups of the two cascades $\mathcal{P}_1 \leftarrow \cdots \leftarrow \mathcal{P}_8^+$ and $\mathcal{P}_1 \leftarrow \cdots \leftarrow \mathcal{P}_8^-$, respectively:
\begin{equation*}
\begin{aligned}
d^\pm =& \pm \left( 96-24 (u^\pm_9)^2 \alpha +9 (u^\pm_9)^4 \alpha^2-32 (u^\pm_9)^3 \beta +24 (u^\pm_9)^5 \alpha \beta -48 (u^\pm_9)^4 \gamma -96 (u^\pm_9)^5 \delta \right) \\ & +12 (u^\pm_9)^4 \alpha'+32 (u^\pm_9)^5 \beta' +96 (u^\pm_9)^6 v^\pm_9, \\
D^\pm =& \pm \left( 96-24 (U^\pm_9 V^\pm_9)^2 \alpha+9 (U^\pm_9 V^\pm_9)^4 \alpha^2-32 (U^\pm_9 V^\pm_9)^3 \beta +24 (U^\pm_9 V^\pm_9)^5 \alpha \beta -48 (U^\pm_9 V^\pm_9)^4 \gamma \right. \\ & \left. -96 (U^\pm_9 V^\pm_9)^5 \delta \right) +12 (U^\pm_9 V^\pm_9)^4 \alpha'+32 (U^\pm_9 V^\pm_9)^5 \beta' +96 (U^\pm_9)^5 (V^\pm_9)^6. \\
\end{aligned}
\end{equation*}
Integrating the first equation in (\ref{system9}) yields
\begin{equation*}
u_9^\pm = i\sqrt{2}(z-z_0)^{1/2} + O\left( (z-z_0) \right),
\end{equation*}
where the sign of the square root can be absorbed into the choice of branch for $(z-z_0)^{\frac{1}{2}}$. Inserting this result into the second equation of (\ref{system9}), we see that $v_9^\pm$ has a logarithmic singularity,
\begin{equation*}
v_9^\pm(z) = \frac{1}{96} \left( \pm 6 \alpha''(z) +3 \alpha(z) \alpha'(z) -24\gamma'(z) \right) \log(z-z_0) + O\left((z-z_0)^{1/2}\right),
\end{equation*}
unless the condition
\begin{equation*}
\pm 2\alpha''(z) + \alpha(z) \alpha'(z) - 8\gamma'(z) =0,
\end{equation*}
is satisfied. This condition, for both signs, amounts to the relations
\begin{equation}
\label{condn=5}
\alpha''(z) \equiv 0, \quad \left( \alpha(z)^2 - 16\gamma(z) \right)' \equiv 0.
\end{equation}
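As a quick symbolic cross-check (a sketch, not part of the derivation), the two sign choices of the condition above combine linearly into exactly the two relations in (\ref{condn=5}):

```python
import sympy as sp

z = sp.symbols('z')
alpha = sp.Function('alpha')(z)
gamma = sp.Function('gamma')(z)

# Resonance condition for each choice of sign of the square-root branch
cond_plus  =  2*alpha.diff(z, 2) + alpha*alpha.diff(z) - 8*gamma.diff(z)
cond_minus = -2*alpha.diff(z, 2) + alpha*alpha.diff(z) - 8*gamma.diff(z)

# The difference isolates alpha'' = 0 ...
assert sp.simplify(cond_plus - cond_minus - 4*alpha.diff(z, 2)) == 0
# ... and the sum is exactly the total derivative (alpha^2 - 16 gamma)' = 0
assert sp.simplify(cond_plus + cond_minus - (alpha**2 - 16*gamma).diff(z)) == 0
```

Thus requiring the condition for both branches is equivalent to $\alpha'' \equiv 0$ together with $(\alpha^2 - 16\gamma)' \equiv 0$.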
In this case, a cancellation of one factor of $u_9^\pm$, resp.\ $V_9^\pm$, occurs in the second and third equations of system (\ref{system9}). Then, by exchanging the roles of the dependent and independent variables, the system takes the following form:
\begin{equation}
\label{system9regular}
\begin{aligned}
\frac{d z}{d u_9^\pm} & = -\frac{u_9^\pm \cdot d^\pm(z,u_9^\pm,v_9^\pm)}{96}, \\
\frac{d v_9^\pm}{d u_9^\pm} & = -\frac{\tilde{p}_{9,2}^\pm(z,u_9^\pm,v_9^\pm)}{96}, \\
\frac{d z}{d V^\pm_9} & = \frac{(U_9^\pm)^2 V_9^\pm \cdot D^\pm(z,U_9^\pm,V_9^\pm)}{-96 +P^\pm_{9,2}(z,U_9^\pm,V_9^\pm)}, \\
\frac{d U^\pm_9}{d V^\pm_9} & = \frac{\tilde{P}^\pm_{9,1}(z,U^\pm_9,V^\pm_9)}{-96 + P^\pm_{9,2}(z,U^\pm_9,V^\pm_9)},
\end{aligned}
\end{equation}
where $\tilde{p}^\pm_{9,2} = \frac{1}{u^\pm_9} p^\pm_{9,2}$ and $\tilde{P}^\pm_{9,1} = \frac{1}{V^\pm_9} P^\pm_{9,1}$ are polynomials in $u_{9}^\pm,v_{9}^\pm$ and $U^\pm_9,V^\pm_9$, respectively.
For initial values $(z,u_9^\pm,v_9^\pm)=(z_0,0,h)$, respectively $(z,U^\pm_9,V^\pm_9)=(z_0,H,0)$, on the exceptional curve $\mathcal{L}_9^\pm : \{u_9^\pm = 0\} \cup \{V^\pm_9=0\}$, this defines a regular initial value problem with analytic solutions, e.g.
\begin{equation}
\label{system9solution}
\begin{aligned}
z(u_9^\pm) &= z_0 -\frac{1}{2} (u_9^\pm)^2 + O\left( (u_9^\pm)^3 \right), \\
v_9^\pm(u_9^\pm) &= h + O(u_9^\pm).
\end{aligned}
\end{equation}
Inverting these expansions we find the algebraic series solutions
\begin{equation}
\label{syst9sol}
u_9^\pm(z) = i\sqrt{2} (z-z_0)^{1/2} + O(z-z_0), \quad v_9^\pm(z) = h + O((z-z_0)^{1/2}),
\end{equation}
which by (\ref{coord9inv}) correspond to square-root type algebraic poles in the variable $y$.
To show that these are the only types of behaviour that can occur, we have to show that any solution actually traverses either the line $\mathcal{L}_9^+$ or $\mathcal{L}_9^-$. This is achieved by considering the following auxiliary function,
\begin{equation*}
W = \frac{1}{2}(y'(z))^2 - \frac{1}{2} y(z)^6 - \frac{\alpha(z)}{4} y(z)^4 - \frac{\beta(z)}{3} y(z)^3 - \frac{\gamma(z)}{2} y(z)^2 - \delta(z) y(z) + \left( \sum_{k=1}^4 \frac{\xi_k(z)}{y(z)^k} \right) y'(z),
\end{equation*}
where we impose the conditions (\ref{condn=5}) and the functions $\xi_k$ can be determined to be
\begin{equation*}
\xi_1 = \frac{1}{8} \alpha', \quad \xi_2 = \frac{1}{3} \beta', \quad \xi_3 = 0, \quad \xi_4 = \frac{1}{24} \beta \alpha' + \frac{1}{3} \beta'' - \delta'.
\end{equation*}
Here, $\xi_3$ turns out to be arbitrary and has been set to $0$.
After each blow-up one can check that, away from the base point, the logarithmic derivative $\frac{W'}{W}$ is bounded, whereas $W$ itself is infinite. Although the expressions for the logarithmic derivative become lengthy and are omitted here, this can be checked easily using a computer algebra system. Lemma \ref{log_bounded} then shows that the lines $\mathcal{L}_0$, $\mathcal{L}_i \setminus \mathcal{P}_{i+1}$, $i=1,2,3$ and $\mathcal{L}^\pm_{i} \setminus \mathcal{P}^\pm_{i+1}$, $i \in \{4,5,6,7,8\}$ are inaccessible for the flow of the vector field.
Denoting by $\mathcal{S}_9(z)$ the space obtained by blowing up $\mathbb{P}^2$ along the two cascades $\mathcal{P}_1 \leftarrow \cdots \leftarrow \mathcal{P}_9^+$ and $\mathcal{P}_1 \leftarrow \cdots \leftarrow \mathcal{P}_9^-$, and $\mathcal{I}'_8(z)$ the proper transform of the set $\mathcal{I}_8(z) = \mathcal{L}_0 \cup \mathcal{L}_1 \cup \mathcal{L}_2 \cup \mathcal{L}_3 \cup \bigcup_{i=4}^8 \mathcal{L}_i^+ \cup \bigcup_{i=4}^8 \mathcal{L}_i^-$ in $\mathcal{S}_9(z)$, we obtain $\mathcal{S}_9(z) \setminus \mathcal{I}'_8(z)$ as the analogue of the space of initial values for equation (\ref{degree5_reseqn}).
We can now prove, by similar arguments as in Proposition \ref{degree4prop}, that the algebraic series (\ref{syst9sol}) are the only possible types of movable singularities that can occur by analytic continuation of a solution along finite-length curves.
\begin{prop}
The class of equations
\begin{equation}
\label{degree5_reseqn}
y'' = y^5 + (az+b) y^3 + \beta(z) y^2 + \left( \frac{1}{16}(az+b)^2 + c \right) y + \delta(z),
\end{equation}
where $\beta(z)$ and $\delta(z)$ are analytic in $z$ and $a,b,c \in \mathbb{C}$ constants, has the quasi-Painlev\'e property, with square-root type algebraic poles.
\end{prop}
\begin{proof}
The proof proceeds similarly to that of Proposition \ref{degree4prop}. Suppose that a solution, analytically continued along a finite-length curve $\gamma \subset \mathbb{C}$, ends in a movable singularity $z_\ast \in \mathbb{C}$. The lifted curve in the extended phase space is denoted $\Gamma(z)$. Let $(z_n) \subset \gamma$, $z_n \to z_\ast$, be a sequence along $\gamma$. Due to the compactness of the phase space (with all exceptional curves), there exists a subsequence $(z_{n_k})$ such that the lifted sequence $\Gamma(z_{n_k})$ converges to a point $P_\ast \in \mathcal{S}_9(z_\ast)$. By Lemma \ref{log_bounded}, and the existence of a function $W(z)$ that is infinite on the set $\mathcal{I}'_8(z)$, with bounded logarithmic derivative $\frac{W'}{W}$, we actually have $P_\ast \in \mathcal{S}_9(z_\ast) \setminus \mathcal{I}'_8(z_\ast)$. Since the solution has a singularity at $z_\ast$, we must have $P_\ast \in \mathcal{L}^+_9 \cup \mathcal{L}^-_9$: if this were not the case, the sequence of points in the phase space would have an accumulation point away from the exceptional curves, where the original equation defines a regular initial value problem and, by Lemma \ref{Painlemma}, has an analytic solution, contrary to the assumption of a singularity at $z_\ast$.
Now suppose that $P_\ast \in \mathcal{L}^+_9$ (the case $P_\ast \in \mathcal{L}^-_9$ is similar). The sequence $(z_{n_k}, u^+_9(z_{n_k}),v^+_9(z_{n_k}))$ converges to the point $P_\ast \in \mathcal{L}^+_9$, on which the system (\ref{system9regular}) defines a regular initial value problem for $(z,v^+_9)$ in the variable $u_9^+$, resp.\ for $(z,U^+_9)$ in the variable $V^+_9$. Therefore, by Lemma \ref{Painlemma}, system (\ref{system9regular}) has an analytic solution of the form (\ref{system9solution}), with $z_0 = z_\ast$, which translates into a square-root type branch point for $(u^+_9(z),v^+_9(z))$ and therefore, by (\ref{coord9inv}), into a square-root type algebraic pole for $y(z)$.
\end{proof}
\section{Hamiltonian systems with algebraic singularities}
\label{sec:HamiltonianSystems}
In the previous sections we have seen how to resolve the base points of second-order equations of the form $y'' = P(z,y)$, extending the phase space of $(y,y')$. These equations are themselves Hamiltonian systems, obtained by letting
\begin{equation}
\label{2nd-order-Hamil}
H(z,x,y) = \frac{1}{2} x^2 - \tilde{P}(z,y), \quad \frac{\partial \tilde{P}}{\partial y} = P(z,y),
\end{equation}
where $x=y'$ and we let $N = \deg_y P$. In the previous sections we considered the cases $N=4$ and $N=5$, whereas the case $N=3$ was discussed in Section \ref{sec:InitialValueSpace}, leading to the second Painlev\'e equation. Furthermore, the case $N=2$ leads to the first Painlev\'e equation. In fact, all six Painlev\'e equations can be written as polynomial Hamiltonian systems $H(z,x,y)$ with rational coefficients in $z$. The blow-ups leading to the spaces of initial values for all Painlev\'e Hamiltonian systems were performed by Okamoto \cite{Okamoto1979}.
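As a sanity check of (\ref{2nd-order-Hamil}), the following sketch (a hypothetical illustration in SymPy, with the degree-$4$ right-hand side of Section \ref{sec:degree4} and the coefficient functions frozen at a point) verifies that the canonical choice $y' = \partial H/\partial x$, $x' = -\partial H/\partial y$, consistent with $x = y'$, recovers $y'' = P(z,y)$:

```python
import sympy as sp

X, Y = sp.symbols('X Y')
alpha, beta, gamma = sp.symbols('alpha beta gamma')  # coefficients at a fixed z

# Degree-4 right-hand side: P(z, y) = y^4 + alpha y^2 + beta y + gamma
P = Y**4 + alpha*Y**2 + beta*Y + gamma
Ptilde = sp.integrate(P, Y)            # antiderivative, dPtilde/dY = P
H = sp.Rational(1, 2)*X**2 - Ptilde    # Hamiltonian (2nd-order-Hamil)

# y' = dH/dx = x, so x = y', and y'' = x' = -dH/dy = P(z, y)
assert sp.diff(H, X) == X
assert sp.simplify(-sp.diff(H, Y) - P) == 0
```

The $z$-dependence of the coefficients does not affect this identity, since only $y$-derivatives of $H$ enter.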
In \cite{Kecker2016}, one of the authors studied the class of polynomial Hamiltonian systems,
\begin{equation}
\label{generalMN_Hamiltonian}
\begin{aligned}
H(z,x(z),y(z)) &= \sum_{i=0}^M \sum_{j=0}^N \alpha_{ij}(z) x(z)^i y(z)^j, \\
x'(z) = \frac{\partial H}{\partial y}, & \quad y'(z) = - \frac{\partial H}{\partial x},
\end{aligned}
\end{equation}
with $i,j$ constrained by $iN+jM \leq MN$, so that $x^M$ and $y^N$ are the dominant terms. Similarly to the equations $y'' = P(z,y)$, under certain resonance conditions these systems have the property that all their movable singularities are algebraic poles. We consider here the case where the coefficients of the dominant terms are constant, which amounts to saying that the system (\ref{generalMN_Hamiltonian}) has no fixed singularities. By a suitable scaling, these constants can take any (non-zero) numerical value. Furthermore, the terms $x^{M-1}$ and $y^{N-1}$ can be transformed away, leaving us with the Hamiltonian
\begin{equation}
\label{MNHamiltonian}
H = \frac{1}{N} y^N - \frac{1}{M} x^M + \sum_{0 < iN+jM<MN} \alpha_{ij}(z) x^i y^j.
\end{equation}
Under these assumptions, the leading-order behaviour of the series solutions $(x(z),y(z))$ is of the form
\begin{equation}
\label{HamExpansions}
x(z) = c_0 (z-z_0)^{-\frac{N}{MN-M-N}} + \cdots , \quad y(z) = d_0 (z-z_0)^{-\frac{M}{MN-M-N}} + \cdots .
\end{equation}
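The exponents in (\ref{HamExpansions}) can be checked for consistency with the leading-order balance $x' \sim y^{N-1}$, $y' \sim x^{M-1}$ coming from the dominant terms of (\ref{MNHamiltonian}); the following sketch verifies this exponent bookkeeping symbolically:

```python
import sympy as sp

M, N = sp.symbols('M N', positive=True)
K = M*N - M - N  # common denominator of the exponents in (HamExpansions)

ex = -N / K  # exponent of x(z) at the singularity
ey = -M / K  # exponent of y(z) at the singularity

# Differentiating lowers the exponent by 1; it must match the exponent
# of the dominant term on the right-hand side of Hamilton's equations:
assert sp.simplify((ex - 1) - (N - 1)*ey) == 0  # x' ~ y^(N-1)
assert sp.simplify((ey - 1) - (M - 1)*ex) == 0  # y' ~ x^(M-1)
```

For $M=N=3$ this gives simple poles ($K=3$, exponents $-1$), for $M=N=4$ square-root type poles ($K=8$), and for $M=3$, $N=4$ fifth-root type poles ($K=5$), in line with the cases treated below.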
Using the method of compactifying the phase space and blowing up the base points, we will see how to obtain the conditions by which the expansions (\ref{HamExpansions}) yield algebraic poles, i.e.\ when they are free from logarithmic singularities.
The case $\min\{M,N\} = 2$ can essentially be reduced to the case (\ref{2nd-order-Hamil}), representing second-order equations. We will thus look at some examples with $M,N \geq 3$. The case $M=N=3$, discussed in the next subsection, is interesting as it leads to a system of equations related to the fourth Painlev\'e equation, i.e.\ in this case the singularities are simple (ordinary) poles. The space of initial values for this system was already computed in \cite{Kecker2019} and is reproduced here for completeness. In the following two subsections we then consider the cases $M=N=4$ and $M=3,N=4$, which have square-root and $5$th-root type algebraic poles, respectively. Constructing the analogue of the space of initial values for these systems, and using an appropriate auxiliary function in conjunction with Lemma \ref{log_bounded} to show that the exceptional curves from the intermediate blow-ups are inaccessible for the flow of the vector field, allows us to conclude that these are the only possible types of movable singularities under analytic continuation along finite-length curves, i.e.\ these systems have the quasi-Painlev\'e property. The forms of the auxiliary functions are taken from the article \cite{Kecker2016}, where they are derived as quantities that remain bounded at all movable singularities.
\subsection{Case $M=N=3$: a system with the Painlev\'e property. \\}
\label{sec:cubicHam}
We consider the cubic Hamiltonian system
\begin{equation}
\label{cubicHam}
H(z,x(z),y(z)) = \frac{1}{3} \left( y^3 - x^3 \right) + \gamma(z)xy +\beta(z) x + \alpha(z) y,
\end{equation}
which was introduced in \cite{Kecker2015}, where it was shown that, if $\alpha$ and $\beta$ are constants and $\gamma(z)$ is a function at most linear in $z$, the system of equations derived from (\ref{cubicHam}) has the Painlev\'e property. Below we will see that, by compactifying the phase space of the system (\ref{cubicHam}) with general analytic functions $\alpha(z)$, $\beta(z)$, $\gamma(z)$, and blowing up to resolve the base points of the system, these conditions can be recovered by requiring that the system has no logarithmic singularities.
Extending the system to projective space we obtain, in the three standard coordinate charts of $\mathbb{P}^2$, $[1:y:x] = [u:v:1] = [V:1:U]$,
\begin{equation*}
\begin{aligned}
x'(z) &= y^2 + \gamma(z) x + \alpha(z), & \quad
y'(z) &= x^2 - \gamma(z) y - \beta(z), \\
u'(z) &=-v^2-u^2 \alpha (z)-u \gamma (z), & \quad
v'(z) &=-\frac{-1+v^3+u^2 v \alpha (z)+u^2 \beta (z)+2 u v \gamma(z)}{u},\\
V'(z) &=-U^2+V^2 \beta (z)+V \gamma(z), & \quad U'(z) &=-\frac{-1+U^3-V^2 \alpha (z)-U V^2 \beta (z)-2 U V \gamma(z)}{V}.
\end{aligned}
\end{equation*}
We can see that initially there are three base points on the line at infinity of $\mathbb{P}^2$, given by
\begin{equation*}
\mathcal{P}_1^\rho: (u,v) = (0,\rho) \quad \leftrightarrow \quad (U,V) = (\rho^{-1},0), \quad \rho \in \{1,\omega,\bar{\omega}\},
\end{equation*}
where $\omega = \frac{-1 + i \sqrt{3}}{2}$ is a primitive third root of unity and $\bar{\omega}$ its complex conjugate. Keeping $\rho$ as a symbol representing any of the three cube roots of unity, each base point is resolved by a cascade of three blow-ups. We denote the coordinates of the three respective sequences of blow-ups with superscripts $\rho \in \{1,\omega,\bar{\omega}\}$:
\begin{equation*}
\begin{aligned}
{} & \mathcal{P}_1^\rho: (u^\rho,v^\rho) = \left(\frac{1}{x},\frac{y}{x}\right) = (0,\rho) \quad \leftarrow \quad \mathcal{P}_2^\rho: (u_1^\rho,v_1^\rho) = \left( \frac{1}{x}, y - \rho x \right) = (0,-\bar{\rho} \gamma(z)) \\
& \leftarrow \quad \mathcal{P}_3^\rho: (u_2^\rho,v_2^\rho) = \left( \frac{1}{x}, x \left( y - \rho x + \bar{\rho} \gamma(z) \right) \right) = \left( 0, \gamma'(z) - \rho \beta(z) - \bar{\rho} \alpha(z) \right).
\end{aligned}
\end{equation*}
After blowing up $\mathcal{P}_3^\rho$, the system of equations takes the following form:
\begin{equation}
\label{cubicsystem3}
\begin{aligned}
u_3^\rho{}' &= p^{\rho}_{3,1}(z,u_3^\rho,v_3^\rho), \\
v_3^\rho{}' &= \frac{\bar{\rho} \alpha'(z) + \rho \beta'(z) - \gamma''(z) + p_{3,2}^\rho(z,u_3^\rho,v_3^\rho)}{u_3^\rho}, \\
U_3^\rho{}' &= \frac{U_3^\rho (\gamma''(z) - \bar{\rho} \alpha'(z) - \rho \beta'(z)) + P^\rho_{3,1}(z,U_3^\rho,V_3^\rho)}{V_3^\rho},\\
V_3^\rho{}' &= \frac{-\bar{\rho} + P^\rho_{3,2}(z,U_3^\rho,V_3^\rho)}{U_3^\rho}.
\end{aligned}
\end{equation}
We see that there is an additional base point at $\mathcal{P}^\rho_4: U_3^\rho = V_3^\rho = 0$. This point can be blown up once more, rendering a system free from base points (but not regular). However, the point $\mathcal{P}_4^\rho$ is only present if the condition
\begin{equation}
\label{cubicHam_cond}
\bar{\rho} \alpha'(z) + \rho \beta'(z) - \gamma''(z) \equiv 0,
\end{equation}
is {\it not} satisfied, in which case the system (\ref{cubicsystem3}) exhibits solutions with logarithmic singularities. If, however, condition (\ref{cubicHam_cond}) is satisfied, factors of $u_3^\rho$ and $V_3^\rho$ cancel in the second resp. third equation of system (\ref{cubicsystem3}), which then defines a regular initial value problem at every point on the exceptional curves $\mathcal{L}_3^\rho: (u_3^\rho,v_3^\rho) = (0,h)$ of the third blow-up in each cascade. Denoting by $\mathcal{S}_3(z)$ the space obtained by blowing up $\mathbb{P}^2$ along the three cascades of base points $\mathcal{P}_1^\rho \leftarrow \mathcal{P}_2^\rho \leftarrow \mathcal{P}_3^\rho$, $\rho \in \{1,\omega,\bar{\omega}\}$, the space of initial values is $\mathcal{S}_3(z) \setminus \left( \bigcup_{\rho \in \{1,\omega,\bar{\omega}\}} \mathcal{I}^{\rho}_2{}'(z) \right)$, where $\mathcal{I}^\rho_2{}'(z)$ is the union of the proper transforms of the line at infinity $\mathcal{L}_0 = I \setminus \{ \mathcal{P}^1_1, \mathcal{P}^\omega_1, \mathcal{P}^{\bar{\omega}}_1 \} \subset \mathbb{P}^2$ and the exceptional curves $\mathcal{L}_i^\rho$, $i=1,2$, from the first two blow-ups in each cascade of base points.
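The three conditions (\ref{cubicHam_cond}), taken for all $\rho \in \{1,\omega,\bar{\omega}\}$, jointly force $\alpha' = \beta' = \gamma'' \equiv 0$; this linear-algebra step can be confirmed symbolically with the following sketch (the symbols $a$, $b$, $c$ stand in for $\alpha'$, $\beta'$, $\gamma''$):

```python
import sympy as sp

a, b, c = sp.symbols("a b c")  # stand-ins for alpha', beta', gamma''
w = sp.Rational(-1, 2) + sp.sqrt(3)*sp.I/2  # primitive cube root of unity

# Condition (cubicHam_cond) for each rho in {1, w, conj(w)}:
# conj(rho)*a + rho*b - c = 0
eqs = [sp.conjugate(r)*a + r*b - c for r in (1, w, sp.conjugate(w))]

sol = sp.linsolve(eqs, a, b, c)
print(sol)  # unique solution: all three quantities vanish
```

The coefficient matrix of this $3\times 3$ system has nonzero determinant, so the only solution is the trivial one, $a=b=c=0$.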
Together, the three conditions (\ref{cubicHam_cond}), for $\rho \in \{1,\omega,\bar{\omega}\}$, are required for the absence of logarithms in the solutions; they amount to $\gamma'' = \beta' = \alpha' \equiv 0$, that is, $\alpha$ and $\beta$ are constant and $\gamma(z) = az+b$ is at most linear in $z$. In case $a=0$, the Hamiltonian system is autonomous and can be integrated directly using the Hamiltonian as a first integral. When $a \neq 0$, by a re-scaling of $z$, $x$ and $y$ the system can be normalised to the form
\begin{equation}
\label{cubicPainlevesystem}
H = \frac{1}{3}(y^3 - x^3) + zxy + \alpha y + \beta x, \quad x' = y^2 + zx + \alpha, \quad y' = x^2 - zy - \beta.
\end{equation}
This system is in fact closely related to the Hamiltonian system defining the fourth Painlev\'e equation, and was introduced in \cite{Kecker2015} and investigated further in \cite{Steinmetz2018}. By similar arguments as in section \ref{sec:InitialValueSpace}, constructing the space of initial values gives an alternative method of proof for the Painlev\'e property of system (\ref{cubicPainlevesystem}). The proof makes use of the following auxiliary function,
\begin{equation}
\label{MN3_W}
W = H - \frac{y^2}{x}.
\end{equation}
The correction term $\frac{y^2}{x}$ is chosen to compensate for the divergence of $H' = \frac{\partial H}{\partial z} = x y$ at any singularity (one could alternatively have chosen $\frac{x^2}{y}$ due to the symmetry in $x$ and $y$). In \cite{Kecker2015} it is shown that $W$ satisfies a first-order differential equation of the form $W' = P W + Q$, where $P$ and $Q$ are bounded functions, and hence that $W$ itself is bounded. In the context of the space of initial values, we need to check that the logarithmic derivative $\frac{W'}{W}$ is bounded on the exceptional curve, whereas $W$ itself is infinite there, to apply Lemma \ref{log_bounded}. We demonstrate the process for this example, re-writing the function $W$ in terms of the other coordinate charts of $\mathbb{P}^2$.
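As a quick symbolic check of the claim $H' = \frac{\partial H}{\partial z} = xy$ along solutions (a sketch, with $\alpha$, $\beta$ treated as the constants they are in this normalisation):

```python
import sympy as sp

z, x, y = sp.symbols('z x y')
alpha, beta = sp.symbols('alpha beta')

# Hamiltonian of (cubicPainlevesystem)
H = sp.Rational(1, 3)*(y**3 - x**3) + z*x*y + alpha*y + beta*x

xdot = sp.diff(H, y)   # x' =  dH/dy
ydot = -sp.diff(H, x)  # y' = -dH/dx

# Along solutions: dH/dz = H_x x' + H_y y' + H_z; the canonical
# terms cancel, leaving only the explicit z-derivative H_z = x y
total = sp.diff(H, x)*xdot + sp.diff(H, y)*ydot + sp.diff(H, z)
assert sp.simplify(total - x*y) == 0
```

Since $xy$ diverges at a movable singularity, the correction term $\frac{y^2}{x}$ is what keeps $W$ bounded there.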
In the chart $[u:v:1]$, we have
\begin{equation}
W_0(z,u(z),v(z)) = \frac{v^3-1-3 u^2 v^2+3 u v z+3 u^2 v \alpha +3 u^2 \beta}{3 u^3}.
\end{equation}
The logarithmic derivative is
\begin{equation}
\frac{d \log W_0}{d z} = \frac{W_0'}{W_0} = \frac{3 u \left(-v+v^4+3 u v^2 z+u^2 v^2 \alpha +2 u^2 v \beta \right)}{v^3-1-3 u^2 v^2+3 u v z+3 u^2 v \alpha +3 u^2 \beta },
\end{equation}
which is bounded in a neighbourhood of any point on the line $u=0$, apart from the base points where $v^3=1$. Therefore, by Lemma \ref{log_bounded}, the line at infinity is inaccessible for the solution away from the points $(u,v)=(0,\rho)$, $\rho \in \{1,\omega,\bar{\omega}\}$. After the first blow-up, in the coordinates $(u_1,v_1) = (u^\rho_1,v^\rho_1)$, where for simplicity we let $\rho =1$, we have
\begin{equation*}
W_1(z,u_1,v_1) = \frac{-1+3 \beta u_1^2+ (3 z u_1 + 3 \alpha u_1^2) \left(1+u_1 v_1\right) -3 u_1^2 \left(1+u_1 v_1\right)^2+\left(1+u_1 v_1\right)^3}{3u_1^3},
\end{equation*}
and
\begin{equation*}
\frac{W_1'}{W_1} = \frac{3 u_1 P_1(u_1,v_1,z)}{3 v_1+3 z +u_1 Q_1(u_1,v_1,z)},
\end{equation*}
where
\begin{equation*}
\begin{aligned}
P_1 &= 3 z+\alpha u_1+2 \beta u_1+3 v_1+6 z u_1 v_1+2 \alpha u_1^2 v_1+2 \beta u_1^2 v_1+6 u_1 v_1^2+3 z u_1^2 v_1^2+\alpha u_1^3 v_1^2+4 u_1^2 v_1^3+u_1^3 v_1^4, \\
Q_1 &= -3 +3 \alpha +3 \beta +3 z v_1-6 u_1 v_1+3 \alpha u_1 v_1+3 v_1^2-3 u_1^2 v_1^2+u_1 v_1^3,
\end{aligned}
\end{equation*}
the calculations for $\rho = \omega,\bar{\omega}$ being similar. The quotient $\frac{W_1'}{W_1}$ is bounded in a neighbourhood of any point on the exceptional line $u_1=0$, other than the base point $(u_1,v_1)=(0,-z)$. Supposing that we are analytically continuing a solution leading up to a singularity at $z_\ast$, the point $(u_1,v_1)=(0,-z_\ast)$ is the only point on the exceptional curve where Lemma \ref{log_bounded} cannot be applied.
Performing the second blow-up we have, in the coordinates $(u_2,v_2)$ (for brevity, we drop the subscripts in the following expressions),
\begin{equation*}
W_2(z,u_2,v_2) = \frac{-1-3 u^2 (1+u (u v-z))^2+(1+u (u v-z))^3+ (3 u +3 u^2 \alpha) (1+u (u v-z)) z +3 u^2 \beta }{3 u^3},
\end{equation*}
and
\begin{equation*}
\frac{W_2'}{W_2} = \frac{3 u P_2(u,v,z)}{3 v - 3 + 3 \alpha + 3 \beta + u Q_2(u,v,z)},
\end{equation*}
where
\begin{equation*}
\begin{aligned}
P_2 & = \left(3 v+6 u^2 v^2+4 u^4 v^3+u^6 v^4-6 u v z-9 u^3 v^2 z-4 u^5 v^3 z+6 u^2 v z^2+6 u^4 v^2 z^2-u z^3 \right. \\ & \left. -4 u^3 v z^3+u^2 z^4+\alpha +2 u^2 v \alpha +u^4 v^2 \alpha -2 u z \alpha -2 u^3 v z \alpha +u^2 z^2 \alpha +2 \beta +2 u^2 v \beta -2 u z \beta \right), \\
Q_2 & = -6 u v+3 u v^2-3 u^3 v^2+u^3 v^3+6 z-3 v z+6 u^2 v z-3 u^2 v^2 z-3 u z^2+3 u v z^2- z^3 +3 u v \alpha -3 z \alpha,
\end{aligned}
\end{equation*}
so that $\frac{W_2'}{W_2}$ is bounded in a neighbourhood of any point on the exceptional curve $u_2=0$, other than $(u_2,v_2) = (0, 1-\alpha -\beta)$, whereas $W_2$ itself is infinite on this line. By Lemma \ref{log_bounded} we can conclude that, if a solution approaches a singularity $z_\ast$ along some finite-length path $\gamma$ ending in $z_\ast$, it must pass through one of the exceptional lines $\mathcal{L}^\rho_3$ introduced by the third blow-ups in each cascade. On the lines $\mathcal{L}^\rho_3$ the function $W$, re-written in the appropriate coordinates, is in fact finite. Furthermore, any solution approaching the lines $\mathcal{L}^{\rho}_3$ can be analytically continued across these lines where the system defines a regular initial value problem. The solutions on the lines $\mathcal{L}^{\rho}_3$, when transformed back into the original coordinates, result in simple poles for $x(z),y(z)$ with residues $-\rho$ and $\bar{\rho}$, respectively. This provides an alternative proof that the Hamiltonian system defined by (\ref{cubicPainlevesystem}) has the Painlev\'e property.
\subsection{Case $M=N=4$.}
The differential system studied in this section arises from the Hamiltonian
\begin{equation*}
H(z,x,y) = \frac{1}{4} \left(y^4 - x^4 \right) + \sum_{0<i+j\leq 3} \alpha_{i,j}(z) x^i y^j,
\end{equation*}
where the $\alpha_{i,j}(z)$ are analytic functions in some common domain $\Omega \subset \mathbb{C}$. As in the case $M=N=3$, the geometry of the analogue of the space of initial values of this system is much more symmetric than for the second-order equations discussed in Sections \ref{sec:degree4} and \ref{sec:degree5}. Although $16$ blow-ups are required to regularise the system, these decompose into $4$ separate cascades of $4$ blow-ups each. As before, we write down the extended system of equations in the three standard charts of $\mathbb{P}^2$:
\begin{equation*}
\begin{aligned}
x' &= y^3+\alpha _{0,1}+2 y \alpha _{0,2}+x \alpha _{1,1}+2 x y \alpha _{1,2}+x^2 \alpha_{2,1}, \\
y' &= x^3-\alpha _{1,0}-y \alpha _{1,1}-y^2 \alpha _{1,2}-2 x \alpha _{2,0}-2 x y \alpha_{2,1}, \\
u' &= -\frac{v^3+u^3 \alpha _{0,1}+2 u^2 v \alpha _{0,2}+u^2 \alpha _{1,1}+2 u v \alpha _{1,2}+u \alpha_{2,1}}{u}, \\
v' &= -\frac{-1+v^4+u^3 v \alpha _{0,1}+2 u^2 v^2 \alpha _{0,2}+u^3 \alpha _{1,0}+2 u^2 v \alpha _{1,1}+3 u v^2 \alpha _{1,2}+2 u^2 \alpha _{2,0}+3 u v \alpha _{2,1}}{u^2}, \\
U' &= -\frac{-1+U^4-V^3 \alpha _{0,1}-2 V^2 \alpha _{0,2}-U V^3 \alpha _{1,0}-2 U V^2 \alpha _{1,1}-3 U V \alpha _{1,2}-2 U^2 V^2 \alpha _{2,0}-3 U^2 V \alpha _{2,1}}{V^2}, \\
V' &= -\frac{U^3-V^3 \alpha_{1,0}-V^2 \alpha _{1,1}-V \alpha _{1,2}-2 U V^2 \alpha _{2,0}-2 U V \alpha _{2,1}}{V}.
\end{aligned}
\end{equation*}
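Before resolving the base points, we note that the square-root type of the movable singularities established below can be anticipated by a heuristic leading-order balance, keeping only the quartic terms of the Hamiltonian: with the ansatz $x \sim a(z-z_0)^{-p}$, $y \sim b(z-z_0)^{-q}$, the dominant balances $x' \approx y^3$ and $y' \approx x^3$ force
\begin{equation*}
p + 1 = 3q, \qquad q + 1 = 3p \qquad \Longrightarrow \qquad p = q = \tfrac{1}{2},
\end{equation*}
consistent with the square-root type algebraic poles in Proposition \ref{MN4_prop}.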
We observe that there are six base points (denoted with superscripts), namely
\begin{equation*}
\begin{aligned}
\mathcal{P}^0_1 &: (u,v) = (0,0) \\
\tilde{\mathcal{P}}^0_1 &: (U,V) = (0,0) \\
\mathcal{P}_1^1 &: (u,v) = (0,1) & \leftrightarrow & \quad (U,V) = (1,0) \\
\mathcal{P}_1^i &: (u,v) = (0,i) & \leftrightarrow & \quad (U,V) = (-i,0) \\
\mathcal{P}_1^{-1} &: (u,v) = (0,-1) & \leftrightarrow & \quad (U,V) = (-1,0) \\
\mathcal{P}_1^{-i} &: (u,v) = (0,-i) & \leftrightarrow & \quad (U,V) = (i,0). \\
\end{aligned}
\end{equation*}
We note that the points $\mathcal{P}^0_1$ and $\tilde{\mathcal{P}}^0_1$ can be resolved by one blow-up each; only the transforms of the points $\mathcal{P}_1^\rho$, $\rho \in \{1,i,-1,-i\}$, are still visible in the charts obtained after blowing up $\mathcal{P}^0_1$ and $\tilde{\mathcal{P}}^0_1$.
We now resolve the four base points $\mathcal{P}_1^\rho$, $\rho \in \{1,i,-1,-i\}$, in the coordinates $(u,v)$, where we use the superscript $\rho$ to denote the coordinates after blowing up. For each point $\mathcal{P}_1^\rho$, we find the following cascade of four blow-ups:
\begin{equation*}
\begin{aligned}
{} & \mathcal{P}_1^\rho: (u,v) = \left(\frac{1}{x},\frac{y}{x}\right) = (0,\rho) \quad \leftarrow \quad \mathcal{P}_2^\rho: (u_1^\rho,v_1^\rho) = \left( \frac{1}{x}, y - \rho x \right) = (0,\alpha_{2,1} + \rho \alpha_{1,2}) \\
& \leftarrow \quad \mathcal{P}_3^\rho: (u_2^\rho,v_2^\rho) = \left( \frac{1}{x}, x \left(y - \rho x + \bar{\rho} \alpha_{1,2} + \rho^2 \alpha_{2,1} \right) \right) = \left( 0, \frac{\rho}{2} \alpha_{1,2}^2 - \frac{\bar{\rho}}{2} \alpha_{2,1} -\rho^2 \alpha_{1,1} - \bar{\rho} \alpha_{0,2} - \rho \alpha_{2,0} \right) \\
& \leftarrow \quad \mathcal{P}_4^\rho: (u_3^\rho,v_3^\rho) = \left( \frac{1}{x}, x \left(- \rho x^2 + x y + \bar{\rho} \alpha _{0,2} + \rho^2 \alpha_{1,1} + \bar{\rho} x \alpha _{1,2} - \frac{\rho}{2} \alpha_{1,2}^2 + \rho \alpha_{2,0} + \rho^2 \alpha _{2,1} + \frac{\bar{\rho}}{2} \alpha _{2,1}^2\right) \right) \\
& \qquad \qquad = \left(0, -\frac{i}{2} \left(2 i \alpha _{0,1}+2 \alpha _{1,0}-2 \alpha _{0,2} \alpha _{1,2}+\alpha _{1,2}^3-2
\alpha _{1,2} \alpha _{2,0}-2 \alpha _{1,1} \alpha _{2,1} -4 i \alpha _{1,2}^2 \alpha_{2,1} +4 i \alpha _{2,0} \alpha_{2,1} \right. \right. \\ & \qquad \qquad \qquad \left. \left. -3 \alpha _{1,2} \alpha _{2,1}^2+2 i \alpha _{1,2}'+2 \alpha_{2,1}'\right) \right).
\end{aligned}
\end{equation*}
After blowing up $\mathcal{P}_4^\rho$, the system of equations takes the following form:
\begin{equation}
\label{system4}
\begin{aligned}
u_4^\rho{}' & = \frac{- \bar{\rho} + p_{4,1}(z,u_4^\rho,v_4^\rho)}{u_4^\rho}, \\
v_4^\rho{}' & = \frac{\rho^2 \alpha_{1,1}'(z) + \rho \left( \alpha_{2,0}'(z) - \alpha _{1,2}(z) \alpha _{1,2}'(z) \right) + \bar{\rho} \left( \alpha _{0,2}'(z) + \alpha _{2,1}(z) \alpha _{2,1}'(z) \right) + p_{4,2}(z,u_4^\rho,v_4^\rho) }{\left( u_4^\rho \right)^2}, \\
U_4^\rho{}' &= \frac{-\rho^2 \alpha_{1,1}'(z) - \rho \left( \alpha_{2,0}'(z) - \alpha _{1,2}(z) \alpha _{1,2}'(z) \right) - \bar{\rho} \left( \alpha _{0,2}'(z) + \alpha _{2,1}(z) \alpha _{2,1}'(z) \right) + P_{4,1}(z,U_4^\rho,V_4^\rho)}{(V_4^\rho)^2}, \\
V_4^\rho{}' &= \frac{- \bar{\rho} + P_{4,2}(z,U_4^\rho,V_4^\rho)}{(U_4^\rho)^2 V_4^\rho}.
\end{aligned}
\end{equation}
Thus, unless the condition
\begin{equation}
\label{mn4cond}
\rho^2 \alpha_{1,1}'(z) + \rho \left( \alpha_{2,0}'(z) - \alpha _{1,2}(z) \alpha _{1,2}'(z) \right) + \bar{\rho} \left( \alpha _{0,2}'(z) + \alpha _{2,1}(z) \alpha _{2,1}'(z) \right) = 0
\end{equation}
is satisfied, the system admits logarithmic singularities,
\begin{equation*}
\begin{aligned}
u_4^\rho =& (-2\bar{\rho})^{1/2} (z-z_0)^{1/2} + O(z-z_0), \\
v_4^\rho =& \left( \rho^2 \alpha_{1,1}'(z_0) + \rho \left( \alpha_{2,0}'(z_0) - \alpha _{1,2}(z_0) \alpha _{1,2}'(z_0) \right) + \bar{\rho} \left( \alpha _{0,2}'(z_0) + \alpha _{2,1}(z_0) \alpha _{2,1}'(z_0) \right) \right) \log(z-z_0) \\ & + O((z-z_0)^{1/2}).
\end{aligned}
\end{equation*}
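The origin of the logarithm can be sketched directly from (\ref{system4}), keeping only the leading terms near $u_4^\rho = 0$ and writing $K$ for the left-hand side of (\ref{mn4cond}) evaluated at $z_0$: the first equation gives
\begin{equation*}
u_4^\rho \, \frac{d u_4^\rho}{d z} \approx -\bar{\rho} \quad \Longrightarrow \quad (u_4^\rho)^2 \approx -2\bar{\rho}\,(z-z_0),
\end{equation*}
and inserting this into the second equation yields
\begin{equation*}
\frac{d v_4^\rho}{d z} \approx \frac{K}{(u_4^\rho)^2} \propto \frac{K}{z-z_0} \quad \Longrightarrow \quad v_4^\rho \propto K \log(z-z_0),
\end{equation*}
so that $v_4^\rho$ develops a logarithmic branch point precisely when $K \neq 0$.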
For the solutions of the system to be free from logarithmic branch points, condition (\ref{mn4cond}) must be satisfied for all $\rho \in \{1,i,-1,-i\}$. Then, one factor of $u_4^\rho$ cancels in the second equation and one factor of $V_4^\rho$ cancels in the third equation of system (\ref{system4}).
To see that the singularities $z_\ast$ of the solution are all of this form, we write the system in the form
\begin{equation*}
\begin{aligned}
\frac{d z}{d u_4^\rho} &= \frac{u_4^\rho}{- \bar{\rho} + p_{4,1}(z,u_4^\rho,v_4^\rho)}, \\
\frac{d v_4^\rho}{d u_4^\rho} &= \frac{p_{4,2}(z,u_4^\rho,v_4^\rho)}{- \bar{\rho} + p_{4,1}(z,u_4^\rho,v_4^\rho)},
\end{aligned}
\end{equation*}
where we have interchanged the role of the dependent and independent variables. This system has analytic solutions for initial values $(z,u_4^\rho,v_4^\rho) = (z_\ast,0,h)$ on the exceptional curve $\mathcal{L}_4^\rho$,
\begin{equation*}
z = z_\ast - \frac{\rho}{2} (u_4^\rho)^2 + O((u_4^\rho)^3), \quad v_4^\rho = h + O(u_4^\rho),
\end{equation*}
which can be inverted to find square-root type algebraic series expansions for $(u_4^\rho,v_4^\rho)$:
\begin{equation*}
u_4^\rho = (-2\bar{\rho})^{1/2} (z-z_\ast)^{1/2} + O(z-z_\ast), \quad v_4^\rho = h + O((z-z_\ast)^{1/2}).
\end{equation*}
The conditions (\ref{mn4cond}), for $\rho \in \{1,i,-1,-i\}$, decouple into three linearly independent conditions among the $\alpha_{i,j}(z)$ and their derivatives, namely
\begin{equation*}
(2 \alpha_{2,0}(z) - \alpha_{1,2}(z)^2)' = \alpha_{1,1}'(z) = (2 \alpha_{0,2}(z) + \alpha_{2,1}(z)^2)' \equiv 0,
\end{equation*}
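To verify the decoupling explicitly (a short check; we abbreviate $A = \alpha_{1,1}'$, $B = \alpha_{2,0}' - \alpha_{1,2}\alpha_{1,2}'$ and $C = \alpha_{0,2}' + \alpha_{2,1}\alpha_{2,1}'$, so that (\ref{mn4cond}) reads $\rho^2 A + \rho B + \bar{\rho} C = 0$), we evaluate at the four values of $\rho$:
\begin{equation*}
\begin{aligned}
\rho = 1: &\quad A + B + C = 0, \qquad & \rho = -1: &\quad A - B - C = 0, \\
\rho = i: &\quad -A + i B - i C = 0, \qquad & \rho = -i: &\quad -A - i B + i C = 0,
\end{aligned}
\end{equation*}
from which $A = 0$, $B + C = 0$ and $B - C = 0$, i.e.\ $A = B = C = 0$.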
that is, the functions $2 \alpha_{2,0}(z) - \alpha_{1,2}(z)^2$, $\alpha_{1,1}(z)$ and $2 \alpha_{0,2}(z) + \alpha_{2,1}(z)^2$ must each be constant. This is in agreement with the resonance conditions found in \cite{Kecker2016} for this Hamiltonian system. Furthermore, we introduce the following auxiliary function,
\begin{equation}
W = H - \alpha_{2,1}'(z) \frac{y^2}{x} - \alpha_{1,2}'(z) \frac{y^3}{x^2}.
\end{equation}
Using computer algebra, after each blow-up one can routinely check that the logarithmic derivative of $W$ is bounded in a neighbourhood of any point on the exceptional curves away from the base points, while $W$ itself is infinite there. Lemma \ref{log_bounded} then guarantees that the exceptional curves introduced by the first three blow-ups of each cascade are inaccessible for the flow of the vector field. Let $\mathcal{S}_4(z)$ denote the space obtained by blowing up $\mathbb{P}^2$ along the four cascades of base points, $\mathcal{P}_1^\rho \leftarrow \mathcal{P}_2^\rho \leftarrow \mathcal{P}_3^\rho \leftarrow \mathcal{P}_4^\rho$, $\rho \in \{1,i,-1,-i\}$, and let $\mathcal{I}_3(z) = \mathcal{L}_0 \cup \bigcup_{\rho \in \{1,i,-1,-i\}} \bigcup_{k=1}^{3} \mathcal{L}_k^{\rho}$. The space of initial values for the Hamiltonian system (\ref{MN4_resHam}) is $\mathcal{S}_4(z) \setminus \mathcal{I}_3'(z)$, at each point of which the system either defines an analytic solution or a solution with a square-root type algebraic branch point. Using similar arguments as in the previous sections we can thus show:
\begin{prop}
\label{MN4_prop}
Given the Hamiltonian
\begin{equation}
\label{MN4_resHam}
H = \frac{1}{4}\left( y^4 - x^4 \right) + \alpha_{2,1} x^2 y + \alpha_{1,2} x y^2 + \left(a + \frac{1}{2} \alpha_{1,2}^2\right) x^2 + \left(b - \frac{1}{2} \alpha_{2,1}^2\right) y^2 + c x y + \alpha_{1,0} x + \alpha_{0,1} y,
\end{equation}
where $\alpha_{1,2}(z),\alpha_{2,1}(z),\alpha_{1,0}(z),\alpha_{0,1}(z)$ are analytic functions and $a,b,c \in \mathbb{C}$ are constants, the system derived from this Hamiltonian has the quasi-Painlev\'e property, with square-root type algebraic poles.
\end{prop}
\subsection{Case $M=3,N=4$.}
With a slightly different normalisation as given in (\ref{MNHamiltonian}) we consider the class of Hamiltonians
\begin{equation}
\label{M3N4Hamiltonian}
H(z,x,y) = y^4 - x^3 + \sum_{0<i+j\leq 3} \alpha_{ij}(z)\, x^i y^j.
\end{equation}
This differs from the preceding case only in the power of $x$ being one less. However, since the system is no longer symmetric in $x$ and $y$, the blow-up structure in this case is very different. We will see that a single cascade of $16$ blow-ups is necessary to resolve an initial base point. In fact, this example is more similar to the second-order equation in section \ref{sec:degree4}. Extending the Hamiltonian system derived from (\ref{M3N4Hamiltonian}) to $\mathbb{P}^2$ yields the three systems of equations
\begin{align*}
x'(z) &= 4 y^3 + \alpha_{21} x^2 + 2 \alpha_{12} x y + \alpha_{11} x + 2 \alpha_{02} y + \alpha_{01}, \\
y'(z) &= 3 x^2 - 2 \alpha_{21} x y - \alpha_{12} y^2 - 2 \alpha_{20} x - \alpha_{11} y - \alpha_{10}, \\
u'(z) &= -\frac{2 u^2 v \alpha_{02}+u^3 \alpha_{01}+u^2 \alpha _{11}+2 u v \alpha _{12}+u \alpha_{21}+4 v^3}{u}, \\
v'(z) &= -\frac{2 u^2 v^2 \alpha_{02} +u^3 v \alpha_{01}+2 u^2 v \alpha_{11}+u^3 \alpha_{10}+2 u^2 \alpha _{20}+3 u v^2 \alpha_{12}+3 u v \alpha_{21}-3 u+4 v^4}{u^2}, \\
U'(z) &= -\frac{-2 U^2 V^2 \alpha_{20}-3 U^2 V \alpha _{21}+3 U^3 V-U V^3 \alpha_{10}-2 U V^2 \alpha_{11}-3 U V \alpha _{12}-V^3 \alpha_{01}-2 V^2 \alpha_{02}-4}{V^2}, \\
V'(z) &= -3 U^2+2 U V \alpha_{20}+2 U\alpha_{21}+V^2 \alpha_{10}+V \alpha_{11}+\alpha_{12},
\end{align*}
which have a single base point at $(u,v) = (0,0)$. This indeterminacy of the vector field can be removed by a cascade of $16$ blow-ups, obtained using computer algebra, which we give in the following. Since the expressions for the coordinate transformation become very long in this case, we only give the locations of base points to be blown up:
\begin{align*}
\mathcal{P}_1:& \quad (u,v) = (0,0) \quad \leftarrow \quad \mathcal{P}_2: \quad (U_1,V_1) = (0,0) \quad \leftarrow \quad \mathcal{P}_3: \quad (U_2,V_2) = (0,0) \\
\leftarrow \quad \mathcal{P}_4:& \quad (U_3,V_3) = (0,0) \quad \leftarrow \quad \mathcal{P}_5: \quad (U_4,V_4) = (1,0) \quad \leftarrow \quad \mathcal{P}_6: \quad (U_5,V_5) = (\alpha_{21},0)\\
\leftarrow \quad \mathcal{P}_7:& \quad (U_6,V_6) = \left(\alpha_{12} + \alpha_{21}^2, 0 \right) \quad \leftarrow \quad \mathcal{P}_8: \quad (U_7,V_7) = \left( 2 \alpha_{12} \alpha_{21} + \alpha_{21}^3, 0 \right) \\
\leftarrow \quad \mathcal{P}_9: & \quad (U_8,V_8) = \left( \alpha _{21}^4+3 \alpha _{12} \alpha _{21}^2+\alpha _{12}^2+\alpha _{20}, 0 \right) \\
\leftarrow \quad \mathcal{P}_{10}:& \quad (U_9,V_9) = \left( \alpha _{21}^5+4 \alpha_{12} \alpha_{21}^3+3 \alpha_{12}^2 \alpha_{21}+3 \alpha _{20} \alpha_{21}+\alpha_{11}, 0 \right) \\
\leftarrow \quad \mathcal{P}_{11}:& \quad (U_{10},V_{10}) = \left( -\frac{1}{6} \alpha _{12}'+\alpha _{21}^6 + 5 \alpha_{21}^4 \alpha_{12} + 6 \alpha_{12}^2 \alpha_{21}^2 +\alpha_{12}^3 + 6 \alpha_{20} \alpha_{21}^2 +3 \alpha _{11} \alpha _{21} \right. \\ & \qquad \qquad \left. + 3\alpha_{12} \alpha_{20} +\alpha_{02}, 0 \right) \\
\leftarrow \quad \mathcal{P}_{12}:& \quad (U_{11},V_{11}) = \left( -\frac{19}{30} \alpha _{21} \alpha_{12}'-\frac{1}{5} \alpha _{12}'+\alpha _{21}^7+6 \alpha _{12} \alpha _{21}^5+10 \alpha_{12}^2 \alpha _{21}^3 +10 \alpha_{20} \alpha_{21}^3 \right. \\ & \qquad \qquad \left. +4 \alpha_{12}^3 \alpha _{21} +3 \alpha_{02} \alpha _{21}+12 \alpha_{12} \alpha_{20} \alpha_{21} +3 \alpha _{11} + 6 \alpha_{11} \alpha_{21}^2 + 3 \alpha_{11} \alpha_{12} , 0 \right) \\
\leftarrow \quad \mathcal{P}_{13}:& \quad (U_{12},V_{12}) = \left( -\frac{3}{4} \alpha _{21} \alpha_{12}' -\frac{3}{2} \alpha _{21}^2 \alpha _{12}'-\frac{7}{12} \alpha_{12} \alpha _{12}'+\alpha _{21}^8 +7 \alpha _{12} \alpha _{21}^6+15 \alpha _{12}^2 \alpha_{21}^4 \right. \\ & \qquad \qquad +15 \alpha _{20} \alpha _{21}^4+10 \alpha _{11} \alpha _{21}^3 +10 \alpha_{12}^3 \alpha _{21}^2+30 \alpha _{12} \alpha_{20} \alpha_{21}^2+12 \alpha _{11} \alpha _{12} \alpha_{21} + \alpha_{12}^4 \\ & \qquad \qquad \left. +2 \alpha _{20}^2+\alpha _{10}+6 \alpha _{12}^2 \alpha_{20}+3 \alpha_{02} \left( 2 \alpha_{21}^2+\alpha _{12}\right), 0 \right) \\
\leftarrow \quad \mathcal{P}_{14}:& \quad (U_{13},V_{13}) = \left( -\frac{17}{6} \alpha _{21}^3 \alpha _{12}'-\frac{7}{4} \alpha _{21}^2 \alpha _{12}'-\frac{11}{4} \alpha_{12} \alpha _{21} \alpha _{12}'-\frac{2}{3} \alpha _{12} \alpha_{12}' -\frac{1}{3} \alpha_{20}' \right. \\ & \qquad \qquad +\alpha _{21}^9+8 \alpha _{12} \alpha _{21}^7+21 \alpha _{12}^2 \alpha _{21}^5 + 21\alpha _{20} \alpha _{21}^5 +20 \alpha _{12}^3 \alpha _{21}^3+10 \alpha _{02} \alpha _{21}^3 \\ & \qquad \qquad +60 \alpha_{12} \alpha_{20} \alpha _{21}^3+5 \alpha _{12}^4 \alpha _{21} +10 \alpha _{20}^2 \alpha_{21}+4 \alpha_{10} \alpha _{21}+12 \alpha _{02} \alpha _{12} \alpha _{21} \\ & \qquad \qquad \left. +30 \alpha_{12}^2 \alpha _{20} \alpha _{21} +\alpha _{11} \left(15 \alpha _{21}^4+30 \alpha_{12} \alpha _{21}^2+6 \alpha _{12}^2+4 \alpha _{20}\right) +\alpha_{01}, 0\right) \\
\leftarrow \quad \mathcal{P}_{15}:& \quad (U_{14},V_{14}) = \left( -\frac{14}{3} \alpha _{21}^4 \alpha _{12}'-\frac{13}{4} \alpha _{21}^3 \alpha _{12}'-\frac{31}{4} \alpha_{12} \alpha _{21}^2 \alpha _{12}' -\frac{37}{12} \alpha _{12} \alpha _{21} \alpha_{12}' \right. \\ & \qquad \qquad -\frac{5}{3} \alpha _{21} \alpha _{20}' -\frac{1}{2} \alpha _{11}'-\frac{5}{4} \alpha_{12}^2 \alpha _{12}' -\alpha_{20} \alpha_{12}'+\alpha _{21}^{10}+9 \alpha _{12} \alpha _{21}^8+28 \alpha _{12}^2 \alpha _{21}^6 \\ & \qquad \qquad +28 \alpha _{20} \alpha _{21}^6 +35 \alpha _{12}^3 \alpha_{21}^4+15 \alpha _{02} \alpha _{21}^4+105 \alpha _{12} \alpha _{20} \alpha _{21}^4+15 \alpha_{12}^4 \alpha _{21}^2 \\ & \qquad \qquad +30 \alpha _{20}^2 \alpha _{21}^2 +30 \alpha _{02} \alpha _{12} \alpha_{21}^2+90 \alpha _{12}^2 \alpha _{20} \alpha _{21}^2+4 \alpha_{01} \alpha _{21} +\alpha _{12}^5 \\ & \qquad \qquad +\alpha_{11} \left(21 \alpha _{21}^4+60 \alpha_{12} \alpha _{21}^2+30 \alpha _{12}^2+20 \alpha_{20}\right) \alpha _{21} +2 \alpha_{11}^2+6 \alpha_{02} \alpha _{12}^2 \\& \qquad \qquad \left. +10 \alpha _{12} \alpha _{20}^2+10 \alpha _{12}^3 \alpha _{20}+4 \alpha _{02} \alpha _{20} +2 \alpha_{10} \left(5 \alpha _{21}^2+2 \alpha _{12}\right), 0 \right) \displaybreak \\
\leftarrow \quad \mathcal{P}_{16}:& \quad (U_{15},V_{15}) = \left( -7 \alpha _{21}^5 \alpha _{12}'-\frac{21}{4} \alpha_{21}^4 \alpha _{12}'-\frac{203}{12} \alpha_{12} \alpha _{21}^3 \alpha_{12}' -\frac{17}{2} \alpha _{12} \alpha _{21}^2 \alpha _{12}'-5 \alpha_{21}^2 \alpha _{20}' \right. \\ & \qquad \qquad-\frac{5}{2} \alpha_{21} \alpha _{11}'-7 \alpha _{12}^2 \alpha_{21} \alpha_{12}' -\frac{29}{5} \alpha _{20} \alpha _{21} \alpha_{12}'-\alpha _{02}'-\frac{4}{3} \alpha_{12}^2 \alpha _{12}'-\alpha _{11} \alpha_{12}'-\frac{6}{5} \alpha _{20} \alpha_{12}' \\ & \qquad \qquad -\frac{5}{3} \alpha _{12} \alpha_{20}'+\frac{1}{6} \alpha _{21}''+\alpha _{21}^{11}+10 \alpha_{12} \alpha _{21}^9 +36 \alpha _{12}^2 \alpha _{21}^7 +36 \alpha _{20} \alpha _{21}^7 +28 \alpha _{11} \alpha _{21}^6 \\ & \qquad \qquad +56 \alpha _{12}^3 \alpha _{21}^5+168 \alpha _{12} \alpha _{20} \alpha _{21}^5 +105 \alpha _{11} \alpha _{12} \alpha _{21}^4+35 \alpha _{12}^4 \alpha_{21}^3+70 \alpha _{20}^2 \alpha _{21}^3 +20 \alpha _{10} \alpha _{21}^3 \\ & \qquad \qquad +210 \alpha_{12}^2 \alpha _{20} \alpha_{21}^3+90 \alpha_{11} \alpha_{12}^2 \alpha_{21}^2+60 \alpha_{11} \alpha_{20} \alpha _{21}^2 +6 \alpha_{12}^5 \alpha_{21} +10 \alpha _{11}^2 \alpha_{21} \\ & \qquad \qquad +60 \alpha _{12} \alpha _{20}^2 \alpha_{21}+20 \alpha_{10} \alpha_{12} \alpha_{21} +60\alpha_{12}^3 \alpha_{20} \alpha_{21}+10 \alpha_{11} \alpha _{12}^3+20 \alpha _{11} \alpha_{12} \alpha_{20} \\ & \qquad \qquad \left. +2 \alpha_{01} \left(5 \alpha _{21}^2+2 \alpha _{12}\right)+\alpha_{02} \left(4\alpha_{11}+\alpha_{21} \left(21 \alpha _{21}^4+60 \alpha_{12} \alpha _{21}^2+30 \alpha_{12}^2+20 \alpha _{20}\right)\right), 0 \right).
\end{align*}
After the $16$th blow-up, the system of equations takes the following form:
\begin{equation}
\label{system16}
\begin{aligned}
u_{16}' &= \frac{-3600 + p_{16,1}(z,u_{16},v_{16})}{u_{16}^4 v_{16}^5 \cdot d(z,u_{16},v_{16})^2}, \\
v_{16}' &= \frac{-240(2\alpha_{21}'(z)^2 + 2\alpha_{21}(z) \alpha_{21}''(z) + 3\alpha_{12}''(z)) + p_{16,2}(z,u_{16},v_{16})}{u_{16}^5 v_{16}^3 \cdot d(z,u_{16},v_{16})^2}, \\
U_{16}' &= \frac{240(2\alpha_{21}'(z)^2 + 2\alpha_{21}(z) \alpha_{21}''(z) + 3\alpha_{12}''(z)) + P_{16,1}(z,U_{16},V_{16})}{V_{16}^5 \cdot D(z,U_{16},V_{16})^2}, \\
V_{16}' &= \frac{-3600 + P_{16,2}(z,U_{16},V_{16})}{V_{16}^4 \cdot D(z,U_{16},V_{16})^2},
\end{aligned}
\end{equation}
where $p_{16,i}$ and $P_{16,i}$, $i=1,2$, are polynomials in $u_{16},v_{16}$ and $U_{16},V_{16}$, respectively, such that $p_{16,i}(z,0,v_{16}) = 0 = P_{16,i}(z,U_{16},0)$ on the exceptional curve $\mathcal{L}_{16}: \{u_{16}=0\} \cup \{V_{16}=0\}$. The polynomial expressions $d(z,u_{16},v_{16})$ and $D(z,U_{16},V_{16})$, whose zero sets are the proper transforms in these coordinates of the exceptional curves $\mathcal{L}_i$, $i=1,\dots,15$, of all previous blow-ups, satisfy $d(z,0,v_{16}) = 60 = D(z,U_{16},0)$ on the curve $\mathcal{L}_{16}$. Due to their lengthy nature we omit writing down the full expressions.
Thus, unless the condition
\begin{equation}
\label{M3N4cond}
2\alpha_{21}'(z)^2 + 2 \alpha_{21}(z) \alpha_{21}''(z) + 3 \alpha_{12}''(z) = (\alpha_{21}^2 + 3\alpha_{12})'' \equiv 0
\end{equation}
is satisfied, the solutions of system (\ref{system16}) admit logarithmic singularities. Indeed, integrating the fourth equation in system (\ref{system16}) gives the leading order behaviour $V_{16} \sim (z-z_0)^{1/5}$, and inserting this into the third equation shows that $U_{16}$ has a logarithmic branch point. Thus, for the absence of logarithmic singularities we require condition (\ref{M3N4cond}) to be satisfied, which amounts to the function $\alpha_{21}(z)^2 + 3\alpha_{12}(z)$ being at most linear in $z$. In this case, one factor of $u_{16}$ cancels in the second equation and one factor of $V_{16}$ cancels in the third equation of system (\ref{system16}), and by interchanging the roles of the dependent and independent variables, we can write the system of equations in the form
\begin{equation*}
\begin{aligned}
\frac{d z}{d V_{16}} & = \frac{V_{16}^4 \cdot D(z,U_{16},V_{16})^2}{-3600 + P_{16,2}(z,U_{16},V_{16})}, \\
\frac{d U_{16}}{d V_{16}} & = \frac{\tilde{P}_{16,1}(z,U_{16},V_{16})}{-3600 + P_{16,2}(z,U_{16},V_{16})},
\end{aligned}
\end{equation*}
which, for initial data $(z,U_{16},V_{16}) = (z_0,h,0)$ on the exceptional curve $\mathcal{L}_{16}$ from the last blow-up, becomes a regular initial value problem with analytic solutions
\begin{equation*}
z(V_{16}) = z_0 -\frac{1}{5} V_{16}^5 + O(V_{16}^6), \quad U_{16}(V_{16}) = h + O(V_{16}).
\end{equation*}
Inverting the power series for $z-z_0$ leads to series expansions for $U_{16}$ and $V_{16}$ in $(z-z_0)^{1/5}$, which translate to $5$th-root type algebraic poles in the original variables $x,y$.
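The fifth root is consistent with a direct leading-order balance in the original variables (a heuristic check, keeping only the top-degree terms $x' \approx 4y^3$, $y' \approx 3x^2$): the ansatz $x \sim a(z-z_0)^{-p}$, $y \sim b(z-z_0)^{-q}$ requires
\begin{equation*}
p + 1 = 3q, \qquad q + 1 = 2p \qquad \Longrightarrow \qquad p = \tfrac{4}{5}, \quad q = \tfrac{3}{5},
\end{equation*}
so that $x$ and $y$ indeed branch like powers of $(z-z_0)^{1/5}$ at a movable singularity.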
We still need to show that, after each blow-up, the exceptional line $\mathcal{L}_i$ is inaccessible for the flow of the vector field, except at the newly introduced base point $\mathcal{P}_{i+1}$. For this we introduce the following auxiliary function (of the form obtained in \cite{Kecker2016})
\begin{equation*}
W = H - \beta_{2,0}(z) y^2 - \beta_{3,1}(z) \frac{y^3}{x} - \beta_{4,2}(z) \frac{y^4}{x^2} - \beta_{1,0}(z) y - \beta_{2,1}(z) \frac{y^2}{x} - \beta_{3,2}(z) \frac{y^3}{x^2}.
\end{equation*}
The functions $\beta_{kl}(z)$ can be determined using the procedure described in \cite{Kecker2016}, where they are fixed so that $W$ satisfies a first-order equation with bounded coefficients. On the other hand, we can also obtain the $\beta_{kl}(z)$ in the blow-up process itself by the requirement that after each blow-up the logarithmic derivative $\frac{W'}{W}$ remains bounded on the exceptional curves, away from the base points. Using the latter method we have found (with the condition (\ref{M3N4cond}) imposed):
\begin{equation*}
\begin{aligned}
\beta_{2,0} &=\frac{\alpha_{2,1}'}{6}, \\
\beta_{3,1} &=\frac{1}{15} \left(3 \alpha_{1,2}'+2 \alpha_{2,1} \alpha_{2,1}'\right), \\
\beta_{4,2} &=\frac{1}{60} \left(9 \alpha_{2,1} \alpha_{1,2}'+5 \alpha_{1,2} \alpha_{2,1}'+6 \alpha_{2,1}^2 \alpha_{2,1}'\right), \\
\beta_{1,0} &= \frac{1}{30} \left(2 \alpha_{1,2} \alpha_{1,2}'+3 \alpha_{2,1}^2 \alpha_{1,2}'+10 \alpha_{2,0}'+3 \alpha_{1,2} \alpha_{2,1} \alpha_{2,1}'+2 \alpha_{2,1}^3 \alpha_{2,1}'\right), \\
\beta_{2,1} &=\frac{1}{60} \left(30 \alpha_{1,1}'-2 \alpha_{1,2} \alpha_{2,1} \alpha_{1,2}'-3 \alpha_{2,1}^3 \alpha_{1,2}'+20 \alpha_{2,1} \alpha_{2,0}'+20 \alpha_{2,0} \alpha_{2,1}'-3 \alpha_{1,2} \alpha_{2,1}^2\alpha_{2,1}' -2 \alpha_{2,1}^4 \alpha_{2,1}'\right), \\
\beta_{3,2} &=\frac{1}{60} \left(60 \alpha_{0,2}'+30 \alpha_{2,1} \alpha_{1,1}'-8 \alpha_{1,2}^2 \alpha_{1,2}'+24 \alpha_{2,0} \alpha_{1,2}'-14 \alpha_{1,2} \alpha_{2,1}^2 \alpha_{1,2}'-3 \alpha_{2,1}^4 \alpha_{1,2}'+20 \alpha_{1,2} \alpha_{2,0}' \right. \\ & \left. +20 \alpha_{2,1}^2 \alpha_{2,0}'+20 \alpha_{1,1} \alpha_{2,1}'-12 \alpha_{1,2}^2 \alpha_{2,1} \alpha_{2,1}'+36 \alpha_{2,0} \alpha_{2,1} \alpha_{2,1}'-11 \alpha_{1,2} \alpha_{2,1}^3 \alpha_{2,1}'-2 \alpha_{2,1}^5 \alpha_{2,1}'-10 \alpha_{2,1}'' \right).
\end{aligned}
\end{equation*}
Denoting by $\mathcal{S}_{16}(z)$ the extended space obtained from $\mathbb{P}^2$ by blowing up the cascade of base points $\mathcal{P}_1 \leftarrow \cdots \leftarrow \mathcal{P}_{16}$, we define the analogue of the space of initial values for the system by $\mathcal{S}_{16}(z) \setminus \mathcal{I}_{15}'(z)$, where $\mathcal{I}_{15}(z) = I \cup \bigcup_{i=1}^{15} \mathcal{L}_i$. At each point of this space, the system is either regular or defines solutions with a $5$th-root type singularity, whereas the exceptional curves $\mathcal{L}_i$, $i=1,\dots,15$, remain inaccessible. Using similar arguments as in Proposition \ref{degree4prop}, we can thus show:
\begin{prop}
Under the condition $\alpha_{21}^2(z) + 3\alpha_{12}(z) = az + b$, $a,b \in \mathbb{C}$, the Hamiltonian system derived from (\ref{M3N4Hamiltonian}) has the quasi-Painlev\'e property, i.e.\ all movable singularities obtained by analytic continuation along finite-length curves are $5$th-root type algebraic poles.
\end{prop}
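The identity underlying condition (\ref{M3N4cond}) is elementary, but since the other expressions in this section were obtained by computer algebra, a symbolic check in the same spirit may be useful (a sketch using sympy; `a21` and `a12` are placeholders for $\alpha_{21}(z)$ and $\alpha_{12}(z)$):

```python
import sympy as sp

z = sp.Symbol('z')
a21 = sp.Function('a21')(z)  # placeholder for alpha_{21}(z)
a12 = sp.Function('a12')(z)  # placeholder for alpha_{12}(z)

# Expanded form of the condition, as it appears in the blown-up system
lhs = 2*sp.diff(a21, z)**2 + 2*a21*sp.diff(a21, z, 2) + 3*sp.diff(a12, z, 2)

# Claimed closed form: second derivative of alpha_{21}^2 + 3*alpha_{12}
rhs = sp.diff(a21**2 + 3*a12, z, 2)

print(sp.simplify(lhs - rhs))  # 0: the two forms of the condition agree
```

Vanishing of this quantity is what forces $\alpha_{21}^2 + 3\alpha_{12}$ to be at most linear, as in the proposition.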
\section{Discussion}
For the examples of second-order equations in Sections \ref{sec:degree4} and \ref{sec:degree5}, as well as the Hamiltonian systems in Section \ref{sec:HamiltonianSystems}, we have constructed, under the conditions ensuring that these systems do not admit logarithmic singularities, the analogue of the space of initial values in the sense of Okamoto's space for the Painlev\'e equations. In this case, the solutions are transversal to the exceptional curve introduced by the last blow-up of any cascade of base points. The difference from the Painlev\'e case is that, in order to obtain regular initial value problems in the coordinates of the extended phase space, an additional change of dependent and independent variables is needed. The existence of these regular systems allows us to conclude, using Lemma \ref{log_bounded} with an appropriate auxiliary function, together with Painlev\'e's lemma (Lemma \ref{Painlemma}), that the only movable singularities that can occur in these equations, by analytic continuation along finite-length paths, are algebraic poles.
This procedure thus firstly serves as an algorithm to determine, for a given second-order equation or system of two equations, what types of singularities their solutions can develop, and to give conditions under which there are no logarithmic singularities. In the latter case, the construction of the space of initial values allows us to show that these equations have the quasi-Painlev\'e property.
In the examples considered in this article, it is crucial that the cascades of blow-ups required to resolve the base points terminate. By a powerful theorem of Hironaka \cite{Hironaka1964}, the singularities of an arbitrary algebraic variety can always be resolved by a finite number of blow-ups. This is not the case, however, for flows of vector fields. An example where the sequence of blow-ups does not terminate is given by Smith's equation $y'' + 4y^3 y' + y = 0$, which is not of Hamiltonian form. This was noted by the authors in \cite{KeckerFilipuk} and is a hint that for this equation more complicated movable singularities exist than those considered in this article. In fact, as noted earlier, Smith himself showed that there do exist singularities besides the algebraic poles (\ref{smithexpansion}), which are known to be accumulation points of such algebraic poles.
It would be an important step to find (necessary and/or sufficient) conditions on a differential equation that decide whether such behaviour is possible or not. We believe that, at least for the second-order equations in \cite{filipukhalburd1} and the Hamiltonian systems in \cite{Kecker2016}, such behaviour is not possible, although we cannot show this in general.
In the Hamiltonian setting, the level sets $H(z,x,y) = c$ define, for generic $z$ and analytic functions $\alpha_{i,j}(z)$ in the Hamiltonian $H(z,x,y;\alpha_{ij})$, algebraic curves in $\mathbb{P}^2$. For the Painlev\'e Hamiltonians, and also for the system with cubic Hamiltonian in section \ref{sec:cubicHam}, these level sets have genus $g=1$, i.e.\ they are elliptic curves. This is also reflected in the fact that the Painlev\'e transcendents, in general, are asymptotic to elliptic functions in certain sectors of the complex plane.
For the Hamiltonians of the second-order equations in sections \ref{sec:degree4} and \ref{sec:degree5}, the level sets $H(z,x,y) = c$ are algebraic curves of hyper-elliptic type, with genus $g=2$, as is the case for the other examples of Hamiltonians considered in this article. The number of blow-ups required, ranging from $14$ to $16$, is substantially larger than the $9$ blow-ups needed in the Painlev\'e case. We would like to propose several questions which will require further investigation.
Can one predict, from the form of the Hamiltonian $H(z,x,y)$, how many blow-ups will be required to completely resolve all base points? In particular, can one give conditions under which the cascades of blow-ups terminate?
Can a classification, similar to Sakai's classification \cite{Sakai2001} for the Painlev\'e equations in terms of point configurations on rational surfaces, be given for Hamiltonians defining algebraic curves of genus $g \geq 2$, and do there exist difference equations with a similar meaning to the discrete Painlev\'e equations?
\section*{Declarations}
\paragraph{\bf Funding}
GF acknowledges the support of the National Science Center (Poland) through the grant OPUS 2017/25/B/BST1/00931. TK acknowledges the support of the London Mathematical Society (LMS) and the Faculty of Mathematics, Informatics and Mechanics at the University of Warsaw (MIMUW) through travel grants to visit Warsaw in the years 2014, 2015 and 2016; these visits, where this research was initiated, were essential for the success of the project. \\
\paragraph{\bf Data availability}
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. \\
\paragraph{\bf Conflict of interest}
The authors declare that there are no conflicts of interest.
\bibliographystyle{plain}
One of the most appealing and natural mechanisms for generating the
tiny neutrino masses experimentally allowed is the (type I) see-saw
mechanism \cite{ss}. This simple extension of the Standard Model (SM)
with two or three right-handed Majorana neutrinos provides also
a very attractive origin for the observed Baryon Asymmetry of the
Universe, via leptogenesis \cite{fy,review}.
A lepton asymmetry is dynamically generated in
the out of equilibrium decay of the heavy Majorana neutrinos, and
then partially converted into a baryon asymmetry due to
($B+L$)-violating non-perturbative sphaleron interactions \cite{krs}.
Unfortunately, a direct test of the see-saw mechanism is in general
not possible. If the Dirac neutrino masses are similar to the other
SM fermion masses, as naturally expected, the Majorana neutrino masses
turn out to be of order $10^8-10^{16}$ GeV, so the
right-handed (RH) neutrinos can
not be produced in the LHC or future colliders, neither lead to other
observable effects such as lepton flavour violating processes.
On the other hand,
although the see-saw mechanism can also work with Majorana masses as low as
100 GeV, which in principle are within the energy reach of the LHC,
the smallness of the light neutrino masses generically implies in
this case tiny neutrino Yukawa couplings and as consequence negligible
mixing with the active neutrinos, leading also to unobservable
effects.
In order to obtain a large active-sterile neutrino mixing, some
cancellations must occur in the light neutrino mass matrix.
In this context, the small neutrino masses are not explained
by a see-saw mechanism, but rather by an ``inverse see-saw'' \cite{iss},
in which the global lepton number symmetry $U(1)_L$ is slightly
broken by a small parameter, $\mu$. In other words, the
fine-tuned cancellations required in the light neutrino
masses are not accidental, but due to an approximate symmetry.
The smallness of the parameter $\mu$ is
protected from radiative corrections (even without supersymmetry),
since in the limit $\mu \rightarrow 0$ a larger symmetry is
realized \cite{tHooft}.
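Schematically, in the inverse see-saw the light neutrino mass scales as $m_\nu \sim \mu\, m_D^2/M^2$, so that the smallness of $m_\nu$ is controlled by $\mu$ rather than by a huge $M$. A sketch of the orders of magnitude (the values $m_D \sim 100$ GeV and $M \sim 1$ TeV are illustrative assumptions, not taken from the text):

```python
# Inverse see-saw scaling: m_nu ~ mu * (m_D / M)^2. For TeV-scale M and a
# sizeable Dirac mass, solve for the lepton-number-violating parameter mu
# that reproduces m_nu ~ 0.05 eV.
m_D = 100.0        # Dirac mass in GeV (illustrative)
M = 1000.0         # singlet mass scale in GeV, i.e. 1 TeV (illustrative)
m_nu = 0.05e-9     # light neutrino mass in GeV (0.05 eV)

mu = m_nu * (M / m_D)**2   # in GeV
print(f"mu ~ {mu*1e9:.1f} eV")  # ~ 5.0 eV: a tiny, radiatively stable breaking
```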
Much attention has been devoted recently to this possibility,
both in the context of LHC phenomenology \cite{LHC} and
in leptogenesis \cite{resonant,majoron,ab}. A consequence of the
slightly broken $U(1)_L$ symmetry is the existence of two
strongly degenerate RH neutrinos, which combine to form
a quasi-Dirac fermion. This is interesting for leptogenesis,
because it leads to an enhancement of the CP asymmetry,
avoiding the strong bounds which apply to hierarchical
RH neutrinos \cite{di},
and allowing for successful leptogenesis at
much lower temperatures, $T \sim {\cal O}(1 \;\rm{TeV})$
\footnote{See also \cite{marta,hambye} for alternative
(extended) models of leptogenesis with RH Majorana
neutrinos at the
TeV scale and without resonant enhancement.}.
We focus here on a different scenario: we assume that the small lepton
number violating effects responsible for the light neutrino masses
are negligible during the leptogenesis epoch, so $B-L$ is
effectively conserved. This can be the case if global lepton number is
broken spontaneously at a scale well below the electroweak
phase transition \cite{cv1} or if, even though lepton number is
broken at high scales, this breaking leads to a CP asymmetry too small to
account for the observed baryon asymmetry.
The main difference from previous approaches is that in
this framework the
RH neutrinos combine exactly into Dirac fermions,
and the total CP asymmetry vanishes.
As a consequence, in order to generate a baryon
asymmetry we have to rely on
i) flavour effects and ii) sphaleron departure from
thermal equilibrium during the leptogenesis epoch.
Leptogenesis without neutrino Majorana masses, so-called
``Dirac leptogenesis'', has already been considered in the
literature \cite{Diraclepto}, but in a completely
different set-up.
In Dirac leptogenesis global lepton
number remains exactly unbroken (except for the SM sphaleron
interactions), so the light neutrinos are Dirac fermions
made of the SU(2) doublet $\nu_L$ and the singlet
RH neutrino $\nu_R$.
Realistic models contain not only the SM plus
RH neutrinos, but also additional heavy
particles which, in their CP violating decays, generate
a non-zero lepton number for left-handed particles and
an equal and opposite lepton number for
right-handed particles.
This paper is organized as follows. In Sec.~2 we describe our
leptogenesis framework, the CP asymmetries produced in the
heavy Dirac neutrino decay and the basic requirements to
generate the baryon asymmetry. In Sec.~3 we write the
network of Boltzmann equations relevant for leptogenesis
without Majorana masses. In Sec.~4 we present our results, both
in the resonant and non-resonant regimes, and we conclude in Sec.~5.
\section{The Framework}
\label{sec:iss}
Our starting point is that above the electroweak phase transition
the relevant particle content for leptogenesis is that of the SM
plus a number of SM singlet Dirac fermions $N_i$.
Without loss of generality we can work in a basis in which
the $N_i$ are mass eigenstates. In this basis
the Lagrangian above the electroweak phase
transition can be written as:
\begin{equation}
\mathcal{L} = \mathcal{L}_{\text{SM}} + i \overline{N}_i
\mislash{\partial} N_i - M_i \overline{N}_i
N_i
- \lambda_{\alpha i}\,{\widetilde h}^\dag\, \overline{P_R N_i} \ell_\alpha
- \lambda^*_{\alpha i } \overline{\ell}_\alpha P_R N_i {\widetilde h},
\label{eq:lag}
\end{equation}
where $\alpha,i$ are family indices ($\alpha=e, \mu, \tau$ and
$i=1,2,3, \dots$), $\ell_\alpha$ are the leptonic $SU(2)$ doublets,
$h=(h^+,h^0)^T$ is the Higgs field ($\widetilde h =i\tau_2 h^*$, with $\tau_2$
the second Pauli matrix) and $P_{R,L}$ are the chirality projectors.
A quantitative illustration of the proposed scenario which can
also account for the low energy neutrino phenomenology can
be found in the context of the inverse see-saw mechanism.
In this type of model \cite{iss},
the lepton sector of the
Standard Model is extended with
two electroweak singlet two-component leptons per generation, i.e.,
\begin{equation}
\ell_i =
\left( \begin{array}{c}
\nu_{ L i} \\
e_{L i}
\end{array} \right),e_{R i}, \nu_{R i}, s_{L i} \; .
\end{equation}
In the original formulation, the singlets $s_{L i}$ were
superstring-inspired E(6) singlets, in contrast to the right-handed
neutrinos $\nu_{R i}$, which are in the spinorial representation.
More recently this mechanism has also arisen in the context of
left-right symmetry \cite{alsv} and SO(10) unified models \cite{barr}.
At zero temperature the (9 $\times$ 9) mass matrix of the neutral
lepton sector in the $\nu_L, \nu_R^c, s_L$ basis is given by
\begin{equation}
\label{eq:M}
\cal{M} =
\left( \begin{array}{ccc}
0 & m_D & 0 \\
m_D^T & 0 & M \\
0 & M^T & \mu \\
\end{array} \right) \; ,
\end{equation}
where $m_D$ and $M$ are arbitrary 3 $\times$ 3 complex matrices in flavour
space with
\begin{equation}
m_D\equiv \lambda_{\alpha i} \, v \; ,
\end{equation}
and $v=174$~GeV the Higgs vacuum expectation value. Moreover,
$\mu$ is a $3\times 3$ complex symmetric matrix.
The matrix $\cal{M}$ can be diagonalized
by a unitary transformation, leading to nine mass eigenstates:
three of them correspond to the observed light neutrinos,
and the other six are heavy Majorana neutrinos.
In this ``inverse see-saw'' scheme, assuming $m_D,\mu \ll M$
the effective Majorana mass matrix for the light neutrinos is
approximately given by
\begin{equation}
\label{mnu}
m_\nu = m_D {M^T}^{-1}\mu M^{-1}m_D^T \ ,
\end{equation}
while the three pairs of two--component heavy neutrinos combine
to form three quasi-Dirac fermions with masses of order $M$. The
admixture among singlet and doublet $SU(2)$ states
(and the corresponding violation of unitarity in the light lepton sector)
is of order $m_D/M$ and can be large \cite{bsvv,cv2,mfss}.
This is so because although $M$ is a large mass scale suppressing
the light neutrino masses,
it can be much smaller than in the standard type-I see-saw scenario
since light neutrino masses in Eq.~(\ref{mnu})
are further suppressed by the small ratio $\mu/M$.
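As an order-of-magnitude illustration of this double suppression (the benchmark numbers in the snippet below are hypothetical, chosen only to illustrate the scaling of Eq.~(\ref{mnu}) for one generation):

```python
# Illustrative check of the inverse see-saw suppression
# m_nu ~ (m_D / M)^2 * mu, cf. Eq. (mnu), for a single generation.
# All benchmark values below are hypothetical, for illustration only.
GeV = 1.0
eV = 1e-9 * GeV

m_D = 10 * GeV    # Dirac mass, lambda * v with lambda ~ 0.06
M = 1000 * GeV    # electroweak-scale singlet mass
mu = 1e3 * eV     # small lepton-number-violating entry (~1 keV)

m_nu = (m_D / M) ** 2 * mu
print(f"m_nu ~ {m_nu / eV:.2f} eV")  # sub-eV despite M at the TeV scale
```

Even with $M$ at the TeV scale and sizeable Yukawas, the extra factor $\mu/M$ keeps the light masses below the eV range.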
Thus, in this scenario, for $M$ as low as the electroweak scale
the only bounds on the Yukawa couplings are those arising from
constraints on violation of weak universality, lepton flavour violating
processes and collider signatures \cite{unitlim}.
It is important to notice that in the $\mu \rightarrow 0$ limit a
conserved total lepton number can be defined. This can be easily seen
if, together with the standard lepton number $L_{SM}=1$ for the SM leptons
we assign a lepton number $L_N = 1$ to the singlets
$\nu_{R i}$ and $s_{L i}$.
With this assignment the mass matrix \eqref{eq:M} with $\mu=0$
conserves $L \equiv L_{SM}+L_N$.
Then, the three light neutrinos are
massless Weyl particles
while
the six heavy neutral leptons combine exactly into three Dirac fermions,
$N_i$, which above the electroweak scale are given by:
\begin{equation}
\label{Diracn}
N_i=s_{L i} + \nu_{R i} \;.
\end{equation}
The smallness of the $\mu$ term can be easily understood if
the total lepton number is spontaneously broken by a vacuum
expectation value $\langle \sigma \rangle$, with
$\mu = f \langle \sigma \rangle$ \cite{cv1}. In this case,
light neutrino masses are a consequence of
total lepton number being broken at an energy scale much lower than
the electroweak scale $\langle \sigma\rangle \ll v$,
and $\mu$ vanishes exactly at the heavy neutrino decay epoch.
This scenario introduces one extra scalar singlet which couples
with $s_L$ and the SM Higgs as
\begin{equation}
\mathcal{L}_{int} = - \frac 1 2 f_{ij} \overline{s^c_{L_i}} \sigma s_{L_j}
+ \lambda |h|^2 |\sigma|^2 \;,
\end{equation}
and could therefore affect our results when the present mechanism
is considered in the framework of the inverse see-saw.
In principle, there is a new Dirac neutrino decay channel,
$N_i \rightarrow N_j \sigma$, which could be relevant for leptogenesis;
in practice,
as we will see, the present mechanism only works for very degenerate
heavy singlets, $M_1 \simeq M_2$, so this channel is
phase-space suppressed and our analysis remains valid.
In this framework, if $\mu$ is effectively zero at the leptogenesis
epoch, all processes conserve $B$ and $L$ at the perturbative level.
On the other hand, the sphalerons violate $B+L_{SM}$ but conserve
$B-L_{SM}$ and,
since the new heavy leptons are SM singlets, they
do not change $L_N$. Therefore the SM sphaleron processes also conserve
$B-L$. In brief $B-L$ is conserved by all the interactions
of the model on scales above $\langle\sigma\rangle$.
Thus, one is effectively working in the limit in which
the three light neutrinos are
massless, the heavy ones combine into three Dirac fermions
given by Eq.~\eqref{Diracn}
and the relevant interactions are those in Eq.~\eqref{eq:lag}.
Indeed, if leptogenesis occurs via the decay of heavy Standard Model
singlets in a framework without Majorana masses above the electroweak scale,
Eq.~\eqref{eq:lag} contains all the relevant information. Thus our results will hold
whether light neutrinos acquire masses by the
inverse see-saw mechanism with the $\mu$ term generated at low
scales as described above or by some other mechanism, as long as
it does not imply the presence of new states relevant for leptogenesis.
\subsection{The CP Asymmetries}
\label{sec:CP}
One important ingredient that determines the baryon asymmetry generated
in thermal leptogenesis in this scenario is the CP asymmetry
produced in the decays of the heavy Dirac neutrinos $N_i$ into leptons
of flavour $\alpha$, $\epsilon_{\alpha i}$:
\begin{equation}
\epsilon_{\alpha i} \equiv
\frac{\displaystyle \Gamma(\proname{N_i}{\ell_\alpha h})
- \Gamma(\proname{\bar N_i}{\bar \ell_\alpha \bar h})}
{\displaystyle \sum _\alpha
\Gamma(\proname{N_i}{\ell_\alpha h} )+ \Gamma(\proname{\bar N_i}{\bar
\ell_\alpha \bar h})} \; .
\end{equation}
\FIGURE[t]{
\centering
\includegraphics[width=0.8\textwidth]{diagrams.eps}
\caption[]{The tree and one loop diagrams that contribute to the CP
asymmetry in decays when the heavy neutrinos are of Dirac type.
\label{fig:loop}}}
Because of the Dirac nature of $N_i$,
the only one-loop contribution to the CP asymmetry arises
from the interference of the tree-level and the self-energy
one-loop diagrams displayed in Fig.~\ref{fig:loop}, and it is given
by \cite{crv,resonant}
\footnote{We have already regularized the divergence at
$M_i = M_j$ using the resummation procedure of \cite{resonant}.
In \cite{abp} a different calculation was performed, obtaining that the
regulator of the singularity is $M_j \Gamma_j - M_i \Gamma_i$
instead of $M_i \Gamma_j$, but for the values of the widths
that we consider ($\Gamma_j \gg \Gamma_i$), both results coincide.}
\begin{eqnarray}
\epsilon_ {\alpha i} &=& \frac{-1}{8 \pi (\lambda^\dag \lambda)_{ii}}
\sum_{j \neq i} \frac{a_j-1}{(a_j-1)^2+g_j^2}
\miim{\lambda_{\alpha j }^* \lambda_{\alpha i}
(\lambda^\dag \lambda)_{i j}} \nonumber \\
&=&
\frac{-1}{8 \pi}\sum_{j \neq i} \frac{a_j-1}{(a_j-1)^2+g_j^2}
(\lambda^\dag \lambda)_{jj}
\sqrt{K_{\alpha i}} \sqrt{K_{\alpha j}} \sum_{\beta \neq
\alpha} \sqrt{K_{\beta i}}
\sqrt{K_{\beta j }} \, p_{\alpha\beta}^{ij} \; ,
\label{eq:epsi}
\end{eqnarray}
where $a_j \equiv M_j^2/M_i^2$, $g_j\equiv \Gamma_j/M_i$, and
\begin{equation}
\Gamma_i=\frac{M_i}{8\pi} (\lambda^\dagger \lambda)_{ii}
\equiv \frac{1}{8\pi} \frac{\tilde m_i M_i^2}{v^2}
\end{equation}
is the decay width of $N_i$.
In the last equation we have also introduced the effective mass $\tilde m_i$.
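Equating the two forms of the width gives the dictionary $\tilde m_i = (\lambda^\dag \lambda)_{ii}\, v^2/M_i$. As a quick numerical cross-check (the snippet below is our own illustration, using the benchmark values $M_1 = 250$~GeV and $(\lambda^\dag\lambda)_{11} = 8.2\times 10^{-15}$ that appear in Sec.~\ref{be}):

```python
from math import pi

v = 174.0          # Higgs vev in GeV
M1 = 250.0         # GeV, benchmark used for the reaction-density figure
lam2_11 = 8.2e-15  # (lambda^dag lambda)_{11}

# Effective mass: tilde m_i = (lambda^dag lambda)_{ii} v^2 / M_i
m_tilde = lam2_11 * v**2 / M1      # in GeV
Gamma1 = M1 / (8 * pi) * lam2_11   # decay width in GeV

print(f"tilde m_1 = {m_tilde / 1e-9:.1e} eV")  # ~1e-3 eV, as quoted
print(f"Gamma_1   = {Gamma1:.1e} GeV")
# consistency: the two forms of Gamma_i agree
assert abs(Gamma1 - m_tilde * M1**2 / (8 * pi * v**2)) < 1e-25
```

This reproduces the value $\tilde m_1 = 10^{-3}$~eV quoted with that benchmark.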
Notice that the above CP asymmetry is only
part of the usual wave function contribution for Majorana $N_i$,
so that in the Dirac case
the total CP asymmetry exactly vanishes
\begin{equation}
\epsilon_i \equiv \sum_\alpha \epsilon_{\alpha i}=0 \:,
\end{equation}
by CPT invariance and unitarity.
Thus in order for leptogenesis to occur we must be in a temperature
regime in which flavour effects~\cite{flavour, flavour2} are important.
In writing Eq.~\eqref{eq:epsi} we have expressed
the Yukawa couplings in terms of the
projection coefficients $K_{\alpha i}$
(related to the absolute values of the Yukawas)
and some phases $\phi_{\alpha i}$ as
\begin{equation}
\lambda_{\alpha i} = \sqrt{K_{\alpha i}} \sqrt{(\lambda^\dag \lambda)
_{ii}} e^{i \phi_{\alpha i}} \: ,
\end{equation}
where
\begin{eqnarray}
K_{\alpha i } &=& \frac{\lambda _{\alpha i}
\lambda_{\alpha i}^*}{(\lambda^\dag \lambda)_{ii}} \; , \\
p_{\alpha\beta}^{ij}&=&-p_{\beta\alpha}^{ij}=
-p_{\alpha\beta}^{ji}=\sin(\phi_{\alpha i}- \phi_{\alpha j}+
\phi_{\beta j}- \phi_{\beta i}) \; .
\end{eqnarray}
The factor $\frac{a_j-1}{(a_j-1)^2+g_j^2}$ can be resonantly enhanced up to
$M_j/(2 \Gamma_j)$ if $M_j-M_i\sim \Gamma_j/2$ \cite{resonant}.
In this regime the resonant contribution to the asymmetry becomes
\begin{equation}
\epsilon^{res}_ {\alpha i}= - \frac{1}{2}
\sqrt{K_{\alpha i}} \sqrt{K_{\alpha j}} \sum_{\beta \neq
\alpha} \sqrt{K_{\beta i}}
\sqrt{K_{\beta j }} \, p_{\alpha\beta}^{ij} \; .
\end{equation}
\subsection{The Generation of $B$: Basic Requirements}
\label{sec:requirements}
In the scenario that we are presenting $B - L_{SM} - L_N$ is
conserved by all processes. Since we want to explore the possibility
of generating the cosmological baryon asymmetry exclusively during the
production and decay of the heavy Dirac neutrinos, we assume that at
the beginning of the leptogenesis era $B-L_{SM} =0$ and $L_N=0$ (same
abundance of $N_i$ and $\overline N_i$)\footnote{In this work we will
always consider the case in which the heavy neutrinos are produced
thermally, exclusively through their Yukawa interactions, so that their
abundance vanishes at the beginning of leptogenesis. However, we have
also studied cases in which their initial abundance is that of a
relativistic particle in equilibrium with the thermal bath (still
satisfying $L_N (\text{initial})=0$) and found no significant
differences.}. Therefore it is clear that
$B-L_{SM} = 0$ after the heavy neutrinos have disappeared. If the
sphalerons were still active after the decay epoch, the final
baryon asymmetry, being proportional to $B-L_{SM}$, would be zero.
Then, in order for leptogenesis to occur, one must be in the regime in
which the sphalerons depart from equilibrium (which occurs at $T \sim
T_f$) {\sl during the decay epoch}. In this case, as described in the next
section, the baryon asymmetry freezes at a value $Y_B \propto
Y_{B-L_{SM}} (T = T_f)$, which in general does not vanish.
To go on we assume that the observed baryon asymmetry is generated
during the decay epoch of the lightest heavy neutrino, $N_1$. For the
sake of concreteness we will assume that $M_3\gg M_2\geq M_1$ so the
maximum contribution to the CP asymmetry in $N_1$ decays is due to
$N_2$. Note that, in this scenario, for hierarchical masses the CP
asymmetry is suppressed as $(M_1/M_2)^2$, instead of
$M_1/M_2$ in the standard case with heavy Majorana neutrinos.
This contributes to the
impossibility of generating enough $B$
with hierarchical heavy neutrinos.
Moreover, if $N_1$ and $N_2$ have similar masses, they will coexist
during the leptogenesis era, and there will be
lepton flavour violating (but total lepton number conserving) washout
processes involving real $N_2$ as well as real $N_1$.
In what follows we will refer to the washout processes involving real or
virtual heavy neutrinos as lepton flavour violating washouts (LFVW).
This implies that, compared to high scale leptogenesis
models with $M_i \gtrsim 10^{9}$~GeV, this electroweak scenario
typically suffers from very strong LFVW
\footnote{ Similarly strong $\Delta L \neq 0$ washouts are also expected in
standard (non-resonant)
leptogenesis with Majorana neutrino masses at the weak scale
\cite{marta,hambye}.}.
The argument goes as follows:
the intensity of the LFVW
associated with processes involving real or virtual $N_2$ is determined by
the dimensionless parameter $\tilde m_2/m_*$
\footnote{The
quantity $m_*$ is the {\it equilibrium mass}, defined by the
condition $\tfrac{\Gamma_{N_i}}{H(T=M_i)}=\frac{\tilde m_i}{m_*}$,
where $H$ is the Hubble expansion rate, so that $m_*=\tfrac{16}{3
\sqrt{5}} \pi^{5/2} \sqrt{g_{*SM}} \frac{v^2}{m_{pl}} \simeq 1.08
\times 10^{-3}$~eV ($g_{*SM}$ is the number of Standard Model
relativistic degrees of freedom at temperature $T$ and $m_{pl}$ is the
Planck mass).}.
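The numerical value of $m_*$ quoted in the footnote follows directly from its definition; the short script below is our own check, assuming the standard values $g_{*SM}=106.75$ and $m_{pl}=1.22\times 10^{19}$~GeV:

```python
from math import pi, sqrt

# Equilibrium mass: m_* = (16 / (3 sqrt 5)) * pi^{5/2} * sqrt(g_*) * v^2 / m_pl
v = 174.0        # GeV
g_star = 106.75  # SM relativistic d.o.f. (assumed value)
m_pl = 1.22e19   # GeV (assumed Planck-mass convention)

m_star = 16.0 / (3 * sqrt(5)) * pi**2.5 * sqrt(g_star) * v**2 / m_pl  # GeV
print(f"m_* = {m_star * 1e9:.2e} eV")  # close to the quoted 1.08e-3 eV
```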
For fixed values of the Yukawas of $N_2$ -- basically
for a fixed value of the CP asymmetry in $N_1$
decays -- $\tilde m_2/m_*$ scales as
$m_{pl}/M_2$. Therefore the LFVW will be stronger the lower $M_2$ is.
Consequently in order to avoid a
complete erasure of the asymmetry generated in the processes involving
$N_1$, either the CP asymmetry has to be resonantly enhanced so that
the Yukawas of $N_2$ can be small while still having enough CP
asymmetry, or some of the projectors $K_{\alpha 2}$ have to be very
small to reduce the LFVW in some of the lepton flavours.
Altogether we find the following {\sl minimum} requirements:
\begin{itemize}
\item The mass of $N_1$ must not be far above the
sphaleron freeze-out temperature, $M_1\lesssim {\cal O}({\rm TeV})$.
\item Either the CP asymmetry is resonantly enhanced or
the Yukawa couplings of $N_2$ have a strong flavour hierarchy
(i.e. $K_{\alpha 2} \ll 1$ for $\alpha = e, \mu$ or $\tau$).
\item Even in the non-resonant case there cannot be a large hierarchy
between $M_1$ and $M_2$ in order to
have as little suppression as possible
from the $\frac{1}{a_2-1}$ factor.
\end{itemize}
With these requirements we conclude that to quantitatively
determine the viability of this scenario we need to consider
the evolution of the abundances of both $N_1$ and $N_2$ (as well
as the corresponding $\overline N_i$'s) and the lepton flavour
asymmetries. In order to do so we solve the set of Boltzmann equations
which we describe next.
\section{The Boltzmann Equations}
\label{be}
In writing the relevant Boltzmann Equations we first notice that,
contrary to the case in which the heavy neutrinos are of Majorana
nature, the densities of $N_i$ and $\bar N_i$ can be different and
enter separately in the Boltzmann equations. Thus,
in general there is an asymmetry between these two degrees of
freedom which induces additional washout of the lepton asymmetry.
Moreover, the usual $\Delta L_{SM} =2$ processes
mediated by the heavy neutrinos (like $\ell_\alpha h \rightarrow
\bar{\ell}_\beta \bar{h}$, etc.) are absent, since total lepton
number is perturbatively conserved.
Therefore, the washout of the lepton asymmetries is due to
$\Delta L_{SM}=0$ lepton flavour violating scatterings
mediated by the $N_i$
($\ell_\alpha h \rightarrow \ell_\beta h$, etc.),
and
$\Delta L_{SM} = -\Delta L_N = \pm 1$ reactions with one external $N_i$.
Considering all the $1 \leftrightarrow 2$ and $2 \leftrightarrow
2$ processes resulting from the Yukawa interactions of the heavy
neutrinos and the Yukawa interaction of the top quark,
the evolution of the different densities is given by:
\begin{eqnarray}
- zHs \frac{\dif Y_{N_i+\bar N_i}}{\dif z} &=& \sum_\alpha
\{\lrproname{N_i}{\ell_\alpha h}\} + \{\lrproname{N_i \bar
\ell_\alpha}{\bar Q_3 t}\} + \{\lrproname{N_i \bar t}{\bar Q_3
\ell_\alpha}\} + \{\lrproname{N_i Q_3}{t \ell_\alpha}\} \nonumber \\
&& +
\sum_{\alpha, \beta, j \neq i} \{\lrproname{N_i \bar N_j}{\ell_\alpha
\bar \ell_\beta}\} + \{\lrproname{N_i \ell_\beta}{N_j \ell_\alpha}
\}' + \{\lrproname{N_i \bar \ell_\alpha}{N_j \bar \ell_\beta} \}
\label{eq:be1-1} \\
&& + \sum_{j \neq i} \{\lrproname{N_i \bar N_j}{h \bar h}\} +
\{\lrproname{N_i h}{N_j h }\}' + \{\lrproname{N_i \bar h}{N_j \bar
h}\} \; , \nonumber \\
- zHs \frac{\dif Y_{N_i-\bar N_i}}{\dif z} &=& \sum_\alpha
(\lrproname{N_i}{\ell_\alpha h})+ (\lrproname{N_i \bar
\ell_\alpha}{\bar Q_3 t}) + (\lrproname{N_i \bar t}{\bar Q_3
\ell_\alpha}) + (\lrproname{N_i Q_3}{t \ell_\alpha}) \nonumber\\
& &+
\sum_{\alpha, \beta, j \neq i} (\lrproname{N_i \bar N_j}{\ell_\alpha
\bar \ell_\beta}) + (\lrproname{N_i \ell_\beta}{N_j \ell_\alpha })'
+ (\lrproname{N_i \bar \ell_\alpha}{N_j \bar \ell_\beta}) \label{eq:be1-2}\\
& & +
\sum_{j \neq i} (\lrproname{N_i \bar N_j}{h \bar h}) + (\lrproname{N_i
h}{N_j h })' + (\lrproname{N_i \bar h}{N_j \bar h}) \; , \nonumber\\
- zHs
\frac{\dif Y_{\Delta_\alpha}}{\dif z} &=& \sum_i
(\lrproname{N_i}{\ell_\alpha h}) + (\lrproname{N_i \bar
\ell_\alpha}{\bar Q_3 t}) + (\lrproname{N_i \bar t}{\bar Q_3
\ell_\alpha}) + (\lrproname{N_i Q_3}{t \ell_\alpha}) \nonumber\\
& & +
\sum_{i, j, \beta \neq \alpha} (\lrproname{N_i \bar N_j}{\ell_\alpha
\bar \ell_\beta}) + (\lrproname{N_i \ell_\beta}{N_j \ell_\alpha })'
+ (\lrproname{N_i \bar \ell_\alpha}{N_j \bar \ell_\beta})
\label{eq:be1-3}\\
& &+
\sum_{\beta \neq \alpha} (\lrproname{ h \bar h}{\ell_\alpha \bar
\ell_\beta}) + (\lrproname{\ell_\beta \bar h}{\ell_\alpha \bar h}) +
(\lrproname{\ell_\beta h}{\ell_\alpha h})' \; ,
\nonumber
\end{eqnarray}
where $Y_X \equiv n_X / s$ is the number density
of a single degree of freedom of the particle species
$X$ normalized to the entropy density,
and $y_X \equiv (Y_X - Y_{\bar X})/Y_X^{eq}$
(to be used below) is the asymmetry density
normalized to the equilibrium density. With $Q_3$ and $t$ we denote
respectively the third generation quark doublet and the top $SU(2)$ singlet.
We have also defined $Y_{\Delta_\alpha} \equiv Y_B/3 - Y_{L_\alpha}$,
where $Y_B$ is the baryon asymmetry and
$Y_{L_\alpha}=(2y_{\ell_\alpha}+y_{e_{R\alpha}})Y^{eq}$
is the total lepton asymmetry in the flavour $\alpha$
(with $Y^{eq} \equiv Y_{\ell_\alpha}^{eq}
= Y_{e_{R\alpha}}^{eq}$). Moreover
$Y_{N_i + \bar N_i} \equiv Y_{N_i} + Y_{{\bar N_i}}$ is the total
normalized density of the heavy neutrino $N_i$
and $Y_{N_i - \bar N_i}
\equiv Y_{N_i} - Y_{\bar N_i}$
is the corresponding $L_N$ asymmetry.
To write the Eqs.~\eqref{eq:be1-1}--\eqref{eq:be1-3} we have also defined the
following combinations of reaction densities:
\begin{eqnarray}
[a,b,...\leftrightarrow i,j,...]&=& \frac{n_a}{n_a^{eq}}
\frac{n_b}{n_b^{eq}} \gamma^{eq}(a,b,...\rightarrow i,j,...) -
\frac{n_i}{n_i^{eq}} \frac{n_j}{n_j^{eq}}
\gamma^{eq}(i,j,...\rightarrow a,b,...), \\
(a,b,...\leftrightarrow i,j,...) &\equiv& [a,b,...\leftrightarrow
i,j,...] - [\bar{a},\bar{b},...\leftrightarrow \bar{i},\bar{j},...] \; ,\\
\{a,b,...\leftrightarrow i,j,...\} &\equiv&
[a,b,...\leftrightarrow i,j,...] + [\bar{a},\bar{b},...\leftrightarrow
\bar{i},\bar{j},...] \; ,
\end{eqnarray}
and a prime on the contribution of a given process indicates that
its on-shell part has to be subtracted. Note
that we have not included
scatterings involving gauge bosons: they do not
introduce qualitatively new effects, and no further density asymmetries
are associated with them. We have also
ignored finite temperature corrections to the particle masses and
couplings \cite{gi04}. In particular we take all equilibrium number
densities $n_X^{eq}$, with $X \neq N_i, \bar N_i$, equal to those of
massless particles.
After summing over the most relevant contributions
\footnote{
The scatterings mediated by the Higgs or leptons involving two external
heavy neutrinos have been neglected because they are much slower than
the scatterings
involving only one external heavy neutrino and the top quark, since
the Yukawa couplings of the heavy neutrinos are much smaller than
the Yukawa of the top quark.}
we find:
\begin{eqnarray}
\frac{\dif Y_{N_i + \bar N_i}}{\dif z} &=&\frac{-2}{sHz}
\left(\frac{Y_{N_i + \bar N_i}}{Y_{N_i + \bar
N_i}^{eq}}-1\right) \sum_\alpha \left(
\g{N_i}{\ell_\alpha h}+ \g{N_i\bar \ell_\alpha}{\bar Q_3 t}
+ 2 \,
\g{N_i Q_3}{t \ell_\alpha} \right) \; ,
\label{eq:be2}\\
\frac{\dif Y_{N_i - \bar N_i}}{\dif z} &=& \frac{-1}{sHz} \left\{
\sum_\alpha \g{N_i}{\ell_\alpha h} \left[ y_{N_i} - y_{\ell_\alpha} - y_h \right] + \g{N_i \bar \ell_\alpha}{\bar Q_3 t} \left[ y_{N_i} - \frac{Y_{N_i + \bar N_i}}{Y_{N_i + \bar N_i}^{eq}} y_{\ell_\alpha} + y_{Q_3} - y_t \right]
\right. \nonumber
\\
&& \left. + \sum_\alpha
\g{N_i Q_3}{t \ell_\alpha} \left[ 2 y_{N_i} - 2 y_{\ell_\alpha}
+ \left( 1 + \frac{Y_{N_i + \bar N_i}}{Y_{N_i + \bar N_i}^{eq}} \right) \left(y_{Q_3}- y_t \right) \right] \right\} , \label{eq:be3}
\\
\frac{\dif Y_{\Delta_\alpha}}{\dif z} & =& \frac{-1}{sHz} \left\{
\sum_i \left( \frac{Y_{N_i + \bar N_i}}{Y_{N_i + \bar N_i}^{eq}} - 1
\right)\epsilon_{\alpha i} \, 2 \sum_\beta \left(\g{N_i}{\ell_\beta h}
+ \g{N_i \bar \ell_\beta}{\bar
Q_3 t} + 2 \, \g{N_i Q_3}{t
\ell_\beta} \right) \right. \nonumber \\
&& \left. + \sum_i \g{N_i}{\ell_\alpha h} \left[ y_{N_i} -
y_{\ell_\alpha} - y_h \right] + \g{N_i \bar \ell_\alpha}{\bar Q_3 t}
\left[ y_{N_i} - \frac{Y_{N_i + \bar N_i}}{Y_{N_i + \bar N_i}^{eq}}
y_{\ell_\alpha} + y_{Q_3} - y_t \right] \right.\nonumber \\
& & \left. + \sum_i
\g{N_i Q_3}{t \ell_\alpha} \left[ 2 y_{N_i} - 2 y_{\ell_\alpha}
+ \left( 1 + \frac{Y_{N_i + \bar N_i}}{Y_{N_i + \bar N_i}^{eq}} \right) \left(y_{Q_3}- y_t \right) \right] \right. \nonumber\\
&& \left. + \sum_{\beta \neq
\alpha} \left(
\gp{\ell_\beta h}{\ell_\alpha h} +
\g{\ell_\beta \bar{h}}{\ell_\alpha \bar{h}} +
\g{h\bar{h}}{\ell_\alpha\bar{\ell_\beta}}
\right) [y_{\ell_\beta} - y_{\ell_\alpha}] \right\} \; ,
\label{eq:be4}
\end{eqnarray}
where we have introduced the notation
$\g{a, b, \dots}{c, d, \dots}
\equiv \gamma(\proname{a, b, \dots}{c, d, \dots})$.
In writing Eqs.~\eqref{eq:be3} and \eqref{eq:be4} we have accounted
for the fact that the CP asymmetry in scatterings is equal to the CP
asymmetry in decays since only the wave part contributes to the CP
asymmetry of the different processes (see~\cite{nrr}). Moreover, since the
total CP asymmetries $\epsilon_i$ are
null, there are no source terms proportional to $\epsilon_{\alpha i}$ in the
equation for the evolution of $Y_{N_i - \bar N_i}$, which is therefore driven
only by terms proportional to the different density asymmetries.
It is also important to notice that the equations for the asymmetries are
not all independent
due to the condition $\frac{\dif Y_{B - L_{SM} - L_N}}{\dif z}=0$, where
$Y_{B - L_{SM} - L_N} \equiv Y_B - Y_{L_{SM}} - Y_{L_N}$,
$Y_{L_{SM}} \equiv \displaystyle\sum_\alpha Y_{L_\alpha}$, and
$Y_{L_N} \equiv \displaystyle \sum_i Y_{N_i - \bar N_i}$. If we take as initial
condition that all the asymmetries are null, then $\displaystyle \sum_\alpha
Y_{\Delta_\alpha} - \sum_i Y_{N_i - \bar N_i} = 0$.
In Fig.~\ref{fig:rates} we plot the different reaction densities
included in the Boltzmann equations, normalized to $H n_\ell^{eq}$,
where $H$ is the expansion rate of the universe.
This normalization is appropriate to study the contribution of the
different processes to the LFVW. We show the figure for
$M_1= 250$ GeV, $M_2 = 275$ GeV,
$(\lambda^\dag \lambda)_{11}= 8.2 \times 10^{-15} \;
(\tilde m_1=10^{-3}\; {\rm eV})$, and
$(\lambda^\dag \lambda)_{22}= 10^{-4}$,
which are the values of the parameters of one of the examples given in the
next section.
For other values of the Yukawa couplings the reaction densities
$\g{N_i}{\ell_\alpha h}$,
$\g{N_i\bar \ell_\alpha}{\bar Q_3 t}$, and
$\g{N_i Q_3}{t \ell_\alpha}$ scale as
$(\lambda^\dag \lambda)_{ii}$ while
$\g{\ell_\beta \bar{h}}{\ell_\alpha \bar{h}}$ and
$\g{h\bar{h}}{\ell_\alpha\bar{\ell_\beta}}$ scale as
$((\lambda^\dag \lambda)_{22})^2$
\footnote{The contribution of the virtual $N_1$ to the
LFVW processes is
negligible in all the cases we are going to deal with,
so we have only included $N_2$ virtual scatterings.}.
The subtracted $s$-channel reaction density
$\gp{\ell_\beta h}{\ell_\alpha h}$ is of the same order
and scales as the $t$- and $u$-channel ones shown in the plot.
The infrared divergence of the reaction mediated by the Higgs
in the $t$-channel has been regularized using a Higgs mass equal to
$M_1$ in the propagator.
For convenience we have
factored out the flavour projection factors, as explicitly
displayed in the figure.
\FIGURE[ht]{
\centering
\includegraphics[width=0.8\textwidth]{figrates.eps}
\caption[]{Reaction densities
included in the Boltzmann equations as a function of $z=M_1/T$, normalized
to $H n_\ell^{eq}$, where $H$ is the expansion rate of the universe.
\label{fig:rates}}}
From the figure we see that, as explained in
Sec.~\ref{sec:requirements}, the rates of processes involving $N_2$
are generically very large (if the CP asymmetry in $N_1$ decays is
required to be not too small). When $M_2 \sim M_1$ the LFVW
induced by processes involving external $N_2$ are dominant over
the ones mediated by virtual $N_2$ in the temperature range relevant
for $N_1$ leptogenesis. Conversely, if there is some hierarchy between
$M_2$ and $M_1$ the LFVW mediated by processes involving virtual
$N_2$ can become the most important ones, since they are not Boltzmann
suppressed for $T > M_2$.
The network of equations~\eqref{eq:be2}--\eqref{eq:be4} can be
solved after the densities $y_{\ell_\alpha}$, $ \ y_h$
and $y_t-y_{Q_3}$ are expressed in terms of the quantities
$Y_{\Delta_\alpha}$ with
the help of the equilibrium conditions imposed by the fast reactions
which hold in the considered temperature regime
(see~\cite{spect} and also~\cite{flavour2}). For
$10^6$ GeV $\gg T\gtrsim T_c$ (where we denote by $T_c$ the critical
temperature of the electroweak phase transition) they read~\cite{flavour2}
\begin{equation}
y_{\ell_\alpha}= -\sum_\beta C^\ell_{\alpha \beta}\>
\frac{Y_{\Delta_\beta}}{Y^{eq}}, \qquad
\qquad y_h = - \sum_\alpha C^H_\alpha\,
\frac{Y_{\Delta_\alpha}}{Y^{eq}}\, ,
\label{eq:equil}
\end{equation}
with
\begin{equation}
C^H = \frac{8}{79}(1 ,\> 1,\> 1) \qquad \hbox{\rm and} \qquad
C^\ell =\frac{1}{711} \begin{pmatrix}
221 & -16 & -16\\
-16 & 221 & -16 \\
-16 & -16 & 221 \end{pmatrix}\,.
\label{eq:chcl}
\end{equation}
Moreover, the equilibrium condition for the Yukawa interactions of the top quark implies $y_t - y_{Q_3}=y_h/2$.
Above the critical temperature,
fast sphaleron processes convert the generated lepton asymmetry to
baryon asymmetry \cite{krs}. Below $T_c$
the Higgs starts to acquire its vev and this $SU(2)_L$-breaking
suppresses the sphaleron rate, $\Gamma_{\Delta
(B+L)}$ \cite{clmcw,amc,ls}.
For temperatures $M_W(T)\ll T \ll M_W(T)/\alpha_W$,
$\Gamma_{\Delta (B+L)} \sim M_W (M_W(T)/\alpha_W T)^3(M_W(T)/T)^3 \exp
[-E_{sp}/T]$ \cite{amc,resonant}
where $\alpha_W$ is the $SU(2)_L$ fine structure constant,
$M_W(T)=g v(T)/\sqrt{2}$ is the
$W$-boson mass and the sphaleron energy is $E_{sp}\sim
M_W(T)/\alpha_W$. Because of the exponential suppression of
$\Gamma_{\Delta (B+L)}$, the lepton asymmetry is no longer converted
into baryon asymmetry below the temperature $T_f$ for which
$\Gamma_{\Delta(B+L)}(T_{f})/H(T_f)\leq 1$.
In order to properly account for the evolution of the relevant
abundances in the temperature regime $T_f<T<T_c$ one must extend
the system of Boltzmann Equations to include the temperature
dependent sphaleron rate. The overall effect can be approximated by
replacing the usual conversion factor $n_B=\frac{28}{79}n_{(B-L_{SM})}$
by a temperature-dependent one given by \cite{ls,ht}
\begin{equation}
Y_B(T)=4\frac{77T^2+54v(T)^2}{869T^2+666v(T)^2}
\sum_\alpha Y_{\Delta_\alpha}(T),
\end{equation}
where $v(T)$ is the temperature-dependent Higgs vev:
\begin{equation}
v(T)=v\left(1-\frac{T^2}{T_c^2}\right)^{\frac{1}{2}}
\;\;\;
{\rm with}
\;\;\; T_c=v\left(\frac{1}{4}+\frac{{g'}^2}{16 \lambda}
+\frac{3{g}^2}{16 \lambda}+\frac{\lambda_t}{4 \lambda}\right)^{-\frac{1}{2}} \; .
\end{equation}
Here $\lambda$ is the quartic Higgs self-coupling,
$g$ and $g'$ are the $SU(2)_L$ and $U(1)_Y$ gauge couplings,
and $\lambda_t$ is the top Yukawa coupling.
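Two quick numerical checks (our own, with assumed standard SM couplings $g\simeq 0.65$, $g'\simeq 0.35$, $\lambda_t\simeq 1$, and an assumed normalization $\lambda = m_H^2/4v^2$ of the quartic coupling): the $v(T)\to 0$ limit of the conversion factor recovers the familiar $n_B=\frac{28}{79}\,n_{B-L_{SM}}$ relation, and for $m_H=200$~GeV one finds $T_c$ in the neighbourhood of the $150$~GeV used in the next section.

```python
from math import sqrt
from fractions import Fraction

# (1) v(T) -> 0 limit: 4*(77 T^2 + 54 v^2)/(869 T^2 + 666 v^2)
#     reduces to the standard n_B = (28/79) n_{B-L_SM} relation.
assert Fraction(4 * 77, 869) == Fraction(28, 79)

# (2) Critical temperature for m_H = 200 GeV. The SM couplings and the
#     quartic-coupling normalization below are our assumptions.
v, m_H = 174.0, 200.0
g, gp = 0.65, 0.35   # SU(2)_L and U(1)_Y gauge couplings (assumed)
lam_t = 173.0 / v    # top Yukawa (assumed m_t = 173 GeV)
lam = m_H**2 / (4 * v**2)

T_c = v / sqrt(0.25 + gp**2 / (16 * lam) + 3 * g**2 / (16 * lam)
               + lam_t / (4 * lam))
print(f"T_c = {T_c:.0f} GeV")  # in the ballpark of the quoted ~150 GeV
```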
Since the sphaleron processes are effectively switched off at
$T<T_f$, the baryon asymmetry is unaffected below this temperature.
In principle an additional effect is that the set of equilibrium
conditions leading to Eq.~\eqref{eq:chcl} is also modified in the
temperature range between $T_c$ and $T_f$.
We have verified that this effect does not lead to any
relevant change in our conclusions. Furthermore for $T<T_c$
the effects of the non-vanishing $v(T)$ must be accounted
for in the reaction densities. As long as $M_{i}$ is large
enough compared to $T_c$ these effects can be safely neglected.
\section{Results}
\label{sec:results}
In order to quantify the required conditions for generating the
observed baryon asymmetry,
$Y_B = (8.75 \pm 0.23) \times 10^{-11}$ ~\cite{lastwmap},
we have solved the Boltzmann equations presented in the previous section.
As discussed at the end of Sec.~\ref{sec:iss}, the CP asymmetry in
processes involving $N_1$ or $N_2$ is larger the closer the masses
of $N_2$ and $N_1$ are to each other. Indeed the proximity of the masses
of the heavy neutrinos is the key parameter that determines whether or
not successful leptogenesis is possible.
We first show how successful leptogenesis is possible in
this scenario in the resonant mass regime. We then
explore the requirements for obtaining the observed baryon asymmetry without
reaching the resonant condition.
\subsection{Resonant case}
In the case that the CP asymmetry of $N_1$ decays receives a resonant
contribution from $N_2$ the proposed mechanism works in a wide range
of the parameter space.
As an illustration we show in Fig.~\ref{fig:res} an
explicit example
in which
\begin{equation}
\begin{split}
M_1& =800~{\rm GeV}\;, \quad
M_2 = M_1 + \frac{\Gamma_{N_2}}{2} \;, \\
(\lambda^\dag \lambda)_{11} &= 10^{-12}\; ,\quad (\lambda^\dag \lambda)_{22}
= 10^{-10}\;,\\
K_{e 1}&= 0.3, \quad K_{\mu 1}=0.3, \quad K_{\tau 1}=0.4, \\
K_{e 2}&= 0.1, \quad K_{\mu 2}=0.1, \quad K_{\tau 2}= 0.8 \; ,\\
p^{12}_{e \mu}&=p_{e \tau}^{12}=p_{\mu \tau}^{12}=1 .
\end{split}
\label{eq:resparam}
\end{equation}
With these values the corresponding CP asymmetries are:
$\epsilon_{e 1} = 6.5 \times 10^{-2},\epsilon_{\mu 1} = 3.5 \times
10^{-2}, \epsilon_{\tau 1} = -1 \times 10^{-1}$, $\epsilon_{e 2} = 1.3
\times 10^{-3},\epsilon_{\mu 2} = 7 \times 10^{-4}$, and
$\epsilon_{\tau 2} = - 2 \times 10^{-3}$.
In deriving the final baryon asymmetry we have taken
$m_H=200$ GeV for which $T_c\simeq 150$ GeV and $T_f\simeq 100$ GeV.
\FIGURE[ht]{
\centerline{\protect\hbox{
\epsfig{file=figres.eps,width=\textwidth}}}
\caption[]{The asymmetries $|Y_{\Delta_\alpha}|, |Y_{N_i - \bar N_i}|,
Y_{B-L_{SM}}$, and the densities $Y_{N_i + \bar N_i}, Y_{N_i + \bar
N_i}^{eq}$ as a function of $z$
for the values of
the parameters given in Eq.~\eqref{eq:resparam}.
\label{fig:res}}}
The figure explicitly shows that $Y_{B-L_{SM}}\rightarrow 0 $ as
$T\rightarrow 0$ which is mandatory due
to the conservation of $B-L$ in this scenario and the assumed
initial conditions.
Indeed, although the $Y_{\Delta_\alpha}$
saturate at a finite value at low temperature,
it can be verified that, with the sign assignments of the asymmetries
in this example, at any $z$ one has
$Y_{B-L_{SM}}=|Y_{\Delta_e}|
+|Y_{\Delta_\mu}|-|Y_{\Delta_\tau}|= Y_{L_N} =
|Y_{N_1-\overline{N_1}}|+|Y_{N_2-\overline{N_2}}|
\simeq |Y_{N_2-\overline{N_2}}|$.
Still, as illustrated in the figure,
the observed baryon asymmetry is generated
once the sphalerons switch off below $T_f=100$ GeV ($z>8$).
We notice that, despite the fact that the CP asymmetries satisfy
$\epsilon_{\alpha 1}\gg
\epsilon_{\alpha 2}$, the $N_i$ asymmetries verify
$|Y_{N_1-\overline{N_1}}|\ll|Y_{N_2-\overline{N_2}}|$. This is so
because, even though the lepton asymmetries $y_{\ell_\alpha}$ are mostly
produced in processes involving $N_1$, it is the inverse decay $\ell_\alpha
h\rightarrow N_i$ that determines how much of the lepton asymmetries
is transferred to the $N_i$ asymmetries.
Moreover, recall that there are no source
terms proportional to $\epsilon_{\alpha i}$ in the evolution equations
of $Y_{N_i-\overline{N_i}}$ Eq.~\eqref{eq:be3}. Then,
since the $N_2$ Yukawa
couplings are larger, the inverse decays of the $N_2$ are more
efficient and a larger $N_2$ asymmetry is produced. Consequently,
for the small $N_1$ Yukawa couplings considered, the $N_1$ asymmetry plays a
very minor role in the dynamics of the system.
We note also that contrary to leptogenesis scenarios which occur well above
the electroweak scale, once the heavy neutrinos have decayed, the universe is
left with equal amounts of lepton and baryon asymmetry.
Furthermore the flavour
asymmetries $Y_{\Delta \alpha}$ typically remain some orders of magnitude
greater than $Y_B$.
\subsection{Non-resonant case}
\FIGURE[ht]{
\centerline{\protect\hbox{
\epsfig{file=fignores.eps,width=\textwidth}}}
\caption[]{The asymmetries $|Y_{\Delta_\alpha}|, |Y_{N_i - \bar N_i}|,
Y_{B-L_{SM}}$, and the densities $Y_{N_i + \bar N_i}, Y_{N_i + \bar
N_i}^{eq}$ as a function of $z$ for the values of
the parameters given in Eq.~\eqref{eq:noresparam}.
Note that the densities $Y_{N_i + \bar N_i}$ and
$Y_{N_i + \bar
N_i}^{eq}$ have been rescaled by $10^{-9}$
in order to fit into the figure.
\label{fig:nores}}}
In this regime, in order to have enough CP asymmetry
at least one of the Yukawas of $N_2$ has to be very large.
The strongest bounds on the Yukawas of the heavy neutrinos for the range
of masses we are interested in come from
constraints on violation of weak universality, lepton flavour violating
processes and collider signatures \cite{unitlim} which allow for
$|(\lambda^\dag \lambda)_{22}| \lesssim 5 \times 10^{-3} (M_2/v)^2$.
However for flavour effects to be relevant, the Yukawas of the $N_2$ must
be smaller than the Yukawas of (at least) one of the charged
leptons, so we require that
$\lambda_{2\alpha}\lesssim \lambda_\tau \sim 10^{-2}$~\footnote{This
is a rough estimate of the validity of the ``fully flavoured regime'', which is
enough for our purposes; for a more detailed analysis about this point
see~\cite{zeno}.}.
This is so because, if the fastest leptonic reaction rates were those
associated with $N_2$, the flavour basis which diagonalizes the density
matrix would be the one formed by the lepton to which $N_2$ decays,
$\ell_2=\sum_\alpha \lambda_{\alpha 2} \ell_\alpha
\big/\big(\sum_\alpha| \lambda_{\alpha 2} |^2\big)^{1/2}$,
together with two states orthogonal to $\ell_2$, and hence no baryon
asymmetry would be generated, since in this basis the CP
asymmetry is zero.
Furthermore if the Yukawa couplings of
$N_2$ with all the three light leptons were comparable
and large (even if smaller than $\lambda_\tau$) there would be strong
LFVW in all flavours (see Fig.~\ref{fig:rates})
and the generation of the baryon asymmetry would be strongly suppressed.
Therefore, as was explained before, some of the Yukawa couplings have to
be very small. Note that for a fixed value of $(\lambda^\dag \lambda)_{22}$
the CP asymmetry in a given flavour decreases linearly
with decreasing Yukawa coupling of that flavour with $N_2$,
while the washout terms decrease quadratically with it. We conclude that the
most favorable situation is to have a strong hierarchy in the flavour
structure of the $N_2$ Yukawas.
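As a rough numerical illustration of how the two requirements on the $N_2$ Yukawas compare (a sketch of our own: the mass value is taken from the non-resonant example below, $v$ is the electroweak vev, and $\lambda_\tau$ is quoted at the order of magnitude used in the text), one can check that the flavour condition is the stronger of the two constraints:

```python
import math

# Illustrative inputs (our own choices, not a prescription from the paper)
M2 = 275.0      # GeV, heavy-neutrino mass of the non-resonant example
v = 246.0       # GeV, electroweak vev
lam_tau = 1e-2  # order of magnitude of the tau Yukawa coupling

# universality/collider bound: (lambda^dag lambda)_22 < 5e-3 * (M2/v)^2
lam_exp = math.sqrt(5e-3) * (M2 / v)

# the flavour condition lambda_{2 alpha} < lambda_tau is the stronger constraint
assert lam_exp > lam_tau
```

With these inputs the experimental bound allows $|\lambda_{2\alpha}|$ up to roughly $0.08$, about an order of magnitude above $\lambda_\tau\sim 10^{-2}$, so it is indeed the flavour requirement that dominates.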
Concerning the optimum range of Yukawa couplings for $N_1$ there are
two relevant effects. On the one hand, if they are too large --
$\tilde m_1 \gtrsim m_* \simeq 10^{-3}$~eV -- the maximum of the
$B-L_{SM}$ asymmetry occurs at $z < 1$, so
that $M_1<T_f$ is required in order for the sphalerons to decouple when the baryon
asymmetry is largest. In this case the analysis is more complex because one
cannot neglect the effects of the breaking of $SU(2)$ in the reaction
densities. The expected effect is the reduction of the $N_1$ decay rate due
to the phase space factors. In order to determine the effect on the final
$B$ asymmetry a dedicated study is required which is beyond the scope of
this paper. On the other hand, for values
$\tilde m_1\ll m_*$, the peak in the $B-L_{SM}$
asymmetry shifts to large values of $z$ but it is in general lower
because of the smaller production of $N_1$ when starting from a zero
abundance as initial condition. One may wonder if this
conclusion might be modified by assuming a non-vanishing initial $N_1$
abundance, which would allow for very late $N_1$ decay. It is not,
because at large values of $z$ the LFVW are very suppressed and therefore
the flavour effects, which are essential in this scenario, do not survive.
In summary, generically larger baryon asymmetries are expected for
$\tilde m_1\sim m_*$.
With the above considerations in mind, we have explored the parameter space
for a fixed value of $M_2/M_1$ (chosen near to 1) and in
Fig.~\ref{fig:nores} we plot the
evolution of the different asymmetries and densities for a set of parameters
representative of the cases with highest production of baryon asymmetry:
\begin{equation}
\begin{split}
M_1 &= 250\; {\rm GeV},\quad M_2 = 275 \; {\rm GeV} \; , \\
(\lambda^\dag \lambda)_{11} &= 8.2 \times 10^{-15} \;
(\tilde m_1=10^{-3}\; {\rm eV})\; , \\
(\lambda^\dag \lambda)_{22} &= 10^{-4} \; ,\\
K_{e 1} &= 0.\; , \quad K_{\mu 1}=0.3\;, \quad K_{\tau 1}=0.7\;, \\
K_{e 2} &= 0.\;, \quad K_{\mu 2}=10^{-10}\;, \quad K_{\tau 2}\simeq 1 \; ,\\
p^{12}_{e \mu} &= p_{e \tau}^{12}=p_{\mu \tau}^{12}=1\; .
\end{split}
\label{eq:noresparam}
\end{equation}
Since $\sqrt{(\lambda^\dag \lambda)_{22}} > \lambda_\mu$
we have safely chosen the projectors $K_{e 1}, K_{e 2}=0$ to prevent
any possible flavour projection effects associated with the Yukawa interactions
of $N_2$, which would complicate the description of the problem without
substantially changing the results. In an effectively two-flavour case the CP
asymmetries are proportional to only one phase factor, in this case to
$p_{\mu \tau}^{12}$; therefore in the example we have adopted its
maximum possible value.
With these values the corresponding CP asymmetries are:
$\epsilon_{e 1} = \epsilon_{e 2} =0 ,
\epsilon_{\mu 1} = -\epsilon_{\tau 1} = 8.7 \times 10^{-11},
\epsilon_{\mu 2} = -\epsilon_{\tau 2} = 8.7 \times 10^{-21}$.
From the figure it can be seen that even with these large
$N_2$ Yukawa couplings,
their strong flavour hierarchy and the small $1/(a_2-1)$ suppression,
the produced $Y_B$ falls short of explaining the observations
by about 5 orders of magnitude.
However, we notice that in this regime the asymmetry comes mainly from
processes involving $N_1$. Therefore the $B-L_{SM}$ asymmetry is approximately
proportional
to $1/(a_2-1)$ which is around 5 in the example. If we take $M_2$ closer
to $M_1$, that factor and the corresponding $Y_B$
grow accordingly. Thus we see that in this regime it is also
possible to generate the required baryon asymmetry as long
as $N_1$ and $N_2$ are strongly degenerate, even if still not in
the resonant regime. For example, for the values
of parameters given in Eq.~\eqref{eq:noresparam} the observed
baryon asymmetry could be produced if $a_2-1 \sim 2.4\times 10^{-5}$,
which is still far from the resonance ($g_2 = 4 \times 10^{-6}$).
Something to note is that, despite the large hierarchy among the
projectors of $N_2$ onto $\ell_\mu$ and $\ell_\tau$, the flavour
asymmetries $Y_{\Delta \alpha}$ ($\alpha = \mu, \tau$) are quite
similar in size. This is because the evolutions of the different
asymmetries are strongly coupled due to the conservation of $B-L$ and
the null value of the total CP asymmetry in $N_1$ decays.
To obtain an estimate of the order of magnitude of the density asymmetries as well as to understand their dependence on the $N_2$
flavour projectors (which are more relevant than the $N_1$ projectors when
$(\lambda^\dag \lambda)_{22} \gg (\lambda^\dag \lambda)_{11}$), we have
developed the following semiquantitative approximation:
(i) From Fig.~\ref{fig:rates} we see that for $K_{\mu 2} \gtrsim 10^{-9}$
the rates of the processes $\lrproname{N_2}{\ell_{\mu, \tau} h}$ are larger
than the expansion rate of the Universe in the most relevant range of
temperatures. Therefore at each instant the thermal bath has time to relax,
i.e., the production of asymmetry equals its erasure.
This implies that the derivatives of the density asymmetries are negligible
with respect to the source and LFVW terms, hence we
set them to zero in the Boltzmann equations.
(ii) We keep the CP asymmetries produced by $N_1$
(since $\epsilon_{\alpha 2} \ll \epsilon_{\alpha 1}$),
as well as the $N_2$ decay and inverse decay reactions, and
neglect all the remaining subdominant contributions, including
the partial conversion of lepton asymmetry into baryon asymmetry during the
leptogenesis era.
Within this approximation, Eqs.~\eqref{eq:be3} and \eqref{eq:be4} simplify to
\begin{eqnarray}
S_\mu(z) + K_{\mu 2} \g{N_2}{\ell h} \left[ y_{N_2} - y_{\ell_\mu} \right] &=& 0\; , \\
S_\tau(z) + K_{\tau 2} \g{N_2}{\ell h} \left[ y_{N_2} - y_{\ell_\tau} \right] &=& 0 \; ,
\end{eqnarray}
where $S_\mu(z) = -S_\tau(z) = \epsilon_{\mu 1} \, \bigl(\tfrac{Y_{N_1 + \bar N_1}}{Y_{N_1 + \bar N_1}^{eq}}-1 \bigr) \, 2 \, \g{N_1}{\ell h}$ is the source term normalized to $(-sHz)^{-1}$ and $ \g{N_i}{\ell h} \equiv \sum_\beta \g{N_i}{\ell_\beta h} $.
A third equation is provided by total lepton number conservation,
i.e. $Y_{N_2 - \bar N_2} + Y_{L_\mu} + Y_{L_\tau} = 0$. In the most relevant temperature
range for the model, $T \sim M_2$, we can approximate the equilibrium density of $N_2$ by
that of one relativistic degree of freedom, therefore
\begin{equation}
y_{N_2} + 2y_{\ell_\mu} + 2y_{\ell_\tau} = 0 \; .
\end{equation}
When $K_{\mu 2} \ll 1 \simeq K_{\tau 2}$ the solution to this system of
equations is
\begin{eqnarray}
y_{\ell_\mu} &=& \frac{3}{5} \frac{S_{\mu}(z)}{K_{\mu 2} \g{N_2}{\ell h}}\; ,
\nonumber \\
y_{N_2} = y_{\ell_\tau} &=&
- \frac{2}{5} \frac{S_{\mu}(z)}{K_{\mu 2} \g{N_2}{\ell h}}\; .
\end{eqnarray}
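This solution can be cross-checked by solving the three linear equations exactly and expanding in the hierarchy $K_{\mu 2}\ll K_{\tau 2}\simeq 1$. A minimal sketch with exact rational arithmetic (the numerical values of the source $S_\mu$ and the rate $\gamma$ are illustrative placeholders of our own):

```python
from fractions import Fraction as F

def solve_quasi_static(S_mu, K_mu2, K_tau2, gamma):
    """Exact solution, by substitution, of the quasi-static system
         S_mu  + K_mu2 *gamma*(y_N2 - y_mu)  = 0
        -S_mu  + K_tau2*gamma*(y_N2 - y_tau) = 0
         y_N2 + 2*y_mu + 2*y_tau = 0
    """
    a, b = K_mu2 * gamma, K_tau2 * gamma
    # first two equations give y_mu = y_N2 + S/a and y_tau = y_N2 - S/b;
    # inserting into the conservation law yields 5*y_N2 = -2*S*(1/a - 1/b)
    y_N2 = -F(2, 5) * S_mu * (1 / a - 1 / b)
    y_mu = y_N2 + S_mu / a
    y_tau = y_N2 - S_mu / b
    return y_N2, y_mu, y_tau

S, g = F(1), F(1)
K_mu2, K_tau2 = F(1, 10**6), F(1)
y_N2, y_mu, y_tau = solve_quasi_static(S, K_mu2, K_tau2, g)

# the exact solution satisfies all three equations...
assert S + K_mu2 * g * (y_N2 - y_mu) == 0
assert -S + K_tau2 * g * (y_N2 - y_tau) == 0
assert y_N2 + 2 * y_mu + 2 * y_tau == 0
# ...and reproduces the hierarchical limit y_mu ~ (3/5) S/(K_mu2 gamma),
# y_N2 ~ y_tau ~ -(2/5) S/(K_mu2 gamma), up to O(K_mu2) corrections
assert abs(y_mu * K_mu2 * g / S - F(3, 5)) < F(1, 10**5)
assert abs(y_N2 * K_mu2 * g / S + F(2, 5)) < F(1, 10**5)
```

The check confirms that $y_{N_2}$ and $y_{\ell_\tau}$ coincide only at leading order in $1/K_{\mu 2}$; their difference is $\mathcal{O}(S_\mu/\gamma)$ and drops out in the $K_{\mu 2}\ll 1$ limit quoted in the text.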
From this analysis,
it is clear that despite the large hierarchy in the projectors of $N_2$, all the density asymmetries have the same order of magnitude. Moreover, since the source term is proportional to $\sqrt{K_{\mu 2}}$, the density asymmetries are inversely proportional to $\sqrt{K_{\mu 2}}$. We have verified numerically that this dependence of the asymmetries on the projector actually holds in the range $10^{-9} \lesssim K_{\mu 2} \lesssim 10^{-1}$, where the approximations we made are expected to be valid.
For $K_{\mu 2} \lesssim 10^{-10}$ the rates of the processes $\lrproname{N_2}{\ell_{\mu} h}$ are lower than the expansion rate of the Universe, hence point (i) is no longer true. In this range of $K_{\mu 2}$, the density asymmetries decrease as $\sqrt{K_{\mu 2}}$ because the main dependence on $K_{\mu 2}$ comes from the relation $\epsilon_{\mu,\tau \, 1} \propto \sqrt{K_{\mu 2}}$. Thus,
fixing all the parameters but $K_{\mu 2}$ to the values given in Eq.~\eqref{eq:noresparam}, the baryon asymmetry is maximized for $K_{\mu 2}$ between $10^{-9}$ and $10^{-10}$.
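The piecewise behaviour just described -- asymmetries growing as $1/\sqrt{K_{\mu 2}}$ above the equilibration threshold and as $\sqrt{K_{\mu 2}}$ below it -- can be encoded in a toy scaling function (our own illustration; the threshold value and the overall normalization are assumptions, chosen only to exhibit the maximum):

```python
import math

K_EQ = 1e-9   # assumed threshold below which N_2 <-> ell_mu h drops out of equilibrium
C = 1.0       # arbitrary overall normalization

def asym_scaling(K_mu2):
    """Toy model of the density-asymmetry magnitude versus K_mu2:
    ~ 1/sqrt(K) in the quasi-static regime, ~ sqrt(K) below equilibrium,
    matched continuously at K_EQ."""
    if K_mu2 >= K_EQ:
        return C / math.sqrt(K_mu2)
    return (C / math.sqrt(K_EQ)) * math.sqrt(K_mu2 / K_EQ)

# scanning over decades, the asymmetry peaks at the threshold,
# consistent with the quoted optimum between 1e-10 and 1e-9
best_e = max(range(-12, -1), key=lambda e: asym_scaling(10.0 ** e))
assert best_e == -9
assert asym_scaling(1e-10) < asym_scaling(1e-9)
```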
\section{Summary}
In this article we have studied the possibility of generating the observed
baryon asymmetry via leptogenesis in the decay (and scatterings)
of heavy Dirac Standard Model singlets with ${\cal O}$ (TeV) masses
in a framework with $B-L$ conservation above the electroweak scale.
In this scenario a total lepton number, which is perturbatively
conserved, can be defined. This lepton number is shared between the
Standard Model leptons and the heavy Dirac singlets as described in
Sec.~\ref{sec:iss}. In this scenario, although the total CP
asymmetry is null, a CP asymmetry in the different
SM lepton flavours can be generated (see Sec.~\ref{sec:CP}).
The additional physical condition for generating a non-vanishing $B$ in
this framework is that the sphalerons depart from equilibrium during the
decay epoch. For symmetric initial conditions (no net baryon nor total lepton
number present), $B-L_{SM} = 0$ after the heavy neutrinos have
disappeared. Consequently, if the sphalerons were still active after the
decay epoch of the heavy Dirac singlets, the final baryon asymmetry, being
proportional to $B-L_{SM}$, would be zero. However if they depart from
equilibrium
during the decay epoch the baryon asymmetry freezes at a value which
in general is not null.
In summary, in this scenario the baryon asymmetry is generated by the
interplay of lepton flavour effects and sphaleron decoupling during the
decay epoch.
In order to quantify whether enough baryon asymmetry can be generated we
have constructed and solved the network of relevant Boltzmann Equations
associated with the abundances of the two lightest Dirac heavy singlets,
their asymmetries and the three SM flavour asymmetries. The results are
given in Sec.~\ref{sec:results}.
We find that the ratio of the masses of
the two heavy Dirac neutrinos is the key parameter that determines whether
or not successful leptogenesis is possible. The relevant Yukawa couplings
are constrained from above by the requirement of
having flavour effects, and from
below by the requirement of large enough CP asymmetry. Within these
boundaries we conclude that successful leptogenesis can occur if the
masses of two heavy Dirac singlets are very degenerate $M_2/M_1-1 \lesssim
{\cal O}(10^{-5})$.
Recall that in our framework this degeneracy is not a consequence of
the small breaking of total lepton number, as it is in other low-scale
leptogenesis scenarios \cite{resonant,ab}, although it may be due to an
additional symmetry of the heavy sector.
In particular, if the CP asymmetry is resonantly
enhanced -- $(M_2^2-M_1^2) \sim M_1 \Gamma_2$ -- the mechanism works
for a wide range of values of the Yukawa couplings and flavour
projections.
It is worth exploring whether the heavy neutrinos will be observable
at the LHC and/or lead to measurable lepton flavour violating signals,
within the parameter regions allowed by leptogenesis.
\section*{Acknowledgments}
We thank Pilar Hern\'andez for very useful discussions about the
equilibrium conditions for baryogenesis.
This work is supported by Spanish MICINN under grants 2007-66665-C02-01,
FPA-2007-60323,
and Consolider-Ingenio 2010 CUP (CSD2008-00037), CPAN
(CSD2007-00042) and PAU (CSD2007-00060),
by CUR Generalitat de Catalunya grant 2009SGR502,
by Generalitat Valenciana grant PROMETEO/2009/116
and by USA-NSF grant PHY-0653342.
\newpage
\section{Introduction}
The motion of particles in curved backgrounds is essential for the understanding of various gravitational phenomena. Among these we may distinguish the study of black hole shadows \cite{Synge,Maeda,Mann,Haitang} or aspects of the gravitational memory effect \cite{Gib1,Gib2,Shore,Chakra}, both of which are related to the investigation of geodesic motion. Of special importance are cases where the geodesic system is characterized by enough existing integrals of motion, in involution, so as to be deemed as integrable in the Liouville sense. It is well known that, for the geodesic systems of equations, at least in the context of Riemannian geometry, the integrals of motion are closely related to the symmetries of the background manifold. For works regarding the symmetries of the geodesic equations as well as their geometrical significance, see \cite{Katzin,Cav,Hoj,Rosquist,Andr1,Andr2,Gomis,PetHar,Andr3}.
As far as space-time vectors are concerned, the homothetic algebra of the metric is particularly important in the generation of symmetries for the affinely parametrized geodesics \cite{Andr1}, giving rise to point symmetry transformations. On the other hand, Killing tensors of various ranks are connected with the generation of what we refer to as higher order, or hidden, or dynamical symmetries. An example of such a symmetry, which happens to be crucial for the integrability of the relative system, is the one associated to the Carter constant for the motion in a Kerr black-hole background \cite{Carter}. The seminal work by Carter motivated further studies on the subject \cite{Penrose,Woodhouse}. For more results on higher order or hidden symmetries in various systems, and the geometric conditions of the involved objects in their construction, see also \cite{Kalotas,Carhid,Ply,Tsamp1,Tsamp2}.
Apart from the Killing vectors or tensors however, there is an intriguing involvement of the conformal algebra of the metric. For example, in the case of null geodesics, all conformal Killing vectors (CKVs) generate conserved quantities which are linear in the momenta. However, when time-like geodesics are considered, the relative conserved quantities are generated just by the sub-set of the Killing vectors (KVs). We may say that the introduction of mass for the test particle leads to a symmetry-breaking effect that reduces the dimensionality of the symmetry group, since obviously KVs$\subset$CKVs. Interestingly enough, it has been shown \cite{dimgeo} that the proper conformal Killing vectors, i.e. the elements of the set CKVs$\setminus$KVs, are still involved in the construction of conserved charges, even in the massive case. However, what happens now is that they enter in non-local conserved quantities. Moreover, for their derivation, it is important to maintain the initial parametrization invariant form of the problem, e.g. not to start from the affinely parametrized equations, which explains why they are usually overlooked.
Among the various studies regarding geodesic equations, a great number is devoted to the motion in a pp-wave background \cite{pp1,pp2,pp3,pp4,pp5,pp6,pp7,pp8,ppwaves,BFgeo}. The symmetries of the pp-wave metrics have been extensively studied in several works \cite{Goenner,Maartens,Tupper}. The plane-fronted gravitational waves with parallel rays (or more simply pp-waves \cite{Kundt}) are non-flat space-times defined by the existence of a covariantly constant and null bi-vector. In many cases, another definition is encountered in the literature, that of possessing a covariantly constant, null vector \cite{SthephMac}. The latter however is not equivalent to the first; the two become indistinguishable if we restrict ourselves to vacuum solutions of General Relativity. Here, we follow mainly the formalism of \cite{Maartens,Tupper} and make use of the first definition, which is slightly more restrictive: it implies not only the existence of a covariantly constant, null vector, but also that the space-time is either of type $N$ or $O$ in the Petrov classification \cite{Tupper,Steele}.
The pp-waves have several interesting properties. They belong to a larger class of geometries, whose curvature scalars are all zero, the so-called Vanishing Scalar Invariants Space-times (VSI) \cite{Coley}. Their metrics contain the very interesting case of plane gravitational waves and - through Penrose's limit \cite{Penlim} - their applications extend even to string theory \cite{Ortin}. In \cite{ppwaves}, it was shown that, when the background metric is that of a pp-wave spacetime, the non-local conserved charges of the general geodesic problem reduce to local expressions. The resulting integrals of motion appear as if they are generated from mass dependent distortions of the proper conformal Killing vectors of the metric, ``reinstating'', in a sense, the broken symmetries due to the introduction of the mass. Here, we further investigate the nature and the geometric implications of such vectors and show that they still emerge even if one considers a Finslerian generalization of the pp-wave geometry, which introduces an additional, Lorentz violating, parameter.
The outline of this work is the following: First, we start with an overview regarding the existence of non-local integrals of motion for massive geodesics in a generic space-time. We prove that these conserved charges are actually generated by non-local Noether symmetries and we derive their generators. We revisit the result, first appeared in \cite{ppwaves}, about the mass-dependent distortions of the proper conformal Killing vectors, which generate integrals of motion in the case of pp-waves. We concentrate on their geometric interpretation and prove that such vectors are the reduced form of higher order Noether symmetries. What is more, we demonstrate that there exist geometries, besides pp-waves, that can admit such types of ``distorted'' symmetries; as a brief example we consider the de Sitter solution of General Relativity. In the subsequent sections we extend the previous results in the case where a Lorentz violating parameter is also introduced, causing an extra symmetry breaking in conjunction to the mass. This is realized by taking the generalized Bogoslovsky-Finsler line-element. We further investigate the necessary geometric conditions for such distorted vectors to exist and we derive the explicit expressions for the Finslerian pp-waves.
\section{Mass distorted symmetry vectors}
In this section, for the convenience of the reader, we revisit some known facts from the theory of geodesic systems and also make a brief review of the results obtained in \cite{dimgeo} and \cite{ppwaves}, parts of which are going to be of importance in our analysis. We briefly describe the notion of non-local conservation laws related to conformal Killing vectors, as introduced in \cite{dimgeo}, for a general geodesic system. We additionally prove that these conserved quantities are owed to non-local Noether symmetries of the action and present the relative expressions. Subsequently, we proceed to revisit the specialization of this result in the case of pp-wave space-times, where the conformal vectors acquire mass dependent distortions in order to generate conserved charges for time-like geodesics \cite{ppwaves}. For a generic space-time, we investigate the geometric implications of such vectors and show that they are related to higher order (hidden) Noether symmetries. Finally, in order to demonstrate that there can be other geometries - beside pp-waves - admitting such type of symmetries, we present an example utilizing the de Sitter metric.
\subsection{Generic geodesic systems and non-local integrals of motion}
For the motion of a relativistic particle of mass $m$ in a background spacetime whose metric is given by $g_{\mu\nu}$, we consider the action
\begin{equation} \label{primact}
S[\lambda] = \int\!\! L d\lambda ,
\end{equation}
where
\begin{equation} \label{primLag}
L = \frac{1}{2 n} g_{\mu\nu}\dot{\mathrm{x}}^\mu\dot{\mathrm{x}}^\nu- \frac{m^2}{2} n.
\end{equation}
The latter is a quadratic, parameterization invariant Lagrangian. The $\lambda$ denotes the parameter along the trajectory, the $\mathrm{x}^\mu=\mathrm{x}^\mu(\lambda)$ are the coordinates and $n=n(\lambda)$ is an auxiliary degree of freedom referred to as the einbein \cite{einbein}. The dot in \eqref{primLag} is used to symbolize the derivatives with respect to $\lambda$, i.e. $\dot{\mathrm{x}}^\mu= \frac{d \mathrm{x}^\mu}{d\lambda}$.
Under an arbitrary change of the parameter $\lambda\mapsto \tilde{\lambda} = f(\lambda)$, and the transformation laws
\begin{equation} \label{ptrlaw}
n(\lambda) d\lambda \mapsto n(\tilde{\lambda}) d\tilde{\lambda} \quad \text{and} \quad x(\lambda) \mapsto \tilde{x}(\tilde{\lambda}),
\end{equation}
the action \eqref{primact} remains form invariant, i.e. $S[\lambda]=S[\tilde{\lambda}]$. Hence, arbitrary transformations of the parameter $\lambda$ constitute symmetries of $S$. It is for this reason that Lagrangian \eqref{primLag} is referred to as parametrization invariant. We observe from \eqref{ptrlaw} that, although $n$ is considered a degree of freedom on equal footing with the $\mathrm{x}^\mu$, there is the difference that the latter transform as scalars, while $n$ transforms as a density of weight $+1$.
Maybe the most well-known Lagrangian, used to describe the motion of a relativistic massive particle, is the square root Lagrangian
\begin{equation} \label{sqLAg}
L_{\text{sq}}=- m \sqrt{-g_{\mu\nu}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu},
\end{equation}
where we use the minus inside the square root because we adopt the convention $g_{\mu\nu}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu<0$ for time-like geodesics (throughout this work we also make use of the units $c=1$). Lagrangian \eqref{sqLAg} is also parametrization invariant, but this time not because of an auxiliary field, like \eqref{primLag}, but because it is a homogeneous function of degree one in the velocities, i.e. $L_{\text{sq}}(\mathrm{x},\sigma \dot{\mathrm{x}})=\sigma L_{\text{sq}}(\mathrm{x}, \dot{\mathrm{x}})$, where $\sigma$ is a positive constant.
At this point it is useful to recall Euler's theorem on homogeneous functions, which states that: if $h(y)$ is a homogeneous function of degree $k$, i.e. $h(\sigma y) = \sigma^k h(y)$ for $\sigma>0$, then the following equality holds
\begin{equation}\label{Eultheo}
y^\mu \frac{\partial h}{\partial y^\mu} = k h .
\end{equation}
By simply setting $h=L_\text{sq}$, $y=\dot{\mathrm{x}}$ and $k=1$ in \eqref{Eultheo}, the theorem, in the case of Lagrangian \eqref{sqLAg}, implies that the latter has an identically zero Hamiltonian, $\dot{\mathrm{x}}^\mu\frac{\partial L_{\text{sq}}}{\partial \dot{\mathrm{x}}^\mu}-L_{\text{sq}}\equiv 0$. This, together with the fact that the Euler-Lagrange equations of \eqref{sqLAg} are not well-defined for null geodesics (the expression $g_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu=\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}_\mu=0$ appears in denominators), makes the $L$ of \eqref{primLag} better suited for our purposes.
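The vanishing of the Hamiltonian is easy to verify numerically. The sketch below (our own illustration, for the two-dimensional Minkowski metric $\mathrm{diag}(-1,1)$ and a time-like velocity) computes the momenta of $L_{\text{sq}}$ by central finite differences and checks both the degree-one homogeneity and $\dot{\mathrm{x}}^\mu p_\mu - L_{\text{sq}}=0$:

```python
import math

def L_sq(m, tdot, xdot):
    # square-root Lagrangian for the 2D Minkowski metric diag(-1, 1),
    # evaluated on a time-like velocity (tdot^2 > xdot^2)
    return -m * math.sqrt(tdot**2 - xdot**2)

def hamiltonian(m, tdot, xdot, eps=1e-6):
    # momenta p_mu = dL/d(xdot^mu), computed by central finite differences
    p_t = (L_sq(m, tdot + eps, xdot) - L_sq(m, tdot - eps, xdot)) / (2 * eps)
    p_x = (L_sq(m, tdot, xdot + eps) - L_sq(m, tdot, xdot - eps)) / (2 * eps)
    return tdot * p_t + xdot * p_x - L_sq(m, tdot, xdot)

# homogeneity of degree one: L(sigma * xdot) = sigma * L(xdot)
assert abs(L_sq(1.0, 4.0, 1.0) - 2 * L_sq(1.0, 2.0, 0.5)) < 1e-12
# Euler's theorem then forces the Hamiltonian to vanish identically
assert abs(hamiltonian(1.0, 2.0, 0.5)) < 1e-8
```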
The einbein Lagrangian of \eqref{primLag} is dynamically equivalent to $L_{\text{sq}}$. In order to see this we need to write down the Euler-Lagrange equations of \eqref{primLag}, which are equivalent to
\begin{subequations}
\begin{align} \label{eulgenLx}
& \ddot{\mathrm{x}}^\mu + \Gamma^\mu_{\kappa\lambda} \dot{\mathrm{x}}^\kappa \dot{\mathrm{x}}^\lambda = \dot{\mathrm{x}}^\mu \frac{d}{d\lambda}\left( \ln n\right) \\
& \frac{1}{n^2} g_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu+ m^2 =0 , \label{eulgenLn}
\end{align}
\end{subequations}
where the $\Gamma^\mu_{\kappa\lambda}$ are the Christoffel symbols of the metric $g_{\mu\nu}$. The first set consists of the second order equations obtained by variation with respect to $\mathrm{x}$, while the last equation, \eqref{eulgenLn}, is the constraint equation acquired by variation with respect to $n$. By solving algebraically this last relation for the einbein, $n$, and substituting it in equations \eqref{eulgenLx}, we obtain the Euler-Lagrange equations of $L_{\text{sq}}$. In the einbein formalism, the affinely parametrized geodesics are obtained by using the gauge fixing condition $n=$constant, which leads, from \eqref{eulgenLn}, to $\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}_\mu=$const. What is more, for the null geodesics, we need to just set $m=0$ in \eqref{eulgenLn}, which leads to no complications in \eqref{eulgenLx}.
Unlike $L_{\text{sq}}$, the Hamiltonian of \eqref{primLag} is not identically zero; it happens to become zero, but in a weak sense according to Dirac's theory of constrained systems \cite{Dirac,Sund}. The total Hamiltonian is obtained through the Dirac-Bergmann algorithm \cite{Dirac,AndBer} and it reads
\begin{equation} \label{Ham}
H_T = \frac{n}{2} \mathcal{H} + u_n p_n,
\end{equation}
which is a linear combination of constraints $p_n\approx 0$ and $\mathcal{H}\approx 0$. The symbol ``$\approx$'' denotes a weak equality, meaning that the respective quantities (here $p_n$ and $\mathcal{H}$) cannot be set to zero prior to carrying out Poisson bracket calculations. Only the end result - after calculating Poisson brackets - is meant to be projected on the constraint surface, defined by the equations $p_n=0$ and $\mathcal{H}=0$. The $p_n$ corresponds to the momentum for the degree of freedom $n$ and the relation $p_n\approx 0$ forms the primary constraint of the theory; the $u_n$ is an arbitrary multiplier and
\begin{equation}\label{Hamcon}
\mathcal{H}= g^{\mu\nu} p_\mu p_\nu + m^2 \approx 0
\end{equation}
is the secondary constraint - also called Hamiltonian or quadratic constraint. The $p_\mu=\frac{\partial L}{\partial \dot{\mathrm{x}}^\mu}$ are the usual momenta conjugate to the degrees of freedom $\mathrm{x}^\mu$.
As we mentioned, the action \eqref{primact} describes a parametrization invariant system, i.e. one whose action and equations of motion remain invariant under arbitrary changes of the parameter $\lambda$. The symmetry structure of this type of quadratic-in-the-velocities Lagrangian, including a potential term, has been studied in \cite{tchris} together with its connection to minisuperspace cosmological systems in Einstein's General Relativity. For recent studies on the algebra spanned by the symmetries of such a Lagrangian, associated to minisuperspace cosmology, see \cite{Livine1,Livine2}. In particular, as regards geodesic problems, it has been shown \cite{dimgeo} that the system described by \eqref{primLag} admits non-local conserved quantities of the form
\begin{equation} \label{genericnonlocal}
I(\lambda,\mathrm{x},p) = Y^\mu \frac{\partial L}{\partial \dot{\mathrm{x}}^\mu} + m^2 \int\!\! n(\lambda) \omega(\mathrm{x}(\lambda)) d\lambda = Y^\mu p_\mu + m^2 \int\!\! n(\lambda) \omega(\mathrm{x}(\lambda)) d\lambda ,
\end{equation}
where the $Y^\mu$ are the components of conformal Killing vectors of $g_{\mu\nu}$ with conformal factor $\omega(\mathrm{x})$, i.e.
\begin{equation}\label{confeq}
\mathcal{L}_Y g_{\mu\nu} = 2 \omega(\mathrm{x}) g_{\mu\nu} ,
\end{equation}
where we use $\mathcal{L}$ to denote the Lie derivative. The charge $I$ has an explicit dependence on the parameter $\lambda$, brought about by the integral we see on the right hand side of \eqref{genericnonlocal}. The total derivative of $I$ with respect to the parameter can be seen to be zero by virtue of the Hamiltonian constraint:
\begin{equation}
\frac{dI}{d\lambda} = \frac{\partial I}{\partial \lambda} + \{I,H_T\} = n \omega(\mathrm{x}) \mathcal{H} \approx 0 .
\end{equation}
The conserved charge given by $I$ in \eqref{genericnonlocal} is non-local due to involving an integral of phase space functions. This means that at least some prior knowledge of the trajectory is in principle needed in order to carry out the integration in the right hand side of \eqref{genericnonlocal} and acquire the explicit dependence on $\lambda$ that $I(\lambda,\mathrm{x},p)$ has. The parametrization invariance, however, can help overcome such a difficulty. To see this, we need to remember that $n(\lambda)$ can be used to fix appropriately the gauge, i.e. choose the parameter along the curve. For example, a choice like $n=\omega(\mathrm{x})^{-1}$ makes the $I$ of the corresponding conformal Killing vector become
\begin{equation} \label{Ifixed}
I = Y^\mu p_\mu + m^2 \lambda .
\end{equation}
Such is the case when we consider affinely parametrized geodesics ($n=1$) and a homothetic vector ($\omega=1$), which is known to result in an integral of motion like \eqref{Ifixed}, possessing a linear dependence on the parameter $\lambda$. What we see here, with the help of \eqref{genericnonlocal}, is that any proper conformal Killing vector can lead, under the appropriate choice of parameter along the curve, to an integral of motion of the form \eqref{Ifixed}. Thus, we can always ``localize'' at least one of the integrals of motion of the form \eqref{genericnonlocal} by choosing the parameter $\lambda$ appropriately (a time choice gauge fixing).
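The conservation of the gauge-fixed charge \eqref{Ifixed} can be checked explicitly in the simplest setting: flat two-dimensional Minkowski space with the homothety $Y=x^\mu\partial_\mu$ ($\omega=1$) in the affine gauge $n=1$. The following \texttt{sympy} sketch (coordinate and symbol names are illustrative, not part of the formalism) verifies that $I=Y^\mu p_\mu + m^2\lambda$ is constant along a timelike geodesic:

```python
import sympy as sp

# Sketch (illustrative names): 2D Minkowski space, homothety Y = x^mu d_mu
# (omega = 1), affine gauge n = 1.  We verify that the gauge-fixed charge
# I = Y^mu p_mu + m^2*lambda is constant along a timelike geodesic.
lam, m, beta, t0, x0 = sp.symbols('lambda m beta t0 x0', real=True)

g = sp.diag(-1, 1)                                  # flat metric in (t, x)
u = sp.Matrix([m*sp.cosh(beta), m*sp.sinh(beta)])   # velocity with g(u,u) = -m^2
X = sp.Matrix([t0, x0]) + u*lam                     # straight-line geodesic
p = g*u                                             # momenta p_mu = g_{mu nu} xdot^nu (n = 1)

I = (X.T*p)[0, 0] + m**2*lam                        # Y^mu p_mu + m^2 lambda, with Y^mu = x^mu
assert sp.simplify(sp.diff(I, lam)) == 0            # the charge is conserved
```

The cancellation is exactly the one described above: $\frac{dI}{d\lambda}=g(\dot{\mathrm{x}},\dot{\mathrm{x}})+m^2=-m^2+m^2=0$ on the constraint surface.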
We need to mention that the expression \eqref{genericnonlocal} also yields two other well-known results from the theory of symmetries of geodesic systems:
\begin{enumerate}[a)]
\item When $Y$ corresponds to a Killing field, i.e. $\omega(\mathrm{x})=0$, we obtain the typical conserved quantities of the form $I=Y^\mu p_\mu$.
\item If we consider null geodesics ($m=0$), then all conformal Killing fields generate conserved quantities of the form $I=Y^\mu p_\mu$.
\end{enumerate}
Obviously, the substitution of either $\omega(\mathrm{x})=0$ or $m=0$ in \eqref{genericnonlocal} leads to the desired linear in the momenta expressions and thus we recover the expected results in the two cases. It is interesting to note that the two previous properties signal an explicit symmetry breaking at the level of the Lagrangian \eqref{primLag}. When the parameter $m$ is zero, conformal Killing vectors (CKVs) form symmetries and generate conserved charges. On the other hand, when $m\neq 0$, only pure Killing vectors (KVs) retain this property. Of course we have KVs$\subset$CKVs; hence, the mass is responsible for breaking a symmetry group. The new information that \eqref{genericnonlocal} provides us with is that, even when $m\neq0$, the proper conformal Killing fields still contribute in generating integrals of motion, albeit of non-local nature. An important question is whether these new charges are actually owed to some Noether symmetries, which substitute the broken ones of the original CKVs. This is what we will prove later, after briefly presenting the concept of Noether symmetries and their charges.
\subsection{Noether Symmetries} \label{secNoether}
We start with a short review of how Noether symmetries are calculated. In this presentation, we use as our model the Lagrangian \eqref{primLag}, since it is the one of interest to us.
If we consider a general transformation in the space of the dependent and independent variables - $n(\lambda)$, $\mathrm{x}^\mu(\lambda)$ and $\lambda$ respectively - then its generator is written as
\begin{equation} \label{upsbN}
X =\chi \frac{\partial}{\partial \lambda}+ X_n \frac{\partial}{\partial n}+ X^\mu \frac{\partial}{\partial \mathrm{x}^\mu},
\end{equation}
where $\chi$, $X_n$ and $X^\mu$ denote the coefficients in the respective directions. If the corresponding transformation leaves the action \eqref{primact} of the system form invariant ($\delta S=0$), we say that it constitutes a variational or, as it is more broadly known, a Noether symmetry transformation. In infinitesimal form, the criterion which tells us whether this condition is satisfied reads \cite{Olver}
\begin{equation} \label{generalsymcon}
\mathrm{pr}^{(1)} X (L) + L \frac{d\chi}{ d\lambda} = \frac{d \Phi}{d \lambda},
\end{equation}
where $\Phi$ is some function related to the surface term up to which the action $S$ may change ($\delta S=0\Rightarrow \delta(Ld\lambda)=d\Phi$) \cite{Sund}. Symmetries that satisfy \eqref{generalsymcon} for $\Phi\neq$const. are sometimes referred to as quasi-symmetries, exactly because they cause the action to change by a surface term. The $\mathrm{pr}^{(1)} X$ is called the first prolongation of the vector $X$. It is the extension of the basic vector $X$ to the space of the first order derivatives $\dot{\mathrm{x}}^\alpha$ and it is given by the formula
\begin{equation} \label{prologform}
\mathrm{pr}^{(1)} X = X + \left(\frac{dX^\mu}{d\lambda} -\dot{\mathrm{x}}^\mu \frac{d\chi}{d\lambda}\right) \frac{\partial}{\partial \dot{\mathrm{x}}^\mu} .
\end{equation}
We consider only the first prolongation because the Lagrangian $L$ that we use depends at most on the velocities. For higher order Lagrangians, e.g. those containing accelerations, one would also need higher order prolongations.
When a vector satisfies the symmetry criterion \eqref{generalsymcon}, it gives rise to the conserved quantity of the form
\begin{equation}\label{genint}
I = X^\mu \frac{\partial L}{ \partial \dot{\mathrm{x}}^\mu} + \chi \left(L - \dot{\mathrm{x}}^\mu \frac{\partial L}{\partial \dot{\mathrm{x}}^\mu} \right) - \Phi = X^\mu p_\mu - \chi \mathcal{H} - \Phi,
\end{equation}
where in the last equality we substituted the equivalent phase space expressions for the momenta and the Hamiltonian constraint. In both \eqref{generalsymcon} and \eqref{genint} we have neglected terms that would formally appear and involve derivatives of the Lagrangian with respect to $\dot{n}$. Since $L$ has no $\dot{n}$ dependence, these terms are trivially zero.
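A minimal illustration of the criterion \eqref{generalsymcon} and of the charge \eqref{genint} is provided by the free particle $L=\dot{x}^2/2$ together with the Galilean boost $X=\lambda\,\partial_x$, which is a quasi-symmetry with surface term $\Phi=x$. In the \texttt{sympy} sketch below the velocity is represented by a plain symbol $v$ and total $\lambda$-derivatives of functions of $(\lambda,x)$ are implemented by hand; all names are illustrative:

```python
import sympy as sp

# Toy check of the criterion pr^(1)X(L) + L*(dchi/dlam) = dPhi/dlam for the
# free particle L = v^2/2 and the boost X = lam*d/dx (quasi-symmetry, Phi = x).
lam, x, v = sp.symbols('lambda x v')

L = v**2/2
chi, Xx, Phi = sp.Integer(0), lam, x     # chi, X^x and Phi depend on (lam, x) only

def D(F):
    """Total derivative d/dlam of F(lam, x) along a curve (v = dx/dlam)."""
    return sp.diff(F, lam) + v*sp.diff(F, x)

# first prolongation: pr X = chi*d_lam + Xx*d_x + (D(Xx) - v*D(chi))*d_v
prX_L = chi*sp.diff(L, lam) + Xx*sp.diff(L, x) + (D(Xx) - v*D(chi))*sp.diff(L, v)
assert sp.simplify(prX_L + L*D(chi) - D(Phi)) == 0   # criterion satisfied

# the associated charge I = Xx*dL/dv + chi*(L - v*dL/dv) - Phi = lam*v - x,
# which is conserved along the solutions xddot = 0
I = Xx*sp.diff(L, v) + chi*(L - v*sp.diff(L, v)) - Phi
```

Here $\Phi=x$ is genuinely needed: dropping it would leave the residue $\dot{x}$ on the left hand side of the criterion.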
Up to now, we have made no assumption about the dependencies of the involved functions. These classify the symmetry vector $X$ into different categories. For example, if $\chi$, $X_n$ and $X^\mu$ depend only on the independent and dependent variables, $\lambda$, $\mathrm{x}^\mu$ and $n$, then we say that $X$ is a generator of a \emph{point} symmetry. If, on the other hand, there is additional dependence on derivatives, like for example $\dot{\mathrm{x}}^\mu$, then we talk about \emph{higher order} (or \emph{hidden}) symmetries. If there is dependence on non-local expressions, then we refer to $X$ as a \emph{non-local} symmetry generator.
The simplest case is that of point symmetries, because for them there exists an algorithmic procedure for deriving the corresponding symmetry generator. The process is the following: when calculating \eqref{generalsymcon}, we obtain an expression which contains the functions $\chi$, $X_n$, $X^\mu$, $\Phi$ and their derivatives. Due to the presence of $L$ in \eqref{generalsymcon}, there appear terms involving products of velocities $\dot{\mathrm{x}}^\mu$. However, since we consider a point symmetry, none of the involved functions $\chi$, $X_n$, $X^\mu$ or $\Phi$ depends on velocities. As a result, each coefficient of the different velocity products inside \eqref{generalsymcon} needs to be set separately equal to zero. This creates an over-determined system of partial differential equations for the coefficients of the vector $X$ and for $\Phi$, which, when solved, yields the desired symmetry vector. For example, in the case of the geodesic Lagrangian \eqref{primLag}, and for $m\neq0$, the Killing vectors of the metric emerge as generators of point symmetries: equation \eqref{generalsymcon} leads to $\mathcal{L}_X g_{\mu\nu}=0$ for $X=X^\mu(\mathrm{x}) \frac{\partial}{\partial \mathrm{x}^\mu}$ and $\Phi=$const. According to \eqref{genint}, the corresponding conserved charge is linear in the momenta, $I=X^\mu p_\mu$.
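As a toy check of the outcome of this algorithm, consider the flat metric in polar coordinates, $ds^2=dr^2+r^2 d\theta^2$: the rotation $\partial_\theta$ is a Killing vector, while $r\,\partial_r$ is homothetic with $\omega=1$. A short \texttt{sympy} computation of the Lie derivative (names illustrative) confirms both statements:

```python
import sympy as sp

# Sketch: verify L_X g = 0 for the rotation X = d/dtheta and L_Y g = 2 g for
# the homothety Y = r d/dr, in the flat polar metric ds^2 = dr^2 + r^2 dtheta^2.
r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]
g = sp.diag(1, r**2)

def lie_g(X):
    """(L_X g)_{mu nu} = X^k d_k g_{mu nu} + g_{k nu} d_mu X^k + g_{mu k} d_nu X^k"""
    n = len(coords)
    out = sp.zeros(n, n)
    for mu in range(n):
        for nu in range(n):
            out[mu, nu] = sum(X[k]*sp.diff(g[mu, nu], coords[k])
                              + g[k, nu]*sp.diff(X[k], coords[mu])
                              + g[mu, k]*sp.diff(X[k], coords[nu])
                              for k in range(n))
    return out.applyfunc(sp.simplify)

X = [sp.Integer(0), sp.Integer(1)]       # rotation generator d/dtheta
Y = [r, sp.Integer(0)]                   # homothetic vector, omega = 1
assert lie_g(X) == sp.zeros(2, 2)        # Killing: charge X^mu p_mu conserved
assert lie_g(Y) == 2*g                   # homothety: conformal factor omega = 1
```

The Killing vector generates the familiar angular momentum charge $I=X^\mu p_\mu=p_\theta$, while the homothety falls in the class covered by \eqref{Ifixed}.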
The situation becomes considerably more complicated for higher order symmetries. Imagine for example that we allow dependencies on the velocities $\dot{\mathrm{x}}^\mu$ inside $\chi$, $X_n$, $X^\mu$ and $\Phi$. Then, we cannot proceed in the same manner as before, by breaking equation \eqref{generalsymcon} into smaller pieces according to the different velocity dependencies. Equation \eqref{generalsymcon} has to be solved in its totality as a single equation. This complexity is what sometimes makes higher order symmetries be referred to as hidden symmetries. In order to facilitate the procedure of finding such symmetries, however, certain restrictions are usually imposed on the velocity dependencies of the aforementioned functions, e.g. considering only polynomial dependencies up to a certain order. The most usual case is that of a linear dependence on the velocities inside the coefficients of $X$. Then, one obtains integrals of motion associated with Killing tensors of second rank leading, through \eqref{genint}, to quadratic in the momenta constants of motion. Such is the case of the famous Carter constant in the Kerr geometry, which is related to the existence of a non-trivial Killing tensor $K_{\mu\nu}$. Now, \eqref{generalsymcon} results in a symmetry generator of the form $X=K^\mu_{\phantom{\mu}\nu}(\mathrm{x}) \dot{\mathrm{x}}^\nu \frac{\partial}{\partial \mathrm{x}^\mu}$, under the condition $\nabla_{(\kappa}K_{\mu\nu)}=0$ and $\Phi=$const. Here, $\nabla$ is the covariant derivative and the parenthesis in the indices denotes the usual full symmetrization, e.g. $A_{(\mu\nu)}=\frac{1}{2} \left(A_{\mu\nu}+A_{\nu\mu} \right)$. In this case, \eqref{genint} yields a quadratic in the momenta conserved charge, $I=K^{\mu\nu} p_\mu p_\nu$.
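The defining condition $\nabla_{(\kappa}K_{\mu\nu)}=0$ can likewise be verified symbolically in a simple setting. Below, still in flat polar coordinates, we take $K=\xi\otimes\xi$ with $\xi$ the Killing covector of $\partial_\theta$, so that the quadratic charge $K^{\mu\nu}p_\mu p_\nu$ is just the squared angular momentum; the \texttt{sympy} sketch checks the Killing tensor equation:

```python
import sympy as sp
from itertools import permutations

# Sketch: check nabla_(a K_bc) = 0 for K = xi (x) xi, the square of the
# Killing covector xi = r^2 dtheta, in the flat polar metric.
r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]
g = sp.diag(1, r**2)
ginv = g.inv()
n = 2

# Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2
           for d in range(n)) for c in range(n)] for b in range(n)]
         for a in range(n)]

xi = sp.Matrix([0, r**2])                # xi_mu = g_{mu nu} (d/dtheta)^nu
K = xi*xi.T                              # K_{mu nu} = xi_mu xi_nu

def nablaK(a, b, c):
    """Covariant derivative nabla_a K_bc."""
    return (sp.diff(K[b, c], coords[a])
            - sum(Gamma[d][a][b]*K[d, c] for d in range(n))
            - sum(Gamma[d][a][c]*K[b, d] for d in range(n)))

for a in range(n):
    for b in range(n):
        for c in range(n):
            sym = sum(nablaK(*p) for p in permutations((a, b, c)))/6
            assert sp.simplify(sym) == 0     # Killing tensor equation
```

Individual components such as $\nabla_r K_{\theta\theta}$ do not vanish; only the full symmetrization does, which is exactly what the Killing tensor equation requires.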
Let us return to the integral of motion \eqref{genericnonlocal}, which is a non-local expression. It is logical to assume that there might be some non-local symmetry generator \eqref{upsbN} satisfying \eqref{generalsymcon} for the einbein Lagrangian \eqref{primLag}. Indeed, it is not very difficult to see that, if we write the vector
\begin{equation} \label{symgennl}
X = \left(Y^\mu - \frac{\dot{\mathrm{x}}^\mu}{n} \int\!\! n \omega(\mathrm{x}) d\lambda \right)\frac{\partial}{\partial \mathrm{x}^\mu} ,
\end{equation}
where $Y$ is a conformal Killing vector satisfying \eqref{confeq}, then this $X$ satisfies \eqref{generalsymcon} for $\Phi=$const. According to the prolongation formula \eqref{prologform}, we obtain for the vector \eqref{symgennl}
\begin{equation}
\begin{split}
\mathrm{pr}^{(1)}X & = \left(Y^\mu - \frac{\dot{\mathrm{x}}^\mu}{n} \int\!\! n \omega d\lambda \right)\frac{\partial}{\partial \mathrm{x}^\mu} + \left( \frac{d Y^\mu}{d \lambda} + \left(\frac{\dot{n} \dot{\mathrm{x}}^\mu}{n^2} -\frac{\ddot{\mathrm{x}}^\mu}{n} \right)\int\!\! n \omega d\lambda - \omega \dot{\mathrm{x}}^\mu \right)\frac{\partial}{\partial \dot{\mathrm{x}}^\mu} \\
& = \left(Y^\mu - \frac{\dot{\mathrm{x}}^\mu}{n} \int\!\! n \omega d\lambda \right)\frac{\partial}{\partial \mathrm{x}^\mu} + \left( \frac{\partial Y^\mu}{\partial \mathrm{x}^\kappa} \dot{\mathrm{x}}^\kappa + \frac{1}{n}\Gamma^{\mu}_{\kappa\lambda} \dot{\mathrm{x}}^\kappa \dot{\mathrm{x}}^\lambda \int\!\! n \omega d\lambda - \omega \dot{\mathrm{x}}^\mu \right) \frac{\partial}{\partial \dot{\mathrm{x}}^\mu}.
\end{split}
\end{equation}
In the above expression we used the chain rule in order to write $\frac{d Y^\mu}{d \lambda}= \frac{\partial Y^\mu}{\partial \mathrm{x}^\kappa} \dot{\mathrm{x}}^\kappa$ and the equations of motion \eqref{eulgenLx} to eliminate the accelerations $\ddot{\mathrm{x}}^\mu$. The action of the above prolonged vector on the Lagrangian \eqref{primLag} yields
\begin{equation} \label{XactonL}
\begin{split}
\mathrm{pr}^{(1)}X (L) = \frac{1}{2n} \left(\mathcal{L}_Y g_{\mu\nu}-2\omega(\mathrm{x}) g_{\mu\nu}\right) \dot{\mathrm{x}}^\mu\dot{\mathrm{x}}^\nu - \frac{1}{2 n^2} \left(\int\!\! n(\lambda) \omega(\mathrm{x}(\lambda)) d\lambda \right) \nabla_\kappa g_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu \dot{\mathrm{x}}^\kappa = 0 .
\end{split}
\end{equation}
The first term after the equality in \eqref{XactonL} is zero because, by our assumption, $Y$ is a conformal Killing vector satisfying \eqref{confeq}, while the second also vanishes due to the covariant derivative of the metric being zero, $\nabla_\kappa g_{\mu\nu}=0$. Hence, criterion \eqref{generalsymcon} is satisfied for \eqref{symgennl} with $\Phi=$const. We have thus proved that there exists a non-local symmetry generator \eqref{symgennl}, which gives rise to the integral of motion \eqref{genericnonlocal}. We may now proceed to see how all this applies in the case of a pp-wave geometry and why the latter is, in a sense, special.
\subsection{The exceptional pp-wave case} \label{secRpp}
A generic pp-wave space-time, in Brinkmann coordinates, is described by a line-element of the form
\begin{equation} \label{lineel}
ds^2 = g_{\mu\nu}d\mathrm{x}^\mu d\mathrm{x}^\nu= H(u,x,y) du^2 + 2 du dv + dx^2+ dy^2,
\end{equation}
where $H=H(u,x,y)$ is the profile function and $\mathrm{x}^\mu=(u,v,x,y)$ are the coordinates. The expressions for a generic conformal Killing vector \eqref{confeq} and the corresponding conformal factor are well known for pp-wave space-times and in these coordinates are given by \cite{Maartens}:
\begin{subequations}
\label{zetackv}
\begin{align}
& Y^u = \frac{\mu}{2} \delta_{ij} x^i x^j + a_i(u) x^i + a(u), \\ \label{zeta2}
& Y^v = - \mu v^2 + \left( x^i a_i'(u)+ 2 \bar{b}(u) - a'(u)\right) v + M(u,x,y) \quad , \quad i,j=1,2\\
& Y^{i} = - \left( \mu x^i + a_i\right) v + \gamma_{ijkl} a_j'(u)x^k x^l + \bar{b}(u) x^i - \epsilon_{ij}c(u)x^j + c_i(u),
\end{align}
\end{subequations}
and
\begin{equation} \label{omega}
\omega = \omega(u,v,x^i)= \bar{b}(u) + x^i a_i'(u) - \mu v
\end{equation}
respectively. The $a$, $\bar{b}$, $c$, $a_i$, and $c_i$, where $i=1,2$, are all functions of the variable $u$, while $\mu$ is a constant parameter. The function $M(u,x,y)$ needs to satisfy certain integrability conditions, given in the appendix \ref{app0}, while the rest of the functions are connected to the profile $H(u,x,y)$ of the pp-wave through
\begin{equation}
\label{rulH}
\left[ \mu x^i +a_i(u)\right] \partial_i H = 2 \mu H + 2 a_i''(u) x^i -2 a''(u)+4 \bar{b}'(u) .
\end{equation}
In our relations we use the indices $i,j,k,l$ to denote the coordinates on the two-dimensional flat plane $x^i=(x,y)$. The $\delta_{ij}$ is used as a metric on this surface and we will not bother distinguishing between upper and lower indices on that plane. For the other two coordinates of $\mathrm{x}^\mu$, namely $u$ and $v$, we use the corresponding letter as a superscript when we want to denote the component in that direction. For example, $Y^u$ denotes the component of the vector $Y$ in the direction $u$. Symbols like $\partial_u$, $\partial_v$ and $\partial_i$ express in compact form the partial derivatives with respect to the corresponding coordinate, e.g. $\partial_u= \frac{\partial}{\partial u}$, $\partial_i= \frac{\partial}{\partial x^i}$.
If we use the pp-wave space-time metric in \eqref{primLag} we obtain the geodesic Lagrangian
\begin{equation} \label{LagppR}
L = \frac{1}{2 n} \left(H \dot{u}^2 +2 \dot{u}\dot{v} + \dot{x}^2 + \dot{y}^2 \right) - n \frac{m^2}{2} .
\end{equation}
The Euler-Lagrange equations of the system lead to
\begin{subequations}
\label{eulgen}
\begin{align}
\label{Neul}
E_n(L) := \frac{\partial L}{\partial n} - \frac{d}{d\lambda} \left( \frac{\partial L}{\partial \dot{n}} \right)=0 & \Rightarrow H(u,x,y) \dot{u}^2+ 2 \dot{u}\dot{v} + \delta_{ij}\dot{x}^i \dot{x}^j + n^2 m^2 =0
\\
\label{ueul}
E_u(L) := \frac{\partial L}{\partial u} - \frac{d}{d\lambda} \left( \frac{\partial L}{\partial \dot{u}} \right)=0 & \Rightarrow \ddot{v} +\partial_i H(u,x,y) \dot{x}^i \dot{u}+\frac{1}{2} \partial_u H(u,x,y)\dot{u}^2 -\frac{\dot{n}}{n}\dot{v}=0 , \\
\label{veul}
E_v(L) := \frac{\partial L}{\partial v} - \frac{d}{d\lambda} \left( \frac{\partial L}{\partial \dot{v}} \right)=0 & \Rightarrow \ddot{u} - \frac{\dot{n}}{n}\dot{u} =0,
\\
\label{xeul}
E_i(L) := \frac{\partial L}{\partial x^i} - \frac{d}{d\lambda} \left( \frac{\partial L}{\partial \dot{x}^i} \right)=0 & \Rightarrow \ddot{x}^i -\frac{1}{2}\partial_i H(u,x,y)\dot{u}^2 -\frac{\dot{n}}{n}\dot{x}^i = 0,
\end{align}
\end{subequations}
where $E_n$, $E_\mu$ denote the Euler derivatives with respect to $n$ and $\mathrm{x}^\mu=(u,v,x^i)$.
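As a quick consistency check, one can let \texttt{sympy} compute the Euler derivative $E_v(L)$ directly from the Lagrangian \eqref{LagppR} and confirm that it reproduces \eqref{veul}; the profile $H$ is kept as an arbitrary function and all names are illustrative:

```python
import sympy as sp

# Sketch: derive E_v(L) for the pp-wave Lagrangian
# L = (H udot^2 + 2 udot vdot + xdot^2 + ydot^2)/(2n) - n m^2/2
# and check that E_v(L) = 0 is equivalent to uddot - (ndot/n) udot = 0.
lam, m = sp.symbols('lambda m')
u, v, x, y, n = [sp.Function(s)(lam) for s in ('u', 'v', 'x', 'y', 'n')]
H = sp.Function('H')(u, x, y)

L = (H*u.diff(lam)**2 + 2*u.diff(lam)*v.diff(lam)
     + x.diff(lam)**2 + y.diff(lam)**2)/(2*n) - n*m**2/2

# Euler derivative E_v(L) = dL/dv - d/dlam (dL/dvdot)
Ev = sp.diff(L, v) - sp.diff(sp.diff(L, v.diff(lam)), lam)

# E_v(L) = -(uddot - (ndot/n) udot)/n, so E_v(L) = 0 is exactly (veul)
target = -(u.diff(lam, 2) - n.diff(lam)*u.diff(lam)/n)/n
assert sp.simplify(Ev - target) == 0
```

The same computation for the other variables reproduces the remaining equations of the system in a few lines each.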
According to what we saw in the previous section, we expect the Killing vectors of the pp-wave metric to be associated with point symmetries of the Lagrangian \eqref{LagppR}, yielding linear in the momenta integrals of motion. The proper conformal Killing vectors are also to be involved, but generally in non-local expressions.
Let us note that the existence of the covariantly constant null Killing vector field $\ell=\partial_v$ for any pp-wave metric \eqref{lineel} guarantees the conservation of the momentum $p_v=\frac{\partial L}{\partial \dot{v}}$, whose on mass shell value we denote by $\pi_v$, in order to distinguish it from the phase space variable $p_v$. In other words, on mass shell we have $p_v=\pi_v=$const., due to the conservation law $\frac{dp_v}{d\lambda}=0$. This implies
\begin{equation} \label{intpv}
p_v = \pi_v \Rightarrow \frac{\dot{u}}{n} = \pi_v \Rightarrow n = \frac{\dot{u}}{\pi_v } ,
\end{equation}
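The statement that $\ell=\partial_v$ is a covariantly constant null vector of the generic metric \eqref{lineel} is also easy to verify symbolically; the sketch below computes the Christoffel symbols for an arbitrary profile $H(u,x,y)$ and checks $\nabla_\mu \ell_\nu=0$:

```python
import sympy as sp

# Sketch: for the generic pp-wave metric in Brinkmann coordinates (arbitrary
# profile H), verify that l = d/dv is null and covariantly constant.
u, v, x, y = sp.symbols('u v x y', real=True)
coords = [u, v, x, y]
n = 4
H = sp.Function('H')(u, x, y)

g = sp.Matrix([[H, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
ginv = g.inv()

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc} of the metric g."""
    return sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2 for d in range(n))

l_low = g*sp.Matrix([0, 1, 0, 0])        # l_mu = g_{mu v} = (1, 0, 0, 0)
for mu in range(n):
    for nu in range(n):
        nabla = sp.diff(l_low[nu], coords[mu]) - sum(
            Gamma(k, mu, nu)*l_low[k] for k in range(n))
        assert sp.simplify(nabla) == 0   # covariantly constant
assert sp.simplify((l_low.T*ginv*l_low)[0, 0]) == 0   # null: g^{mu nu} l_mu l_nu = 0
```

In effect, the check reduces to $\Gamma^u_{\mu\nu}=0$, which holds because $g_{v\nu}$ is constant and the metric does not depend on $v$.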
which is also the solution to the Euler-Lagrange equation \eqref{veul}. Hence, the auxiliary degree of freedom $n$ is proportional to the velocity $\dot{u}$. Note that this is not a gauge fixing condition for $n$; it is bound to hold for any possible parametrization. By using \eqref{intpv} and \eqref{Neul}, it was shown in \cite{ppwaves} that the generic conformal factor $\omega$ of \eqref{omega} can be written so that
\begin{equation} \label{nomega}
n\, \omega = \frac{d}{d\lambda}\left(g _{\mu\nu}f^\mu \frac{\dot{\mathrm{x}}^\nu}{\dot{u}} \right) = \frac{1}{\pi_v^2}\frac{d}{d\lambda}\left(g _{\mu\nu}f^\mu \frac{\dot{\mathrm{x}}^\nu}{n} \right) = \frac{1}{\pi_v^2}\frac{d}{d\lambda}\left(f^\mu p_\mu \right),
\end{equation}
where $p_\mu = \frac{\partial L}{\partial \dot{\mathrm{x}}^\mu}= \frac{1}{n}g_{\mu\nu}\dot{\mathrm{x}}^\nu$ are the momenta, and $f$ is a spacetime vector with components
\begin{subequations}
\label{coefef}
\begin{align}
f^u = & 0,
\\ \label{coefefv}
f^v = & \frac{1}{2} u \left(x^i a_i'(u) - a' (u)+2 \bar{b}(u)-2 \mu v \right) + \frac{1}{2} x^i a_i (u) + \frac{\mu}{4} \delta_{ij} x^i x^j
+ \frac{1}{2} a(u) - \frac{m^2}{\pi_v^2} \frac{\mu}{4} u^2\,,
\\
f^{i} = & -\frac{1}{2}u \left(\mu\, x^i + a_i(u) \right)\,.
\end{align}
\end{subequations}
As a result the generally non-local conserved charge \eqref{genericnonlocal} is expressed in phase space, by virtue of \eqref{nomega}, as
\begin{equation}\label{redintR}
I = \left(Y^\mu + \frac{m^2}{\pi_v^2} f^\mu\right) p_\mu = \Upsilon^\mu p_\mu
\end{equation}
with $Y$ being a conformal Killing vector and where we introduced a new vector $\Upsilon$ with components
\begin{equation} \label{upsilonR}
\Upsilon^\mu = Y^\mu + \frac{m^2}{\pi_v^2} f^\mu .
\end{equation}
This vector expresses a mass dependent distortion of the proper conformal Killing vectors $Y$. It can be seen that the contribution of $f$ to $I$ is relevant only when $Y$ is a proper CKV. That is, the pure Killing vectors still generate the known conserved expressions $Y^\mu p_\mu$. It is only when $Y$ is a proper CKV that a mass dependent modification is needed in order to obtain a conserved quantity.
The corresponding conservation law reads:
\begin{equation} \label{conslawR}
\frac{dI}{d\lambda} = -2n \Omega E_n(L) - \Upsilon^\mu E_\mu(L) - \frac{m^2}{n} \Omega \left(n^2-\frac{\dot{u}^2}{\pi_v^2}\right)
\end{equation}
where $\Omega = \omega - \frac{m^2}{2\pi_v^2} \mu\, u$. The right hand side is zero because of the Euler-Lagrange equations \eqref{eulgen} and the known first integral \eqref{intpv}. The first two terms on the right hand side of \eqref{conslawR} are what one obtains when taking the total derivative of a typical Noether charge: a linear combination of the Euler-Lagrange equations. The existence of the last term in \eqref{conslawR}, however, is something different. It is not an equation of motion, but a first order relation, which is zero due to an already known conserved charge. In other words, relation \eqref{conslawR} gives us a conservation law which holds due to the given constant value of another known integral of motion. In the next section we study the exact relation of the vector $\Upsilon$ in \eqref{upsilonR} and the conserved charge $I$ of \eqref{redintR} to the Noether symmetries of this system.
\subsection{Relation to a Noether symmetry}
In section \ref{secNoether}, we gave a brief description of the typical Noether symmetry approach. As can be seen from \eqref{genint}, linear in the momenta integrals of motion of the form $I=X^\mu p_\mu$ are given by point symmetry generators, i.e. vectors whose components depend purely on the dependent and independent variables (no higher order or non-local dependence):
\begin{equation} \label{upsbN2}
X = X^\mu(\lambda,n,\mathrm{x}) \frac{\partial}{\partial \mathrm{x}^\mu}.
\end{equation}
By utilizing the symmetry criterion \eqref{generalsymcon}, it is easy to derive that, for the pp-wave space-time, as for any metric, only the Killing vectors of the space-time generate point symmetries of this form. In particular, for massive geodesics, $m\neq 0$, we get that $X$ is a symmetry if $\mathcal{L}_X g_{\mu\nu}=0$. On the other hand, as we saw in the previous section, we were able to write the conserved quantity \eqref{redintR}, which is a linear in the momenta integral of motion, but which is generated by a mass dependent distortion of the conformal Killing vectors of the metric, the vector $\Upsilon$ of \eqref{upsilonR}. The latter appears to generate a linear conserved charge even though it is not a Noether symmetry. Let us mention here that conserved quantities are not necessarily all of Noetherian origin. However, in this case, we shall demonstrate that $\Upsilon$ is actually related to a formal Noether symmetry.
In order to reveal the true Noether symmetry it is enough to naively substitute the constant ratio $\frac{m^2}{\pi_v^2}$ that we see in \eqref{upsilonR}\footnote{Note that there is an extra $\frac{m^2}{\pi_v^2}$ term inside the $f^v$ component of $f$, see relation \eqref{coefefv}, which also needs to be substituted.} with its dynamical equivalent. In other words, let us substitute $m^2$ from \eqref{eulgenLn}, with respect to the velocities, and $\pi_v=p_v= \frac{\partial L}{\partial \dot{v}}=\frac{\dot{u}}{n}$; then we obtain
\begin{equation}\label{consttodyn}
\frac{m^2}{\pi_v^2} = - \frac{g_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu}{\dot{u}^2}.
\end{equation}
We may now write a new vector $\tilde{\Upsilon}$ whose components are defined as
\begin{equation} \label{Uhid}
\tilde{\Upsilon}^\mu := \Upsilon^\mu|_{(m,\pi_v)\rightarrow \dot{\mathrm{x}}} = Y^\mu - \frac{g_{\kappa\nu} \dot{\mathrm{x}}^\kappa \dot{\mathrm{x}}^\nu}{\dot{u}^2} \tilde{f}^\mu,
\end{equation}
where $\tilde{f}:=f|_{(m,\pi_v)\rightarrow \dot{\mathrm{x}}}$. Let us consider the first prolongation of this vector, with the help of formula \eqref{prologform}, in order to extend it to the space of the velocities
\begin{equation}
\mathrm{pr}^{(1)} \tilde{\Upsilon} = \tilde{\Upsilon}^\mu \frac{\partial}{\partial \mathrm{x}^\mu} + \dot{\tilde{\Upsilon}}^\mu \frac{\partial}{\partial \dot{\mathrm{x}}^\mu} = \left(Y^\mu - \frac{g_{\kappa\nu} \dot{\mathrm{x}}^\kappa \dot{\mathrm{x}}^\nu}{\dot{u}^2} \tilde{f}^\mu\right) \frac{\partial}{\partial \mathrm{x}^\mu} + \left(\dot{Y}^\mu - \frac{g_{\kappa\nu} \dot{\mathrm{x}}^\kappa \dot{\mathrm{x}}^\nu}{\dot{u}^2} \dot{\tilde{f}}^\mu\right) \frac{\partial}{\partial \dot{\mathrm{x}}^\mu} .
\end{equation}
The second equality in the above relation holds on mass shell. The components $\dot{\tilde{\Upsilon}}^\mu$ in general contain accelerations, which however can be eliminated by using the Euler-Lagrange equations \eqref{ueul}-\eqref{xeul} or equivalently by remembering that the ratio containing velocities in \eqref{Uhid} is an on mass shell constant.
It is easy to verify that $\mathrm{pr}^{(1)} \tilde{\Upsilon} (L)=0$, which means that the vector $\tilde{\Upsilon}$ is a Noether symmetry of the action. However, it is not a point symmetry, since its components in \eqref{Uhid} depend on the first derivatives. The vector $\tilde{\Upsilon}$ thus constitutes a higher order, or hidden, symmetry. The corresponding conserved charge $\tilde{I}$, generated by the symmetry \eqref{Uhid}, is connected to the $I$ of \eqref{redintR} in the same manner that $\tilde{\Upsilon}$ is connected to $\Upsilon$:
\begin{equation} \label{genQ}
\tilde{I} := I|_{(m,\pi_v)\rightarrow p} = Y^\alpha p_\alpha - \frac{g^{\alpha\beta} (f|_{(m,\pi_v)\rightarrow p})^\gamma p_\alpha p_\beta p_\gamma}{K^{\mu\nu} p_\mu p_\nu}.
\end{equation}
In the above relation we have substituted the constant ratio \eqref{consttodyn} with respect to the momenta, as $\frac{m^2}{\pi_v^2} = - \frac{g^{\mu\nu} p_\mu p_\nu}{p_v^2}$, and we have used the trivial second rank Killing tensor $K= \ell \otimes \ell =\partial_v \otimes \partial_v$, constructed out of the covariantly constant Killing vector $\ell$. It is clear that $\pi_v^2 = p_v^2 = K^{\mu\nu}p_\mu p_\nu$. The total derivative of $\tilde{I}$ with respect to the parameter $\lambda$ is zero purely by virtue of the Euler-Lagrange equations \eqref{eulgen}.
We thus have a higher order symmetry generator $\tilde{\Upsilon}$, whose components are given in \eqref{Uhid}. This generates a Noether charge, $\tilde{I}$, that is a rational function of the momenta. The interesting coincidence is that, on mass shell, part of this ratio is already constant, equal to $\frac{m^2}{\pi_v^2}$. This leads to the reduced expression of the original conserved quantity, which we denoted by $I$, and which has a linear dependence on the momenta. This reduced charge appears as if generated by a mass dependent distortion of the conformal Killing vectors of the metric: the vector $\Upsilon$ with components given by \eqref{upsilonR}. The latter, even though it is not a formal Noether symmetry, has some interesting geometrical implications that offer a generalization of what we see happening in pp-waves.
\subsection{Geometric interpretation and generalizations}
It is interesting to study whether this nice coincidence that we encountered in the case of pp-wave space-times, where a higher order symmetry of the geodesics is revealed as a mass dependent distortion of the conformal Killing vectors, can be generalized to other geometries. We shall see that in principle this is possible. Let us first state the following:
\begin{theorem} \label{theo1}
For a given manifold with metric $g_{\mu\nu}$, which admits a second rank Killing tensor $K_{\mu\nu}$, any space-time vector $\Upsilon$ satisfying
\begin{equation} \label{geomups}
\mathcal{L}_\Upsilon g_{\mu\nu} = 2 \Omega(\mathrm{x}) \left( g_{\mu\nu} + \frac{m^2}{\kappa} K_{\mu\nu} \right),
\end{equation}
produces a linear in the momenta conserved charge $I = \Upsilon^\mu p_\mu$ for the corresponding geodesic system by virtue of the Hamiltonian constraint \eqref{Hamcon} and the conserved charge $K^{\mu\nu}p_\mu p_\nu= \kappa$.
\end{theorem}
The proof can be easily deduced by simply taking the Poisson bracket of $I$ with the Hamiltonian constraint \eqref{Hamcon}, which plays the principal role in the time evolution:
\begin{equation}
\begin{split}
\{I,\mathcal{H}\} & = \{\Upsilon^\alpha p_\alpha,g^{\mu\nu}p_\mu p_\nu+m^2\}=-\left(\mathcal{L}_{\Upsilon}g^{\mu\nu}\right)p_\mu p_\nu = 2 \Omega(\mathrm{x}) \left( g^{\mu\nu} + \frac{m^2}{\kappa} K^{\mu\nu} \right)p_\mu p_\nu \\
& = 2\Omega(\mathrm{x}) \left( g^{\mu\nu} p_\mu p_\nu + m^2 \right) = 2 \Omega(\mathrm{x}) \mathcal{H} \approx 0,
\end{split}
\end{equation}
with the equality in the second line being valid due to $K^{\mu\nu}p_\mu p_\nu = \kappa$. That is, the integral of motion $I$ is related to the constant value of the known quadratic integral. In the pp-wave case, we had $\Omega = \omega- \frac{m^2}{2\kappa} \mu u$, where $\omega$ is the conformal factor ($\mathcal{L}_Y g_{\mu\nu}=2 \omega g_{\mu\nu}$), $\Upsilon$ is given by \eqref{upsilonR}, $K=\ell\otimes\ell$ and $\kappa=\pi_v^2$.
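The key identity used in the first step of this proof, $\{X^\alpha p_\alpha, g^{\mu\nu}p_\mu p_\nu\}=-(\mathcal{L}_X g^{\mu\nu})p_\mu p_\nu$, holds for arbitrary metric and vector field, and can be checked symbolically; the following \texttt{sympy} sketch does so in two dimensions for an arbitrary diagonal metric (all function names illustrative):

```python
import sympy as sp

# Sketch: verify {X^a p_a, g^{mu nu} p_mu p_nu} = -(L_X g^{mu nu}) p_mu p_nu
# for a generic diagonal 2D metric and a generic vector field X.
t, r = sp.symbols('t r')
p0, p1 = sp.symbols('p0 p1')
coords, mom = [t, r], [p0, p1]
n = 2

a, b = sp.Function('a')(t, r), sp.Function('b')(t, r)
g = sp.diag(-a, b)                       # arbitrary diagonal metric
ginv = g.inv()
X = [sp.Function('X0')(t, r), sp.Function('X1')(t, r)]

I = sum(X[k]*mom[k] for k in range(n))
Ham = sum(ginv[i, j]*mom[i]*mom[j] for i in range(n) for j in range(n))

def pbracket(A, B):
    return sum(sp.diff(A, coords[k])*sp.diff(B, mom[k])
               - sp.diff(A, mom[k])*sp.diff(B, coords[k]) for k in range(n))

# Lie derivative of the inverse metric:
# (L_X g)^{mu nu} = X^k d_k g^{mu nu} - g^{k nu} d_k X^mu - g^{mu k} d_k X^nu
lie_ginv = sp.zeros(n, n)
for mu in range(n):
    for nu in range(n):
        lie_ginv[mu, nu] = sum(X[k]*sp.diff(ginv[mu, nu], coords[k])
                               - ginv[k, nu]*sp.diff(X[mu], coords[k])
                               - ginv[mu, k]*sp.diff(X[nu], coords[k])
                               for k in range(n))

rhs = -sum(lie_ginv[i, j]*mom[i]*mom[j] for i in range(n) for j in range(n))
assert sp.simplify(pbracket(I, Ham) - rhs) == 0
```

The rest of the proof then consists only of substituting \eqref{geomups} and the constant value $\kappa$ of the quadratic charge, exactly as in the displayed computation.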
In addition to the above, we can prove the following:
\begin{theorem} \label{theo2}
If a space-time with metric $g_{\mu\nu}$ admits a second rank Killing tensor $K(\neq g)$ and a vector $\Upsilon=\Upsilon(\mathrm{x},\frac{m^2}{\kappa})$ satisfying \eqref{geomups}, then the vector
\begin{equation} \label{geomupshigh}
\tilde{\Upsilon} = \Upsilon(\mathrm{x},\frac{-g^{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu}{K_{\alpha\beta}\dot{\mathrm{x}}^\alpha \dot{\mathrm{x}}^\beta}),
\end{equation}
is a higher order Noether symmetry generator of the geodesic action, yielding the conserved charge of the form
\begin{equation} \label{geomtIhigh}
\tilde{I} = \tilde{\Upsilon}^\alpha p_\alpha .
\end{equation}
\end{theorem}
The proof is quite straightforward and makes use of the fact that $K$ is a Killing tensor; it does not require that the space-time is a pp-wave. It can be found in detail in appendix \ref{App1}.
In the pp-wave case we saw that the vectors satisfying \eqref{geomups} obtain the nice form
\begin{equation} \label{splitU}
\Upsilon = Y + \frac{m^2}{\kappa} f.
\end{equation}
An important observation is that the vectors $\Upsilon$ do not necessarily close an algebra. At first this seems counter-intuitive from our usual experience, but it becomes better understood if we think of our situation in terms of the Poisson bracket formalism. Remember that the linear charges $I$ of \eqref{redintR} are the on mass shell reduced expressions of the actual charges $\tilde{I}$ of \eqref{genQ}. The Poisson bracket of two reduced charges $I$ will not necessarily give something which happens to also be a reduced expression of some higher order charge. It is the Poisson brackets between two $\tilde{I}$ charges that are bound to be conserved, not those involving the $I$. The vectors $\Upsilon$ are not the actual symmetries; they offer a convenient reduction scheme that allows for simpler calculations and reveals the higher order - true Noether - symmetries $\tilde{\Upsilon}$, which would be considerably more difficult to derive in the conventional manner.
The relation \eqref{geomups} satisfied by $\Upsilon$ reveals the latter as a generator of disformal transformations. These form a generalization of conformal transformations and were initially introduced by Bekenstein \cite{Bekenstein}. Usually, a disformal transformation of the metric is written as
\begin{equation} \label{disfmet}
\hat{g}_{\mu\nu} = A(x) g_{\mu\nu} + B(x) \ell_\mu \ell_\nu ,
\end{equation}
where $\hat{g}_{\mu\nu}$ is a new ``physical'' metric, $\ell_\mu$ is the gradient of some scalar field, i.e. $\ell_\mu=\nabla_\mu \phi$, and $A(x)$, $B(x)$ are scalar functions of the space-time \cite{Lobo}. One motivation behind the introduction of disformal transformations was to connect different scalar-tensor theories \cite{BenAchour}. For further uses and applications of disformal transformations see \cite{disf1,disf2,disf3,disf4,disf5,disf6}. We may generalize \eqref{disfmet} by defining a transformation of the form $\hat{g}_{\mu\nu} = A(x) g_{\mu\nu} + B(x) K_{\mu\nu}$, with $K$ any second rank tensor, which would be compatible with \eqref{geomups}. In the pp-wave case, a vector like $\Upsilon$, satisfying a relation like \eqref{geomups} for $K^{\mu\nu}=\ell^\mu \ell^\nu$ with $\ell=\partial_v$ a null Killing vector, is referred to as a \emph{null-like disformal Killing vector} in the terminology of \cite{Lobo}. The $\ell$ can also be written as the gradient of a scalar field, $\ell_\mu=\nabla_\mu \phi$ with $\phi=u$. Thus, in the pp-wave case, the vectors $\Upsilon$ generate disformal transformations in accordance with definition \eqref{disfmet}. We need to note that, in the original definition \cite{Bekenstein}, the functions $A$ and $B$ depended only on the scalar field $\phi$ and the inner product $\ell^\mu \ell_\mu = \nabla_\mu \phi \nabla^\mu \phi$. Obviously this is more restrictive than requiring $A$ and $B$ to be space-time functions.
The existence of a vector $\Upsilon$ satisfying \eqref{geomups} signifies that, in order for these conserved charges to appear, there must exist a coordinate transformation, which at the same time is a disformal transformation of the metric involving a Killing tensor $K$. We may proceed to examine an example of a different metric admitting such symmetries.
\subsubsection{The de Sitter example}
As we demonstrated, for pp-waves one can satisfy equation \eqref{geomups} by appropriately distorting the conformal Killing vectors of the metric. This raises the question of whether there exist other space-times with this property, for which hidden symmetries can likewise be derived through \eqref{geomups}. One obvious answer is flat space, since all the relations that we used for pp-waves lead trivially to Minkowski space (in light-cone coordinates) by simply setting the profile function $H(u,x,y)$ in the line-element \eqref{lineel} equal to zero. Here, we report another example in the form of the de Sitter universe, corresponding to a spatially flat Friedmann--Lema\^{\i}tre--Robertson--Walker (FLRW) space-time that solves Einstein's equations with a cosmological constant.
If we write the line-element in Cartesian coordinates $\mathrm{x} = (t,x,y,z)$ we have
\begin{equation}\label{deSitterlinel}
ds^2 = - dt^2 +e^{l t} \left( dx^2 +dy^2 +dz^2 \right),
\end{equation}
where $l$ is the constant setting the Ricci scalar, $R=3l^2$, and the cosmological constant, $\Lambda=\frac{3 l^2}{4}$, for which the metric \eqref{deSitterlinel} satisfies Einstein's equations $R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R +\Lambda g_{\mu\nu}=0$.
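The stated curvature values are easy to verify symbolically. The following sympy sketch (our own verification aid, not part of the original derivation; the helper names are ours) computes the Christoffel symbols and the Ricci tensor of \eqref{deSitterlinel} and confirms that $R=3l^2$ and that the Einstein equations hold with $\Lambda=\frac{3l^2}{4}$.

```python
import sympy as sp

# de Sitter metric ds^2 = -dt^2 + exp(l t)(dx^2 + dy^2 + dz^2)
t, x, y, z, l = sp.symbols('t x y z l')
X = [t, x, y, z]
g = sp.diag(-1, sp.exp(l*t), sp.exp(l*t), sp.exp(l*t))
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], X[c])
                                     + sp.diff(g[d, c], X[b])
                                     - sp.diff(g[b, c], X[d])) for d in range(4))/2)
         for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = d_a Gam^a_{bc} - d_b Gam^a_{ac} + Gam Gam - Gam Gam
def ricci(b, c):
    expr = (sum(sp.diff(Gam[a][b][c], X[a]) for a in range(4))
            - sum(sp.diff(Gam[a][a][c], X[b]) for a in range(4))
            + sum(Gam[a][a][d]*Gam[d][b][c] for a in range(4) for d in range(4))
            - sum(Gam[a][b][d]*Gam[d][a][c] for a in range(4) for d in range(4)))
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, lambda i, j: ricci(i, j))
R = sp.simplify(sum(ginv[i, j]*Ric[i, j] for i in range(4) for j in range(4)))
Lam = sp.Rational(3, 4)*l**2
Einstein = sp.simplify(Ric - g*R/2 + Lam*g)
print(R)         # 3*l**2
print(Einstein)  # zero matrix
```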
The Lagrangian that reproduces the geodesic equations is given by
\begin{equation}\label{DSLag}
L = \frac{1}{2 n} \left[ - \dot{t}^2 + e^{l t} \left( \dot{x}^2 + \dot{y}^2 + \dot{z}^2 \right) \right] - n \frac{m^2}{2}
\end{equation}
and the equations themselves are equivalent to
\begin{subequations} \label{DSeq}
\begin{align} \label{DSeq0}
& \dot{t}^2 - e^{l t} \left(\dot{x}^2+\dot{y}^2+\dot{z}^2\right)-m^2 n^2 =0 \\ \label{DSeq1}
& \ddot{t} = \frac{\dot{n} \dot{t}}{n}-\frac{1}{2} l e^{l t} \left(\dot{x}^2+\dot{y}^2+\dot{z}^2\right) \\ \label{DSeq2}
& \ddot{x}^i= \frac{\dot{x}^i \left(\dot{n}-n l \dot{t}\right)}{n},
\end{align}
\end{subequations}
where $t(\lambda)$, $x^i(\lambda)$ and $n(\lambda)$ are all functions of $\lambda$, which denotes the parameter along the curve.
The manifold where the motion takes place is maximally symmetric, thus possessing ten Killing vectors. These of course generate the corresponding conserved charges of the geodesic equations, linear in the momenta; we refrain from giving their expressions here. We are more interested in the five proper conformal Killing vectors of the space-time, which, in these coordinates, are written as
\begin{equation} \label{DSCKV}
\begin{split}
& Y_0 = e^{\frac{l}{2} t} \partial_t, \quad Y_i = e^{\frac{l}{2} t} x^i \partial_t - \frac{2}{l} e^{-\frac{l}{2} t} \partial_i \\
& Y_4 = e^{-\frac{l}{2} t} \left(\frac{4}{l^2} + x^j x_j \right) \partial_t - \frac{4}{l} e^{-\frac{l}{2} t} x^i \partial_i,
\end{split}
\end{equation}
where now $i,j=1,2,3$ and $x^i=(x,y,z)$. The $i,j$ indices are raised and lowered with the spatial part of the metric $g_{ij}=e^{l t} \delta_{ij}$. These vectors generate conserved charges only when $m=0$ (null geodesics).
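The conformal Killing property is straightforward to verify symbolically. The sympy sketch below (our own check; $x^j x_j$ is contracted with $g_{ij}=e^{lt}\delta_{ij}$, and $Y_1$ stands as a representative of the $Y_i$) confirms that each vector satisfies $\mathcal{L}_Y g_{\mu\nu} = 2\omega(\mathrm{x})\, g_{\mu\nu}$ with a non-constant conformal factor.

```python
import sympy as sp

t, x, y, z, l = sp.symbols('t x y z l')
X = [t, x, y, z]
g = sp.diag(-1, sp.exp(l*t), sp.exp(l*t), sp.exp(l*t))

def lie_g(Y):
    """(L_Y g)_mn = Y^s d_s g_mn + g_sn d_m Y^s + g_ms d_n Y^s."""
    return sp.Matrix(4, 4, lambda m, n: sp.simplify(sum(
        Y[s]*sp.diff(g[m, n], X[s])
        + g[s, n]*sp.diff(Y[s], X[m])
        + g[m, s]*sp.diff(Y[s], X[n]) for s in range(4))))

E = sp.exp(l*t/2)
r2 = sp.exp(l*t)*(x**2 + y**2 + z**2)   # x^j x_j, indices lowered with g_ij
Y0 = [E, 0, 0, 0]
Y1 = [E*x, -2/(l*E), 0, 0]
Y4 = [(4/l**2 + r2)/E, -4*x/(l*E), -4*y/(l*E), -4*z/(l*E)]

for Y in (Y0, Y1, Y4):
    L = lie_g(Y)
    w = sp.simplify(L[1, 1]/(2*g[1, 1]))      # candidate conformal factor
    assert sp.simplify(L - 2*w*g) == sp.zeros(4, 4)
    print(w)                                   # proper (non-constant) factors
```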
Let us see how they can be distorted so as to generate conserved quantities in the massive case. First, let us construct the trivial Killing tensor $K=\sum_i \partial_i \otimes \partial_i$ as the sum of the tensor products of the Killing vectors that constitute the spatial translations. Its covariant components are
\begin{equation}
K_{\mu\nu} = \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & e^{2l t} & 0 & 0 \\
0 & 0 & e^{2l t} & 0 \\
0 & 0 & 0 & e^{2l t}
\end{pmatrix}
\end{equation}
and of course we have $\nabla_{(\kappa} K_{\mu\nu)}=0$. By using equation \eqref{geomups}, we find the following five vectors that satisfy it for appropriate functions $\Omega(x)$:
\begin{equation}\label{DSups}
\begin{split}
& \Upsilon_0 = \frac{e^{\frac{l}{2} t}}{\left(\frac{m^2}{\kappa} e^{l t}+1\right)^{\frac{1}{2}}} \partial_t, \quad \Upsilon_i = \frac{e^{\frac{l}{2} t}}{\left(\frac{m^2}{\kappa} e^{l t}+1\right)^{\frac{1}{2}}} x^i \partial_t - \frac{2}{l} e^{-\frac{l}{2} t} \left(\frac{m^2}{\kappa} e^{l t}+1\right)^{\frac{1}{2}} \partial_i \\
& \Upsilon_4 = \frac{e^{-\frac{l}{2} t}}{\left(\frac{m^2}{\kappa} e^{l t}+1\right)^{\frac{1}{2}}} \left(\frac{4}{l^2} + x^j x_j \right) \partial_t - \frac{4}{l} e^{-\frac{l}{2} t} \left(\frac{m^2}{\kappa} e^{l t}+1\right)^{\frac{1}{2}} x^i \partial_i.
\end{split}
\end{equation}
Notice that by setting $m=0$ in \eqref{DSups} we obtain the proper conformal Killing vectors \eqref{DSCKV}. Due to theorem \ref{theo2}, we expect the vectors $\tilde{\Upsilon}$ that emerge by substituting the constant ratio $\frac{m^2}{\kappa}$ in \eqref{DSups} with the expression
\begin{equation} \label{DScontodyn}
\frac{m^2}{\kappa} = - \frac{g^{\mu\nu}p_\mu p_\nu}{K^{\alpha\beta}p_\alpha p_\beta}= -\frac{p_\mu p^\mu}{p_x^2+p_y^2+p_z^2} = e^{-2 l t} \left(\frac{\dot{t}^2}{\dot{x}^2+\dot{y}^2+\dot{z}^2}-e^{l t}\right),
\end{equation}
to be higher order symmetries of the Lagrangian. Indeed, it can easily be verified that the resulting $\tilde{\Upsilon}(\mathrm{x}, \dot{\mathrm{x}}):=\Upsilon|_{\mathrm{eq} \; \eqref{DScontodyn}}$ satisfy $\mathrm{pr}^{(1)}\tilde{\Upsilon}(L)=0$ and form higher order Noether symmetries, whose charges are $\tilde{I} = \tilde{\Upsilon}^\alpha p_\alpha$. As an example, let us take the conserved charge corresponding to the
\begin{equation}\label{dsex1}
\tilde{\Upsilon}_0 = \frac{e^{\frac{l t}{2}}}{\left(\frac{\dot{t}^2 e^{-l t}}{\dot{x}^2+\dot{y}^2+\dot{z}^2}\right)^{\frac{1}{2}}} \partial_t
\end{equation}
which reads
\begin{equation}\label{dsex1int}
\tilde{I}_0= \tilde{\Upsilon}_0^\mu p_\mu = \tilde{\Upsilon}_0^\mu \frac{\partial L}{\partial \dot{\mathrm{x}}^\mu} = -\frac{e^{\frac{1}{2} l t} \dot{t}}{n \left(\frac{e^{-l t} \dot{t}^2}{\dot{x}^2+\dot{y}^2+\dot{z}^2}\right)^{\frac{1}{2}}}.
\end{equation}
It is straightforward to see that $\frac{d\tilde{I}_0 }{d\lambda}=0$ by virtue of \eqref{DSeq1} and \eqref{DSeq2}. The corresponding on mass shell reduced expressions of the charges $\tilde{I}$ are given by the $I=\Upsilon^\alpha p_\alpha$, which are conserved on account of $m^2 = \frac{1}{n^2} \left[\dot{t}^2- e^{l t}\left( \dot{x}^2+\dot{y}^2+\dot{z}^2 \right)\right]$ and $\kappa = \frac{e^{2l t}}{n^2} \left( \dot{x}^2+\dot{y}^2+\dot{z}^2 \right) $.
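As a check, note that for $\dot{t}>0$ the charge \eqref{dsex1int} reduces to $-e^{lt}\sqrt{\dot{x}^2+\dot{y}^2+\dot{z}^2}/n$. The short sympy sketch below (our own verification, under this sign assumption) confirms that its $\lambda$-derivative vanishes once the spatial equations \eqref{DSeq2} are imposed.

```python
import sympy as sp

# reduced form of the charge, valid for tdot > 0
lam, l = sp.symbols('lambda l')
t, x, y, z, n = (sp.Function(s)(lam) for s in 'txyzn')
S = x.diff(lam)**2 + y.diff(lam)**2 + z.diff(lam)**2

I0 = -sp.exp(l*t)*sp.sqrt(S)/n

# spatial geodesic equations: x_i'' = x_i' (n' - n l t') / n
sub = {w.diff(lam, 2): w.diff(lam)*(n.diff(lam) - n*l*t.diff(lam))/n
       for w in (x, y, z)}
print(sp.simplify(I0.diff(lam).subs(sub)))   # 0
```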
We thus see that there exist geometries beyond pp-waves where the conformal Killing vectors of the space admit mass-dependent distortions, which generate additional conserved quantities when $m\neq 0$.
Up to now, we considered geodesics in (pseudo-)Riemannian geometry and saw how the symmetry breaking due to the mass can lead to the appearance of new classes of symmetries. In the following sections we depart from the (pseudo-)Riemannian case in order to introduce another symmetry breaking parameter. This will be realized by considering a more general, Finslerian, geometry, in particular that of a Bogoslovsky space-time, which involves a Lorentz violating parameter $b$.
\section{The Bogoslovsky-Finsler line-element}
We start this section with some generic information on Finsler geometry, so as to facilitate the following presentation. In Finsler geometry we consider a general line-element of the form
\begin{equation} \label{Finel1}
ds_F^2 = F(\mathrm{x},d\mathrm{x})^2,
\end{equation}
where $F(\mathrm{x},d\mathrm{x})$ is a function homogeneous of degree one in the $d\mathrm{x}^\mu$, that is, $F(\mathrm{x},\sigma d\mathrm{x}) = \sigma F(\mathrm{x},d\mathrm{x})$ for every $\sigma>0$. This is a generalization which contains the (pseudo-)Riemannian case, where the $F^2$ is simply quadratic in the $d\mathrm{x}^\mu$. A metric tensor can still be introduced as
\begin{equation} \label{Finmet}
G_{\mu\nu}(\mathrm{x},d\mathrm{x}) = - \frac{1}{2} \frac{\partial^2 F^2}{\partial(d\mathrm{x}^\mu)\partial (d\mathrm{x}^\nu)}
\end{equation}
and thus write
\begin{equation} \label{Finel2}
ds_F^2 = G_{\mu\nu} (\mathrm{x},d\mathrm{x}) d\mathrm{x}^\mu d\mathrm{x}^\nu ,
\end{equation}
with the difference that the metric $G_{\mu\nu}$, in contrast to the Riemannian $g_{\mu\nu}$, now carries a dependence on the differentials $d\mathrm{x}^\mu$. The equality of \eqref{Finel1} and \eqref{Finel2} is obtained with the use of Euler's theorem for homogeneous functions, exploiting the fact that $F^2$ is a homogeneous function of degree two in the $d\mathrm{x}^\mu$. Given a Finsler function $F(\mathrm{x},d\mathrm{x})$ we may also express the line-element as
\begin{equation} \label{genFlineel}
ds_F^2= F(\mathrm{x},d\mathrm{x})^2= \mathcal{F}(\mathrm{x},d\mathrm{x}) g_{\mu\nu}(\mathrm{x}) d\mathrm{x}^\mu d\mathrm{x}^\nu
\end{equation}
where $\mathcal{F}(\mathrm{x},d\mathrm{x})$ is a function homogeneous of degree zero in $d\mathrm{x}$.
Bogoslovsky \cite{Bogo1,Bogo2} introduced a line-element of the form
\begin{equation} \label{bogoel}
ds_F^2 = \eta_{\mu\nu}d\mathrm{x}^\mu d\mathrm{x}^\nu \left[\frac{\left(\ell_\mu d\mathrm{x}^\mu\right)^{2}}{- \eta_{\mu\nu}d\mathrm{x}^\mu d\mathrm{x}^\nu}\right]^b,
\end{equation}
where $0<b<1$ is a dimensionless parameter, $\eta_{\mu\nu}=\mathrm{diag}(-1,1,1,1)$, $\nabla_\mu \ell^\nu=0$, $\eta_{\mu\nu}\ell^\mu \ell^\nu =0$ and $\ell^0>0$. This serves as a generalization of Special Relativity in which isotropy is broken along a preferred direction, set by the future directed null vector $\ell$. The symmetries of the line-element \eqref{bogoel} have been studied in \cite{GiGoPo1} and are identified as forming the eight dimensional group $DISIM_b(2)$, a deformation of the $ISIM(2)$ group of Very Special Relativity \cite{VSL}. This lower symmetry count, in comparison to the ten dimensional Poincar\'e group of the quadratic line-element $ds^2=\eta_{\mu\nu} d\mathrm{x}^\mu d\mathrm{x}^\nu$, implies another symmetry breaking effect, this time due to the anisotropy parameter $b$.
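The surviving deformed boost can be illustrated with a quick symbolic check. Writing the flat metric in light-cone coordinates, $ds^2 = 2\,du\,dv + dx^2 + dy^2$, with $\ell=\partial_v$ so that $\ell_\mu d\mathrm{x}^\mu = du$, the generator $N = (1-b)u\partial_u - (1+b)v\partial_v - b(x\partial_x + y\partial_y)$ combines the broken boost $u\partial_u - v\partial_v$ with a $b$-weighted dilation. The sympy sketch below (ours; signs and normalizations are our conventions, not necessarily those of \cite{GiGoPo1}) confirms that $N$ leaves $F^2$ invariant.

```python
import sympy as sp

ud, vd, xd, yd, b = sp.symbols('udot vdot xdot ydot b')
Q = 2*ud*vd + xd**2 + yd**2        # eta_mn xdot^m xdot^n, light-cone coordinates
F2 = Q*(ud**2/(-Q))**b             # Bogoslovsky F^2 with ell_m xdot^m = udot

# velocities transform with the same linear map as the coordinates
delta = {ud: (1 - b)*ud, vd: -(1 + b)*vd, xd: -b*xd, yd: -b*yd}
varF2 = sum(sp.diff(F2, w)*dw for w, dw in delta.items())
print(sp.cancel(sp.expand(varF2)))  # 0: F^2 is invariant under the deformed boost
```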
Bogoslovsky's theory has an obvious generalization to a curved space with the substitution $\eta_{\mu\nu}\mapsto g_{\mu\nu}$ \cite{Bogo3,Stavrinos}. In the case of a pp-wave space-time we can set as the preferred direction the covariantly constant null vector $\ell=\partial_v$. Then, the line-element of the Finslerian extension is written as
\begin{align} \label{bogoel2}
ds_F^2 = g_{\mu\nu}d\mathrm{x}^\mu d\mathrm{x}^\nu \left[\frac{K_{\alpha\beta}d\mathrm{x}^\alpha d\mathrm{x}^\beta}{- g_{\mu\nu}d\mathrm{x}^\mu d\mathrm{x}^\nu}\right]^b ,
\end{align}
where we made use of the relation $K_{\alpha\beta}d\mathrm{x}^\alpha d\mathrm{x}^\beta=\left(\ell_\mu d\mathrm{x}^\mu\right)^{2}=du^2$ of the pp-wave case (recall that $K_{\mu\nu}=\ell_\mu \ell_\nu$).
For line-elements of the form \eqref{bogoel2}, appearing in this Finslerian version of pp-waves, there exists an interesting theorem due to Roxburgh, proven in \cite{Rox}. In brief, it states that if, in the Finslerian line-element \eqref{genFlineel}, the function $\mathcal{F}(\mathrm{x},d\mathrm{x})$ is such that
\begin{equation}
\mathcal{F}(\mathrm{x},d\mathrm{x}) = \mathcal{F} \left(\frac{\left(K_{\mu_1\ldots\mu_k}(\mathrm{x})\, d\mathrm{x}^{\mu_1}\cdots d\mathrm{x}^{\mu_k}\right)^{\frac{2}{k}}}{g_{\mu\nu}(\mathrm{x}) d\mathrm{x}^\mu d\mathrm{x}^\nu} \right) ,
\end{equation}
with $K_{\mu_1\ldots\mu_k}(\mathrm{x})$ a tensor of rank $k$, covariantly constant ($\nabla_\kappa K_{\mu_1\ldots\mu_k}=0$) with respect to the connection associated with $g_{\mu\nu}$, then the geodesics of $ds_F^2$ are identical to those produced by the Riemannian metric $g_{\mu\nu}$ and the typical quadratic line-element $ds^2=g_{\mu\nu}d\mathrm{x}^\mu d\mathrm{x}^\nu$.
Obviously, the pp-wave case falls into this category, since $\ell$ is covariantly constant and hence so is $K_{\mu\nu}$. Thus, Finslerian pp-waves of this type produce the same geodesics as the Riemannian case. However, the physically relevant parameters are indeed affected by the presence of $b\neq 0$. What is more, we are going to see that interesting changes take place in the symmetry structure of the system: just as the mass breaks the conformal Killing symmetries in the Riemannian case, the parameter $b$ now breaks certain isometries of the base metric $g_{\mu\nu}$.
\section{Geodesics and a new conservation law}
The geodesic Lagrangian for the Finslerian line-element $ds_F^2$ of \eqref{bogoel} is given by
\begin{equation} \label{Lag1}
L_F = -m\sqrt{-F^2} ,
\end{equation}
or equivalently, in the einbein formalism, by
\begin{equation}
L = \frac{1}{2n} G_{\mu\nu}(\mathrm{x},\dot{\mathrm{x}}) \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu - n\frac{m^2}{2} ,
\end{equation}
which is of the same form as \eqref{primLag} with the difference that instead of $g_{\mu\nu}$, it now involves the Finsler metric $G_{\mu\nu}(\mathrm{x},\dot{\mathrm{x}})$ defined in \eqref{Finmet}.
The (geodesic) Euler-Lagrange equations for the degrees of freedom $\mathrm{x}^\mu$ are equivalent to \cite{Baobook}
\begin{equation} \label{FinEL}
\ddot{\mathrm{x}}^\mu + \gamma^\mu_{\kappa\lambda}\dot{\mathrm{x}}^\kappa \dot{\mathrm{x}}^\lambda =\dot{\mathrm{x}}^\mu \frac{d}{d\lambda}\left(\ln n\right),
\end{equation}
where $\gamma^\mu_{\kappa\lambda} = \frac{1}{2} G^{\mu\sigma} \left( \frac{\partial G_{\sigma\lambda}}{\partial \mathrm{x}^\kappa} + \frac{\partial G_{\kappa\sigma}}{\partial \mathrm{x}^\lambda} - \frac{\partial G_{\kappa\lambda}}{\partial \mathrm{x}^\sigma}\right)$ are the Christoffel symbols with respect to the metric $G_{\mu\nu}$. The Euler-Lagrange equation for the einbein field, $n$, yields the constraint
\begin{equation}
\frac{1}{n^2} G_{\mu\nu}(\mathrm{x},\dot{\mathrm{x}}) \dot{x}^\mu \dot{x}^\nu + m^2 =0.
\end{equation}
The momenta are given by
\begin{equation} \label{Finmom}
p_\kappa = \frac{\partial L}{\partial \dot{\mathrm{x}}^\kappa} = \frac{1}{n} G_{\mu\kappa} \dot{\mathrm{x}}^\mu ,
\end{equation}
which looks similar to the relation of the Riemannian case, but this time the right hand side is not linear in the velocities. We note that the extra term that would appear, due to the dependence of $G_{\mu\nu}$ on the velocities, is identically zero by virtue of the definition \eqref{Finmet} and the fact that the Finsler metric, $G_{\mu\nu}$, is a homogeneous function of degree zero in the velocities; this term would be proportional to
\begin{equation}
\frac{\partial G_{\mu\nu}}{\partial \dot{\mathrm{x}}^\kappa} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu = -\frac{1}{2}\frac{\partial^3 F^2}{\partial \dot{\mathrm{x}}^\mu\partial \dot{\mathrm{x}}^\nu\partial \dot{\mathrm{x}}^\kappa}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu = \frac{\partial G_{\mu\kappa}}{\partial \dot{\mathrm{x}}^\nu}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu =0 .
\end{equation}
The last equality holds due to Euler's theorem on homogeneous functions, which in this case implies $\frac{\partial G_{\mu\kappa}}{\partial \dot{\mathrm{x}}^\nu}\dot{\mathrm{x}}^\nu=0$ (see \eqref{Eultheo} for $k=0$).
We may thus once more write the Hamiltonian constraint as
\begin{equation} \label{FinHamcon}
\mathcal{H} = G^{\mu\nu}(\mathrm{x},p)p_\mu p_\nu + m^2 \approx 0,
\end{equation}
assuming that we have managed to invert relations \eqref{Finmom} and thus express the velocities in terms of the momenta. If we look for a conserved quantity linear in the momenta, of the form $I=\Upsilon^\mu p_\mu$, we need to demand $\{I,\mathcal{H}\} \approx 0$. We use the weak equality here because, due to the Hamiltonian constraint \eqref{FinHamcon}, which is bound to be zero, it is enough that the Poisson bracket produces something proportional to the constraint. The Poisson bracket is calculated to be
\begin{equation} \label{PBraG}
\begin{split}
\{I,\mathcal{H}\} & = \left( G^{\sigma\nu} \frac{\partial \Upsilon^\mu}{\partial \mathrm{x}^\sigma} + G^{\mu\sigma} \frac{\partial \Upsilon^\nu}{\partial \mathrm{x}^\sigma} - \Upsilon^\sigma \frac{\partial G^{\mu\nu}}{\partial \mathrm{x}^\sigma} \right) p_\mu p_\nu \\
& = \frac{1}{n^2}\left( \Upsilon^\sigma \frac{\partial G_{\mu\nu}}{\partial \mathrm{x}^\sigma} + G_{\sigma\nu} \frac{\partial \Upsilon^\sigma}{\partial \mathrm{x}^\mu} + G_{\mu\sigma} \frac{\partial \Upsilon^\sigma}{\partial \mathrm{x}^\nu}\right) \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu ,
\end{split}
\end{equation}
where, in the last equality, we made the transition to velocity phase space coordinates by utilizing \eqref{Finmom}. In the parenthesis we recognize what would be the Lie derivative of $G_{\mu\nu}$, if the latter had no dependence on the velocities. Once more, an additional term of the form $\frac{\partial \Upsilon^\kappa}{\partial \mathrm{x}^\sigma}\frac{\partial G^{\mu\nu}}{\partial p_\sigma}p_\mu p_\nu p_\kappa$, which would otherwise appear, has been set to zero because
\begin{equation}
\frac{\partial G^{\mu\nu}}{\partial p_\sigma}p_\mu = \frac{\partial G^{\mu\nu}}{\partial \dot{\mathrm{x}}^\kappa} \frac{\partial \dot{\mathrm{x}}^\kappa}{\partial p_\sigma} p_\mu = - G^{\lambda\nu} G^{\tau \mu} \frac{\partial G_{\lambda\tau}}{\partial \dot{\mathrm{x}}^\kappa} \frac{\partial \dot{\mathrm{x}}^\kappa}{\partial p_\sigma} G_{\rho\mu} \dot{\mathrm{x}}^\rho = - G^{\lambda\nu} \frac{\partial \dot{\mathrm{x}}^\kappa}{\partial p_\sigma} \frac{\partial G_{\lambda\rho}}{\partial \dot{\mathrm{x}}^\kappa}\dot{\mathrm{x}}^\rho=0 .
\end{equation}
The last equality holds again due to Euler's theorem.
For the line-element of \eqref{bogoel2} the corresponding Finsler metric $G_{\mu\nu}$ reads
\begin{equation} \label{Finmetsp}
\begin{split}
G_{\mu\nu}(\mathrm{x},\dot{\mathrm{x}}) = & 2 b(1-b) \left[g_{\sigma \mu} g_{\tau \nu} \frac{\mathcal{K}^b}{\mathcal{G}^{1+b}} + \left( g_{\sigma\mu} K_{\tau\nu} + g_{\sigma\nu} K_{\tau\mu} \right) \frac{\mathcal{K}^{b-1}}{\mathcal{G}^b} + K_{\sigma \mu} K_{\tau \nu} \frac{\mathcal{K}^{b-2}}{\mathcal{G}^{b-1}} \right] \dot{\mathrm{x}}^\sigma \dot{\mathrm{x}}^\tau \\
& + (1-b) g_{\mu\nu} \frac{\mathcal{K}^b}{\mathcal{G}^b} - b K_{\mu\nu} \frac{\mathcal{K}^{b-1}}{\mathcal{G}^{b-1}},
\end{split}
\end{equation}
where, for abbreviation, we use $\mathcal{K} = K_{\mu\nu}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu$ and $\mathcal{G} = -g_{\mu\nu}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu$. When we insert the metric \eqref{Finmetsp} into \eqref{PBraG} and consider the equation $\{I,\mathcal{H}\}\approx 0$, we arrive at
\begin{equation} \label{intermcond}
\{I,\mathcal{H}\} = \frac{1}{n^2} \left[(1-b) \left(\mathcal{L}_\Upsilon g_{\mu\nu}\right) \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu \frac{\mathcal{K}^b}{\mathcal{G}^b} - b \left(\mathcal{L}_\Upsilon K_{\mu\nu} \right) \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu \frac{\mathcal{K}^{b-1}}{\mathcal{G}^{b-1}} \right] \approx 0 ,
\end{equation}
where $\mathcal{L}_\Upsilon$ now stands for the Lie derivative with respect to the vector $\Upsilon$.
In the case where $K_{\mu\nu}$ is a covariantly constant Killing tensor, which means, according to Roxburgh's theorem \cite{Rox}, that equations \eqref{FinEL} become the Riemannian \eqref{eulgenLx}, the quantities $\mathcal{K}$ and $\mathcal{G}$ are constants of the motion. Let us set the on mass shell constant value of their ratio as $\frac{\mathcal{K}}{\mathcal{G}}= \frac{1}{M_b^2}$. Then, the condition \eqref{intermcond} becomes
\begin{equation}
\{I,\mathcal{H}\} = \frac{1}{n^2}\mathcal{L}_\Upsilon \left( \frac{1-b}{M_b^{2b}} g_{\mu\nu} - \frac{b}{M_b^{2(b-1)}} K_{\mu\nu}\right) \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu \approx 0.
\end{equation}
It is enough for the equality to zero to hold on mass shell. Taking our cue from the Riemannian case, we may relax it and formulate the following:
\begin{theorem}
Consider the Bogoslovsky space described by the line-element \eqref{bogoel2}. If $K$ is a second rank covariantly constant Killing tensor of $g$, and there exists a vector $\Upsilon$ satisfying
\begin{equation}\label{Fincond}
\mathcal{L}_\Upsilon \left( g_{\mu\nu} - \frac{b}{1-b} M_b^{2} K_{\mu\nu}\right) = 2 \Omega(\mathrm{x}) \left( g_{\mu\nu} + M_b^{2} K_{\mu\nu} \right),
\end{equation}
where the constant $M_b$ of the geodesic motion is defined through $M_b^{-2} = \frac{K_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu}{-g_{\mu\nu}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu}$, then $I=\Upsilon^\mu p_\mu$ is conserved along the geodesics.
\end{theorem}
The proof is straightforward. When we contract the right hand side of equation \eqref{Fincond} with the velocities we obtain something which is on mass shell zero
\begin{equation}
\{I,\mathcal{H}\} \propto \frac{2\Omega}{n^2} \left( g_{\mu\nu} + M_b^{2} K_{\mu\nu} \right) \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu =\frac{2\Omega}{n^2}\left( -M_b^{2} K_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu + M_b^{2} K_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu \right) =0
\end{equation}
by simply using the fact that
\begin{equation} \label{Mbdef}
M_b^{-2} = \frac{K_{\mu\nu} \dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu}{-g_{\mu\nu}\dot{\mathrm{x}}^\mu \dot{\mathrm{x}}^\nu}.
\end{equation}
In deriving condition \eqref{Fincond} we have made no assumption about the space-time, e.g. whether it is a pp-wave or not. The only requirement is that $K$ be covariantly constant with respect to the Levi-Civita connection compatible with $g_{\mu\nu}$, i.e. $\nabla_\kappa K_{\mu\nu}=0$. We may also notice that on the left hand side of \eqref{Fincond} there appears a metric disformally related to the original $g_{\mu\nu}$,
\begin{equation}\label{disfmet2}
\hat{g}_{\mu\nu} = g_{\mu\nu} - \frac{b}{1-b} M_b^{2} K_{\mu\nu} .
\end{equation}
The Killing vectors of this metric satisfy \eqref{Fincond} with $\Omega=0$. In the next section we proceed to study what happens when $g$ is a pp-wave metric.
\section{The distorted symmetry vector in Finslerian pp-waves and in flat space}
Let us consider the metric $g$ of a pp-wave space-time derived from \eqref{lineel} and take as $K_{\mu\nu}$ the ``square'' of the covariantly constant vector $\ell=\partial_v$. A study of the integrability of the corresponding geodesic equations, based on conventional integrals of motion, has been given previously in \cite{BFgeo}. Here, we concentrate on the quantities that condition \eqref{Fincond} generates. The Finslerian geodesic Lagrangian \eqref{Lag1} is written as
\begin{equation} \label{Lag1pp}
L_F = -m\sqrt{-F^2} = - m \dot{u}^b \left(-H(u,x,y)\dot{u}^2 - 2 \dot{u}\dot{v} - \delta_{ij}\dot{x}^i \dot{x}^j \right)^{\frac{1-b}{2}} .
\end{equation}
Once more, we use the coordinates $\mathrm{x}^\mu = (u,v,x^i)$, as we did in section \ref{secRpp}. In the case $b=0$, the Lagrangian $L_F$ reduces to the usual square root Lagrangian of the Riemannian geodesics. The dynamically equivalent Lagrangian in the einbein formalism is given by
\begin{equation} \label{Lag2}
L = -\frac{1}{2 n} \dot{u}^{2b} \left(-H(u,x,y)\dot{u}^2 - 2 \dot{u}\dot{v} - \delta_{ij}\dot{x}^i \dot{x}^j \right)^{1-b} - n \frac{m^2}{2}
\end{equation}
and it reproduces a set of Euler-Lagrange equations equivalent to those of $L_F$. The constraint equation (the Euler-Lagrange for $n$) leads to
\begin{equation}\label{ruln}
n = \pm \frac{\dot{u}^b}{m} \left(-H(u,x,y)\dot{u}^2 - 2 \dot{u}\dot{v} - \delta_{ij}\dot{x}^i \dot{x}^j \right)^{\frac{1-b}{2}}.
\end{equation}
Substitution of \eqref{ruln} in \eqref{Lag2} gives $L= \pm L_F$. From now on, to be consistent with the sign conventions assumed, wherever $n$ is substituted from \eqref{ruln} the plus root is utilized, so that we obtain the correspondence $L=L_F$.
Since $\ell$ is covariantly constant, the same obviously holds for $K = \ell \otimes \ell$, which we used to write \eqref{Lag1}. Thus, Lagrangians \eqref{Lag1pp} and \eqref{Lag2} are bound to generate the same geodesic equations as the \eqref{LagppR} of the Riemannian case. It can be easily verified that this is true. Consequently, we expect the same number of conserved quantities to be admitted by both systems. However, we need to mention that it will not necessarily be the same vectors that generate the symmetries. This is because Lagrangians \eqref{Lag1pp} and \eqref{Lag2} are distinct from their Riemannian counterparts (obtained when $b=0$); they have a different functional dependence on the velocities and this changes the form of the generating vectors.
An obvious symmetry that remains the same is the one owed to the covariantly constant vector $\ell$, which tells us that the momentum in the $v$ direction is again conserved. Truly, it is easy to see that we have the conserved charge
\begin{equation} \label{Finppintpv}
p_v=\frac{\partial L}{\partial \dot{v}} = (1-b) \frac{\dot{u}^{1+2 b}}{n} \left(-H(u,x,y)\dot{u}^2 - 2 \dot{u}\dot{v} - \delta_{ij}\dot{x}^i \dot{x}^j \right)^{-b} = \pi_v.
\end{equation}
Once more, we use the Greek letter $\pi_v$ to denote the on mass shell constant value of the momentum $p_v$. By combining \eqref{Finppintpv} with \eqref{ruln}, and recalling Eq. \eqref{Mbdef}, it is easy to derive the relation
\begin{equation} \label{bmass}
M_b^2 = \left[ \frac{(1-b)^2 m^2}{\pi_v^2} \right]^{\frac{1}{1+b}}
\end{equation}
among the constants of integration. The latter gives $M_b$ in terms of the mass $m$, the momentum $\pi_v$ and the Lorentz violating parameter $b$; obviously, when $b=0$, $M_0^2 = \frac{m^2}{\pi_v^2}$.
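The chain of substitutions leading from \eqref{Finppintpv} and \eqref{ruln} to \eqref{bmass} can be reproduced symbolically. In the sketch below (ours), the positive symbol $G$ stands for the scalar $-g_{\mu\nu}\dot{\mathrm{x}}^\mu\dot{\mathrm{x}}^\nu$, which is legitimate since only the algebraic combination of powers matters here.

```python
import sympy as sp

udot, G, b, m = sp.symbols('udot G b m', positive=True)

# einbein from the constraint (plus root) and the conserved momentum p_v,
# with G standing for -g_mn xdot^m xdot^n
n = udot**b*G**((1 - b)/2)/m
pv = (1 - b)*udot**(1 + 2*b)/(n*G**b)

# combining the two gives p_v = (1-b) m M_b^{-(1+b)}, where M_b^2 = G/udot^2
Mb2 = G/udot**2
assert sp.simplify(pv - (1 - b)*m*udot**(1 + b)/G**((1 + b)/2)) == 0

# hence M_b^2 = [(1-b)^2 m^2 / p_v^2]^(1/(1+b)); exact spot check at b = 1/3
vals = {b: sp.Rational(1, 3), m: 2, udot: 3, G: 5}
rhs = ((1 - b)**2*m**2/pv**2)**(1/(1 + b))
assert abs((Mb2.subs(vals) - rhs.subs(vals)).evalf()) < 1e-12
print("relation for M_b verified")
```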
By solving equation \eqref{Fincond}, we now obtain the following vector
\begin{equation}\label{symvec}
\Upsilon^\mu = Y^\mu + M_b^2 f_1^\mu + \frac{b}{1-b} M_b^2 f_2^\mu,
\end{equation}
where $Y$ is a conformal Killing vector of the pp-wave metric $g_{\mu\nu}$ (its components are given by Eqs. \eqref{zetackv}) and $f_1$, $f_2$ are the acquired distortions. The first is exactly the same as the one derived in \eqref{coefef} with the identification $\frac{m^2}{\pi_v^2}\rightarrow M_b^2$, i.e.
\begin{subequations} \label{modf1}
\begin{align}
f_1^u = & 0,
\\ \nonumber
f_1^v = & \frac{1}{2} u \left(x^i a_i'(u) - a' (u)+2 \bar{b}(u)-2 \mu v \right) + \frac{1}{2} x^i a_i (u) + \frac{\mu}{4} \delta_{ij} x^i x^j
\\ \label{modf1v}
& + \frac{a(u)}{2} - M_b^2 \frac{\mu}{4} u^2\,,
\\
f_1^{i} = & -\frac{1}{2}u \left(\mu\, x^i + a_i(u) \right).
\end{align}
\end{subequations}
The second distortion is new and contributes only along the $v$ direction
\begin{subequations} \label{modf2}
\begin{align}
& f_2^u =0 ,\quad f_2^i =0 \\ \label{modf2v}
& f_2^v = \frac{\mu}{2} \delta_{ij}x^i x^j + a_i(u) x^i + a(u) .
\end{align}
\end{subequations}
An alternative way to write the expression for $\Upsilon$ using only the $Y$ vector components is
\begin{equation} \label{symvec2}
\Upsilon^\mu = Y^\mu + \sum_{n=1}^2 M_b^{2n} \frac{u^n}{2^{2n-1}}\frac{\partial^n}{\partial v^n} Y^\mu + \frac{b+1}{2 (1-b)} M_b^2 \; \delta^\mu_{\;v} Y^u .
\end{equation}
It can be checked that the quantity $I=\Upsilon^\mu \frac{\partial L}{\partial \dot{x}^\mu}$, made up from the vector \eqref{symvec}, is a constant of the motion, by virtue of the already known conserved charge \eqref{Finppintpv} and the constraint equation, which yields \eqref{ruln}. The corresponding function $\Omega(\mathrm{x})$ for which $\Upsilon$ satisfies \eqref{Fincond} is given by
\begin{equation}
\Omega(\mathrm{x}) = x^i a_i'(u) + \bar{b}(u) - \mu \left( v + \frac{M_b^2}{2} u \right) .
\end{equation}
A first observation regarding the vector \eqref{symvec} is that the Lorentz violating parameter $b$ introduces an additional distortion in terms of the vector $f_2$. In what follows we study this distortion and its nature in more detail. Apart from the pp-wave case, we will also comment separately on what happens in the flat case, $H(u,x,y)=0$. The corresponding higher order symmetries of the flat case have been studied separately in \cite{Dimletter}.
Another interesting point is that, once more, if we take the basic vector \eqref{symvec} and substitute, in place of $M_b^2$, the ratio involving the velocities, i.e. \eqref{Mbdef}, then, exactly as happened in the Riemannian case, the induced vector $\tilde{\Upsilon}:=\Upsilon|_{M_b^2=\frac{\mathcal{K}}{\mathcal{G}}}$ forms a higher order symmetry vector satisfying the infinitesimal criterion of invariance, $\mathrm{pr}^{(1)} \tilde{\Upsilon}(L)=0$. Hence, we again have a higher order Noether symmetry, whose on mass shell reduced expression yields the distorted space-time vector \eqref{symvec}.
\subsection{The non-flat case, $H(u,x,y)\neq 0$}
In the non-flat case, where the metric describes a pp-wave space-time, the most general expression of a Killing vector, $\mathcal{L}_\xi g_{\mu\nu}=0$, is\footnote{We use the $\xi$ here to denote the subset of the $Y$ consisting only of the pure Killing vectors of the metric, i.e. those $Y$ corresponding to a conformal factor of $\omega(\mathrm{x})=0$.} \cite{Maartens}
\begin{equation} \label{Kilgen}
\xi = \left(\alpha u + \beta \right) \partial_u + \left(\sigma -\alpha v - c_i'(u) x^i \right) \partial_v + \left(\gamma \epsilon_{ij}x^i + c_j(u) \right) \partial_j,
\end{equation}
whose components are obtained from \eqref{zetackv} when setting the functions, $a_i(u)$, $\bar{b}(u)$ and the parameter $\mu$ appearing in the conformal factor \eqref{omega} equal to zero and by introducing
\begin{equation} \label{nfkc}
a(u) = \alpha u + \beta, \quad c(u) = \gamma \quad \text{and} \quad M(u,x,y) = \sigma - c_i'(u) x^i.
\end{equation}
Of course $H(u,x,y)$ has to also satisfy a certain partial differential equation, which is obtained from \eqref{rulM1} with the above substitutions together with $a_i(u)=\bar{b}(u)=\mu=0$
\begin{equation}
\frac{1}{2} \left(\alpha u+\beta\right) \partial_u H + \frac{1}{2}\left(c_i(u) - \gamma\, \epsilon_{ij} x^j \right) \partial_i H + \alpha H - c_i''(u) x^i =0 .
\end{equation}
The constant parameters $\alpha, \beta, \gamma$ and $\sigma$ characterize the corresponding mono-parametric groups of motion; of them, the Killing vector owed to the parameter $\sigma$, i.e. $\xi_v= \ell=\partial_v$, is present in all pp-wave space-times. In total, a non-flat pp-wave can admit at most seven Killing vectors \cite{Maartens}.
As we observe from \eqref{modf1} and \eqref{modf2}, the modification associated with the presence of Killing fields involves only the function $a(u)$, since it is the only one appearing in the $f_1^\mu$ and $f_2^\mu$ components that does not belong to the conformal factor $\omega$ of \eqref{omega}. From the linear expression of $a(u)$ in \eqref{nfkc} we can also notice that no modification owed to $f_1^\mu$ can affect a Killing vector: the $\alpha u$ part is automatically cancelled in the component $f_1^v$ by the combination $\frac{1}{2}\left(a-u a'\right)=\frac{\beta}{2}$, as can be seen from \eqref{modf1v}. The remaining $\beta$ constant is nothing but a contribution which can be subtracted as a constant multiple of the already known Killing field $\xi_v=\ell=\partial_v$, and thus $f_1^\mu$ can be cleared of all modifications involving Killing fields.
On the contrary, the $f_2^\mu$ modification is bound to contain the parameter $\alpha$ (when the appropriate Killing field exists); see the $f_2^v$ in \eqref{modf2v}. The parameter $\beta$ can still be removed by subtracting a multiple of the existing symmetry $\ell$. As a result, whenever $g_{\mu\nu}$ is such that the vector associated with $\alpha$,
\begin{equation}
\xi_0 = u \partial_u - v \partial_v,
\end{equation}
is Killing, i.e. $\mathcal{L}_{\xi_0} g_{\mu\nu}=0$, then, $\xi_0$ is broken as a symmetry for the Finslerian space-time with $b\neq0$. However, with the appropriate distortion owed to $f_2$ we may write the
\begin{equation} \label{upsnfk}
\Upsilon_0= u \partial_u + \left( \frac{b}{1-b} M_b^2 u- v \right)\partial_v,
\end{equation}
which generates a conserved charge $I=\Upsilon_0^\mu p_\mu$ in place of the broken symmetry. It is easy to check that $\Upsilon_0$ is a Killing vector of the disformally related metric \eqref{disfmet2}, i.e. $\mathcal{L}_{\Upsilon_0} \hat{g}_{\mu\nu}=0$. In other words, the modification that restores the broken Killing symmetry satisfies \eqref{Fincond} with $\Omega =0$. We thus see that, in contrast to the $b=0$ case, when $b\neq 0$ it is possible to have a modification over a Killing vector field instead of just proper CKVs. The introduction of $b$ can truly break isometries, and one needs to add certain modifications which on the mass shell lead to conserved quantities.
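This claim is easily checked symbolically. The sketch below (ours) picks the representative profile $H=h(x,y)/u^{2}$, for which $\xi_0$ is an isometry of $g$, and verifies that $\Upsilon_0$ is Killing for the disformal metric \eqref{disfmet2} while it fails to be Killing for $g$ itself.

```python
import sympy as sp

u, v, x, y, b, Mb2 = sp.symbols('u v x y b Mb2')
X = [u, v, x, y]
h = sp.Function('h')(x, y)

H = h/u**2                        # profile for which xi_0 = u d_u - v d_v is Killing
g = sp.Matrix([[H, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
K = sp.zeros(4, 4)
K[0, 0] = 1                       # K_mn = ell_m ell_n with ell = d_v, ell_m dx^m = du
ghat = g - b/(1 - b)*Mb2*K        # the disformally related metric

def lie(metric, Y):
    """(L_Y metric)_mn = Y^s d_s g_mn + g_sn d_m Y^s + g_ms d_n Y^s."""
    return sp.Matrix(4, 4, lambda mm, nn: sp.simplify(sum(
        Y[s]*sp.diff(metric[mm, nn], X[s])
        + metric[s, nn]*sp.diff(Y[s], X[mm])
        + metric[mm, s]*sp.diff(Y[s], X[nn]) for s in range(4))))

xi0 = [u, -v, 0, 0]
Ups0 = [u, b/(1 - b)*Mb2*u - v, 0, 0]

assert lie(g, xi0) == sp.zeros(4, 4)        # xi_0 is Killing for g
assert lie(g, Ups0) != sp.zeros(4, 4)       # the distorted vector is not
assert lie(ghat, Ups0) == sp.zeros(4, 4)    # but it is Killing for ghat
print("Upsilon_0 is a Killing vector of the disformal metric")
```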
From the Riemannian case, we remember that the distorted conformal Killing vectors, being the reduced form of some formal symmetries, do not necessarily close an algebra. Here, we shall see that the $b$-dependent modification acquired by $\Upsilon_0$ does not alter its algebra relations with the rest of the symmetry vectors. To demonstrate this, let us break down the $\xi$ of \eqref{Kilgen} with respect to the rest of the parameters. Then, we have the following possible Killing vectors: $\xi_u=\partial_u$, $\xi_{xy} = -y \partial_x + x \partial_y$ and of course the always present $\xi_v=\ell=\partial_v$. In addition to the above we may also have vectors of the form
\begin{equation}
\xi_{c_i} = - c_i'(u) x^i \partial_v + c_i(u) \partial_i .
\end{equation}
It is clear from the above expressions that the only commutator relation that is altered by the modification term appearing in \eqref{upsnfk} is the
\begin{equation}
[\Upsilon_0, \xi_u] = - \xi_u - \frac{b}{1-b} M_b^2 \xi_v .
\end{equation}
Even though this commutator poses no problem for the closing of an algebra, in reality such a situation cannot arise for a non-flat space-time, because $\xi_u$ and the non-modified vector $\xi_0 = u \partial_u - v\partial_v$ cannot both be Killing vectors for a non-flat metric of the form of $g_{\mu\nu}$ (requiring both $\xi_u$ and $\xi_0$ to be Killing leads to $H(u,x,y)=0$). Thus, the modified vector $\Upsilon_0$ is bound to close the same commutator relations as the original $\xi_0$ vector with the rest of the unbroken symmetries.
Finally, we can summarize that, in the pp-wave case, just one of the Killing vectors may acquire a distortion, the vector $\xi_0$. The distortion does not affect the property of closing an algebra with the rest of the Killing vectors of the original metric $g_{\mu\nu}$. Any other acquired distortions will be associated with the existing proper conformal Killing vectors.
\subsection{The flat case. ``Reinstating'' the Poincar\'e algebra.}
The flat space trivially satisfies the same relations as those of the pp-wave case by simply setting $H(u,x,y)=0$. The Bogoslovsky-Finsler line-element given by \eqref{bogoel} yields the space-time of Deformed Very Special Relativity, which, in light-cone coordinates, is written as
\begin{equation} \label{bogoelxp}
ds_F^2 = -\left(-2du dv -dx^2 - dy^2\right)^{1-b} (du)^{2b}.
\end{equation}
As we previously noticed, the geodesic motion is described either by \eqref{Lag1pp} or \eqref{Lag2} with the substitution $H(u,x,y)=0$. It is known that the space-time possesses an eight-dimensional symmetry group, the $DISIM_b(2)$, which was presented in \cite{GiGoPo1} and which is a deformation of the $ISIM(2)$ group of Very Special Relativity \cite{VSL}. The symmetry generators of $DISIM_b(2)$ are given by:
\begin{itemize}
\begin{subequations} \label{disimb}
\item The translations
\begin{equation}
T_u = \partial_u, \quad T_v = \partial_v, \quad T_i = \partial_i
\end{equation}
\item the rotation
\begin{equation}
R = x \partial_y - y \partial_x,
\end{equation}
\item the (combination of boosts and rotations)
\begin{equation}
B_{ui}= u \partial_i - x^i \partial_v
\end{equation}
\end{subequations}
\end{itemize}
and a vector with an explicit dependence on $b$
\begin{equation} \label{Nb8}
\mathcal{N}_b = (b-1) u \partial_u +(1+b) v \partial_v + b x^i \partial_i .
\end{equation}
By looking at \eqref{disimb}, we understand that the existence of the parameter $b$ has broken three of the remaining Poincar\'e symmetries, namely the vectors
\begin{subequations} \label{Poimiss}
\begin{align}
B_0 & = u \partial_u - v \partial_v \\
B_{vi} & = -x^i \partial_u + v \partial_i,
\end{align}
\end{subequations}
which now fail to produce conserved charges for the geodesic motion in the space characterized by \eqref{bogoelxp}.
In \cite{Dimletter}, it was shown that these symmetries are substituted by higher order symmetries, which are associated with the distorted vectors of the type that we study here. If we apply the condition \eqref{Fincond} for the metric $g_{\mu\nu}$ with $H(u,x,y)=0$ we derive, apart from the known symmetries \eqref{disimb}, the following additional vectors
\begin{itemize}
\item The distorted Killing (for $b=0$) vectors
\begin{subequations} \label{distKil}
\begin{align}
& \Upsilon_0 = u \partial_u + \left(\frac{b}{1-b} M_b^2 u - v\right) \partial_v \\
& \Upsilon_{vi} = -x^i \partial_u - \frac{b}{1-b} M_b^2 x^i \partial_v + v \partial_i .
\end{align}
\end{subequations}
\item The distorted proper conformal Killing (when $b=0$, $m=0$) vectors
\begin{subequations} \label{distCKV}
\begin{align} \label{distCKV1}
\Upsilon_D =& \left(M_b^2 u + 2 v\right) \partial_v + x^i \partial_i \\
\Upsilon_K =& u^2 \partial_u + \frac{1}{2} \left(\frac{1+b}{1-b} M_b^2 u^2 - \delta_{ij}x^i x^j \right) \partial_v + u x^i \partial_i \\
\Upsilon_{C_1} =& \frac{\delta_{ij}x^i x^j}{2} \partial_u - \frac{1}{4} \left[ M_b^4 u^2 + M_b^2 \left( 4 u v - \frac{1-b}{1+b}\delta_{ij}x^i x^j \right)+ 4 v^2 \right] \partial_v \nonumber \\
& - \left(\frac{M_b^2}{2} u + v\right)x^i \partial_i \\
\Upsilon_{C_2}^k =& u x^k \partial_u + \left(\frac{1}{1-b}M_b^2 u +v \right)x^k\partial_v - \frac{1}{2} \left(M_b^2 u^2 +2 u v +\delta_{ij}x^i x^j \right) \partial_k + x^k x^i \partial_i.
\end{align}
\end{subequations}
\end{itemize}
All of the above yield conserved quantities that are linear in the momenta; we shall see an example of this later. We notice that the first set, \eqref{distKil}, consists of distortions of the Killing vectors \eqref{Poimiss} of $g_{\mu\nu}$. The $\Upsilon_0$ and $\Upsilon_{vi}$ are themselves Killing vectors, but of the disformally related metric $\hat{g}_{\mu\nu}$ of \eqref{disfmet2}, which in these coordinates reads
\begin{equation} \label{modmet0}
\hat{g}_{\mu\nu} = \begin{pmatrix}
-\frac{b}{1-b}M_b^2 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}.
\end{equation}
Thus, we can now understand how the condition \eqref{Fincond} works for the various vectors. The non-distorted symmetries \eqref{disimb} satisfy \eqref{Fincond} by yielding $\mathcal{L}_X g_{\mu\nu}=\mathcal{L}_X K_{\mu\nu}=0$, where $X$ is any vector of the \eqref{disimb}. The \eqref{distKil} satisfy \eqref{Fincond} by being Killing vectors of the disformally related metric \eqref{modmet0}, i.e. $\mathcal{L}_X \hat{g}_{\mu\nu}=0$, where $X$ is now any of the \eqref{distKil}. Finally, the \eqref{distCKV} are solutions of \eqref{Fincond} for appropriate non-zero $\Omega(\mathrm{x})$.
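As a quick consistency check, not contained in the above discussion: since the entries of \eqref{modmet0} are constant, the Killing equation reduces to $\mathcal{L}_{X} \hat{g}_{\mu\nu} = \hat{g}_{\alpha\nu}\partial_\mu X^\alpha + \hat{g}_{\mu\alpha}\partial_\nu X^\alpha$. For $X=\Upsilon_0$ the only non-vanishing derivatives are $\partial_u \Upsilon_0^u=1$, $\partial_u \Upsilon_0^v=\frac{b}{1-b}M_b^2$ and $\partial_v \Upsilon_0^v=-1$, so that
\begin{equation*}
\left(\mathcal{L}_{\Upsilon_0} \hat{g}\right)_{uu} = 2\left(\hat{g}_{uu} + \frac{b}{1-b}M_b^2\, \hat{g}_{uv}\right) = 2\left(-\frac{b}{1-b}M_b^2 + \frac{b}{1-b}M_b^2\right)=0, \quad \left(\mathcal{L}_{\Upsilon_0} \hat{g}\right)_{uv} = \hat{g}_{uv}-\hat{g}_{uv} = 0,
\end{equation*}
while all remaining components vanish identically, confirming $\mathcal{L}_{\Upsilon_0}\hat{g}_{\mu\nu}=0$.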
An interesting additional observation is that a linear combination of the distorted homothecy $\Upsilon_D$ and the distorted Killing vector $\Upsilon_0$ gives rise to the symmetry vector \eqref{Nb8}, which together with the seven Killing vectors ($T_u,T_v,T_i,R,B_{ui}$) forms the eight-dimensional algebra corresponding to the $DISIM_b(2)$ group of symmetries mentioned at the beginning of the section:
\begin{equation}
\mathcal{N}_b = b \Upsilon_D + (b-1) \Upsilon_0 .
\end{equation}
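For completeness, this combination can be verified componentwise:
\begin{equation*}
b\, \Upsilon_D + (b-1)\, \Upsilon_0 = (b-1) u \partial_u + \left[ b M_b^2 u + 2 b v + (b-1)\left( \frac{b}{1-b} M_b^2 u - v \right) \right] \partial_v + b\, x^i \partial_i .
\end{equation*}
Since $(b-1)\frac{b}{1-b}=-b$, the $M_b^2$ terms cancel and the $\partial_v$ coefficient reduces to $2bv+(1-b)v=(1+b)v$, reproducing \eqref{Nb8}.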
We already stated that the three vectors \eqref{distKil} are Killing vectors of $\hat{g}_{\mu\nu}$; the same holds trivially for the seven \eqref{disimb}, since the action of their Lie derivative returns zero for both $g_{\mu\nu}$ and $K_{\mu\nu}$. Hence, they are bound to close an algebra. The non-zero Lie brackets among these ten vectors are:
\begin{equation} \label{alg1}
\begin{split}
& [T_u,B_{ui}] = T_i, \quad [T_u,\Upsilon_0]=T_u + \frac{b}{1-b} M_b^2 T_v, \quad [T_v,\Upsilon_0]= -T_v \\
& [T_v,\Upsilon_{vi}] = T_i, \quad [T_i,R]=\epsilon_{ij} T_j, \quad [T_i,B_{uj}]= -\delta_{ij} T_v \\
& [T_i,\Upsilon_{vj}] = - \delta_{ij}\left( T_u+ \frac{b}{1-b} M_b^2 T_v \right), \quad [R,B_{ui}] = \epsilon_{ji} B_{uj}\\
& [R,\Upsilon_{vi}] = \epsilon_{ji} \Upsilon_{vj}, \quad [B_{ui},\Upsilon_0] = - B_{ui}, \quad [B_{ui},\Upsilon_{vj}] = \epsilon_{ji}R- \delta_{ij}\Upsilon_0 \\
& [\Upsilon_0,\Upsilon_{vi}] = -\Upsilon_{vi}+\frac{b}{1-b} M_b^2 B_{ui}, \quad [\Upsilon_{vi},\Upsilon_{vj}] = \epsilon_{ji} \frac{b}{1-b} M_b^2 R .
\end{split}
\end{equation}
This is isomorphic to the Poincar\'e algebra, as can be easily seen by observing that the disformally related metric $\hat{g}_{\mu\nu}$ is flat. Hence, its ten Killing vectors \eqref{disimb} and \eqref{distKil} simply span the Poincar\'e algebra expressed in a different coordinate system. We need to be clear however, that the actual symmetry of the space with line-element \eqref{bogoelxp} is still the $DISIM_b(2)$ group. Strictly speaking, the \eqref{distKil} are not formal symmetries. As we already mentioned, such vectors (together with the \eqref{distCKV}) are the on mass shell reduced expressions of higher order symmetries, which upon the reduction happen to yield space-time vectors.
\subsubsection{Constants of the motion and comparison with the Minkowski case}
In order to make direct comparisons with the free relativistic particle in Minkowski space let us use the transformation
\begin{equation} \label{uvtotz}
u = \frac{1}{\sqrt{2}} \left(z-t\right), \quad v = \frac{1}{\sqrt{2}} \left(t+z\right)
\end{equation}
and take some linear combinations of the vectors involved in the algebra \eqref{alg1}, so as to write them as
\begin{equation} \label{genmink}
\begin{split}
\mathcal{T}_\mu & = \frac{\partial}{\partial x^\mu}, \\
\mathcal{L}^{ij} & = x^j \frac{\partial}{\partial x^i} - x^i \frac{\partial}{\partial x^j} - \frac{b M_b^2}{2(1-b)} \left( \delta^j_z x^i - \delta^i_z x^j\right) \left(\frac{\partial}{\partial t}+\frac{\partial}{\partial z} \right), \quad i,j=1,2,3 \\
\mathcal{M}^j & = x^j \frac{\partial}{\partial t}+ t \frac{\partial}{\partial x^j} - \frac{b M_b^2}{2(1-b)} \left(x^j - \delta^j_z t \right)\left(\frac{\partial}{\partial t}+\frac{\partial}{\partial z} \right) .
\end{split}
\end{equation}
Notice that here, for the Cartesian coordinates, we use Latin indices to denote the spatial components. Thus, the $i,j$ in this section run from $1$ to $3$. We additionally observe that, in these coordinates, the only vectors that do not admit a modification based on $b$ are the translations and the rotation in the $x-y$ plane. Of course, by linear combinations one can write as many ``unmodified by $b$'' Killing vectors as in the previous section. However, we choose to use \eqref{genmink} as the basic vectors so that we have a direct comparison with what we know from the classical free relativistic particle problem, when $b=0$ is enforced. Thus, $\mathcal{L}^{ij}$ become the rotations and $\mathcal{M}^j$ the boosts when $b=0$.
Under the use of transformation \eqref{uvtotz} the Lagrangian \eqref{Lag2} becomes
\begin{equation} \label{flatexLag}
L = - \frac{1}{2^{1+b} n } \left(\dot{z} - \dot{t}\right)^{2b} (-\eta_{\mu\nu}\dot{x}^\mu \dot{x}^\nu)^{1-b} - n \frac{m^2}{2},
\end{equation}
where $\eta_{\mu\nu}= \mathrm{diag}(-1,1,1,1)$. For $b=0$, we obviously recover the Lagrangian of a relativistic free particle in flat space. The solution to the Euler-Lagrange equations can be written as
\begin{subequations} \label{flatsol}
\begin{align}
& n(\lambda)= 2^{-\frac{b}{b+1}} (1-b)^{\frac{1-b}{1+b}} m^{-\frac{2 b}{b+1}} (p_0+p_z)^{\frac{2 b}{1+b}} \\
& t(\lambda)= t_0- \frac{1}{2} \left[ \frac{p_x^2+p_y^2}{(p_0+p_z)^2} + 1 + \frac{(1-b)^{\frac{2}{b+1}} m^{\frac{2}{b+1}}}{2^{\frac{b}{b+1}} (p_0+p_z)^{\frac{2}{b+1}}} \right] (p_0+p_z)\lambda \\
& x(\lambda) = p_x \lambda +x_0 \\
& y(\lambda) = p_y \lambda +y_0 \\
& z(\lambda) = z_0 - \frac{1}{2} \left[\frac{p_x^2+p_y^2}{(p_0+p_z)^2} -1 + \frac{(1-b)^{\frac{2}{b+1}} m^{\frac{2}{b+1}}}{2^{\frac{b}{b+1}}(p_0+p_z)^{\frac{2}{b+1}}} \right] (p_0+p_z)\lambda,
\end{align}
\end{subequations}
where we have substituted the $b$-dependent mass \eqref{bmass} as
\begin{equation} \label{rulM}
M_b = \left[\frac{\sqrt{2}(1-b)m}{p_0+p_z} \right]^{\frac{1}{1+b}},
\end{equation}
since we have $p_v=\frac{p_0+p_z}{\sqrt{2}}$ from \eqref{uvtotz}. The $t_0$, $x^i_0$ together with all the $p_\mu$ are constants of integration. Since \eqref{flatexLag} produces equations equivalent to those of the Minkowski case, the solutions $t(\lambda)$, $x(\lambda)$, $y(\lambda)$ and $z(\lambda)$ are bound to be linear in $\lambda$ when the latter is the affine parameter, i.e. when $n(\lambda)=$const. However, the constants of integration are now associated in a different manner with the physical observables, due to the presence of $b$. In \eqref{flatsol}, the constants of integration are arranged so that on mass shell we have
\begin{equation}
\frac{\partial L}{\partial \dot{t}} =p_0, \quad \frac{\partial L}{\partial \dot{x}} =p_x , \quad \frac{\partial L}{\partial \dot{y}} =p_y, \quad \frac{\partial L}{\partial \dot{z}} =p_z .
\end{equation}
As we mentioned, the $p_\mu$ here are all constant. This is owed to the fact that the translations $\mathcal{T}_\mu$ are still symmetries of the problem. In this section we make no reference to the phase-space formalism, so we do not distinguish between the variables $p_\mu$ and their on mass shell constant values; we simply use $p_\mu$ to also denote the constants of integration. It can be seen that when $b=0$, and under the constraint
\begin{equation} \label{rulm}
m^2 = p_0^2-p_x^2-p_y^2-p_z^2,
\end{equation}
the expressions \eqref{flatsol} reduce to the usual
\begin{equation}
n(\lambda)=1, \quad t(\lambda)= t_0 - p_0 \lambda, \quad x^i(\lambda)= x^i_0 + p_i \lambda.
\end{equation}
With the help of \eqref{flatsol} we may write the relation
\begin{equation}
\eta^{\mu\nu} \frac{\partial L}{\partial \dot{x}^\mu}\frac{\partial L}{\partial \dot{x}^\nu} = -2^{-\frac{b}{b+1}} (1+b) (1-b)^{\frac{1-b}{b+1}} m^{\frac{2}{b+1}} (p_0 + p_z)^{\frac{2 b}{b+1}},
\end{equation}
which is exactly equivalent to the one given in \cite{GiGoPo1} for the Hamiltonian constraint (when considering the $p_\mu$ as phase-space variables). Obviously, upon setting $b=0$ we return to the usual Hamiltonian constraint of (special) relativistic motion \eqref{rulm}.
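As a brief check of the $b=0$ limit: the right-hand side then reduces to $-m^2$, while the left-hand side becomes
\begin{equation*}
\eta^{\mu\nu} p_\mu p_\nu \big|_{b=0} = -p_0^2+p_x^2+p_y^2+p_z^2 ,
\end{equation*}
so that the relation collapses precisely to the mass shell condition \eqref{rulm}.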
It is easy to verify that the whole set of \eqref{genmink} produces conserved quantities. Whenever $b=0$ and the constraint among constants \eqref{rulm} is used, they become those generated by the Poincar\'e generators for the free relativistic particle. For example, let us take the modified rotation around the $x$ axis. According to \eqref{genmink} the vector is written as
\begin{equation}
\mathcal{L}^{yz} = - \frac{b M_b^2}{2(1-b)} y \frac{\partial}{\partial t} + z \frac{\partial}{\partial y} - \left[1 + \frac{b M_b^2}{2(1-b)} \right] y \frac{\partial}{\partial z} .
\end{equation}
We expect a constant of motion to be given by the quantity
\begin{equation}
I_{yz} = (\mathcal{L}^{yz})^\mu \frac{\partial L}{\partial \dot{x}^\mu}
\end{equation}
which yields
\begin{equation}
\begin{split}
I_{yz} = & \frac{2^{-(1+b)}}{n\left(\dot{t}^2-\dot{x}^2-\dot{y}^2-\dot{z}^2\right)^{b}} \left(\dot{z}-\dot{t}\right)^{2 b-1} \Bigg[2 (1-b) z \dot{y} \left(\dot{z}-\dot{t}\right) \\
& - y \Big(b \left(M_b^2-2\right) \dot{t}^2-2 \left(b \left(M_b^2-1\right)+1\right) \dot{t} \dot{z}+b M_b^2 \dot{z}^2+2 \left( b \dot{x}^2+ b \dot{y}^2+\dot{z}^2\right)\Big) \Bigg] .
\end{split}
\end{equation}
Direct use of \eqref{flatsol} demonstrates that $I_{yz}$ is indeed a constant of motion on mass shell, acquiring the value
\begin{equation}
\begin{split}
I_{yz} = \frac{(p_0+p_z)^{-\frac{b+3}{b+1}}}{2^{\frac{2 b+1}{b+1}} } \Big[& 2^{\frac{b}{b+1}} (p_0+p_z)^{\frac{2}{b+1}} \left( \left(p_x^2+p_y^2-\left(p_0+p_z\right)^2\right)y_0 +2 p_y (p_0+p_z) z_0 \right) \\
&+ (1-b)^{\frac{2}{b+1}} m^{\frac{2}{b+1}} (p_0+p_z)^2 y_0 \Big],
\end{split}
\end{equation}
where \eqref{rulM} has been used so that the physical mass appears in the expression. Upon substitution of the latter from \eqref{rulm} and by setting $b=0$, the above relation becomes none other than
\begin{equation}
I_{yz}|_{b=0} = z_0 p_y - y_0 p_z,
\end{equation}
which is the usual angular momentum in the $x$ direction.
The same is of course true for the conserved quantities constructed with the distorted conformal Killing vectors. For example, let us take the $\Upsilon_D$ of \eqref{distCKV1}, which, in Cartesian coordinates, after performing the change \eqref{uvtotz}, becomes
\begin{equation}
\Upsilon_D = \left(t+z +\frac{1}{2} M_b^2 \left(z-t \right) \right) \left(\partial_t + \partial_z\right) + x \partial_x + y \partial_y .
\end{equation}
With the use of \eqref{flatsol} we calculate the on mass shell constant value of the charge,
\begin{equation}
I_{D} = \Upsilon_D^\mu p_\mu = p_x x_0 +p_y y_0 + (p_0 +p_z) (t_0+z_0) + \frac{(1-b)^{\frac{2}{1+b}} m^{\frac{2}{1+b}}}{2^{\frac{b}{1+b}}} (p_0 +p_z)^{\frac{b-1}{1+b}} (z_0-t_0) ,
\end{equation}
where once more \eqref{rulM} has been used. By setting $b=0$ we obtain the mass distorted charge of the Minkowski case
\begin{equation}
I_{D}|_{b=0} = p_x x_0 +p_y y_0 + (p_0 +p_z) (t_0+z_0) + m^{2} \frac{z_0-t_0}{p_0+p_z}
\end{equation}
and further, for $m=0$ we obtain the conserved charge of the null geodesics generated by the corresponding pure conformal Killing vector.
The disformally related metric \eqref{modmet0} in the Cartesian coordinates becomes
\begin{equation} \label{modmet}
\hat{g}_{\mu\nu} = \begin{pmatrix}
-\left(1 + \frac{b M_b^2}{2(1-b)} \right) & 0 & 0 & \frac{b M_b^2}{2(1-b)} \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\frac{b M_b^2}{2(1-b)} & 0 & 0 & 1 - \frac{b M_b^2}{2(1-b)}
\end{pmatrix},
\end{equation}
and of course the \eqref{genmink} are its Killing vectors. We can write \eqref{modmet} as
\begin{equation}\label{disfmetflat}
\hat{g}_{\mu\nu} = \eta_{\mu\nu} - \frac{b M_b^2}{1-b} \partial_\mu \phi \partial_\nu \phi
\end{equation}
with the introduction of a scalar $\phi = \frac{1}{\sqrt{2}}\left(z-t \right)$, so as to be consistent with the original definition of disformal transformations by Bekenstein. It can be seen that light-like geodesics are not preserved when passing from $\eta_{\mu\nu}$ to $\hat{g}_{\mu\nu}$ for $b\neq 0$. However, the causal structure is not affected, since for any vector $A^\mu$ for which $\eta_{\mu\nu} A^\mu A^\nu <0$, we obtain $\hat{g}_{\mu\nu} A^\mu A^\nu <0$ as long as $0 <b <1$. Of course, $\hat{g}_{\mu\nu}$ and $\eta_{\mu\nu}$ both describe a flat space; we perform the aforementioned comparison of $\hat{g}_{\mu\nu} A^\mu A^\nu$ with $\eta_{\mu\nu} A^\mu A^\nu$ by considering the coordinate system fixed. In other words, we take $\hat{g}_{\mu\nu}$ and $\eta_{\mu\nu}$ to be different metrics written in the same coordinate system, not the same metric expressed in different coordinate systems. A comparison from this point of view is reasonable if we remember that the $b\neq0$ case actually describes motion in a Finslerian geometry; the $\hat{g}_{\mu\nu}$ we write here serves as a (pseudo-)Riemannian ``simulation'' of how the motion looks for a massive particle of mass $m$.
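The statement about the causal structure follows directly from \eqref{disfmetflat}: for any vector $A^\mu$ we have
\begin{equation*}
\hat{g}_{\mu\nu} A^\mu A^\nu = \eta_{\mu\nu} A^\mu A^\nu - \frac{b M_b^2}{1-b} \left( A^\mu \partial_\mu \phi \right)^2 \leq \eta_{\mu\nu} A^\mu A^\nu ,
\end{equation*}
since the coefficient $\frac{b M_b^2}{1-b}$ is positive for $0<b<1$. Hence $\eta_{\mu\nu}A^\mu A^\nu<0$ implies $\hat{g}_{\mu\nu}A^\mu A^\nu<0$, while a null vector of $\eta_{\mu\nu}$ with $A^\mu \partial_\mu \phi \neq 0$ becomes timelike with respect to $\hat{g}_{\mu\nu}$, which is why the light-like geodesics are not preserved.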
In figure \ref{lcones} we draw the light cones\footnote{By using the term light cones here we do not imply the $m=0$ surfaces of the initial problem, but the geometric surfaces $\hat{g}_{\mu\nu} A^\mu A^\nu =0$ and how they differ from $\eta_{\mu\nu} A^\mu A^\nu =0$.} given by the metric $\hat{g}_{\mu\nu}$, depending on the value of $b$, while keeping fixed the ratio in which the physical mass $m$ is involved: $\frac{m}{\pi_v}=\frac{\sqrt{2} m}{p_0+p_z}=1$. The first layer from the top corresponds to $b=0$, where we have the typical Minkowski metric. The intermediate layer is for $b=\frac{1}{10}\Rightarrow M_b\simeq 0.995$ and the last corresponds to $b=\frac{1}{2}\Rightarrow M_b\simeq 0.909$. We observe the expected deviation from isotropy in the $z$ direction, and from the $b=0$ surface of Special Relativity, as $b$ becomes larger. A physically reasonable value for the Lorentz-violating parameter $b$, however, is far more minuscule: $b<10^{-26}$ \cite{GiGoPo1}.
\begin{figure}[ht]
\hspace{2cm}\includegraphics[scale=0.6]{lightcones.pdf}
\caption{Light cones in the $x-z$ plane for (starting from the upper surface) $b=0$, $b=\frac{1}{10}$ and $b=\frac{1}{2}$. In all cases we have considered $\frac{\sqrt{2} m}{p_0+p_z}=1$.\label{lcones}}
\end{figure}
\section{Conclusion}
We examined how the elements of the conformal algebra of a given geometry may admit appropriate distortions, which lead to additional conserved quantities. We observed that these distortions are related to parameters that bring about an explicit symmetry breaking effect at the level of the geodesic equations. What is more, the resulting distorted vectors are generators of certain disformal transformations of the metric. We established a connection between these distorted conformal Killing vectors and higher order or hidden symmetries of the relevant problem. The corresponding conserved Noether charges are in general rational functions of the momenta, which conveniently reduce to linear expressions on the mass shell.
We initiated our presentation by studying the geodesic motion in a (pseudo-)Riemannian space. The resulting distorted vectors in this case are owed to the mass, which breaks the proper conformal symmetries for non-null geodesics. The proper CKVs acquire mass dependent distortions in order to continue producing conserved quantities. We derived the geometric condition for such distortions to emerge. In short, the space-time needs to admit a second rank Killing tensor and additionally there has to exist a coordinate transformation mapping the original metric to another, disformally related one, which makes use of the same Killing tensor. Our basic example of a geometry satisfying the necessary conditions has been that of a generic pp-wave space-time. We wrote all the resulting distortions of the proper conformal Killing vectors and their connection to higher order symmetries. Apart from the pp-wave case however, we also presented an additional novel example in the form of the de Sitter solution of Einstein's equations with a cosmological constant. In the pp-wave case the distortion appeared in an additive manner, while in the de Sitter case it assumes a more complicated form.
The consideration of a Finslerian geometry in the form of the generalized Bogoslovsky line-element revealed that the additional introduction of a (Lorentz) symmetry breaking parameter follows a similar pattern. This time, again for a pp-wave space-time, one of the Killing symmetries is lost due to the newly introduced parameter. The latter becomes involved in an appropriate distortion, which reveals a conserved quantity that takes the place of the one lost from the missing symmetry. For the rest of the proper conformal Killing vectors, distortions similar to those of the Riemannian case appear. We derived explicitly all the relevant expressions and noticed that again the ensuing vectors are associated with higher order symmetries. We further investigated the general geometric conditions that are necessary for such a type of symmetry to emerge. Finally, we briefly mentioned how all this applies to the flat case, where three of the original Killing vectors acquire the necessary distortions. The distorted Killing vectors lead naturally to a disformally related metric by assuming the role of its isometries. We used this exact metric to compare the motion of the Finslerian case to the one taking place in the Minkowski space-time of Special Relativity.
By looking at the necessary condition that we derived for the Finslerian line-element, Eq. \eqref{Fincond}, we notice that, in contrast to the (pseudo-)Riemannian case, there exists an additional restriction; the condition requires the Killing tensor admitted by the base metric to be covariantly constant. This is easily satisfied in the pp-wave case, by simply utilizing the trivial tensor $K=\ell\otimes\ell$, constructed by the covariantly constant Killing vector $\ell$, which all pp-waves possess. However, the particular example of the de Sitter space, that we considered in the Riemannian case, does not apply here, since the relevant Killing tensor that we used there is not covariantly constant. This leaves an open question on whether we can incorporate other geometries in the Finslerian case, which can make use of \eqref{Fincond}.
It is particularly interesting how well-suited the pp-waves are in what regards the emergence of this type of symmetry. They conveniently satisfy all the necessary conditions, both in the Riemannian case and in the Finslerian generalization. This adds to the intriguing geometrical properties that these space-times possess and justifies their importance in physical theories. Further study is necessary, however, in order to reveal other types of geometries where hidden symmetries can be reduced in such a way as to be mapped to distortions of the conformal structure. The example of the de Sitter metric in the Riemannian case shows that this is in general possible and that novel symmetries can be revealed for certain space-times.
Finally, we need to also comment on how explicit symmetry breaking effects, either because of the mass or due to some Lorentz violating parameter, lead to the appearance of hidden symmetries in the relevant theories. What is more, the latter seem to substitute the ones which were broken by the introduction of the relevant parameter. This is a subject that certainly requires further attention and the study of additional examples.
\section*{Acknowledgement}
This work was supported by the Fundamental Research Funds for the Central Universities, Sichuan University Full-time Postdoctoral Research and Development Fund No. 2021SCU12117.
\noindent Throughout this article, we assume the readers are familiar with
the fundamental results and standard notations of the Nevanlinna
distribution theory of meromorphic functions such as $m(r,f),$ $N(r,f),$
M(r,f),$ $T(r,f),$ which can be found in $\left[ 13,14,25\right] .$\ The
concepts of logarithmic order and logarithmic type of entire or meromorphic
functions were introduced by Chern, $\left[ 9,10\right] $. Since then, many
authors used them in order to generalize previous results obtained on the
growth of solutions of linear difference equations and linear differential
equations in which the coefficients are entire or meromorphic functions in
the complex plane $\mathbb{C}$ of positive order different to zero, see for
example $\left[ 1,6,11,18,19,21,22\right] $, their new results were on the
logarithmic order, the logarithmic lower order and the logarithmic exponent
of convergence, where they considered the case when the coefficients are of
zero order see, for example, $\left[ 2,3,4,7,12,16,17,23\right] $. In this
article, we also use these concepts to investigate the lower logarithmic
order of solutions to more general homogeneous and non homogeneous linear
delay-differential equations, where we generalize those results obtained in
\left[ 5,8\right] $. We start by stating some important definitions.
\quad
\noindent \textbf{Definition 1.1} $\left( \left[ 3,10\right] \right) $ The
logarithmic order and the logarithmic lower order of a meromorphic function
$f$ are defined by
\begin{equation*}
\rho _{\log }(f)=\limsup_{r\longrightarrow +\infty }\frac{\log T(r,f)}{\log
\log r},\quad \mu _{\log }(f)=\liminf_{r\longrightarrow +\infty }\frac{\log
T(r,f)}{\log \log r},
\end{equation*}
where $T(r,f)$ denotes the Nevanlinna characteristic of the function $f$. If $f$
is an entire function, then
\begin{equation*}
\rho _{\log }(f)=\limsup_{r\longrightarrow +\infty }\frac{\log \log M(r,f)}{\log \log r}=\limsup_{r\longrightarrow +\infty }\frac{\log T(r,f)}{\log \log r},
\end{equation*}
\begin{equation*}
\mu _{\log }(f)=\liminf_{r\longrightarrow +\infty }\frac{\log \log M(r,f)}{\log \log r}=\liminf_{r\longrightarrow +\infty }\frac{\log T(r,f)}{\log \log r},
\end{equation*}
where $M(r,f)$ denotes the maximum modulus of $f$ on the circle $\left\vert z\right\vert =r$.
\noindent \qquad It is clear that the logarithmic order of any non-constant
rational function $f$ is one, and thus any transcendental meromorphic
function in the plane has logarithmic order no less than one. Moreover, any
meromorphic function of finite logarithmic order in the plane is of order
zero.
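As a simple illustration of the first claim, for a non-constant polynomial $P$ of degree $n$ (and similarly for any non-constant rational function) we have $T(r,P)=n\log r+O(1)$, whence
\begin{equation*}
\rho _{\log }(P)=\lim_{r\longrightarrow +\infty }\frac{\log \left( n\log r+O(1)\right) }{\log \log r}=\lim_{r\longrightarrow +\infty }\frac{\log \log r+O(1)}{\log \log r}=1.
\end{equation*}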
\quad
\noindent \textbf{Definition 1.2 }$\left( \left[ 3,7\right] \right) $ The
logarithmic type and the logarithmic lower type of a meromorphic function $f$
are defined by
\begin{equation*}
\tau _{\log }(f)=\limsup_{r\longrightarrow +\infty }\frac{T(r,f)}{(\log r)^{\rho _{\log }(f)}},\quad \underline{\tau }_{\log }(f)=\liminf_{r\longrightarrow +\infty }\frac{T(r,f)}{(\log r)^{\mu _{\log }(f)}}.
\end{equation*}
If $f$ is an entire function, then
\begin{equation*}
\tau _{\log }(f)=\limsup_{r\longrightarrow +\infty }\frac{\log M(r,f)}{(\log r)^{\rho _{\log }(f)}}=\limsup_{r\longrightarrow +\infty }\frac{T(r,f)}{(\log r)^{\rho _{\log }(f)}},
\end{equation*}
\begin{equation*}
\underline{\tau }_{\log }(f)=\liminf_{r\longrightarrow +\infty }\frac{\log M(r,f)}{(\log r)^{\mu _{\log }(f)}}=\liminf_{r\longrightarrow +\infty }\frac{T(r,f)}{(\log r)^{\mu _{\log }(f)}}.
\end{equation*}
It is clear that the logarithmic type of any non-constant polynomial $P$
equals its degree $\deg P$, that any non-constant rational function is of
finite logarithmic type, and that any transcendental meromorphic function
whose logarithmic order equals one in the plane must be of infinite
logarithmic type.
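For instance, for a non-constant polynomial $P$ of degree $n$, so that $\rho _{\log }(P)=1$ and $\log M(r,P)=n\log r+O(1)$, one computes
\begin{equation*}
\tau _{\log }(P)=\lim_{r\longrightarrow +\infty }\frac{n\log r+O(1)}{\log r}=n=\deg P,
\end{equation*}
in accordance with the first assertion above.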
\quad
\noindent \textbf{Definition 1.3 }$\left( \left[ 10\right] \right) $\textbf{\ }Let $f$ be a meromorphic function. Then, the logarithmic exponent of
convergence of poles of $f$ is defined by
\begin{equation*}
\lambda _{\log }\left( \frac{1}{f}\right) =\limsup_{r\longrightarrow +\infty
}\frac{\log n(r,f)}{\log \log r}=\limsup_{r\longrightarrow +\infty }\frac{\log N(r,f)}{\log \log r}-1,
\end{equation*}
where $n(r,f)$ denotes the number of poles and $N(r,f)$ is the counting
function of poles of $f$ in the disc $\left\vert z\right\vert \leq r$.
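For example, for a meromorphic function $f$ whose poles are located precisely at the points $z_{k}=e^{k}$ $(k=1,2,\ldots )$, we have $n(r,f)=\log r+O(1)$, and hence
\begin{equation*}
\lambda _{\log }\left( \frac{1}{f}\right) =\lim_{r\longrightarrow +\infty }\frac{\log \left( \log r+O(1)\right) }{\log \log r}=1.
\end{equation*}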
\quad
\noindent \textbf{Definition 1.4 }$\left( \left[ 25\right] \right) $ Let
$a\in \overline{\mathbb{C}}=\mathbb{C}\cup \{\infty \}$; the deficiency of $a$
with respect to a meromorphic function $f$ is given by
\begin{equation*}
\delta \left( a,f\right) =\underset{r\rightarrow +\infty }{\lim \inf }\frac{m\left( r,\frac{1}{f-a}\right) }{T\left( r,f\right) }=1-\underset{r\rightarrow +\infty }{\lim \sup }\frac{N\left( r,\frac{1}{f-a}\right) }{T\left( r,f\right) }.
\end{equation*}
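In particular, for a non-constant entire function $f$ we have $N(r,f)=0$, so that
\begin{equation*}
\delta \left( \infty ,f\right) =1-\underset{r\rightarrow +\infty }{\lim \sup }\frac{N(r,f)}{T(r,f)}=1;
\end{equation*}
moreover, the first main theorem gives $0\leq \delta (a,f)\leq 1$ for every $a\in \overline{\mathbb{C}}$.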
\noindent \qquad Recently, the research on the properties of meromorphic
solutions of complex delay-differential equations has become a subject of
great interest from the viewpoint of Nevanlinna theory and its difference
analogues. In $\left[ 20\right] $, Liu, Laine and Yang presented
developments and new results on complex delay-differential equations, an
area with important and interesting applications, which also gathers
increasing attention (see $\left[ 4,5,8,24\right] $). In $\left[ 8\right] $,
Chen and Zheng considered the following homogeneous complex
delay-differential equation
\begin{equation}
\sum_{i=0}^{n}\sum_{j=0}^{m}A_{ij}(z)f^{(j)}(z+c_{i})=0, \tag{1.1}
\end{equation}
where $A_{ij}(z)$ $(i=0,1,\ldots ,n,\ j=0,1,\ldots ,m)$ are entire
functions of finite order and $c_{i}$ $(i=0,\ldots ,n)$ are distinct non-zero
complex constants, and they proved the following results.
\quad
\noindent \textbf{Theorem A }$\left( \left[ 8\right] \right) $ \textit{Let }
$A_{ij}(z)$\textit{\ }$(i=0,1,\ldots ,n,j=0,1,\ldots ,m)$\textit{\ be entire
functions, and }$a,l\in \left\{ 0,1,...,n\right\} ,$\textit{\ }$b\in
\{0,1,...,m\}$\textit{\ such that }$(a,b)\not=(l,0).$\textit{\ If the
following three assumptions hold simultaneously}:
\begin{enumerate}
\item $\max \{\mu (A_{ab}),\rho (A_{ij})\colon (i,j)\neq (a,b),(l,0)\}\leq
\mu (A_{l0})<\infty ,\mu (A_{l0})>0;$
\item $\underline{\tau }_{M}(A_{l0})>\underline{\tau }_{M}(A_{ab}),$ \textit{when} $\mu (A_{l0})=\mu (A_{ab});$
\item $\underline{\tau }_{M}(A_{l0})>\max \{\tau _{M}(A_{ij})\colon \rho
(A_{ij})=\mu (A_{l0}),\ (i,j)\neq (a,b),(l,0)\},$ \textit{when} $\mu
(A_{l0})=\max \{\rho (A_{ij})\colon (i,j)\neq (a,b),(l,0)\},$
\end{enumerate}
\textit{then any meromorphic solution }$f(z)(\not\equiv 0)$\textit{\ of }$\left( 1.1\right) $\textit{\ satisfies }$\rho (f)\geq \mu (A_{l0})+1.$
\quad
\noindent \textbf{Theorem B }$\left( \left[ 8\right] \right) $ \textit{Let }$A_{ij}(z)$ $(i=0,1,\ldots ,n,\ j=0,1,\ldots ,m)$\textit{\ be meromorphic functions, and let }$a,l\in \left\{ 0,1,...,n\right\} ,$ $b\in \{0,1,...,m\}$\textit{\ be such that }$(a,b)\not=(l,0).$\textit{\ If the following four assumptions hold simultaneously}:

\begin{enumerate}
\item $\delta (\infty ,A_{l0})=\delta >0;$

\item $\max \{\mu (A_{ab}),\rho (A_{ij})\colon (i,j)\neq (a,b),(l,0)\}\leq \mu (A_{l0})<\infty ,$ $\mu (A_{l0})>0;$

\item $\delta \underline{\tau }(A_{l0})>\underline{\tau }(A_{ab}),$ \textit{when} $\mu (A_{l0})=\mu (A_{ab});$

\item $\delta \underline{\tau }(A_{l0})>\max \{\tau (A_{ij})\colon \rho (A_{ij})=\mu (A_{l0}),\ (i,j)\neq (a,b),(l,0)\},$ \textit{when} $\mu (A_{l0})=\max \{\rho (A_{ij})\colon (i,j)\neq (a,b),(l,0)\},$
\end{enumerate}

\textit{then any meromorphic solution }$f(z)(\not\equiv 0)$\textit{\ of }$\left( 1.1\right) $\textit{\ satisfies }$\rho (f)\geq \mu (A_{l0})+1.$
\quad
\noindent \qquad Further, Bellaama and Bela\"{\i}di in $\left[ 5\right] $
extended the previous results to the non-homogeneous delay-differential
equation
\begin{equation}
\sum_{i=0}^{n}\sum_{j=0}^{m}A_{ij}(z)f^{(j)}(z+c_{i})=F(z), \tag{1.2}
\end{equation}
where $A_{ij}(z)$ $(i=0,1,\ldots ,n,\ j=0,1,\ldots ,m,$ $n,m\in \mathbb{N}),$ $\mathbb{N}=\left\{ 0,1,2,\cdots \right\} $ denoting the set of natural numbers, and $F(z)$ are meromorphic or entire functions of finite order, and $c_{i}$ $(i=0,\ldots ,n)$ are distinct non-zero complex constants. They obtained the following theorems for the homogeneous and non-homogeneous cases.
\quad
\noindent \textbf{Theorem C }$\left( \left[ 5\right] \right) $ \textit{Consider equation }$\left( 1.2\right) $\textit{\ with entire coefficients. Suppose that one of the coefficients, say }$A_{l0}$\textit{\ with }$\mu (A_{l0})>0$\textit{, is dominant in the sense that}:

\begin{enumerate}
\item $\max \{\mu (A_{ab}),\rho (S)\}\leq \mu (A_{l0})<\infty ;$

\item $\underline{\tau }_{M}(A_{l0})>\underline{\tau }_{M}(A_{ab}),$ \textit{whenever} $\mu (A_{l0})=\mu (A_{ab});$

\item $\underline{\tau }_{M}(A_{l0})>\max \{\tau _{M}(g)\colon \rho (g)=\mu (A_{l0}),\ g\in S\},$ \textit{whenever} $\mu (A_{l0})=\rho (S),$ \textit{where} $S:=\{F,A_{ij}\colon (i,j)\neq (a,b),(l,0)\}$ \textit{and} $\rho (S):=\max \{\rho (g)\colon g\in S\}.$
\end{enumerate}

\textit{Then any meromorphic solution }$f$\textit{\ of }$\left( 1.2\right) $\textit{\ satisfies }$\rho (f)\geq \mu (A_{l0})$\textit{\ if }$F(z)(\not\equiv 0).$\textit{\ Further, if }$F(z)(\equiv 0),$\textit{\ then any meromorphic solution }$f(z)(\not\equiv 0)$\textit{\ of }$\left( 1.1\right) $ \textit{satisfies }$\rho (f)\geq \mu (A_{l0})+1.$
\quad
\noindent \textbf{Theorem D }$\left( \left[ 5\right] \right) $ \textit{Consider equation }$\left( 1.2\right) $\textit{\ with meromorphic coefficients. Suppose that one of the coefficients, say }$A_{l0}$\textit{\ with }$\mu (A_{l0})>0$\textit{, is dominant in the sense that}:

\begin{enumerate}
\item $\max \{\mu (A_{ab}),\rho (S)\}\leq \mu (A_{l0})<\infty ;$

\item $\underline{\tau }(A_{l0})>\underline{\tau }(A_{ab}),$ \textit{whenever} $\mu (A_{l0})=\mu (A_{ab});$

\item $\sum_{\rho (A_{ij})=\mu (A_{l0}),(i,j)\not=(l,0),(a,b)}\tau (A_{ij})+\tau (F)<\underline{\tau }(A_{l0})<\infty ,$ \textit{whenever} $\mu (A_{l0})=\rho (S);$

\item $\sum_{\rho (A_{ij})=\mu (A_{l0}),(i,j)\not=(l,0),(a,b)}\tau (A_{ij})+\underline{\tau }(A_{ab})<\underline{\tau }(A_{l0})<\infty ,$ \textit{whenever} $\mu (A_{l0})=\mu (A_{ab})=\rho (S);$

\item $\lambda \left( \frac{1}{A_{l0}}\right) <\mu (A_{l0})<\infty .$
\end{enumerate}

\textit{Then any meromorphic solution }$f$\textit{\ of }$\left( 1.2\right) $\textit{\ satisfies }$\rho (f)\geq \mu (A_{l0})$\textit{\ if }$F(z)(\not\equiv 0).$\textit{\ Further, if }$F(z)(\equiv 0),$\textit{\ then any meromorphic solution }$f(z)(\not\equiv 0)$\textit{\ of }$\left( 1.1\right) $\textit{\ satisfies }$\rho (f)\geq \mu (A_{l0})+1.$
\quad
\noindent Note that the case when the coefficients are of order zero is not
covered by the above results. Since the logarithmic order is an effective
tool for describing the growth of solutions of linear difference equations
and linear differential equations even when the coefficients are entire or
meromorphic functions of order zero, our main aim in this article is to
investigate the logarithmic order of meromorphic solutions of equations
$\left( 1.1\right) $ and $\left( 1.2\right) $, extending and improving the
above theorems. When the coefficients of $\left( 1.1\right) $ and $\left(
1.2\right) $ are meromorphic functions and one coefficient dominates by its
logarithmic lower order or by its logarithmic lower type, we obtain the
following two theorems.
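\noindent For instance, the entire function $f(z)=\prod_{n=1}^{\infty }\left( 1+\frac{z}{2^{n}}\right) $ has usual order $\rho (f)=0,$ while a standard computation gives $T\left( r,f\right) \sim \frac{\left( \log r\right) ^{2}}{2\log 2}$ as $r\rightarrow +\infty ,$ so that $\rho _{\log }(f)=\mu _{\log }(f)=2.$ The logarithmic order thus distinguishes growth rates which the usual order cannot detect.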
\quad
\noindent \textbf{Theorem 1.1} \textit{Let }$A_{ij}(z)$ $(i=0,1,\ldots ,n,\ j=0,1,\ldots ,m,$ $n,m\in \mathbb{N})$\textit{\ be meromorphic functions, and let }$a,l\in \left\{ 0,1,...,n\right\} ,$ $b\in \{0,1,...,m\}$\textit{\ be such that }$(a,b)\not=(l,0).$\textit{\ Suppose that one of the coefficients, say }$A_{l0}$\textit{\ with }$\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1<\mu _{\log }(A_{l0})<\infty ,$\textit{\ is dominant in the sense that}:

\noindent $\left( \text{i}\right) $ $\max \{\mu _{\log }(A_{ab}),\rho _{\log }(S)\}\leq \mu _{\log }(A_{l0})<\infty ;$

\noindent $\left( \text{ii}\right) $ $\underline{\tau }_{\log }(A_{l0})>\underline{\tau }_{\log }(A_{ab}),$ \textit{whenever} $\mu _{\log }(A_{l0})=\mu _{\log }(A_{ab});$

\noindent $\left( \text{iii}\right) $ $\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log }(F)<\underline{\tau }_{\log }(A_{l0})<\infty ,$ \textit{whenever} $\mu _{\log }(A_{l0})=\rho _{\log }(S);$

\noindent $\left( \text{iv}\right) $ $\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log }(F)+\underline{\tau }_{\log }(A_{ab})<\underline{\tau }_{\log }(A_{l0})<\infty ,$ \textit{whenever} $\mu _{\log }(A_{l0})=\mu _{\log }(A_{ab})=\rho _{\log }(S),$ \textit{where} $S:=\{F,A_{ij}\colon (i,j)\neq (a,b),(l,0)\}$ \textit{and} $\rho _{\log }(S):=\max \{\rho _{\log }(g)\colon g\in S\}.$

\noindent \textit{Then any meromorphic solution }$f$\textit{\ of }$\left( 1.2\right) $\textit{\ satisfies }$\rho _{\log }(f)\geq \mu _{\log }(A_{l0})$\textit{\ if }$F(z)(\not\equiv 0).$\textit{\ Further, if }$F(z)(\equiv 0),$\textit{\ then any meromorphic solution }$f(z)(\not\equiv 0)$\textit{\ of }$\left( 1.1\right) $\textit{\ satisfies }$\rho _{\log }(f)\geq \mu _{\log }(A_{l0})+1.$
\quad
\noindent \textbf{Theorem 1.2} \textit{Let }$A_{ij}(z)$ $(i=0,1,\ldots ,n,\ j=0,1,\ldots ,m,$ $n,m\in \mathbb{N})$\textit{\ be meromorphic functions, and let }$a,l\in \left\{ 0,1,...,n\right\} ,$ $b\in \{0,1,...,m\}$\textit{\ be such that }$(a,b)\not=(l,0).$\textit{\ Suppose that one of the coefficients, say }$A_{l0}$\textit{\ with }$\mu (A_{l0})>0$ \textit{and} $\delta (\infty ,A_{l0})=\delta >0,$\textit{\ is dominant in the sense that}:

\noindent $\left( \text{i}\right) $ $\max \{\mu _{\log }(A_{ab}),\rho _{\log }(S)\}\leq \mu _{\log }(A_{l0})<\infty ;$

\noindent $\left( \text{ii}\right) $ $\delta \underline{\tau }_{\log }(A_{l0})>\underline{\tau }_{\log }(A_{ab}),$ \textit{whenever} $\mu _{\log }(A_{l0})=\mu _{\log }(A_{ab});$

\noindent $\left( \text{iii}\right) $ $\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log }(F)<\delta \underline{\tau }_{\log }(A_{l0})<\infty ,$ \textit{whenever} $\mu _{\log }(A_{l0})=\rho _{\log }(S);$

\noindent $\left( \text{iv}\right) $ $\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log }(F)+\underline{\tau }_{\log }(A_{ab})<\delta \underline{\tau }_{\log }(A_{l0})<\infty ,$ \textit{whenever} $\mu _{\log }(A_{l0})=\mu _{\log }(A_{ab})=\rho _{\log }(S),$ \textit{where} $S:=\{F,A_{ij}\colon (i,j)\neq (a,b),(l,0)\}$ \textit{and} $\rho _{\log }(S):=\max \{\rho _{\log }(g)\colon g\in S\}.$

\noindent \textit{Then any meromorphic solution }$f$\textit{\ of }$\left( 1.2\right) $\textit{\ satisfies }$\rho _{\log }(f)\geq \mu _{\log }(A_{l0})$\textit{\ if }$F(z)(\not\equiv 0).$\textit{\ Further, if }$F(z)(\equiv 0),$\textit{\ then any meromorphic solution }$f(z)(\not\equiv 0)$\textit{\ of }$\left( 1.1\right) $\textit{\ satisfies }$\rho _{\log }(f)\geq \mu _{\log }(A_{l0})+1.$
\section*{\textbf{Some lemmas}}
\noindent The following lemmas are important to our proofs.
\quad
\noindent \textbf{Lemma 2.1} ($\left[ 15\right] $). \textit{Let }$k$\textit{\ and }$j$\textit{\ be integers such that }$k>j\geq 0.$\textit{\ Let }$f$\textit{\ be a meromorphic function in the plane }$\mathbb{C}$\textit{\ such that }$f^{(j)}$\textit{\ does not vanish identically. Then, there exists an }$r_{0}>1$\textit{\ such that}
\begin{equation*}
m\left( r,\frac{f^{(k)}}{f^{(j)}}\right) \leq (k-j)\log ^{+}\frac{\rho \,T(\rho ,f)}{r(\rho -r)}+\log \frac{k!}{j!}+5.3078(k-j),
\end{equation*}
\textit{for all }$r_{0}<r<\rho <+\infty .$\textit{\ If }$f$\textit{\ is of finite order }$s$\textit{, then}
\begin{equation*}
\limsup\limits_{r\rightarrow +\infty }\frac{m\left( r,\frac{f^{(k)}}{f^{(j)}}\right) }{\log r}\leq \max \{0,(k-j)(s-1)\}.
\end{equation*}
\noindent \textbf{Remark 2.1. \ }It is shown in $\left[ 13,\text{ p}.\text{ 66}\right] $ that, for an arbitrary complex number $c\neq 0$, the inequalities
\begin{equation*}
\left( 1+o\left( 1\right) \right) T\left( r-\left\vert c\right\vert ,f\left( z\right) \right) \leq T\left( r,f\left( z+c\right) \right) \leq \left( 1+o\left( 1\right) \right) T\left( r+\left\vert c\right\vert ,f\left( z\right) \right)
\end{equation*}
hold as $r\rightarrow +\infty $ for a general meromorphic function $f\left( z\right) $. Therefore, it is easy to obtain that
\begin{equation*}
\rho _{\log }(f(z+c))=\rho _{\log }(f),\text{ \ }\mu _{\log }(f(z+c))=\mu _{\log }(f).
\end{equation*}
\noindent \textbf{Lemma 2.2 }$\left( \left[ 3\right] \right) $\textbf{\ }\textit{Let }$f$ \textit{be a meromorphic function with }$1\leq \mu _{\log }\left( f\right) <+\infty .$ \textit{Then there exists a set} $E_{1}\subset \left( 1,+\infty \right) $ \textit{with infinite logarithmic measure such that for any given }$\varepsilon >0$ \textit{and} $r\in E_{1},$ \textit{we have}
\begin{equation*}
T\left( r,f\right) <\left( \log r\right) ^{\mu _{\log }\left( f\right) +\varepsilon }.
\end{equation*}
\noindent \textbf{Lemma 2.3 }\textit{Let }$f$ \textit{be a meromorphic function with }$1\leq \mu _{\log }\left( f\right) <+\infty .$ \textit{Then there exists a set} $E_{2}\subset \left( 1,+\infty \right) $ \textit{with infinite logarithmic measure such that}
\begin{equation*}
\underline{\tau }_{\log }(f)=\underset{\underset{r\in E_{2}}{r\rightarrow +\infty }}{\lim }\frac{T(r,f)}{(\log r)^{\mu _{\log }(f)}}.
\end{equation*}
\textit{Consequently}, \textit{for any given }$\varepsilon >0$ \textit{and all sufficiently large} $r\in E_{2},$ \textit{we have}
\begin{equation*}
T\left( r,f\right) <\left( \underline{\tau }_{\log }(f)+\varepsilon \right) \left( \log r\right) ^{\mu _{\log }\left( f\right) }.
\end{equation*}
\noindent \textit{Proof.}\textbf{\ }By the definition of the logarithmic lower type, there exists a sequence $\left\{ r_{n}\right\} _{n=1}^{\infty }$ tending to $+\infty $ satisfying $\left( 1+\frac{1}{n}\right) r_{n}<r_{n+1}$ and
\begin{equation*}
\underline{\tau }_{\log }(f)=\underset{r_{n}\rightarrow +\infty }{\lim }\frac{T(r_{n},f)}{(\log r_{n})^{\mu _{\log }(f)}}.
\end{equation*}
Then for any given $\varepsilon >0,$ there exists an $n_{1}$ such that for $n\geq n_{1}$ and any $r\in \left[ \frac{n}{n+1}r_{n},r_{n}\right] ,$ we have
\begin{equation*}
\frac{T(\frac{n}{n+1}r_{n},f)}{(\log r_{n})^{\mu _{\log }(f)}}\leq \frac{T(r,f)}{(\log r)^{\mu _{\log }(f)}}\leq \frac{T(r_{n},f)}{(\log \frac{n}{n+1}r_{n})^{\mu _{\log }(f)}}.
\end{equation*}
It follows that
\begin{equation*}
\left( \frac{\log \frac{n}{n+1}r_{n}}{\log r_{n}}\right) ^{\mu _{\log }(f)}\frac{T(\frac{n}{n+1}r_{n},f)}{(\log \frac{n}{n+1}r_{n})^{\mu _{\log }(f)}}\leq \frac{T(r,f)}{(\log r)^{\mu _{\log }(f)}}
\end{equation*}
\begin{equation}
\leq \frac{T(r_{n},f)}{(\log r_{n})^{\mu _{\log }(f)}}\left( \frac{\log r_{n}}{\log \frac{n}{n+1}r_{n}}\right) ^{\mu _{\log }(f)}. \tag{2.1}
\end{equation}
Set
\begin{equation*}
E_{2}=\bigcup\limits_{n=n_{1}}^{+\infty }\left[ \frac{n}{n+1}r_{n},r_{n}\right] ,
\end{equation*}
whose logarithmic measure is infinite, since $lm\left( E_{2}\right) =\int\limits_{E_{2}}\frac{dr}{r}=\sum\limits_{n=n_{1}}^{+\infty }\int\limits_{\frac{n}{n+1}r_{n}}^{r_{n}}\frac{dt}{t}=\sum\limits_{n=n_{1}}^{+\infty }\log \left( 1+\frac{1}{n}\right) =+\infty .$ Then from $\left( 2.1\right) $, we obtain
\begin{equation*}
\underset{\underset{r\in E_{2}}{r\rightarrow +\infty }}{\lim }\frac{T(r,f)}{(\log r)^{\mu _{\log }(f)}}=\underset{r_{n}\rightarrow +\infty }{\lim }\frac{T(r_{n},f)}{(\log r_{n})^{\mu _{\log }(f)}}=\underline{\tau }_{\log }(f),
\end{equation*}
so for any given $\varepsilon >0$ and all sufficiently large $r\in E_{2},$ we get
\begin{equation*}
T\left( r,f\right) <\left( \underline{\tau }_{\log }(f)+\varepsilon \right) \left( \log r\right) ^{\mu _{\log }\left( f\right) }.
\end{equation*}
\noindent \textbf{Lemma 2.4 }$\left( \left[ 3\right] \right) $ \textit{Let }$\eta _{1},\eta _{2}$ \textit{be two arbitrary complex numbers such that }$\eta _{1}\neq \eta _{2},$ \textit{and let }$f$ \textit{be a meromorphic function of finite logarithmic order }$\rho .$ \textit{Then for each }$\varepsilon >0,$ \textit{we have}
\begin{equation*}
m\left( r,\frac{f\left( z+\eta _{1}\right) }{f\left( z+\eta _{2}\right) }\right) =O\left( \left( \log r\right) ^{\rho -1+\varepsilon }\right) .
\end{equation*}
\section{\textbf{Proof of Theorem 1.1}}
\noindent Let $f$ be a meromorphic solution of $\left( 1.2\right) $. If $f$ has infinite logarithmic order, then the result holds. Now, we suppose that $\rho _{\log }(f)<\infty $. We divide $(1.2)$ by $f(z+c_{l})$ to get
\begin{equation*}
-A_{l0}(z)=\sum_{i=0,i\neq l,a}^{n}\sum_{j=0}^{m}A_{ij}\frac{f^{(j)}(z+c_{i})}{f(z+c_{i})}\frac{f(z+c_{i})}{f(z+c_{l})}
\end{equation*}
\begin{equation*}
+\sum_{j=0,j\neq b}^{m}A_{aj}\frac{f^{(j)}(z+c_{a})}{f(z+c_{a})}\frac{f(z+c_{a})}{f(z+c_{l})}+\sum_{j=1}^{m}A_{lj}\frac{f^{(j)}(z+c_{l})}{f(z+c_{l})}
\end{equation*}
\begin{equation}
+A_{ab}\frac{f^{(b)}(z+c_{a})}{f(z+c_{a})}\frac{f(z+c_{a})}{f(z+c_{l})}-\frac{F(z)}{f(z+c_{l})}. \tag{3.1}
\end{equation}
By $\left( 3.1\right) $ and Remark 2.1, for sufficiently large $r$, we have
\begin{equation*}
T(r,A_{l0})=m(r,A_{l0})+N(r,A_{l0})\leq \sum_{i=0,i\neq l,a}^{n}\sum_{j=0}^{m}m(r,A_{ij})+m(r,A_{ab})
\end{equation*}
\begin{equation*}
+\sum_{j=1}^{m}m(r,A_{lj})+\sum_{j=0,j\neq b}^{m}m(r,A_{aj})+\sum_{i=0,i\neq l,a}^{n}\sum_{j=0}^{m}m\left( r,\frac{f^{(j)}(z+c_{i})}{f(z+c_{i})}\right)
\end{equation*}
\begin{equation*}
+\sum_{i=0,i\neq l,a}^{n}m\left( r,\frac{f(z+c_{i})}{f(z+c_{l})}\right) +\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{a})}{f(z+c_{a})}\right) +2m\left( r,\frac{f(z+c_{a})}{f(z+c_{l})}\right)
\end{equation*}
\begin{equation*}
+\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{l})}{f(z+c_{l})}\right) +m\left( r,F\right) +m\left( r,\frac{1}{f(z+c_{l})}\right) +N(r,A_{l0})+O(1)
\end{equation*}
\begin{equation*}
\leq \sum_{i=0,i\neq l,a}^{n}\sum_{j=0}^{m}T(r,A_{ij})+T(r,A_{ab})+\sum_{j=1}^{m}T(r,A_{lj})+\sum_{j=0,j\neq b}^{m}T(r,A_{aj})
\end{equation*}
\begin{equation*}
+\sum_{i=0,i\neq l,a}^{n}\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{i})}{f(z+c_{i})}\right) +\sum_{i=0,i\neq l,a}^{n}m\left( r,\frac{f(z+c_{i})}{f(z+c_{l})}\right)
\end{equation*}
\begin{equation*}
+\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{a})}{f(z+c_{a})}\right) +2m\left( r,\frac{f(z+c_{a})}{f(z+c_{l})}\right)
\end{equation*}
\begin{equation}
+\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{l})}{f(z+c_{l})}\right) +T\left( r,F\right) +2T\left( 2r,f\right) +N(r,A_{l0})+O(1). \tag{3.2}
\end{equation}
From Lemma 2.1, for sufficiently large $r$, we obtain
\begin{equation}
m\left( r,\frac{f^{(j)}(z+c_{i})}{f(z+c_{i})}\right) \leq 2j\log ^{+}T\left( 2r,f\right) ,\ (i=0,1,...,n,\ j=1,...,m). \tag{3.3}
\end{equation}
By Lemma 2.4, for any given $\varepsilon >0$ and all sufficiently large $r$, we have
\begin{equation}
m\left( r,\frac{f(z+c_{i})}{f(z+c_{l})}\right) =O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) ,\ (i=0,1,...,n,\ i\neq l). \tag{3.4}
\end{equation}
From the definition of $\lambda _{\log }\left( \frac{1}{A_{l0}}\right) $, for any given $\varepsilon >0$ and all sufficiently large $r$, we have
\begin{equation}
N(r,A_{l0})\leq \left( \log r\right) ^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.5}
\end{equation}
By using $\left( 3.3\right) -\left( 3.5\right) $, we may rewrite $\left( 3.2\right) $ as
\begin{equation*}
T(r,A_{l0})\leq \sum_{i=0,i\neq l,a}^{n}\sum_{j=0}^{m}T(r,A_{ij})+T(r,A_{ab})
\end{equation*}
\begin{equation*}
+\sum_{j=1}^{m}T(r,A_{lj})+\sum_{j=0,j\neq b}^{m}T(r,A_{aj})+O\left( \log ^{+}T\left( 2r,f\right) \right)
\end{equation*}
\begin{equation}
+O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) +T\left( r,F\right) +2T\left( 2r,f\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.6}
\end{equation}
The proof is divided into four cases.

\noindent \textbf{Case }$\left( \mathbf{i}\right) $\textbf{:} If $\max \{\mu _{\log }(A_{ab}),\rho _{\log }(S)\}<\mu _{\log }(A_{l0}),$ then by the definitions of $\mu _{\log }(A_{l0})$ and $\rho _{\log }(S),$ for any given $\varepsilon >0$ and all sufficiently large $r$, we have
\begin{equation}
T(r,A_{l0})\geq (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }, \tag{3.7}
\end{equation}
\begin{equation}
T(r,g)\leq (\log r)^{\rho _{\log }(S)+\varepsilon },\quad g\in S. \tag{3.8}
\end{equation}
By the definition of $\mu _{\log }(A_{ab})$ and Lemma 2.2, there exists a subset $E_{1}\subset (1,+\infty )$ of infinite logarithmic measure such that for any given $\varepsilon >0$ and all sufficiently large $r\in E_{1}$, we have
\begin{equation}
T(r,A_{ab})\leq (\log r)^{\mu _{\log }(A_{ab})+\varepsilon }. \tag{3.9}
\end{equation}
We set $\rho =\max \{\mu _{\log }(A_{ab}),\rho _{\log }(S)\};$ then from $\left( 3.8\right) $ and $\left( 3.9\right) $, it follows that
\begin{equation}
\max \left\{ T(r,A_{ab}),T(r,g)\right\} \leq (\log r)^{\rho +\varepsilon }. \tag{3.10}
\end{equation}
Also, from the definition of $\rho _{\log }(f),$ for any given $\varepsilon >0$ and all sufficiently large $r$, we have
\begin{equation}
T(r,f)\leq \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon }. \tag{3.11}
\end{equation}
By substituting $\left( 3.7\right) ,$ $\left( 3.10\right) $ and $\left( 3.11\right) $ into $\left( 3.6\right) $, for any given $\varepsilon >0$ and all sufficiently large $r\in E_{1}$, we get
\begin{equation*}
(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq O\left( (\log r)^{\rho +\varepsilon }\right) +O\left( \log \left( \log r\right) \right) +O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right)
\end{equation*}
\begin{equation}
+O\left( \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.12}
\end{equation}
Now, we choose sufficiently small $\varepsilon $ satisfying
\begin{equation*}
0<3\varepsilon <\min \left\{ \mu _{\log }(A_{l0})-\rho ,\mu _{\log }(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1\right\} ;
\end{equation*}
then for all sufficiently large $r\in E_{1}$, it follows from $\left( 3.12\right) $ that
\begin{equation*}
(\log r)^{\mu _{\log }(A_{l0})-2\varepsilon }\leq \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon },
\end{equation*}
which means $\mu _{\log }(A_{l0})-3\varepsilon \leq \rho _{\log }(f),$ and since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log }(A_{l0}).$
\noindent Similarly, for the homogeneous case, by $\left( 1.1\right) $ and $\left( 3.3\right) -\left( 3.5\right) $, we obtain
\begin{equation*}
T(r,A_{l0})\leq \sum_{i=0,i\neq l,a}^{n}\sum_{j=0}^{m}T(r,A_{ij})+T(r,A_{ab})+\sum_{j=1}^{m}T(r,A_{lj})+\sum_{j=0,j\neq b}^{m}T(r,A_{aj})
\end{equation*}
\begin{equation}
+O\left( \log \left( \log r\right) \right) +O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.13}
\end{equation}
Then, by substituting $\left( 3.7\right) $ and $\left( 3.10\right) $ into $\left( 3.13\right) $, for all sufficiently large $r\in E_{1}$, we have
\begin{equation*}
(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq O\left( (\log r)^{\rho +\varepsilon }\right) +O\left( \log \left( \log r\right) \right)
\end{equation*}
\begin{equation}
+O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.14}
\end{equation}
For sufficiently small $\varepsilon $ satisfying
\begin{equation*}
0<3\varepsilon <\min \left\{ \mu _{\log }(A_{l0})-\rho ,\mu _{\log }(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1\right\}
\end{equation*}
and all sufficiently large $r\in E_{1}$, we deduce from $\left( 3.14\right) $ that
\begin{equation*}
(\log r)^{\mu _{\log }(A_{l0})-2\varepsilon }\leq (\log r)^{\rho _{\log }(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-3\varepsilon \leq \rho _{\log }(f)-1,$ and since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log }(A_{l0})+1.$
\noindent \textbf{Case }$\left( \mathbf{ii}\right) $\textbf{: }If $\beta =\rho _{\log }(S)<\mu _{\log }(A_{l0})=\mu _{\log }(A_{ab})$ and $\underline{\tau }_{\log }(A_{l0})>\underline{\tau }_{\log }(A_{ab}),$ then by the definition of $\underline{\tau }_{\log }(A_{l0}),$ for any given $\varepsilon >0$ and all sufficiently large $r$, we have
\begin{equation}
T(r,A_{l0})\geq (\underline{\tau }_{\log }(A_{l0})-\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}. \tag{3.15}
\end{equation}
Also, from the definition of $\underline{\tau }_{\log }(A_{ab})$ and Lemma 2.3, there exists a subset $E_{2}\subset (1,+\infty )$ of infinite logarithmic measure such that for any given $\varepsilon >0$ and all sufficiently large $r\in E_{2}$, we obtain
\begin{equation}
T(r,A_{ab})\leq (\underline{\tau }_{\log }(A_{ab})+\varepsilon )(\log r)^{\mu _{\log }(A_{ab})}=(\underline{\tau }_{\log }(A_{ab})+\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}. \tag{3.16}
\end{equation}
By substituting $\left( 3.8\right) ,\left( 3.11\right) ,\left( 3.15\right) $ and $\left( 3.16\right) $ into $\left( 3.6\right) $, for all sufficiently large $r\in E_{2}$, we get
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}\leq O\left( (\log r)^{\beta +\varepsilon }\right)
\end{equation*}
\begin{equation*}
+(\underline{\tau }_{\log }(A_{ab})+\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}+O\left( \log \left( \log r\right) \right) +O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right)
\end{equation*}
\begin{equation}
+O\left( \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.17}
\end{equation}
Now, we choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon <\min \{\mu _{\log }(A_{l0})-\beta ,\mu _{\log }(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1,\underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log }(A_{ab})\};$ then for all sufficiently large $r\in E_{2}$, it follows from $\left( 3.17\right) $ that
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log }(A_{ab})-2\varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon },
\end{equation*}
which means $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f),$ and since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log }(A_{l0}).$
\noindent Next, for the homogeneous case, by substituting $\left( 3.8\right) ,\left( 3.15\right) $ and $\left( 3.16\right) $ into $\left( 3.13\right) $, for all sufficiently large $r\in E_{2}$, we have
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}\leq O\left( (\log r)^{\beta +\varepsilon }\right) +(\underline{\tau }_{\log }(A_{ab})+\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}
\end{equation*}
\begin{equation}
+O\left( \log \left( \log r\right) \right) +O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.18}
\end{equation}
Now, we choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon <\min \{\mu _{\log }(A_{l0})-\beta ,\mu _{\log }(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1,\underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log }(A_{ab})\};$ then for all sufficiently large $r\in E_{2}$, we deduce from $\left( 3.18\right) $ that
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log }(A_{ab})-2\varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq (\log r)^{\rho _{\log }(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)-1,$ and since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log }(A_{l0})+1.$
\noindent \textbf{Case }$\left( \mathbf{iii}\right) $\textbf{: }When $\mu _{\log }(A_{ab})<\mu _{\log }(A_{l0})=\rho _{\log }(S)$ and
\begin{equation*}
\tau _{1}=\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log }(F)=\tau +\tau _{\log }(F)<\underline{\tau }_{\log }(A_{l0}),
\end{equation*}
where
\begin{equation*}
\tau =\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij}).
\end{equation*}
Then, there exists a subset $J\subseteq \{0,1,\dots ,n\}\times \{0,1,\dots ,m\}\setminus \left\{ (l,0),(a,b)\right\} $ such that for all $(i,j)\in J,$ when $\rho _{\log }(A_{ij})=\mu _{\log }\left( A_{l0}\right) ,$ we have $\underset{(i,j)\in J}{\sum }\tau _{\log }\left( A_{ij}\right) <\underline{\tau }_{\log }\left( A_{l0}\right) -\tau _{\log }(F),$ and for $(i,j)\in \Pi =\{0,1,\dots ,n\}\times \{0,1,\dots ,m\}\setminus \left( J\cup \left\{ (l,0),(a,b)\right\} \right) $ we have $\rho _{\log }\left( A_{ij}\right) <\mu _{\log }\left( A_{l0}\right) .$ Hence, for any given $\varepsilon >0$ and all sufficiently large $r,$ we get
\begin{equation}
T\left( r,A_{ij}\right) \leq \left\{
\begin{array}{l}
\left( \tau _{\log }(A_{ij})+\varepsilon \right) \left( \log r\right) ^{\mu _{\log }(A_{l0})},\text{ if }(i,j)\in J, \\
\left( \log r\right) ^{\rho _{\log }(A_{ij})+\varepsilon }\leq \left( \log r\right) ^{\mu _{\log }(A_{l0})-\varepsilon },\text{ if }(i,j)\in \Pi ,
\end{array}
\right. \tag{3.19}
\end{equation}
and
\begin{equation}
T\left( r,F\right) \leq \left\{
\begin{array}{l}
\left( \tau _{\log }(F)+\varepsilon \right) \left( \log r\right) ^{\mu _{\log }(A_{l0})},\text{ if }\rho _{\log }(F)=\mu _{\log }(A_{l0}), \\
\left( \log r\right) ^{\rho _{\log }(F)+\varepsilon }\leq \left( \log r\right) ^{\mu _{\log }(A_{l0})-\varepsilon },\text{ if }\rho _{\log }(F)<\mu _{\log }(A_{l0}).
\end{array}
\right. \tag{3.20}
\end{equation}
By substituting $\left( 3.9\right) ,$ $\left( 3.11\right) ,$ $\left( 3.15\right) ,$ $\left( 3.19\right) $ and $\left( 3.20\right) $ into $\left( 3.6\right) $, for all sufficiently large $r\in E_{1},$ we get
\begin{equation*}
\left( \underline{\tau }_{\log }(A_{l0})-\varepsilon \right) \left( \log r\right) ^{\mu _{\log }(A_{l0})}\leq \underset{(i,j)\in J}{\sum }\left( \tau _{\log }\left( A_{ij}\right) +\varepsilon \right) \left( \log r\right) ^{\mu _{\log }\left( A_{l0}\right) }
\end{equation*}
\begin{equation*}
+\underset{(i,j)\in \Pi }{\sum }\left( \log r\right) ^{\mu _{\log }(A_{l0})-\varepsilon }+(\log r)^{\mu _{\log }(A_{ab})+\varepsilon }+O\left( \log \left( \log r\right) \right) +O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right)
\end{equation*}
\begin{equation*}
+\left( \tau _{\log }(F)+\varepsilon \right) \left( \log r\right) ^{\mu _{\log }(A_{l0})}+O\left( \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }
\end{equation*}
\begin{equation*}
\leq \left( \tau _{1}+\left( mn+m+n\right) \varepsilon \right) \left( \log r\right) ^{\mu _{\log }\left( A_{l0}\right) }+O\left( \left( \log r\right) ^{\mu _{\log }(A_{l0})-\varepsilon }\right)
\end{equation*}
\begin{equation*}
+(\log r)^{\mu _{\log }(A_{ab})+\varepsilon }+O\left( \log \left( \log r\right) \right) +O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right)
\end{equation*}
\begin{equation}
+O\left( \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.21}
\end{equation}
We may choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon <\min \{\mu _{\log }(A_{l0})-\mu _{\log }(A_{ab}),\mu _{\log }(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1,\frac{\underline{\tau }_{\log }(A_{l0})-\tau _{1}}{mn+m+n+1}\};$ then for all sufficiently large $r\in E_{1},$ by $\left( 3.21\right) $ we have
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\tau _{1}-\left( mn+m+n+1\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon },
\end{equation*}
which means $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f),$ and since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log }(A_{l0}).$
\noindent Further, for the homogeneous case, by substituting $\left( 3.9\right) $, $\left( 3.15\right) ,$ $\left( 3.19\right) $ and $\left( 3.20\right) $ into $\left( 3.13\right) $, for all sufficiently large $r\in E_{1}$, we get
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\tau -\left( nm+m+n\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})}\leq O\left( (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\right)
\end{equation*}
\begin{equation}
+(\log r)^{\mu _{\log }(A_{ab})+\varepsilon }+O\left( \log \left( \log r\right) \right) +O\left( \left( \log r\right) ^{\rho _{\log }(f)-1+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.22}
\end{equation}
We may choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon <\min \{\mu _{\log }(A_{l0})-\mu _{\log }(A_{ab}),\mu _{\log }(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1,\frac{\underline{\tau }_{\log }(A_{l0})-\tau }{nm+m+n}\};$ then for all sufficiently large $r\in E_{1}$, by $\left( 3.22\right) $ we have
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\tau -\left( nm+m+n\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq \left( \log r\right) ^{\rho _{\log }(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)-1,$ and since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log }(A_{l0})+1.$
\noindent \textbf{Case }$\left( \mathbf{iv}\right) $\textbf{: }When $\mu _{\log }(A_{l0})=\mu _{\log }(A_{ab})=\rho _{\log }(S)$ and
\begin{equation*}
\tau _{3}=\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log }(F)+\underline{\tau }_{\log }(A_{ab})=\tau _{2}+\tau _{\log }(F)<\underline{\tau }_{\log }(A_{l0}),
\end{equation*}
where
\begin{equation*}
\tau _{2}=\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\underline{\tau }_{\log }(A_{ab}).
\end{equation*}
Then, by substituting $\left( 3.11\right) $, $\left( 3.15\right) $, $\left(
3.16\right) ,$ $\left( 3.19\right) $ and $\left( 3.20\right) $ into $\left(
3.6\right) $, for all sufficiently large $r\in E_{1}$, we hav
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\tau _{3}-\left( mn+m+n+2\right)
\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}\leq O\left( (\log r)^{\mu
_{\log }(A_{l0})-\varepsilon }\right)
\end{equation*}
\begin{equation*}
+O\left( \log \left( \log r\right) \right) +O\left( \left( \log r\right)
^{\rho _{\log }(f)-1+\varepsilon }\right)
\end{equation*}
\begin{equation}
+O\left( \left( \log r\right) ^{\rho _{\log }(f)+\varepsilon }\right) +(\log
r)^{\lambda _{\log }\left( \frac{1}{A_{l0}}\right) +1+\varepsilon }.
\tag{3.23}
\end{equation}
Now, we may choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon <\min \{\mu _{\log }(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1,\frac{\underline{\tau }_{\log }(A_{l0})-\tau _{3}}{mn+m+n+2}\},$ for all sufficiently large $r\in E_{1},$ we deduce from $\left( 3.23\right) $ that
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\tau _{3}-\left( mn+m+n+2\right)
\varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq \left( \log
r\right) ^{\rho _{\log }(f)+\varepsilon },
\end{equation*}
this means, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0}).$
\noindent Further, for the homogeneous case, by substituting $\left(
3.15\right) $, $\left( 3.16\right) ,$ $\left( 3.19\right) $ and $\left(
3.20\right) $ into $\left( 3.13\right) $, for all sufficiently large $r\in
E_{1}$, we get
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\tau _{2}-\left( mn+m+n+1\right)
\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}\leq O\left( (\log r)^{\mu
_{\log }(A_{l0})-\varepsilon }\right)
\end{equation*}
\begin{equation}
+O\left( \log \left( \log r\right) \right) +O\left( \left( \log r\right)
^{\rho _{\log }(f)-1+\varepsilon }\right) +(\log r)^{\lambda _{\log }\left(
\frac{1}{A_{l0}}\right) +1+\varepsilon }. \tag{3.24}
\end{equation}
Therefore, for $\varepsilon $ satisfying $0<2\varepsilon <\min \{\mu _{\log
}(A_{l0})-\lambda _{\log }\left( \frac{1}{A_{l0}}\right) -1,\frac{\underline{\tau }_{\log }(A_{l0})-\tau _{2}}{mn+m+n+1}\}$ and for all sufficiently
large $r\in E_{1}$, by $\left( 3.24\right) $ we have
\begin{equation*}
(\underline{\tau }_{\log }(A_{l0})-\tau _{2}-\left( mn+m+n+1\right)
\varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq \left( \log
r\right) ^{\rho _{\log }(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)-1$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0})+1.$ The proof of Theorem 1.1 is complete.
\section{\textbf{Proof of Theorem 1.2}}
\noindent Let $f$ be a meromorphic solution of $\left( 1.2\right) $. If $f$
has infinite logarithmic order, then the result holds. Now, we suppose that $\rho _{\log }(f)<\infty $. By $\left( 3.1\right) $ and Remark 2.1, for sufficiently large $r$, we have
\begin{equation*}
m(r,A_{l0})\leq \sum_{i=0,i\neq l,a}^{n}\sum_{j=0}^{m}m(r,A_{ij})+m(r,A_{ab})
\end{equation*}
\begin{equation*}
+\sum_{j=1}^{m}m(r,A_{lj})+\sum_{j=0,j\neq b}^{m}m(r,A_{aj})+\sum_{i=0,i\neq
l,a}^{n}\sum_{j=0}^{m}m\left( r,\frac{f^{(j)}(z+c_{i})}{f(z+c_{i})}\right)
\end{equation*}
\begin{equation*}
+\sum_{i=0,i\neq l,a}^{n}m\left( r,\frac{f(z+c_{i})}{f(z+c_{l})}\right)
+\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{a})}{f(z+c_{a})}\right)
+2m\left( r,\frac{f(z+c_{a})}{f(z+c_{l})}\right)
\end{equation*}
\begin{equation*}
+\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{a})}{f(z+c_{a})}\right) +m\left(
r,\frac{F(z)}{f(z+c_{l})}\right) +O(1)
\end{equation*}
\begin{equation*}
\leq \sum_{i=0,i\neq
l,a}^{n}\sum_{j=0}^{m}T(r,A_{ij})+T(r,A_{ab})+\sum_{j=1}^{m}T(r,A_{lj})
\end{equation*}
\begin{equation*}
+\sum_{j=0,j\neq b}^{m}T(r,A_{aj})+\sum_{i=0,i\neq
l,a}^{n}\sum_{j=0}^{m}m\left( r,\frac{f^{(j)}(z+c_{i})}{f(z+c_{i})}\right)
\end{equation*}
\begin{equation*}
+\sum_{i=0,i\neq l,a}^{n}m\left( r,\frac{f(z+c_{i})}{f(z+c_{l})}\right)
+\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{a})}{f(z+c_{a})}\right)
+2m\left( r,\frac{f(z+c_{a})}{f(z+c_{l})}\right)
\end{equation*}
\begin{equation}
+\sum_{j=1}^{m}m\left( r,\frac{f^{(j)}(z+c_{a})}{f(z+c_{a})}\right)
+T(r,F)+2T(2r,f)+O(1). \tag{4.1}
\end{equation}
By substituting $\left( 3.3\right) $ and $\left( 3.4\right) $ into $\left(
4.1\right) $, for any given $\varepsilon >0$ and all sufficiently large $r$,
we obtain
\begin{equation*}
m(r,A_{l0})\leq \sum_{i=0,i\neq
l,a}^{n}\sum_{j=0}^{m}T(r,A_{ij})+T(r,A_{ab})+\sum_{j=1}^{m}T(r,A_{lj})+\sum_{j=0,j\neq b}^{m}T(r,A_{aj})
\end{equation*}
\begin{equation}
+O\left( \log ^{+}T\left( 2r,f\right) \right) +O\left( (\log r)^{\rho _{\log
}(f)-1+\varepsilon }\right) +T(r,F)+2T(2r,f). \tag{4.2}
\end{equation}
Let us set
\begin{equation}
\delta =\delta (\infty ,A_{l0})>0. \tag{4.3}
\end{equation}
Now, we divide this proof into four cases:
\noindent \textbf{Case }$\left( \mathbf{i}\right) $\textbf{:} If $\max \{\mu
_{\log }(A_{ab}),\rho _{\log }(S)\}<\mu _{\log }(A_{l0}),$ then by the
definition of $\mu _{\log }(A_{l0})$ and $\left( 4.3\right) $, for any given
$\varepsilon >0$ and all sufficiently large $r$, we have
\begin{equation}
m(r,A_{l0})\geq \frac{\delta }{2}T(r,A_{l0})\geq \frac{\delta }{2}(\log
r)^{\mu _{\log }(A_{l0})-\frac{\varepsilon }{2}}\geq (\log r)^{\mu _{\log
}(A_{l0})-\varepsilon }. \tag{4.4}
\end{equation}
By substituting $\left( 3.10\right) ,$ $\left( 3.11\right) $ and $\left(
4.4\right) $ into $\left( 4.2\right) $, for all sufficiently large $r$, we
get
\begin{equation*}
(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq O\left( (\log r)^{\rho
+\varepsilon }\right) +O(\log \left( \log r\right) )
\end{equation*}
\begin{equation}
+O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) +O\left( (\log
r)^{\rho _{\log }(f)+\varepsilon }\right) . \tag{4.5}
\end{equation}
Now, we choose sufficiently small $\varepsilon $ satisfying $0<3\varepsilon
<\mu _{\log }(A_{l0})-\rho ,$ for all sufficiently large $r$, it follows
from $\left( 4.5\right) $ that
\begin{equation*}
(\log r)^{\mu _{\log }(A_{l0})-2\varepsilon }\leq (\log r)^{\rho _{\log
}(f)+\varepsilon },
\end{equation*}
this means, $\mu _{\log }(A_{l0})-3\varepsilon \leq \rho _{\log }(f)$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0}).$
\noindent Similarly, for the homogeneous case, by $\left( 1.1\right) $ and $\left( 3.3\right) $ and $\left( 3.4\right) $, we obtain
\begin{equation*}
m(r,A_{l0})\leq \sum_{i=0,i\neq
l,a}^{n}\sum_{j=0}^{m}T(r,A_{ij})+T(r,A_{ab})+\sum_{j=1}^{m}T(r,A_{lj})+\sum_{j=0,j\neq b}^{m}T(r,A_{aj})
\end{equation*}
\begin{equation}
+O(\log \left( \log r\right) )+O\left( (\log r)^{\rho _{\log
}(f)-1+\varepsilon }\right) . \tag{4.6}
\end{equation}
Then, by substituting $\left( 3.10\right) $ and $\left( 4.4\right) $ into $\left( 4.6\right) $, for all sufficiently large $r$, we have
\begin{equation}
(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq O\left( (\log r)^{\rho
+\varepsilon }\right) +O(\log \left( \log r\right) )+O\left( (\log r)^{\rho
_{\log }(f)-1+\varepsilon }\right) . \tag{4.7}
\end{equation}
For the above $\varepsilon $ and all sufficiently large $r$, we deduce from $\left( 4.7\right) $ that
\begin{equation*}
(\log r)^{\mu _{\log }(A_{l0})-2\varepsilon }\leq (\log r)^{\rho _{\log
}(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-3\varepsilon \leq \rho _{\log }(f)-1$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0})+1.$
\noindent \textbf{Case }$\left( \mathbf{ii}\right) $\textbf{: }If $\beta
=\rho _{\log }(S)<\mu _{\log }(A_{l0})=\mu _{\log }(A_{ab})$ and $\delta
\underline{\tau }_{\log }(A_{l0})>\underline{\tau }_{\log }(A_{ab}),$ then
by the definition of $\underline{\tau }_{\log }(A_{l0})$ and $\left(
4.3\right) $, for any given $\varepsilon >0$ and all sufficiently large $r$,
we have
\begin{equation*}
m(r,A_{l0})\geq (\delta -\varepsilon )T(r,A_{l0})\geq (\delta -\varepsilon )(\underline{\tau }_{\log }(A_{l0})-\varepsilon )(\log r)^{\mu _{\log }(A_{l0})}
\end{equation*}
\begin{equation*}
\geq \left( \delta \underline{\tau }_{\log }(A_{l0})-(\underline{\tau }_{\log }(A_{l0})+\delta )\varepsilon +\varepsilon ^{2}\right) (\log r)^{\mu _{\log }(A_{l0})}
\end{equation*}
\begin{equation}
\geq \left( \delta \underline{\tau }_{\log }(A_{l0})-(\underline{\tau }_{\log }(A_{l0})+\delta )\varepsilon \right) (\log r)^{\mu _{\log }(A_{l0})}.
\tag{4.8}
\end{equation}
By substituting $\left( 3.8\right) ,$ $\left( 3.11\right) ,$ $\left(
3.16\right) $ and $\left( 4.8\right) $ into $\left( 4.2\right) $, for all
sufficiently large $r\in E_{2}$, we get
\begin{equation*}
\left( \delta \underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log
}\left( A_{ab}\right) -(\underline{\tau }_{\log }(A_{l0})+\delta
+1)\varepsilon \right) (\log r)^{\mu _{\log }(A_{l0})}\leq O\left( (\log
r)^{\beta +\varepsilon }\right)
\end{equation*}
\begin{equation}
+O(\log \left( \log r\right) )+O\left( (\log r)^{\rho _{\log
}(f)-1+\varepsilon }\right) +O\left( (\log r)^{\rho _{\log }(f)+\varepsilon
}\right) . \tag{4.9}
\end{equation}
Now, we choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon
<\min \{\mu _{\log }(A_{l0})-\beta ,\frac{\delta \underline{\tau }_{\log
}(A_{l0})-\underline{\tau }_{\log }(A_{ab})}{\underline{\tau }_{\log
}(A_{l0})+\delta +1}\},$ for all sufficiently large $r\in E_{2}$, by $\left(
4.9\right) $, we obtain
\begin{equation*}
\left( \delta \underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log
}\left( A_{ab}\right) -(\underline{\tau }_{\log }(A_{l0})+\delta
+1)\varepsilon \right) (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }
\end{equation*}
\begin{equation*}
\leq (\log r)^{\rho _{\log }(f)+\varepsilon },
\end{equation*}
this means, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0}).$
\noindent Next, for the homogeneous case, by substituting $\left( 3.8\right)
,$ $\left( 3.11\right) ,$ $\left( 3.16\right) $ and $\left( 4.8\right) $
into $\left( 4.6\right) $, for all sufficiently large $r\in E_{2}$, we have
\begin{equation*}
\left( \delta \underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log
}\left( A_{ab}\right) -(\underline{\tau }_{\log }(A_{l0})+\delta
+1)\varepsilon \right) (\log r)^{\mu _{\log }(A_{l0})}\leq O\left( (\log
r)^{\beta +\varepsilon }\right)
\end{equation*}
\begin{equation}
+O(\log \left( \log r\right) )+O\left( (\log r)^{\rho _{\log
}(f)-1+\varepsilon }\right) . \tag{4.10}
\end{equation}
For the above $\varepsilon $ and all sufficiently large $r\in E_{2}$, from $\left( 4.10\right) ,$ we obtain
\begin{equation*}
\left( \delta \underline{\tau }_{\log }(A_{l0})-\underline{\tau }_{\log
}\left( A_{ab}\right) -(\underline{\tau }_{\log }(A_{l0})+\delta
+1)\varepsilon \right) (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }
\end{equation*}
\begin{equation*}
\leq (\log r)^{\rho _{\log }(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)-1$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0})+1.$
\noindent \textbf{Case }$\left( \mathbf{iii}\right) $\textbf{: }When $\mu
_{\log }(A_{ab})<\mu _{\log }(A_{l0})=\rho _{\log }(S)$ and
\begin{equation*}
\tau _{1}=\sum_{\rho _{\log }(A_{ij})=\mu _{\log
}(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log
}(F)<\delta \underline{\tau }_{\log }(A_{l0}).
\end{equation*}
Then, by substituting $\left( 3.9\right) ,$ $\left( 3.11\right) ,$ $\left(
3.19\right) ,$ $\left( 3.20\right) $ and $\left( 4.8\right) $ into $\left(
4.2\right) $, for all sufficiently large $r\in E_{1}$, we get
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau _{1}-\left( \underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+1\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})}
\end{equation*}
\begin{equation*}
\leq O\left( (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\right) +(\log
r)^{\mu _{\log }(A_{ab})+\varepsilon }+O(\log \left( \log r\right) )
\end{equation*}
\begin{equation}
+O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) +O\left( (\log
r)^{\rho _{\log }(f)+\varepsilon }\right) . \tag{4.11}
\end{equation}
We may choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon
<\min \{\mu _{\log }(A_{l0})-\mu _{\log }(A_{ab}),\frac{\delta \underline{\tau }_{\log }(A_{l0})-\tau _{1}}{\underline{\tau }_{\log }(A_{l0})+\delta
+mn+m+n+1}\},$ for all sufficiently large $r\in E_{1}$, by $\left(
4.11\right) $, we obtain
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau _{1}-\left( \underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+1\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }
\end{equation*}
\begin{equation*}
\leq (\log r)^{\rho _{\log }(f)+\varepsilon },
\end{equation*}
this means, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0}).$
\noindent Further, for the homogeneous case, by substituting $\left(
3.9\right) ,$ $\left( 3.19\right) ,$ $\left( 3.20\right) $ and $\left(
4.8\right) $ into $\left( 4.6\right) $, for all sufficiently large $r\in
E_{1}$, we get
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau -\left( \underline{\tau }_{\log }(A_{l0})+\delta +nm+m+n\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})}
\end{equation*}
\begin{equation}
\leq O\left( (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\right) +(\log
r)^{\mu _{\log }(A_{ab})+\varepsilon }+O(\log \left( \log r\right) )+O\left(
(\log r)^{\rho _{\log }(f)-1+\varepsilon }\right) . \tag{4.12}
\end{equation}
For $\varepsilon $ sufficiently small satisfying
\begin{equation*}
0<2\varepsilon <\min \left\{ \mu _{\log }(A_{l0})-\mu _{\log }(A_{ab}),\frac{\delta \underline{\tau }_{\log }(A_{l0})-\tau }{\underline{\tau }_{\log }(A_{l0})+\delta +nm+m+n}\right\} ,
\end{equation*}
and for all sufficiently large $r\in E_{1}$, from $\left( 4.12\right) $ we
conclude
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau -\left( \underline{\tau }_{\log }(A_{l0})+\delta +nm+m+n\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\leq (\log r)^{\rho _{\log }(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)-1$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0})+1.$
\noindent \textbf{Case }$\left( \mathbf{iv}\right) $\textbf{: }When $\mu
_{\log }(A_{l0})=\mu _{\log }(A_{ab})=\rho _{\log }(S)$ and
\begin{equation*}
\tau _{3}=\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{l0}),(i,j)\not=(l,0),(a,b)}\tau _{\log }(A_{ij})+\tau _{\log }(F)+\underline{\tau }_{\log }(A_{ab})<\underline{\tau }_{\log }(A_{l0}).
\end{equation*}
Then, by substituting $\left( 3.9\right) ,$ $\left( 3.11\right) ,$ $\left(
3.19\right) ,$ $\left( 3.20\right) $ and $\left( 4.8\right) $ into $\left(
4.2\right) $, for all sufficiently large $r\in E_{1}$, we get
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau _{3}-\left( \underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+2\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})}
\end{equation*}
\begin{equation}
\leq O\left( (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\right) +O(\log
\left( \log r\right) )+O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon
}\right) +O\left( (\log r)^{\rho _{\log }(f)+\varepsilon }\right) .
\tag{4.13}
\end{equation}
Now, we may choose sufficiently small $\varepsilon $ satisfying $0<2\varepsilon <\frac{\delta \underline{\tau }_{\log }(A_{l0})-\tau _{3}}{\underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+2},$ for all sufficiently
large $r\in E_{1}$, we deduce from $\left( 4.13\right) $ that
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau _{3}-\left( \underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+2\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }
\end{equation*}
\begin{equation*}
\leq (\log r)^{\rho _{\log }(f)+\varepsilon },
\end{equation*}
this means, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0}).$
\noindent Also for the homogeneous case, by substituting $\left( 3.9\right) ,
$ $\left( 3.19\right) ,$ $\left( 3.20\right) $ and $\left( 4.8\right) $ into
$\left( 4.6\right) $, for all sufficiently large $r\in E_{1}$, we have
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau _{2}-\left( \underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+1\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})}
\end{equation*}
\begin{equation}
\leq O\left( (\log r)^{\mu _{\log }(A_{l0})-\varepsilon }\right) +O(\log
\left( \log r\right) )+O\left( (\log r)^{\rho _{\log }(f)-1+\varepsilon
}\right) . \tag{4.14}
\end{equation}
Thus, for sufficiently small $\varepsilon $ satisfying $0<2\varepsilon <\frac{\delta \underline{\tau }_{\log }(A_{l0})-\tau _{2}}{\underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+1},$ for all sufficiently large $r\in E_{1}$, from $\left( 4.14\right) $ we obtain
\begin{equation*}
(\delta \underline{\tau }_{\log }(A_{l0})-\tau _{2}-\left( \underline{\tau }_{\log }(A_{l0})+\delta +mn+m+n+1\right) \varepsilon )(\log r)^{\mu _{\log }(A_{l0})-\varepsilon }
\end{equation*}
\begin{equation*}
\leq (\log r)^{\rho _{\log }(f)-1+\varepsilon },
\end{equation*}
that is, $\mu _{\log }(A_{l0})-2\varepsilon \leq \rho _{\log }(f)-1$ and
since $\varepsilon >0$ is arbitrary, then $\rho _{\log }(f)\geq \mu _{\log
}(A_{l0})+1$ which completes the proof of Theorem 1.2.
\section*{Example}
The following example is for illustrating the sharpness of some assertions
in Theorem 1.2.
\quad
\noindent \textbf{Example 5.1} For Theorem 1.2, we consider the meromorphic
function
\begin{equation}
f(z)=\frac{1}{z^{5}} \tag{5.1}
\end{equation}
which is a solution to the delay-differential equation
\begin{equation*}
A_{20}(z)f(z-2i)+A_{11}(z)f^{\prime }(z+i)+A_{10}(z)f(z+i)
\end{equation*}
\begin{equation}
+A_{01}(z)f^{\prime }(z)+A_{00}(z)f(z)=F(z), \tag{5.2}
\end{equation}
where $A_{20}(z)=\frac{2}{3}(z-2i)^{4},$ $A_{11}(z)=2e,$ $A_{10}(z)=\frac{10e}{z+i},$ $A_{01}(z)=\frac{i}{2},$ $A_{00}(z)=\frac{5i}{2z}$ and $F(z)=\frac{2}{3(z-2i)}.$ Obviously, $A_{ij}(z)$ $\left( i=0,1,2,j=0,1\right) $ and $F(z)$
satisfy the conditions in Case $\left( \text{iii}\right) $ of Theorem 1.2
such that
\begin{equation*}
\delta (\infty ,A_{20})=1>0,
\end{equation*}
\begin{equation*}
\mu _{\log }(A_{11})=0<\max \{\rho _{\log }(F),\rho _{\log
}(A_{ij}),(i,j)\not=(1,1),(2,0)\}=\mu _{\log }(A_{20})=1
\end{equation*}
and
\begin{equation*}
\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{20}),(i,j)\not=(1,1),(2,0)}\tau
_{\log }(A_{ij})+\tau _{\log }(F)=3<\delta \underline{\tau }_{\log
}(A_{20})=4.
\end{equation*}
We see that $f$ satisfies
\begin{equation*}
\mu _{\log }(f)=1=\rho _{\log }(A_{20}).
\end{equation*}
The meromorphic function $f(z)=\frac{1}{z^{5}}$ is a solution of equation $\left( 5.2\right) $ for the coefficients $A_{20}(z)=3(z-2i)^{7},$ $A_{11}(z)=\frac{1}{z-i},$ $A_{10}(z)=\frac{5}{z^{2}+1},$ $A_{01}(z)=\frac{i}{2},$ $A_{00}(z)=\frac{5i}{2z}$ and $F(z)=3(z-2i)^{2}.$ Clearly, $A_{ij}(z)$
$\left( i=0,1,2,j=0,1\right) $ and $F(z)$ satisfy the conditions in Case $\left( \text{iv}\right) $ of Theorem 1.2 such that
\begin{equation*}
\delta (\infty ,A_{20})=1>0,
\end{equation*}
\begin{equation*}
\mu _{\log }(A_{11})=\max \{\rho _{\log }(F),\rho _{\log
}(A_{ij}),(i,j)\not=(1,1),(2,0)\}=\mu _{\log }(A_{20})=1
\end{equation*}
and
\begin{equation*}
\sum_{\rho _{\log }(A_{ij})=\mu _{\log }(A_{20}),(i,j)\not=(1,1),(2,0)}\tau
_{\log }(A_{ij})+\tau _{\log }(F)+\underline{\tau }_{\log }(A_{11})=6<\delta
\underline{\tau }_{\log }(A_{20})=7.
\end{equation*}
We see that $f$ satisfies $\rho _{\log }(f)=1=\mu _{\log }(A_{20}).$
\begin{center}
{\Large References}
\end{center}
\noindent $\left[ 1\right] \ $B. Bela\"{\i}di, \textit{Growth and
oscillation of solutions to linear differential equations with entire
coefficients having the same order}. Electron. J. Differential Equations
2009, No. 70, 10 pp.
\noindent $\left[ 2\right] \ $B. Bela\"{\i}di, \textit{Growth of meromorphic
solutions of finite logarithmic order of linear difference equations}. Fasc.
Math. No. 54 (2015), 5--20.
\noindent $\left[ 3\right] \ $B. Bela\"{\i}di, \textit{Some properties of
meromorphic solutions of logarithmic order to higher order linear difference
equations}. Bul. Acad. \c{S}tiin\c{t}e Repub. Mold. Mat. 2017, no. 1(83),
15--28.
\noindent $\left[ 4\right] \ $B. Bela\"{\i}di, \textit{Study of solutions of
logarithmic order to higher order linear differential-difference equations
with coefficients having the same logarithmic order}. Univ. Iagel. Acta
Math. No. 54 (2017), 15--32.
\noindent $\left[ 5\right] \ $R. Bellaama and B. Bela\"{\i}di, \textit{Lower
order for meromorphic solutions to linear delay-differential equations}.
Electron. J. Differential Equations 2021, Paper No. 92, 20 pp.
\noindent $\left[ 6\right] $ T. B. Cao, J. F. Xu and Z. X. Chen, \textit{On
the meromorphic solutions of linear differential equations on the complex
plane}. J. Math. Anal. Appl. 364 (2010), no. 1, 130--142.
\noindent $\left[ 7\right] \ $T. B. Cao, K. Liu and J. Wang, \textit{On the
growth of solutions of complex differential equations with entire
coefficients of finite logarithmic order}. Math. Reports 15(65), 3 (2013),
249--269.
\noindent $\left[ 8\right] $ Z. Chen and X. M. Zheng, \textit{Growth of
meromorphic solutions of general complex linear differential-difference
equation}. Acta Univ. Apulensis Math. Inform. No. 56 (2018), 1--12.
\noindent $\left[ 9\right] $ T. Y. P. Chern, \textit{On the maximum
modulus and the zeros of a transcendental entire function of finite
logarithmic order.} Bull. Hong Kong Math. Soc. 2 (1999), no. 2, 271--277.
\noindent $\left[ 10\right] $ T. Y. P. Chern, \textit{On meromorphic
functions with finite logarithmic order}. Trans. Amer. Math. Soc. 358
(2006), no. 2, 473--489.
\noindent $\left[ 11\right] \ $Y. M. Chiang and S. J. Feng, \textit{On the
Nevanlinna characteristic of }$f\left( z+\eta \right) $ \textit{and
difference equations in the complex plane. }Ramanujan J. 16 (2008), no. 1,
105--129.
\noindent $\left[ 12\right] \ $A. Ferraoun and B. Bela\"{\i}di, \textit{
Growth and oscillation of solutions to higher order linear differential
equations with coefficients of finite logarithmic order}. Sci. Stud. Res.
Ser. Math. Inform. 26 (2016), no. 2, 115--144.
\noindent $\left[ 13\right] $ A. Goldberg and I. Ostrovskii, \textit{Value
distribution of meromorphic functions}. Transl. Math. Monogr., vol. 236,
Amer. Math. Soc., Providence RI, 2008.
\noindent $\left[ 14\right] \ $W. K. Hayman, \textit{Meromorphic functions}.
Oxford Mathematical Monographs, Clarendon Press, Oxford 1964.
\noindent $\left[ 15\right] $ J. Heittokangas, R. Korhonen and J. R\"{a}tty\"{a}, \textit{Generalized logarithmic derivative estimates of
Gol'dberg-Grinshtein type}. Bull. London Math. Soc. 36 (2004), no. 1,
105--114.
\noindent $\left[ 16\right] \ $J. Heittokangas and Z. T. Wen, \textit{
Functions of finite logarithmic order in the unit disc}. Part I. J. Math.
Anal. Appl. 415 (2014), no. 1, 435--461.
\noindent $\left[ 17\right] \ $J. Heittokangas and Z. T. Wen, \textit{
Functions of finite logarithmic order in the unit disc}. Part II. \ Comput.
Methods Funct. Theory 15 (2015), no. 1, 37--58.
\noindent $\left[ 18\right] \ $R. G. Halburd and R. J. Korhonen, \textit{
Difference analogue of the lemma on the logarithmic derivative with
applications to difference equations. }J. Math. Anal. Appl. 314 (2006),
no. 2, 477--487.
\noindent $\left[ 19\right] \ $I. Laine and C. C. Yang, \textit{Clunie
theorems for difference and }$q$\textit{-difference polynomials}. J. Lond.
Math. Soc. (2) 76 (2007), no. 3, 556--566.
\noindent $\left[ 20\right] \ $K. Liu, I. Laine and L. Z. Yang, \textit{
Complex delay-differential equations}. De Gruyter Studies in Mathematics 78.
Berlin, Boston: De Gruyter, 2021. https://doi.org/10.1515/9783110560565
\noindent $\left[ 21\right] \ $Z. Latreuch and B. Bela\"{\i}di, \textit{
Growth and oscillation of meromorphic solutions of linear difference
equations}. Mat. Vesnik 66 (2014), no. 2, 213--222.
\noindent $\left[ 22\right] \ $J. Tu and C. F. Yi, \textit{On the growth of
solutions of a class of higher order linear differential equations with
coefficients having the same order}. J. Math. Anal. Appl. 340 (2008), no. 1,
487--497.
\noindent $\left[ 23\right] \ $Z. T. Wen, \textit{Finite logarithmic order
solutions of linear }$q$\textit{-difference equations}. Bull. Korean Math.
Soc. 51 (2014), no. 1, 83--98.
\noindent $\left[ 24\right] \ $S. Z. Wu and X. M. Zheng, \textit{Growth of
meromorphic solutions of complex linear differential-difference equations
with coefficients having the same order}. J. Math. Res. Appl. 34 (2014), no.
6, 683--695.
\noindent $\left[ 25\right] \ $C. C. Yang and H. X. Yi, \textit{Uniqueness
theory of meromorphic functions}. Mathematics and its Applications, 557.
Kluwer Academic Publishers Group, Dordrecht, 2003.
\end{document}
\section{Introduction}\label{sec:intro}
\par The flux density is one of the main observables of pulsars. The analysis of pulsar spectra provides information on both the radiation mechanism and the influence of the interstellar medium. The spectra of the majority of pulsars in the frequency range between 100~MHz and 10~GHz are well characterized by a single power-law function with a mean spectral index of -1.6 \citep{1995Lorimer, 2000Maron, 2017Jankowski}. In recent years \citet{2007Kijak, 2011KijakA} found that some pulsars have spectra that exhibit turnovers between 0.5-1.5~GHz and proposed to name such cases Gigahertz-peaked spectra (GPS) pulsars. The comprehensive study conducted by \citet{2017Jankowski} revealed that 21\% of pulsar spectra were either broken or curved\footnote{This publication reports flux densities measured at 728, 1382 and 3100~MHz observed in the Southern hemisphere.}. However, the recent results of the Green Bank North Celestial Cap (GBNCC) pulsar survey show that 99\% of pulsar spectra are well described by a simple power-law function\footnote{In their sample only four spectra had breaks and required two different power-law functional fits. They report measurements conducted at 350~MHz in the Northern hemisphere.} \citep{2020McEwen}. This discrepancy may be due to the different frequency coverage of the surveys and may also be related to observations of different parts of the sky. Moreover, \citet{2020McEwen} reported several non-detections that could be very faint sources at 350~MHz.
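\par For reference, the simple power-law spectral model referred to throughout can be written as
\begin{equation*}
S_{\nu }\propto \nu ^{\alpha },
\end{equation*}
where $S_{\nu }$ is the flux density at the observing frequency $\nu $ and $\alpha $ is the spectral index.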
\par In addition to the GPS phenomenon, many recent observations confirm the more common low-frequency (i.e. below 100~MHz) breaks or turnovers in pulsar spectra: the Low Frequency Array (LOFAR) pulsar census shows that only 18-21\% of spectra were well described by a simple power-law function at low frequencies \citep{2020Bilous,2020Bondonneau}. Also, the results of the GaLactic and Extragalactic All-sky MWA survey (GLEAM) revealed that 52\% of the investigated pulsar spectra above 72~MHz were either broken or curved \citep[see][and references therein]{2017Murphy}. The reduction of the pulsar flux density in the low frequency regime could be caused either by some intrinsic mechanism related to the generation of pulsar emission or by the influence of the interstellar matter \citep[see, for example, ][]{2002Sieber}.
\par Previous studies suggest that the origin of high-frequency spectral turnovers is most probably extrinsic in nature. \citet{2011KijakA} pointed out that most of the GPS pulsars have interesting environments, such as a supernova remnant (SNR), a pulsar wind nebula (PWN) or an H~II region. In the case of PSR~B1259$-$63, which orbits a Be star, \citet{2011KijakB} observed changes in the pulsar's spectrum during different observing sessions. When the pulsar was far away from the star its spectrum showed a typical power-law behaviour, but as it approached the star the spectrum apparently changed into a broken type and then finally showed a curved behaviour with a turnover. The most probable explanation is that the pulsar's radiation was absorbed in the strong stellar wind of the luminous Be star. This case was key to formulating the hypothesis that the observed flux density deficit below GHz frequencies is caused by some external mechanism. Another case strengthening this hypothesis is the flux density variability of the radiomagnetar located near Sagittarius $\mathrm{A}^{*}$ observed after its outburst in 2013, which again suggested that some external factors are responsible for the observed high-frequency turnovers \citep[see][and references therein]{2015Lewandowski}. The low frequency part of the radiomagnetar spectrum showed lower values a week after the outburst compared to the measurements a month later. The spectral shape continued to exhibit a deviation from a simple power-law behaviour even 100 days after the outburst \citep{2015Pennucci}. The observed spectral changes could be explained by the constant absorption of the radio emission by the matter in the pulsar's vicinity and additional absorption caused by the matter released during the outburst. The GPS phenomenon was also visible in two other radiomagnetars \citep[see for details][]{2013Kijak}.
To summarize, currently we know of around 30 GPS pulsars, where the majority of them were classified by \citet[][and 2021 (in preparation)]{2018Basu, 2014Dembska, 2015DembskaB, 2011KijakA, 2017Kijak}, one was discovered by \citet{2013Allen} and three were identified by \citet{2017Jankowski}.
\par \citet{2015Lewandowski} proposed to use the free-free thermal absorption model to explain the observed turnovers in pulsar spectra. They showed that the observed absorption could be caused by a surrounding medium in the form of either a dense SNR filament, a bow-shock PWN (where the amount of absorption depends on the geometry of the system) or a relatively cold H~II region. This model was applied to study the environmental conditions around a number of GPS pulsars \citep{2016Basu, 2017Kijak, 2018Basu, 2018Rozko, 2020Rozko}. A similar approach was also used by \citet{2016Rajwade} to explain the turnover behaviour.
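\par A schematic form of the absorbed power-law used in such thermal absorption fits (the normalization $A$, the reference frequency $\nu _{0}$ and the optical depth parameter $B$ are illustrative here, not the exact parametrization of any particular fit) is
\begin{equation*}
S_{\nu }=A\left( \frac{\nu }{\nu _{0}}\right) ^{\alpha }e^{-B\left( \nu /\nu _{0}\right) ^{-2.1}},
\end{equation*}
where the exponential factor represents the free-free optical depth of the thermal absorber, which scales approximately as $\nu ^{-2.1}$, so that the spectrum recovers the intrinsic power-law $\nu ^{\alpha }$ at high frequencies and turns over where the optical depth approaches unity.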
\par The main limitation of previous studies in estimating the low-frequency spectrum, and thereby constraining the GPS nature, was the poor coverage of flux density measurements in the low frequency domain. For some of the GPS candidates only two narrow-band flux density measurements exist at the low frequency range (below 1~GHz). A near continuous frequency coverage between 300 and 800~MHz, i.e. around the expected peak frequency, should allow us to better characterize the shape of the spectra. This motivated us to use the wide-band receivers of the Giant Metrewave Radio Telescope (GMRT) to study the low frequency spectral nature of candidate GPS pulsars using the interferometric technique\footnote{The compatibility of the pulsar flux densities measured in standard phase-resolved pulsar observations and imaging observations was shown for example by \citet{2016Basu}.}. Previous narrow-band observations using the GMRT suggested that three pulsars, J1741$-$3016, J1757$-$2223 and J1845$-$0743, are likely to have GPS spectra. We have subsequently expanded on the earlier observations and used the wide-band receivers to measure the pulsars' flux densities with the aim to ascertain the spectral nature of the three sources. In this work we present the results of both kinds of observations: the narrow-band measurements at central frequencies of 325~MHz, 610~MHz, and 1280~MHz; and the wide-band measurements in two spectral bands: 250-500~MHz and 550-850~MHz.
The outline of the paper is as follows. In Section 2 we describe the observations and the calibration techniques used to estimate the flux density. In Section 3 we present analysis of the measured spectra in each pulsar and the implications on their respective environments. In Section 4 we summarise the results of narrow-band and wide-band observations.
\section{Observations and data analysis} \label{sec:obs}
The observations were conducted using GMRT which is an array of thirty 45-meters parabolic dishes. For many years GMRT was strictly a narrow-band instrument that allowed observations at five different frequency ranges centered around: 153~MHz, 235~MHz, 325~MHz, 610~MHz and 1280~MHz with a maximum bandwidth of 33~MHz \citep{2010Roy}. Recently its receiver system was upgraded and now provides a near continuous coverage at four wide frequency bands: $120-250$~MHz (band-2), $250-500$~MHz (band-3), $550-850$~MHz (band-4), $1050-1450$~MHz (band-5) \citep{2017Gupta}.
\begin{table}
\centering
\caption{Observing details}
\label{tab:obs_details1}
\begin{tabular}{lccc}
\hline
Obs Date & Frequency & Phase Cal & Calibrator Flux\\
& MHz & & Jy \\
\hline
2015 Aug 15 & 610 & $1714-252$ & $4.7 \pm 0.3$ \\
2015 Aug 19 & 610 & $1822-096$ & $6.1 \pm 0.4$\\
2015 Aug 28 & 610 & $1714-252$ & $4.5 \pm 0.3$ \\
2015 Aug 29 & 610 & $1822-096$ & $6.0 \pm 0.4$\\
2017 Aug 24 & 325 & $1822-096$ & $3.5 \pm 0.2$\\
2017 Aug 24 & 1280 & $1822-096$ & $5.4 \pm 0.4$\\
2017 Sep 19 & 325 & $1822-096$ & $3.6 \pm 0.2$\\
2017 Sep 22 & 1280 & $1822-096$ & $5.9 \pm 0.4$\\
2018 May 2, 30 & 348 & $1822-096$ & $3.4 \pm 0.2$ \\
& 392 & $1822-096$ & $3.6 \pm 0.2$ \\
& 416 & $1822-096$ & $3.7 \pm 0.1$ \\
& 441 & $1822-096$ & $3.6 \pm 0.1$ \\
2018, May 15 & 584 & $1822-096$ & $6.6 \pm 0.7$ \\
& 638 & $1822-096$ & $6.4 \pm 0.7$ \\
& 691 & $1822-096$ & $5.9 \pm 0.7$ \\
& 744 & $1822-096$ & $5.9 \pm 0.6$ \\
& 791 & $1822-096$ & $5.9 \pm 0.6$ \\
2018 June 11 & 584 & $1822-096$ & $6.6 \pm 0.7$ \\
& 638 & $1822-096$ & $6.4 \pm 0.7$ \\
& 691 & $1822-096$ & $6.1 \pm 0.6$ \\
& 744 & $1822-096$ & $5.9 \pm 0.6$ \\
& 791 & $1822-096$ & $6.2 \pm 0.7$ \\
\hline
\end{tabular}
\end{table}
The narrow-band observations at 610~MHz were conducted in August 2015 (project code: 28\_072), while the 325~MHz and 1280~MHz observations were carried out in August--September 2017 (project code: 32\_072). These observations were part of a larger project studying the GPS nature of pulsar spectra, which will be published in Kijak et al. (2021; in preparation). After the initial analysis of these observations, the pulsars PSR J1741$-$3016, PSR J1757$-$2223 and PSR J1845$-$0743 were identified as possible GPS candidates. These three sources were selected for the observations with the wide-band GMRT receivers, which were conducted in May--June 2018 (project code: 34\_027).
A total of 2048 spectral channels over the entire frequency band were recorded during the wide-band observations in Band-3 ($250-500$~MHz) and Band-4 ($550-850$~MHz). Initial checks were carried out to identify the sub-bands devoid of significant interference for the subsequent image analysis and flux measurements. Band-3 was divided into six sub-bands, each approximately 30~MHz wide (256 channels), and Band-4 was divided into five sub-bands, each approximately 50~MHz wide (256 channels). After further inspection the sub-bands at the leading and trailing edges of the Band-3 sensitivity profile were excluded due to their non-linear shape. The observing details together with the central frequency of each sub-band are shown in Table~\ref{tab:obs_details1}. The subsequent analysis of the sub-bands was identical to that of the narrow-band observations, as detailed below.
The flux calibrators 3C~286 and 3C~48 were observed during each observing run to calibrate the flux density scale. The phase calibrator $1822-096$ was observed at regular intervals to correct for temporal variations and fluctuations across the frequency band (with the exception of 2015 August 15 and 28, when the phase calibrator $1714-252$ was observed). All pulsars were observed for around 60 minutes during two observing sessions separated by a few weeks to take into account the possible influence of interstellar scintillation. The flux scales of 3C~48 and 3C~286 were set using the estimates from \citet{2013Perley}, which were subsequently used to calculate the flux densities of the different phase calibrators during each observing session. The observing details, such as the observation dates, the measurement frequencies and the estimated flux levels of the phase calibrators, are summarized in Table~\ref{tab:obs_details1}. There were issues with the flux calibrator measurements during one of the observing sessions, 2018 May 30 at Band-3. However, an identical observing setup was used on 2018 May 2, and the flux calibration from that day was used for setting the flux scale of the 2018 May 30 observation as well. Additional checks, using the flux levels of background sources similar to \citet{2018Rozko}, were conducted to ensure the accuracy of the flux scaling within measurement errors. The removal of bad data, the calibration and the image analysis were carried out using the Astronomical Image Processing System (AIPS), as described previously by \citet{2015DembskaB, 2017Kijak, 2018Basu}.
\section{Results} \label{sec:results}
In Table~\ref{tab:pulsars_fluxes} we report the measured flux densities of the pulsars from the three narrow-band observations (325~MHz, 610~MHz and 1280~MHz) as well as from the four sub-bands in Band-3 and the five sub-bands in Band-4. We adopted proportional errors of 20\% to account for variations in the flux scaling factors across the wide-band observations.
\begin{table}[ht!]
\centering
\caption{Pulsar flux density measurements.}
\label{tab:pulsars_fluxes}
\begin{tabular}{cccc}
\hline
Frequency & \multicolumn{3}{c}{Pulsar flux density} \\
MHz & \multicolumn{3}{c}{mJy} \\
\hline
& J1741$-$3016 & J1757$-$2223 & J1845$-$0743 \\
325 & $1.8 \pm 0.9$ & $< 1.05$ & $1.8 \pm 0.3$ \\
348 & $2.1 \pm 0.4$ & $< 1.90$ & $2.4 \pm 0.2$ \\
392 & $2.5 \pm 0.4$ & $< 1.45$ & $2.5 \pm 0.2$ \\
416 & $2.4 \pm 0.3$ & $< 1.02$ & $2.8 \pm 0.1$ \\
441 & $2.3 \pm 0.2$ & $1.2 \pm 0.5$ & $2.7 \pm 0.1$ \\
584 & $5.5 \pm 0.5$ & $1.8 \pm 0.2$ & $4.8 \pm 0.4$ \\
610 & $3.2 \pm 0.3$ & $1.5 \pm 0.2$ & $4.3 \pm 0.8$ \\
638 & $5.1 \pm 0.5$ & $1.6 \pm 0.2$ & $4.9 \pm 0.4$ \\
691 & $5.3 \pm 0.6$ & $1.5 \pm 0.2$ & $4.6 \pm 0.4$ \\
744 & $3.8 \pm 0.4$ & $1.7 \pm 0.2$ & $4.9 \pm 0.4$ \\
791 & $3.8 \pm 0.5$ & $1.4 \pm 0.2$ & $4.6 \pm 0.4$ \\
1280 & $2.6 \pm 0.3$ & $1.0 \pm 0.1$ & $3.0 \pm 0.2$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\columnwidth]{J1741_pub2.png}
\includegraphics[width=0.98\columnwidth]{J1757_pub2.png}
\includegraphics[width=0.98\columnwidth]{J1845_pub2.png}
\caption{The figure shows the measured spectra of the three pulsars J1741$-$3016 (upper panel), J1757$-$2223 (middle panel) and J1845$-$0743 (lower panel), which exhibit GPS behaviour. In each case the spectral shape is approximated with the free-free thermal absorption model (dark line) along with $1\sigma$ envelopes of the model fits (dotted lines). The different flux measurements shown in the figure are as follows: GMRT 2015 denotes the narrow-band observations conducted at 610~MHz and GMRT 2017 denotes the narrow-band measurements at 325 and 1280~MHz. The filled diamonds denote the recent wide-band observations, and the high frequency measurements are taken from the works of \citet{2017Jankowski,2018Johnston}.
\label{fig:spectrum1}}
\end{figure}
Figure~\ref{fig:spectrum1} (top panel) presents the spectrum of PSR J1741$-$3016 with all available flux density measurements. The new measurements confirm the GPS character of this spectrum. The observed discrepancy between the Band-3 measurements and the flux density value at 610~MHz is likely due to refractive interstellar scintillation (RISS). The dispersion measure (DM) of PSR J1741$-$3016 is $382~\mathrm{cm}^{-3}\mathrm{pc}$ \citep{2002Morris}. For pulsars with such high dispersion measures the timescale of RISS can be of the order of months or even years, and the modulation index near 600~MHz should be around 0.3 \citep{1990Rickett}. Thus RISS could be responsible for flux density fluctuations between measurements separated by a few years, but it should not affect the mean flux density values at each frequency obtained from two observing sessions separated by only a few weeks. In turn, the diffractive interstellar scintillation (DISS) timescale in this case is very short (of the order of a few minutes), and thus any intensity fluctuations caused by DISS should be completely averaged out during each observing session.
The middle panel of Figure~\ref{fig:spectrum1} shows the spectrum of PSR J1757$-$2223, for which the pulsar flux at frequencies lower than 441~MHz was below the detection limits (reported in Table~\ref{tab:pulsars_fluxes}). These results confirm the GPS classification of the spectrum, and the wide-band measurements are consistent with the flux densities obtained from the narrow-band observations. The DM of PSR J1757$-$2223 is $239.3~\mathrm{cm}^{-3}\mathrm{pc}$ \citep{2002Morris}, hence the pulsar should be less affected by RISS. This case shows that for pulsars with expected flux density values between 1 and 2~mJy in the low frequency band, the observing time should be increased in future observations to improve the detection sensitivity.
In the case of PSR J1845$-$0743 (with $\mathrm{DM} = 280.93~\mathrm{cm}^{-3}\mathrm{pc}$, \citealt{2013Petroff}) both the narrow-band and wide-band measurements confirm that the spectrum should be classified as GPS (see the bottom panel of Fig.~\ref{fig:spectrum1}). The low frequency wide-band measurements show some fluctuations, but are consistent within the measurement errors.
\subsection{Physical Constraints on the Surrounding Medium}
In this work, similar to several earlier studies, we have used the free-free thermal absorption model to explain the observed turnovers in pulsar spectra \citep[for details see e.g.][]{2015Lewandowski, 2017Kijak}. This model was first proposed by \citet{1973Sieber} to explain low-frequency turnovers. In our approach we use a simplified model of the optical depth \citep{2013Wilson}, which gives the following estimate of the flux density $S$ as a function of frequency $\nu$:
\begin{equation}
S_{\nu} = A \left( \frac{\nu}{10 \mathrm{~GHz}} \right)^{\alpha} e^{-B\nu^{-2.1}}
\end{equation}
where $A$ is the intrinsic flux density at 10~GHz, $\alpha$ is the intrinsic pulsar spectral index and $B = 0.08235 \times T_{\mathrm{e}}^{-1.35}~\mathrm{EM}$, where EM is the emission measure, $T_{\mathrm{e}}$ is the temperature of the absorber and $\nu$ is expressed in GHz. Using the Levenberg-Marquardt least squares algorithm \citep{1944Levenberg, 1963Marquardt} we determined the parameters $A$, $\alpha$ and $B$, and estimated their errors using $\chi^2$ mapping. Table~\ref{tab:results1} shows the results of the fits and Figure~\ref{fig:spectrum1} shows the fitted models with $1\sigma$ envelopes. Due to the lack of low frequency measurements, all three pulsar spectra were previously classified as typical power-law spectra
\citep{2017Jankowski}. All observed pulsars may now be classified as new GPS sources: the calculated peak frequency ($\nu_{\mathrm{p}}$), i.e. the frequency at which the spectrum exhibits a maximum, is 620~MHz for PSR J1741$-$3016, 640~MHz for PSR J1757$-$2223 and 650~MHz for PSR J1845$-$0743.
The main purpose of the wide-band observations was to improve the model approximation and thus determine the position of the spectral maximum with higher accuracy. To check how the wide-band observations improved the quality of the peak determination, we compare the peak frequency obtained from all available flux measurements with that obtained from the narrow-band observations alone, within their error limits. The high frequency measurements in each case were taken from \citet{2017Jankowski,2018Johnston}. The peak frequency from the purely narrow-band estimates is shown as $\nu_{\mathrm{p}}^{\mathrm{nb}}$ in Table~\ref{tab:results1}. No spectral turnover can be identified for PSR J1757$-$2223 from the narrow-band observations alone, due to the lack of a detection at 325~MHz. In the case of PSR J1741$-$3016 the peak frequency constrained by the narrow-band observations alone is $800^{+400}_{-400}$~MHz, whereas the wide-band data give a much better estimate of $620^{+270}_{-220}$~MHz. In the case of PSR J1845$-$0743 both sets of measurements give similar results (see Table~\ref{tab:results1}).
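The fit and the peak determination can be illustrated with a short script. The sketch below is not the code used for the published fits: it applies SciPy's Levenberg-Marquardt driver to the Table~\ref{tab:pulsars_fluxes} measurements of PSR J1845$-$0743 alone, whereas the published values additionally use the high frequency measurements of \citet{2017Jankowski,2018Johnston} and estimate the errors by $\chi^2$ mapping. The peak follows analytically from $dS_{\nu}/d\nu = 0$, which gives $\nu_{\mathrm{p}} = (2.1\,B/(-\alpha))^{1/2.1}$ for $\alpha<0$.

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_model(nu_ghz, A, alpha, B):
    # Free-free thermal absorption model of Eq. (1):
    # S_nu = A * (nu / 10 GHz)^alpha * exp(-B * nu^-2.1), nu in GHz.
    return A * (nu_ghz / 10.0) ** alpha * np.exp(-B * nu_ghz ** -2.1)

def peak_frequency(alpha, B):
    # Setting dS/dnu = 0 gives nu_p = (2.1 * B / (-alpha))^(1/2.1) in GHz,
    # valid for alpha < 0 and B > 0.
    return (2.1 * B / -alpha) ** (1.0 / 2.1)

# Flux density measurements of PSR J1845-0743 from Table 2 (GHz, mJy).
nu = np.array([0.325, 0.348, 0.392, 0.416, 0.441, 0.584,
               0.610, 0.638, 0.691, 0.744, 0.791, 1.280])
S = np.array([1.8, 2.4, 2.5, 2.8, 2.7, 4.8,
              4.3, 4.9, 4.6, 4.9, 4.6, 3.0])
dS = np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.4,
               0.8, 0.4, 0.4, 0.4, 0.4, 0.2])

# Levenberg-Marquardt least-squares fit ('lm' is SciPy's LM driver).
popt, pcov = curve_fit(flux_model, nu, S, p0=[0.2, -1.4, 0.3],
                       sigma=dS, absolute_sigma=True, method='lm')
A_fit, alpha_fit, B_fit = popt
nu_p = peak_frequency(alpha_fit, B_fit)
print(f"A = {A_fit:.2f} mJy, alpha = {alpha_fit:.2f}, B = {B_fit:.2f}")
print(f"peak frequency: {1000.0 * nu_p:.0f} MHz")
```

With the initial guess taken near the Table~\ref{tab:results1} values, the fit should converge to parameters comparable to the published ones; for the tabulated $\alpha=-1.4$ and $B=0.3$ the analytic formula gives $\nu_{\mathrm{p}}\approx 680$~MHz, consistent with the quoted $650^{+290}_{-210}$~MHz.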
\begin{table}
\centering
\caption{Fitted parameters of the thermal absorption model for the GPS spectra.}
\begin{tabular}{c c c c c c c} \hline
PSR name & A & B & $\alpha$ & $\chi^2$ & $\nu_{\mathrm{p}}$ & $\nu_{\mathrm{p}}^{\mathrm{nb}}$\\
& & & & & GHz & GHz\\ \hline
J1741$-$3016 & $0.03^{+0.05}_{-0.02}$ & $0.4^{+0.2}_{-0.2}$ & $-2.2^{+0.6}_{-0.8}$ & $1.52$ & $0.62^{+0.27}_{-0.22}$ & $0.80^{+0.40}_{-0.40}$ \\
J1757$-$2223 & $0.07^{+0.06}_{-0.04}$ & $0.3^{+0.2}_{-0.1}$ & $-1.4^{+0.3}_{-0.4}$ & $0.23$ & $0.64^{+0.29}_{-0.25}$ & - \\
J1845$-$0743 & $0.2^{+0.3}_{-0.2}$ & $0.3^{+0.1}_{-0.1}$ & $-1.4^{+0.4}_{-0.6}$ & $0.85$ & $0.65^{+0.29}_{-0.21}$ & $0.62^{+0.17}_{-0.13}$\\
\hline
\end{tabular}
\label{tab:results1}
\end{table}
\begin{table}
\centering
\caption{The constraints on the physical parameters of the absorbing medium.}
\begin{tabular}{c c c c } \hline
Size & n$_{\mathrm{e}}$ & EM & T$_{\mathrm{e}}$ \\
pc & cm$^{-3}$ & pc cm$^{-6}$ & K \\ \hline
\multicolumn{4}{c}{J1741$-$3016} \\
0.1 & $1910\pm{30}$ & $3650\pm{114}$ & 4170$^{+1680}_{-1440}$ \\
1.0 & $191\pm{3}$ & $365\pm{11}$ & 760$^{+305}_{-260}$ \\
10.0 & $19.1\pm{0.3}$ & $36.5\pm{1.1}$ & 137$^{+55}_{-48}$ \\ \hline
\multicolumn{4}{c}{J1757$-$2223} \\
0.1 & 1196$\pm{2}$ & 1431.6$\pm{4.8}$ & 2900$^{+1380}_{-1210} $ \\
1.0 & 119.6$\pm{0.2}$ & $143.16\pm{0.48}$ & $530^{+250}_{-220}$ \\
10.0 & 11.96$\pm{0.02} $ & $14.316\pm{0.048}$ & $96^{+46}_{-40}$ \\ \hline
\multicolumn{4}{c}{J1845$-$0743} \\
0.1 & $1404.6\pm{0.1}$ & $1973.0\pm{0.3}$ & $3570^{+1220}_{-1020} $ \\
1.0 & $140.46\pm{0.01}$ & $197.30\pm{0.03}$ & $650^{+220}_{-180}$ \\
10.0 & $14.046\pm{0.001} $ & $19.730\pm{0.003}$ & $118^{+40}_{-34}$ \\ \hline
\end{tabular}
\label{tab:results2}
\end{table}
\begin{table}[ht!]
\centering
\caption{The basic parameters of the pulsars.\footnote{All values come from the ATNF Pulsar Catalogue: https://www.atnf.csiro.au/research/pulsar/psrcat \\ \citep{2005Manchester}}}
\begin{tabular}{c c c c c} \hline
PSR name & Distance & Age & DM & $\nu_{\mathrm{p}}$\\
 & kpc & Myr & cm$^{-3}$ pc & MHz \\ \hline
J1741$-$3016 & $3.870$ & $3.34$ & $382$ & $620^{+270}_{-220}$\\
J1757$-$2223 & $3.727$ & $3.75$ & $239.3$ & $640^{+290}_{-250}$\\
J1845$-$0743 & $7.113$ & $4.52$ & $280.93$ & $650^{+290}_{-210}$\\ \hline
\end{tabular}
\label{tab:results3}
\end{table}
Since there is no clear detection of a known supernova remnant or pulsar wind nebula in the vicinity of these pulsars, the discussion of potential absorbers is more speculative. Nonetheless, we decided to follow \cite{2016Basu} and \cite{2017Kijak} and used the information from the pulsar dispersion measure (DM) to obtain constraints on the electron density and temperature of a potential absorber. Similar to these earlier works, we assumed that half of the contribution to the DM comes from the potential absorber and used it to calculate the absorber's electron density $n_{\mathrm{e}}$. Using that information we calculated the emission measure for three likely environments: a dense supernova remnant filament (with a size of $0.1$~pc), a pulsar wind nebula (with a size of $1.0$~pc) and a cold H~II region (with a size of $10.0$~pc). In each case the fitted value of the parameter $B$ provided the constraints on the electron temperature. The results are shown in Table~\ref{tab:results2}.
The expected values of the electron density $n_{\mathrm{e}}$ and the electron temperature $T_{\mathrm{e}}$ are:
\begin{itemize}
\item $n_{\mathrm{e}} \sim$ a few thousand cm$^{-3}$ for $T_{\mathrm{e}} \sim 5000$~K in the case of a dense supernova remnant filament \citep[see e.g.][]{2013Lee};
\item $n_{\mathrm{e}} \sim 50-250$~cm$^{-3}$ and $T_{\mathrm{e}} = 1500$~K for bow-shock pulsar wind nebulae \citep[see][and references therein]{2002Bucciantini,2006Gaensler};
\item $n_{\mathrm{e}} \sim$ several hundred cm$^{-3}$ and $T_{\mathrm{e}} = 1000 - 5000$~K for an H II region \citep[see][and references therein]{2006Shabala}.
\end{itemize}
The wide-band observations allow us to determine the shape of the spectrum with higher accuracy and thus help to eliminate some of the possible absorbers. In all cases the H~II region scenario should be excluded, since the obtained electron temperatures are too low. On the other hand, the electron densities calculated from the DM are too low to sustain a dense supernova remnant filament, and the ages of the pulsars (see Table~\ref{tab:results3}) indicate that any supernova remnant formed at their birth should have already dissipated. Thus the PWN scenario seems the most plausible here, although there is no clear detection of a known PWN around any of these sources. This is not surprising, since the angular size of a structure of 1~pc diameter at the distance of each pulsar turns out to be between 0.5 and 0.9 arcseconds. This is well below the angular sizes that can be resolved by an interferometer like GMRT, whose angular resolution is at best a few arcseconds.
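The constraint chain used here — half of the DM attributed to an absorber of assumed size $L$, hence $n_{\mathrm{e}}$ and $\mathrm{EM}=n_{\mathrm{e}}^2 L$, and then $T_{\mathrm{e}}$ from the fitted $B$ — can be sketched numerically. This is an illustrative calculation rather than the published one; it assumes that $\nu$ in Eq.~(1) is expressed in GHz and uses the rounded $B$ of Table~\ref{tab:results1}, which approximately reproduces the temperatures of Table~\ref{tab:results2}.

```python
# Illustrative absorber constraints for PSR J1741-3016 (DM = 382 cm^-3 pc,
# fitted B = 0.4).  Half of the DM is attributed to an absorber of size L,
# giving n_e; inverting B = 0.08235 * T_e^-1.35 * EM (nu in GHz,
# EM = n_e^2 * L in pc cm^-6) then yields the electron temperature.

def electron_temperature(dm, B, size_pc):
    n_e = 0.5 * dm / size_pc                     # electron density, cm^-3
    em = n_e ** 2 * size_pc                      # emission measure, pc cm^-6
    t_e = (0.08235 * em / B) ** (1.0 / 1.35)     # electron temperature, K
    return n_e, em, t_e

for size in (0.1, 1.0, 10.0):   # SNR filament, PWN, H II region
    n_e, _, t_e = electron_temperature(382.0, 0.4, size)
    print(f"L = {size:4.1f} pc: n_e = {n_e:7.1f} cm^-3, T_e = {t_e:5.0f} K")
```

For $L=1$~pc this gives $T_{\mathrm{e}}\approx 740$~K, compatible with the tabulated $760^{+305}_{-260}$~K; the small offsets come from using the rounded value of $B$.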
Taking into account the basic pulsar parameters such as age, distance and DM (see Table~\ref{tab:results3}), together with the observed turnovers in the spectra, we believe that these three pulsars are good candidates to host pulsar wind nebulae. Even if such nebulae have not been observed so far, future advances in observing techniques with upcoming instruments like the Square Kilometre Array (SKA) may enable their detection.
\section{Conclusions}
In this work we present the results of wide-band observations of three pulsars using GMRT. We identified three new GPS pulsars, taking advantage of the dense frequency coverage, which improves the quality of the low frequency spectrum estimation. The wide-band observations are highly useful in establishing the GPS behaviour, especially since the observing frequencies are near the turnover in the spectrum. A more precise determination of the peak frequency allows us to better constrain the nature of the surrounding medium and to eliminate several potential absorbers.
All three pulsars selected for wide-band observations were found to exhibit GPS-type spectra, confirming that our methods and criteria for selecting potential candidates were correct. The case of PSR J1757$-$2223 will also help us to prepare future observational projects related to wide-band observations of GPS pulsars: a pulsar with an expected flux density between 1 and 2~mJy in Band-3 should be observed for a longer duration, in excess of the 60 minutes used here, to improve the detection sensitivity.
The discussion of potential absorbers has shown that all three pulsars are good candidates for the search for pulsar wind nebulae. Even if such nebulae have not been discovered in current sky surveys, improvements in observing techniques, both in the X-ray and radio ranges, should enable their detection in the future.
\begin{acknowledgments}
We thank the staff of the GMRT who have made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. This work was supported by the grant 2020/37/B/ST9/02215 of the National Science Centre, Poland.
\end{acknowledgments}
Let $f:X\to Y$ be a semistable family of $n$-dimensional projective manifolds over a projective $m$-fold $Y$.
Let $\sL$ be a line bundle on $X$. The volume of $\sL$ is defined as
$$
v(\sL)=\limsup \frac{\dim(X)! \cdot \dim(H^0(X,\sL^\nu))}{\nu^{\dim(X)}}.
$$
If $\sL$ is nef, then \cite[Lemma 3.1]{V82} says that
$$
\dim(H^i(X,\sL^\nu))\leq a_i\cdot \nu^{\dim(X)-i}
$$
and hence the Hirzebruch-Riemann-Roch Theorem implies that
$v(\sL)=\ch_1(\sL)^{\dim(X)}$.
In this paper we study the upper bound of the volume of
$\omega_{X/Y}$. The main result can be stated as follows.
\begin{theorem}\label{mainresult1}
Let $f:X\to Y$ be a semistable non-isotrivial family of minimal
$n$-folds over a curve $Y$ of genus $b$ (i.e. for every smooth
fibre $F_y$ of $f$ the canonical line bundle $\omega_{F_y}$ is
semiample). Denote by
$s=\# S$ the number of singular fibres of $f$, lying over $S\subset Y$.
Then
\begin{align}\label{EQ1.1}
v(\omega_{X/Y}) \leq \frac{(n+1)n}{2} \cdot v(\omega_F) \cdot
\deg\Omega^1_Y(\log S).
\end{align}
In particular, if $b\geq 1$, then we get
\begin{align}\label{EQ1.2}
v(\omega_{X}) \leq v(\omega_F) \cdot \left( \frac{(n+1)(n+2)}{2}
v(\omega_Y) + \frac{n(n+1)s}{2}\right).
\end{align}
\end{theorem}
When $f: X\to Y$ is a non-trivial semistable family of curves of
genus $g\geq 2$, Vojta \cite{Vo88} showed the following canonical
class inequality using the famous Miyaoka-Yau inequality,
\begin{align}
K_{X/Y}^2=v(\omega_{X/Y}) \leq \deg(\omega_F)
\cdot\deg\Omega^1_Y(\log S)= (2g-2) \cdot (2b-2+s),
\end{align}
which is the special case $n=1$ of Theorem \ref{mainresult1}. The second
author proved that Vojta's inequality is strict when $s\neq 0$
\cite[Lemma 3.1]{Ta95}, and generalized it to the non-semistable
case \cite[Theorem 4.7]{Ta96}. K. F. Liu \cite{Li96} proved that
Vojta's inequality is strict in all cases using a differential
geometric method.
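To make the specialization explicit: for $n=1$ the fibres are curves of genus $g$, so $v(\omega_F)=\deg(\omega_F)=2g-2$ and $\deg\Omega^1_Y(\log S)=2b-2+s$; moreover $\omega_{X/Y}$ is nef for a semistable family of curves of genus $g\geq 2$, so $v(\omega_{X/Y})=K_{X/Y}^2$. Inequality (\ref{EQ1.1}) then reads

```latex
\[
K_{X/Y}^2 \;\leq\; \frac{(1+1)\cdot 1}{2}\,
v(\omega_F)\cdot\deg\Omega^1_Y(\log S)
\;=\; (2g-2)\,(2b-2+s),
\]
```

recovering Vojta's inequality.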
The idea of our proof of Theorem \ref{mainresult1} is to use an
Arakelov-type inequality to obtain the canonical class inequality.
Viehweg and the third author \cite{VZ01, VZ05} obtained an
Arakelov-type inequality for $\mu(f_{*}{\omega}_{X/Y}^{\otimes\nu})$,
\begin{align}
\mu(f_{*}{\omega}_{X/Y}^{\otimes\nu})\le \frac{n\nu}{2}(2b-2+s),
\end{align}
where $\mu(f_{*}{\omega}_{X/Y}^{\otimes\nu})$ is the
slope of the sheaf $f_{*}{\omega}_{X/Y}^{\otimes\nu}$. The key point
of our proof of Theorem \ref{mainresult1} is to view the inequality
(\ref{EQ1.1}) as the limit of the Viehweg-Zuo inequality as $\nu$
tends to infinity. We would like to mention that for $n=1$ one can
recover the Arakelov inequality by combining Vojta's inequality with
the Cornalba-Harris-Xiao slope inequality \cite{CH88, Xi87},
$$
K_{X/Y}^2\geq \dfrac{4g-4}g\deg f_*\omega_{X/Y}.
$$
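Indeed, since $\rk f_*\omega_{X/Y}=h^0(F,\omega_F)=g$, the two inequalities combine as

```latex
\[
\deg f_*\omega_{X/Y} \;\leq\; \frac{g}{4g-4}\,K_{X/Y}^2
\;\leq\; \frac{g}{4g-4}\,(2g-2)\,(2b-2+s)
\;=\; \frac{g}{2}\,(2b-2+s),
\]
```

and dividing by the rank $g$ yields $\mu(f_*\omega_{X/Y})\leq \frac{1}{2}(2b-2+s)$, i.e. the Arakelov inequality above for $n=1$ and $\nu=1$.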
Inequality (\ref{EQ1.2}) gives an upper bound on $v(\omega_X)$. In fact,
Kawamata obtained a lower bound on $v(\omega_X)$ \cite[Theorem
7.1]{Zh07}:
\begin{align}
v(\omega_X)\geq (n+1)\cdot v(\omega_Y)\cdot v(\omega_F).
\end{align}
The following theorem is an analog of Theorem \ref{mainresult1} over a
higher dimensional base.
\begin{theorem}\label{mainresult2}
Let $f:X\to Y$ be a family of $n$-folds over a projective manifold
$Y$ of dimension $m$, which is semi-stable in codimension one. Let
$S$ be a normal crossing divisor on $Y$ such that $\omega_Y(S)$ is
semi-ample and ample with respect to $Y\setminus S$.
Assume that $X$ is projective and that the fibres $F_y=f^{-1}(y)$
for $y\in Y\setminus S$ are minimal, i.e., $\omega_{F_y}$ is
semiample. Assume moreover that for some invertible sheaf $\sL$ on
$X$, with $\sL|_{F_y}$ ample and with Hilbert polynomial $h$, the
induced morphism $\varphi:Y_0 \to M_h$ to the moduli scheme is
generically finite, where $Y_0=Y\setminus S$.
Let $l_0$ denote the smallest integer such that $|l_0\omega_Y(S)|$
defines a birational map. Then we have
\begin{align}
v(\omega_{X/Y})\leq c\cdot v(\omega_F)\cdot v(\omega_Y(S)),
\end{align}
where $c$ is a constant depending only on $n$, $m$ and
$l_0$.
\end{theorem}
Together with Eckart Viehweg we have thought about the Arakelov-type inequality over a higher
dimensional base $Y$. The generalized Arakelov-type inequality plays an important role in
this paper, which therefore should be considered as joint work with Viehweg.
\textbf{Acknowledgements:}
This work was done while the third author was visiting the Center of Mathematical Sciences
at Zhejiang University and East China Normal University. He
would like to thank both institutions for their financial support and hospitality.
The authors also thank Professor De-Qi Zhang and Professor Meng Chen for useful discussions.
\section{Arakelov Inequality}
Let $Y$ be a projective $m$-fold and let $Y_0$ be the complement of a normal crossing divisor $S$ such that $\omega_Y(S)$ is semi-ample and ample with respect to $Y_0$.
For a coherent sheaf $\sK$ on $Y$ we write $\mu(\sK)$ for the slope
$\ch_1(\sK)\cdot\ch_1(\omega_Y(S))^{m-1}/\rk(\sK)$.
By Yau's fundamental theorem on the solution of the Calabi conjecture \cite{Y93}, $\Omega^1_Y(\log S)$ carries a K\"ahler-Einstein metric. Hence $S^k(\Omega^1_Y(\log S))$ is $\mu$-polystable for all $k$.
\begin{proposition}\label{prop1}
Let $f:X\to Y$ be a family of $n$-dimensional manifolds. Assume that $X$ is projective and that
the fibres $F_y=f^{-1}(y)$, for $y\in Y_0$ are minimal, i.e. that $\omega_{F_y}$ is semiample. Assume moreover that for some
invertible sheaf $\sL$ on $X$ with $\sL|_{F_y}$ ample and with Hilbert polynomial $h$ the morphism $\varphi:Y_0 \to M_h$ is generically finite, and that $f:X\to Y$ is semi-stable in codimension one.
Then there exists a constant $\rho=\rho(Y,S)\leq 1$ with:\\
Let $\sK_\nu$ be a saturated subsheaf of $(f_*\omega_{X/Y}^\nu)^{\vee\vee}$ for some $\nu \geq 2$. Then
$$
\mu(\sK_\nu) \leq \nu \cdot n\cdot \rho \cdot \mu(\Omega^1_Y(\log S)).
$$
\end{proposition}
\begin{proof}
Replacing $f:X\to Y$ by $f^r:X^{(r)}\to Y$ for a suitable non-singular model
of the $r$-fold fibre product, with $r=\rk(\sK_\nu)$ one finds
$$
\det(\sK_\nu)\subset \Big(\bigotimes^r f_*\omega_{X/Y}^\nu\Big)^{\vee\vee} = \big(f^r_*\omega_{X^{(r)}/Y}^\nu\big)^{\vee\vee}.
$$
Since $\mu(\det(\sK_\nu))=r\mu(\sK_\nu)$
we may assume that $\rk(\sK_\nu)=1$. Let us write $\sK=\sK_\nu$.
Choose a finite covering $\psi:Y''\to Y$ such that $\psi'^*\sK=\sH^\nu$, for an invertible sheaf
$\sH$ on $Y''$, and write $f'':X''\to Y''$ for the pullback family. For
$$\sL=\omega_{X''/Y''}\otimes {f''}^*\sH^{-1}$$
the inclusion $\sH^\nu\to f''_*\omega_{X''/Y''}^\nu$ induces a section $\sigma$ of
$\sL^\nu$. It gives rise to a cyclic covering of $X''$ whose desingularization
will be denoted by $\hat{W}$ (see \cite{EV92}, for example). Then for some divisor
$\hat{T}$ the morphism $\hat{h}:\hat{W}\to Y$ will be smooth over $Y\setminus\hat{T}$, but not semistable
in codimension one. Choose $Y'$ to be a covering, sufficiently ramified, such that the
pullback family has a semistable model over $Y'$ outside of a codimension two subscheme.
From now on we will no longer assume that $Y$, $Y''$ and $Y'$ are projective.
We will just use that those schemes are non-singular and that they are the complement of
subschemes of codimension $\leq 2$ in non-singular compactifications
$\bar{Y}$, $\bar{Y}''$ and $\bar{Y}'$. We will allow ourselves to choose those schemes
smaller and smaller, as long as this condition remains true. In this way, we may talk about
semistable reduction. Moreover, we may assume that all the discriminant divisors
are smooth. Also we can talk about the slopes in this set-up.
Next choose $W'$ to be a $\Z/\nu$ equivariant desingularization of $\hat{W}\times_YY'$, and $Z$ to be a desingularization of the quotient.
Finally let $W$ be the normalization of $Z$ in the function field of $\hat{W}\times_YY'$.
So we have a diagram of proper morphisms
\begin{equation}\label{eqco.1}
\begin{CD}
W @>\tau >> Z @> \delta >> X' @> \varphi' >> X'' @>\psi' >> X\\
@V h VV @V g VV @V f' VV @V f'' VV @V f VV\\
Y' @> = >> Y' @> = >> Y' @> \varphi >> Y'' @>\psi >> Y.
\end{CD}
\end{equation}
The $\nu$-th power of the sheaf $\sM=\delta^*\varphi'^*\sL$ has the section $\sigma'=\delta^*\varphi'^*(\sigma)$.
The sum of its zero locus and the singular fibres will become a normal crossing divisor after
a further blowing up. Replacing $Y'$ by a larger covering, one may assume that $Z\to Y'$ is semistable,
and that $Z$ and $D$ satisfy the assumption iii) stated below.
For a suitable choice of $T$ one has the following conditions:
\begin{enumerate}
\item[i.] $X'=X\times_YY'$, and $\tau:W\to Z$ is the finite covering obtained by taking the $\nu$-th root out of $\sigma' \in H^0(Z,\sM^\nu)$.
\item[ii.] $g$ and $h$ are both smooth over $Y'\setminus T'$
for a divisor $T'$ on $Y'$ containing $\varphi^{-1}(S+T)$.
Moreover $g$ is semistable and the local monodromy of $R^nh_*\C_{W\setminus h^{-1}(T')}$
in $t\in T'$ are unipotent.
\item[iii.] $\delta$ is a modification, and $Z\to Y'$ is semistable. Writing $\Delta'=g^*T'$ and $D$ for the zero divisor of $\sigma'$ on $Z$, the divisor
$\Delta'+D$ has normal crossing and $D_{\rm red}\to Y'$ is \'etale over $Y'\setminus T'$.
\item[iv.] $\delta_*(\omega_{Z/Y'} \otimes \sM^{-1})=\varphi^*(\sH)$.
\end{enumerate}
In fact, since $f:X\to Y$ is semistable, $X'$ has at most rational double points. Then
$$
\delta_*(\omega_{Z/Y'} \otimes \delta^*\varphi'^*\omega_{X/Y}^{-1})=
\delta_*(\omega_{Z/Y'} \otimes \delta^*\omega_{X'/Y'}^{-1})=\delta_*\omega_{Z/X'}= \sO_{X'},
$$
which implies iv). The properties i), ii) and iii) hold by construction.
$W$ might be singular, but the sheaf $\Omega_{W/Y'}^p(\log \tau^*\Delta')=\tau^*\Omega_{Z/Y'}^p(\log
\Delta')$ is locally free and compatible with desingularizations.
The Galois group $\Z/\nu$ acts on the direct image sheaves
$\tau_*\Omega_{W/Y'}^p(\log \tau^*\Delta')$. As in \cite{EV92} or \cite[Section 3]{VZ05} one has the following description of the sheaf of eigenspaces.
\begin{claim}
Let $\Gamma'$ be the sum over all components of $D$, whose multiplicity
is not divisible by $\nu$. Then the sheaf
$$
\Omega^p_{Z/Y'}(\log (\Gamma'+\Delta'))\otimes \sM^{-1} \otimes \sO_{Z}\big(\big[\frac{D}{\nu}\big]\big),
$$
is a direct factor of ${\tau}_*\Omega^p_{W/Y'}(\log {\tau}^*\Delta')$. Moreover the $\Z/\nu$ action on $W$
induces a $\Z/\nu$ action on
$$
\W=R^nh_*\C_{W\setminus \tau^{-1}\Delta'}
$$
and on its Higgs bundle. One has a decomposition of $\W$ in a direct sum of sub variations of Hodge structures, given by the eigenspaces for this action, and the Higgs bundle of one of them is of the form
$ G=\bigoplus_{q=0}^n G^{n-q,q}$ for
$$
G^{p,q}=R^qg_*\big(\Omega^{p}_{Z/Y'}(\log (\Gamma'+\Delta'))\otimes \sM^{-1}
\otimes \sO_{Z}\big(\big[\frac{D}{\nu}\big]\big)\big).
$$
The Higgs field $\theta_{p,q}:G^{p,q} \to G^{p-1,q+1}\otimes \Omega^1_{Y'}(\log T')$ is induced by the edge
morphisms of the exact sequence
\begin{multline}\label{eqco.2}
0\longrightarrow
\Omega^{p-1}_{Z/Y'}(\log (\Gamma'+\Delta'))\otimes {g}^* \Omega^1_{Y'}(\log T')\\
\longrightarrow {\mathfrak g} \Omega^{p}_{Z}(\log (\Gamma'+\Delta'))
\longrightarrow \Omega^{p}_{Z/Y'}(\log (\Gamma'+\Delta')) \longrightarrow 0,
\end{multline}
tensorized with $\sM^{-1} \otimes \sO_{Z}\big(\big[\frac{D}{\nu}\big]\big)$.
Here ${\mathfrak g} \Omega^{p}_{Z}(\log (\Gamma'+\Delta'))$ denotes the quotient of
$\Omega^{p}_{Z}(\log (\Gamma'+\Delta'))$ by the subsheaf
$\Omega^{p-2}_{Z}(\log (\Gamma'+\Delta'))\otimes {g}^* \Omega^2_{Y'}(\log T').$
\end{claim}
The sheaf
$$
G^{n,0}=g_*\big(\Omega^n_{Z/Y'}(\log (\Gamma'+\Delta'))\otimes \sM^{-1}
\sO_{Z}\big(\big[\frac{D}{\nu}\big]\big)\big)
$$
contains the invertible sheaf
$$
g_*\big(\Omega^n_{Z/Y'}(\log \Delta')\otimes \sM^{-1}\big)=
g_*(\omega_{Z/Y'} \otimes \sM^{-1})=\varphi^*(\sH).
$$
Let us write $\Omega=\varphi^*\psi^*\Omega_Y(\log S)$, and $\Omega^\vee$ for its dual.
\begin{claim}\label{Hsub}
Let
$$
(H=\bigoplus_{q=0}^n H^{n-q,q} ,\theta|_H)
$$
be the sub Higgs bundle of $(G,\theta)$, generated by $\varphi^*(\sH)$. Then there
is a map
$$
\varphi^*(\sH)\otimes S^{q}(\Omega^\vee)\longrightarrow H^{n-q,q}.
$$
which is surjective over some open dense subscheme.
\end{claim}
\begin{proof}
Writing $\Delta=f^*(S+T)$ consider the tautological exact sequences
\begin{equation}\label{eqco.3}
0\to
\Omega^{p-1}_{X/Y}(\log \Delta)\otimes {f}^* \Omega^1_{Y}(\log S+T)
\longrightarrow {\mathfrak g} \Omega^{p}_{X}(\log \Delta)
\longrightarrow \Omega^{p}_{X/Y}(\log \Delta) \to 0,
\end{equation}
tensorized with
$$
\omega_{X/Y}^{-1}=(\Omega^{n}_{X/Y}(\log \Delta))^{-1}.
$$
Taking the edge morphisms one obtains a Higgs bundle $H_0$ starting with the $(n,0)$ part $\sO_Y$. The sub-Higgs bundle generated by $\sO_Y$ has a quotient of $S^q(T^1_Y(-\log(S+T)))$ in degree $(n-q,q)$.
On the other hand, the pullback of the exact sequence (\ref{eqco.3}) to $Z$ is a subsequence of
$$
0\to
\Omega^{p-1}_{Z/Y'}(\log \Delta')\otimes {g}^* \Omega^1_{Y'}(\log T')
\to {\mathfrak g} \Omega^{p}_{Z}(\log \Delta')
\to \Omega^{p}_{Z/Y'}(\log \Delta') \to 0,
$$
hence of the sequence (\ref{eqco.2}), as well. So the Higgs field of $\varphi^*H_0$
is induced by the edge morphism of the exact sequence (\ref{eqco.2}), tensorized with
$$
\varphi'^*\psi'^*(\omega_{X/Y}^{-1}).
$$
One obtains a morphism of Higgs bundles $\varphi^*(\sH\otimes\psi^*H_0)\to G$. By definition
$$\begin{CD}
\varphi^*(\sH\otimes\psi^*H_0^{n,0})= \varphi^*(\sH)= H^{n,0} @> \subset >> G^{n,0},
\end{CD}$$
and $H$ is the image of $\varphi^* H_0$ in $G$.
\end{proof}
Choose $\ell$ to be the largest integer with $H^{n-\ell,\ell}\neq 0$. Obviously $\ell \leq n$ and
$$
H^{n-\ell,\ell} \subset {\rm Ker}\big(H^{n-\ell,\ell} \to H^{n-\ell-1,\ell+1}\otimes \Omega_{Y'}(\log T')\big),
$$
hence $\mu(H^{n-\ell,\ell}) \leq 0$.
Since $\mu(\Omega)>0$ and $\ell\leq n$,
\begin{gather*}
\mu(\varphi^*\sH)-\mu(S^n (\Omega)) \leq
\mu(\varphi^*\sH)-\mu(S^\ell (\Omega)).
\end{gather*}
Applying Claim \ref{Hsub}, there is a map
$$
\varphi^*(\sH)\otimes S^{\ell}(\Omega^\vee)\longrightarrow H^{n-\ell,\ell}
$$
which is surjective over some open dense subscheme. The $\mu$-stability of $\varphi^*(\sH)\otimes S^{\ell}(\Omega^\vee)$, which follows from Yau's theorem, together with $\mu(H^{n-\ell,\ell}) \leq 0$ implies
$$ \mu(\varphi^*\sH)-\mu(S^\ell (\Omega)) \leq \mu (H^{n-\ell,\ell})\leq 0.$$
Putting the above two slope inequalities together we obtain
$$
\mu(\varphi^*\sH)-\mu(S^n (\Omega)) \leq 0.$$
\end{proof}
\begin{addendum}\label{addprop1}
If in Proposition \ref{prop1} $Y_0$ is a generalized Hilbert modular variety of dimension $m\geq 1$, then one may choose $\rho = \frac{m}{m+1}$.
\end{addendum}
\begin{proof}
If $Y_0$ is a Hilbert modular variety, then in Claim \ref{Hsub} one has
an isomorphism
$$
H^{n-q,q}\cong \varphi^*(\sH)\otimes S^{q}(\Omega^\vee).
$$
This in turn implies that the slope of $H$ satisfies
$$
\mu(\varphi^*\sH)-\mu(\bigoplus_{i=0}^n S^i (\Omega)) \leq 0.
$$
Note that $\bigoplus_{i=0}^n S^i (\Omega)=S^n(\sO_{Y'}\oplus \Omega)$.
Then
\begin{multline*}
\mu(\varphi^*\sH^\nu)\leq \nu \cdot \mu(\bigoplus_{i=0}^n S^i (\Omega))=
\nu \cdot \mu(S^n(\sO_{Y'}\oplus \Omega))= \nu \cdot n \cdot \frac{m}{m+1}\cdot
\mu(\Omega).
\end{multline*}
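The factor $n\cdot\frac{m}{m+1}$ arises from averaging the slopes $i\cdot\mu(\Omega)$ of the summands $S^i(\Omega)$, weighted by their ranks $\binom{m+i-1}{i}$. The following sketch (our own numerical check, not part of the original argument) verifies the underlying combinatorial identity:

```python
from math import comb

# Slope of S^n(O + Omega) for Omega of rank m, in units of mu(Omega):
# the rank-weighted average of the slopes i of the summands S^i(Omega),
# where rk S^i(Omega) = C(m+i-1, i).
def slope_factor(n, m):
    ranks = [comb(m + i - 1, i) for i in range(n + 1)]
    return sum(i * r for i, r in enumerate(ranks)) / sum(ranks)

# The weighted average equals n*m/(m+1), as used in the addendum.
for n in range(1, 8):
    for m in range(1, 6):
        assert abs(slope_factor(n, m) - n * m / (m + 1)) < 1e-12
```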
\end{proof}
\section{Canonical Class Inequality}
\begin{lemma}\label{onedimbase}
Let $f:X\to Y$ be a semi-stable non-isotrivial family of minimal $n$-folds of general type over a curve $Y$ of genus $b$ and with $s=\# S$ singular fibres
over $S$. Then
$$
v(\omega_{X/Y}) \leq \frac{(n+1)n}{2} \cdot c_1(\omega_F)^n\cdot\deg\Omega^1_Y(\log S)=
\frac{(n+1)n}{2} \cdot v(\omega_F) \cdot \deg\Omega^1_Y(\log S).
$$
If $b\geq 1$ then
$$
v(\omega_{X}) \leq v(\omega_F) \cdot \big( \frac{(n+1)(n+2)}{2} v(\omega_Y) +
\frac{n(n+1)s}{2}\big).
$$
\end{lemma}
\begin{proof}
The non-isotriviality implies that $f_*\omega^\nu_{X/Y}$ is ample for all $\nu \geq 2$
with $f_*\omega^\nu_{X/Y}\neq 0$. For $\nu$ large enough, and for all $\mu$ the multiplication maps
$$
S^\mu(f_*\omega^\nu_{X/Y}) \longrightarrow f_*\omega^{\nu\cdot \mu}_{X/Y}
$$
are surjective over some open dense subscheme. In particular for $\mu$ sufficiently large
there is an ample invertible sheaf $\sH$ of degree larger than $2b-1$, and a morphism
$$
\bigoplus \sH \longrightarrow f_*\omega^{\nu\cdot \mu}_{X/Y}
$$
which is again surjective over some open dense subscheme. This implies that
$$
H^1(Y,f_*\omega^{\nu}_{X/Y})=0
$$
for all large $\nu$. If $b\geq 1$ one also obtains that $H^1(Y,f_*\omega^{\nu}_{X}) = 0$.
By the Riemann-Roch theorem for vector bundles on curves the first vanishing implies that
$$
\dim(H^0(X,\omega_{X/Y}^\nu))=\dim(H^0(Y,f_*\omega_{X/Y}^\nu))=
\deg(f_*\omega_{X/Y}^\nu) + \rk(f_*\omega_{X/Y}^\nu)\cdot (1-b).
$$
The slope inequality in Proposition \ref{prop1}, together with the improvement obtained in Addendum
\ref{addprop1}, implies that
$$
\dim(H^0(X,\omega_{X/Y}^\nu))\leq \rk(f_*\omega_{X/Y}^\nu)\cdot \big( \nu \cdot n \cdot \frac{1}{2} \cdot \deg(\Omega^1_Y(\log S)) +
(1-b)\big).
$$
Since $\rk(f_*\omega_{X/Y}^\nu)$ is given by a polynomial of degree $n=\dim(X)-1$ and with highest coefficient
$$
\frac{\nu^n}{n!}\cdot c_1(\omega_F)^n=\frac{\nu^n}{n!}\cdot v(\omega_F)
$$
one finds that
$$
v(\omega_{X/Y})\leq \frac{(n+1)n}{2} \cdot v(\omega_F) \cdot \deg\Omega^1_Y(\log S).
$$
For the second inequality we repeat the same calculation for $\omega_{X}$ instead of
$\omega_{X/Y}$, and obtain
\begin{multline*}
\dim(H^0(X,\omega_{X}^\nu)) = \deg(f_*\omega_{X}^\nu) + \rk(f_*\omega_{X/Y}^\nu)\cdot (1-b)\\
= \deg(f_*\omega_{X/Y}^\nu) + \rk(f_*\omega_{X/Y}^\nu)\cdot (\nu \cdot (2b-2) + (1-b))=\\
\deg(f_*\omega_{X/Y}^\nu) + \rk(f_*\omega_{X/Y}^\nu)\cdot (2\nu-1) \cdot (b-1) \\
\leq \rk(f_*\omega_{X/Y}^\nu)\cdot \big( \nu \cdot n \cdot \frac{1}{2} \cdot \deg(\Omega^1_Y(\log S)) +
(2\nu-1)\cdot(b-1)\big).
\end{multline*}
Again, taking the limit for $\nu\to\infty$ one obtains the inequality
$$
v(\omega_X) \leq (n+1) \cdot v(\omega_F)\cdot\big( \frac{n}{2} (2b-2+s) + (2b-2) \big).
$$
Since $(2b-2)=v(\omega_Y)$ one obtains the second inequality stated in Lemma \ref{onedimbase}.
\end{proof}
\begin{lemma}\label{higherdimbase}
Let $f:X\to Y$ be a family of $n$-folds over a base $Y$ of dimension $m$ satisfying the
conditions required in Prop. \ref{prop1}. Furthermore, let $l_0$ be the smallest integer such that
$|l_0\omega_Y(S)|$ defines a birational map. Then there exists a constant $c$ depending only on
$n$, $m$ and $l_0$ such that
$$v(\omega_{X/Y})\leq c\cdot v(\omega_F)\cdot v(\omega_Y(S)).$$
\end{lemma}
\begin{proof} We prove the statement for the case $m=2$. The general case follows by taking hypersurfaces in
$|l_0\omega_Y(S)|$ and by induction on $\dim Y.$\\[.2cm]
We assume $l_0=1.$ For $f_*\omega^\nu_{X/Y}$ we take $n\nu+1$ smooth curves $C_1,\cdots, C_{n\nu+1}$
from $|\omega_Y(S)|$ in general position, and let
$$D_\nu=\sum_{i=1}^{n\nu+1}C_i.$$
Consider the exact sequence
$$0\to H^0(Y,f_*\omega^\nu_{X/Y}(-D_\nu))\to H^0(Y,f_*\omega^\nu_{X/Y})\to H^0(D_\nu, f_*\omega^\nu_{X/Y}|_{D_\nu})\to\cdots$$
Then one has the vanishing
$$H^0(Y,f_*\omega^\nu_{X/Y}(-D_\nu))=0,$$
for otherwise there would exist an invertible subsheaf
$$\mathcal O_Y(D_\nu)\to f_*\omega^\nu_{X/Y}.$$
But this contradicts
$$(n\nu+1)\omega_Y(S)\cdot\omega_Y(S)=\omega_Y(S)\cdot D_\nu \leq \nu \cdot n\cdot \rho \cdot \omega_Y(S)\cdot\omega_Y(S).$$
Hence one has
$$ h^0(Y, f_*\omega^\nu_{X/Y})\leq h^0(D_\nu, f_*\omega^\nu_{X/Y}|_{D_\nu})\leq \sum_{i=1}^{n\nu+1}h^0(C_i, f_*\omega^\nu_{X/Y}|_{C_i}).$$
Note that
$$f_*\omega^\nu_{X/Y}|_{C_i}=f_*\omega^\nu_{X_{C_i}/C_i}$$
for the subfamily $f: X_{C_i}\to C_i.$
Since now all $C_i$ are curves with fixed genus, the vanishing of $H^1(C_i, f_*\omega^\nu_{X_{C_i}/C_i})$
in Lemma \ref{onedimbase} still holds true for $\nu\gg 1.$ Hence, as in Lemma \ref{onedimbase} we have
$$h^0(f_*\omega^\nu_{X_{C_i}/C_i})\leq h^0(F,\omega^\nu_F)\cdot\frac{n}{2}\cdot\nu\cdot\deg\Omega^1_{C_i}(S)=
h^0(F,\omega^\nu_F)\cdot\frac{n}{2}\cdot\nu\cdot 2\cdot\omega_Y(S)\cdot\omega_Y(S),$$
and
$$h^0(X,\omega^\nu_{X/Y})=h^0(Y, f_*\omega^\nu_{X/Y})\leq (n\nu+1)h^0(F,\omega^\nu_F)\cdot \frac{n}{2}\cdot\nu\cdot
2\cdot\omega_Y(S)\cdot\omega_Y(S).$$
Dividing the last inequality by $\nu^{n+2}$ and taking the limit for $\nu\to\infty$ we finish the proof.
\end{proof}
As an interesting application, one can get an inequality between $c_1$ and $c_3$ on the total space of a smooth family $f:X\to Y$
of minimal surfaces of general type over a curve $Y$.
\begin{corollary}
Let $f:X\to Y$ be a non-isotrivial smooth family of minimal surfaces of general type over a curve $Y$ of genus $b$.
Then we have
\begin{align*}
c_1^3(X)<18c_3(X).
\end{align*}
\end{corollary}
\begin{proof}
Lemma \ref{onedimbase} says that
$$c_1^3(X)\leq 6c_1^2(F)c_1(Y)=12(b-1)c_1^2(F),$$
where $F$ is a fiber.
Now the Miyaoka-Yau inequality for $F$ says $c_1^2(F)\leq 3c_2(F).$
So we obtain
$$c_1^3(X)\leq 18c_2(F)c_1(Y).$$
By using the following exact sequence for $f: X\to Y$,
$$0\to f^*\Omega^1_Y\to \Omega^1_X\to \Omega^1_{X/Y}\to 0,$$
to compute the Chern classes, one finds
$c_3(X)=c_2(F)c_1(Y)=2(b-1)c_2(F)$.
Finally we get the Chern class inequality
$c_1^3(X)\leq 18c_3(X)$.
Suppose that $c_1^3(X)=18c_3(X)$. Then $F$ satisfies
$c_1^2(F)=3c_2(F)$, i.e., $F$ is a ball quotient surface. The
rigidity of ball quotients of dimension
$\geq 2$ then implies
the isotriviality of $f$, contradicting our assumption. Therefore we get the
strict inequality $c_1^3(X)< 18c_3(X)$.
\end{proof}
\section{Introduction}\label{intro}
An inspection of planetary and satellite orbital data in the solar system\footnote{{\tt https://ssd.jpl.nasa.gov}, {\tt https://solarsystem.nasa.gov}\label{ft1}} reveals that major objects seem to cluster at intermediate areas of the radial distributions of orbiting bodies, and only smaller objects are found in the inner and the outer regions of these subsystems. The same arrangement of massive objects is also seen in multiplanet extrasolar systems. Keeping in mind that there may be more undetected planets farther out in these systems, some examples presently are: HD 10180 \citep{lov11}, Kepler-80 and 90 \citep{sha18}, TRAPPIST-1 \citep{del18,gri18}, HR 8832 \citep{vog15,joh16,bon20}, K2-138 \citep{chrsen18,lop19}, Kepler-11 \citep{lis11}, and even the four-planet systems of Kepler-223 \citep{mil16} and GJ 876 \citep{riv10,mil18}. Despite being a clue pertaining to the processes of massive planet and satellite formation and evolution, this conspicuous property has not been discussed in the past, and there have been no ideas about how we could possibly exploit it to learn from it.
Our approach to the problem has been single-minded from the outset. It was apparent to us that such large bodies have moved toward one another during early evolution, perhaps as soon as a few large solid cores emerged in these subsystems and the accretion disks dissipated away. In such a case, there must exist a generic physical mechanism that drives this type of convergence but eventually further migration is hindered when the mechanism ceases to operate. In this work, we formulate such a secular mechanism that relies on first principles and requires no additional conditions in order to operate. Some related calculations have been carried out by other researchers in the past \citep{ost69,pag74,lyn74,bal98,pap11}. Any small differences that we may point out concern the details of evolution and the physical interpretation of the results.
In \S~\ref{mri}, we describe the dynamical evolution of two interacting Keplerian fluid elements through nonequilibrium states that leads to a runaway dynamical instability. This analysis is applicable to magnetized accretion disks, but not to planets and satellites in which the integral of circulation is not conserved (not even approximately; these systems are topologically not simply-connected) precluding dynamical instability. In \S~\ref{analysis}, we describe the secular evolution of large individual gravitating bodies in Keplerian orbits around a central mass and under the influence of dissipation which leads to clustering of the bodies. In \S~\ref{dis}, we discuss our results in the context of planet and satellite evolution.
Many technical details are left to three self-contained appendices. In Appendix~\ref{n3}, we describe few-body systems evolving by exchanging angular momentum and lowering their mechanical energies. In Appendix~\ref{app1}, we formulate a self-consistent calculation of the characteristic dissipation time $\tau_{\rm dis}$ and the corresponding velocity fluctuations $v_{\rm dis}$ in such systems. In Appendix~\ref{alast}, we analyze ``gravitational Landau damping'' of the tidal field in few-body systems, a unique new mechanism that is responsible for settling of the bodies near mean-motion resonances over times comparable to $\tau_{\rm dis}$, where they no longer exchange substantial amounts of angular momentum and so they send the mean tidal field around them to oblivion.
\section{Dynamical Evolution of Keplerian Interacting Fluid Elements}\label{mri}
\cite{bal98} introduced a mechanical analog of the magnetorotational instability (MRI) in gaseous accretion disks, two mass elements $m_1$ and $m_2$ in circular Keplerian orbits around a central mass $M\gg m_1, m_2$ with radii $r_{1}$ and $r_{2} > r_1$, respectively. The mass elements are connected by a weak spring with constant $k$ (representing a magnetic-field line) whose role is to allow for angular momentum transfer between the elements. When perturbed under the constraint of constant total angular momentum,\footnote{Constant circulation would be more precise, although the two integrals of motion are equivalent in axisymmetric fluid systems.} this model behaves just like gaseous accretion disks under the influence of viscosity \citep{lyn74}, except that the instability is dynamical: the masses spread out and their displacements reduce the total free energy of the system \citep{chr95}, leading to a runaway \citep{bal91,bal98,chr96,chr03}.
We use the phase-transition formalism of \cite{chr95} to describe the evolution of this system {\it out of equilibrium}: a change that lowers the free energy ($\Delta E<0$) while preserving the total angular momentum ($\Delta L = 0$) is viable and the system will transition to the new nonequilibrium state of lower energy; whereas if $\Delta E > 0$, the system will just oscillate about the initial equilibrium state characterized by total energy $E=E_{1}+E_{2}$ and total angular momentum $L=L_{1}+L_{2}$. We assume that the initial equilibrium orbits are perturbed by small displacements $\Delta r_1 \ll r_{1}$ and $\Delta r_2 \ll r_{2}$. Then the conservation of total angular momentum relates the displacements to first order by the equation
\begin{equation}
m_1 v_{1}\Delta r_1 + m_2 v_{2}\Delta r_2 = 0\, ,
\label{xy}
\end{equation}
where $v_{1}$ and $v_{2}$ are the equilibrium azimuthal velocities, and the change in free energy to first order is found to be
\begin{equation}
\Delta E = L_{1}(n_{1} - n_{2})\frac{\Delta r_1}{2 r_{1}}\, ,
\label{de}
\end{equation}
where $n_1$ and $n_2 < n_1$ are the mean motions (orbital angular velocities) of the masses in their equilibrium state. The change in potential energy of the spring, $k(\Delta r_2 - \Delta r_1)^2/2$, is of second order and is omitted from equation~(\ref{de}). It is now apparent that for $\Delta r_1 < 0$, then $\Delta E < 0$ and $\Delta r_2 > 0$. The masses spread out and the resulting nonequilibrium configuration is unstable to more spreading that reduces further the free energy of the system.
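As a consistency check, equation~(\ref{de}) follows from the circular-orbit relations alone; the short derivation below assumes $E_i=-GMm_i/(2r_i)$, $L_i=m_i v_i r_i$, and $v_i^2=GM/r_i$, so that $GM/r_i^2=n_i v_i$:

```latex
% First-order variation of the orbital energies, using n_i = v_i/r_i:
\begin{align*}
\Delta E_i &= \frac{GMm_i}{2r_i^{2}}\,\Delta r_i
            = \tfrac{1}{2}\, m_i n_i v_i\,\Delta r_i\, ,\\
\Delta E   &= \tfrac{1}{2}\big(n_1 m_1 v_1 \Delta r_1 + n_2 m_2 v_2 \Delta r_2\big)
            = \tfrac{1}{2}\,(n_1-n_2)\, m_1 v_1 \Delta r_1
            = L_1 (n_1-n_2)\,\frac{\Delta r_1}{2r_1}\, ,
\end{align*}
% where the middle equality in the last line uses the angular momentum
% constraint m_2 v_2 \Delta r_2 = -m_1 v_1 \Delta r_1 of equation (\ref{xy}),
% and the final one uses L_1 = m_1 v_1 r_1.
```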
The above dynamical instability (an analog of the MRI) does not operate in planetary and satellite systems. It is strictly applicable to perfect fluids in which circulation and angular momentum are both conserved \citep[as in][]{chr95}. Conservation of circulation is implicit in the above model; it can be readily seen in equation~(\ref{xy}) assuming that the mass elements are axisymmetric rings with equal masses, in which case the equation takes the form
\begin{equation}
v_{1}\Delta r_1 + v_{2}\Delta r_2 = 0\, ,
\label{xy2}
\end{equation}
to first order in the displacements.
In viscous unmagnetized disks, dissipative stresses destroy circulation slowly and the instability is then secular \citep[as in][]{lyn74}. In stellar and particle systems, there is no conservation law of circulation and equation~(\ref{xy2}) is invalid, even in approximate form, because all the elements of the stress tensor introduce gradients of comparable magnitude to the Jeans equations of motion \citep{bin87,chr95,bat00}. Therefore, the evolution of multiple planetary and satellite bodies requires a different mathematical approach, though still constrained by the applicable conservation laws of energy and angular momentum.
\section{Secular Evolution of Interacting Planets and Satellites}\label{analysis}
\cite{ost69} studied the secular evolution of a dynamically stable, uniformly-rotating pulsar subject to angular momentum and energy losses due to emission of multipolar radiation. Evolution takes place slowly over timescales much longer than the dynamical time (the rotation period) of the object. In this model, the pulsar is thought of as transitioning between quasistatic equilibrium states \citep[the Dedekind ellipsoids;][]{cha69} in which it maintains its uniform rotation albeit with a slowly changing angular velocity $\Omega$. Here, ``slowly'' is quantified by the condition that
\begin{equation}
\left\lvert\frac{d\Omega}{dt}\right\rvert \ll \Omega^2\, .
\label{cond}
\end{equation}
Under a series of assumptions, the strongest of which is inequality~(\ref{cond}), \cite{ost69} proved that the losses in angular momentum $L$ and kinetic energy $E$ are related by the equation
\begin{equation}
\frac{dE}{dt} = \Omega \frac{dL}{dt}\, ,
\label{dedt}
\end{equation}
where the time derivatives are both implicitly negative. The use of $E$ for rotational kinetic energy (their equation (7)) has caused some indiscretions in the literature. For example, \cite{pag74} call $E$ the ``energy-at-infinity'' (which is kinetic after all) and equation~(\ref{dedt}) ``universal'' despite having derived it under their assumption iv(a) which is essentially equivalent to inequality~(\ref{cond}); whereas \cite{pap11} treated $E$ as the mechanical energy of an orbiting planet within the same quasistatic approximation.
Below we also use equation~(\ref{dedt}) to follow the secular evolution of planets and satellites losing kinetic energy slowly due to the action of dissipative processes induced by the central object. First we revisit the approach of \cite{pap11}, whose calculation is correct but whose conclusion is wrong. Then we formulate the same problem as a variation of the free energy of the system undergoing quasistatic {\it out-of-equilibrium} evolution away from its initial equilibrium state.
\subsection{Papaloizou Approach}\label{pap}
We consider two gravitating bodies with masses $m_1$ and $m_2$ orbiting around a central mass $M\gg m_1, m_2$ in nearly circular Keplerian orbits with radii $r_1$ and $r_2 > r_1$, respectively. We assume that tides due to $M$ during orbit circularization are dissipated in the interiors of the bodies, causing small amounts of kinetic energy to be converted to heat $H$. The slow rate of dissipation is given by
\begin{equation}
{\cal L} = dH/dt > 0\, .
\label{dq}
\end{equation}
Here, ``slow'' is defined by inequality~(\ref{cond}) and by the condition that
\begin{equation}
H \ll T\, ,
\label{condt}
\end{equation}
where $T$ is the total kinetic energy. Then the evolution of the system is described by a sequence of {\it quasistatic equilibrium states} that are accessible to the bodies because equation~(\ref{dq}) along with energy conservation guarantee that the total mechanical energy of the bodies will decrease in time ($dE/dt < 0$).
The mechanical energy and angular momentum contents of each body are related by
\begin{equation}
E_i = -\frac{1}{2} n_i L_i\, ,
\label{el1}
\end{equation}
where $i=1, 2$ and $n_i$ is the mean motion of body $m_i$. Since $r_2 > r_1$, then $n_2 < n_1$ for the Keplerian orbits. Equation~(\ref{dedt}) is also valid here; under the quasistatic assumption~(\ref{cond}), it takes the form
\begin{equation}
\frac{dE_i}{dt} = -\frac{1}{2} n_i \frac{dL_i}{dt}\, .
\label{el2}
\end{equation}
The factor of $-1/2$ appears because $E_i$ represents the mechanical energy of each body which is implicitly negative. The negative sign cannot be absorbed in equations~(\ref{el1}) and~(\ref{el2}) because, unlike $dL/dt<0$ in equation~(\ref{dedt}) above, here the terms $dL_1/dt$ and $dL_2/dt$ have opposite signs.
Conservation of total angular momentum $L=L_1+L_2$ is expressed by the equation
\begin{equation}
\frac{d}{dt}\left(L_1 + L_2\right) = 0\, ,
\label{conl}
\end{equation}
and total energy conservation for the system gives
\begin{equation}
\frac{d}{dt}\left(E_1 + E_2\right) = -\frac{dH}{dt} = -{\cal L} < 0\, .
\label{cone}
\end{equation}
Using equations~(\ref{el2}), we rewrite equation~(\ref{conl}) in the form
\begin{equation}
\frac{1}{n_1}\frac{dE_1}{dt} + \frac{1}{n_2}\frac{dE_2}{dt} = 0\, .
\label{conl2}
\end{equation}
Thus, after considerable deliberations of the details, we have arrived at the equations adopted by \cite{pap11}.
It is obvious from equations~(\ref{cone}) and~(\ref{conl2}) that, as the system evolves quasistatically, the mechanical energy of one body will increase and that of the other body will decrease, but the overall change in $E_1+E_2$ will be a decrease by an amount of $dH$, allowing for the system to proceed to a neighboring quasistatic equilibrium state. But it is not prudent to solve these equations for the energy rates in order to deduce the details of the evolution. It is more sensible to look at the changes in angular momentum of the bodies: Combining equations~(\ref{el2})-(\ref{cone}), we find that
\begin{equation}
-\frac{dL_2}{dt} = \frac{dL_1}{dt} = \frac{2{\cal L}}{n_1 - n_2} > 0\, ,
\label{dL12}
\end{equation}
where ${\cal L} > 0$ and $n_1 > n_2$. We see now that the inner body 1 will gain angular momentum and will move outward, while the outer body 2 will lose angular momentum and will move inward. Overall, the two bodies will converge toward a common orbit in which they will share the total angular momentum equally. But in larger systems with 3 or more bodies, this convergence does not materialize once two-body interactions set in and such a common orbit proves to not be as important; especially since another critical orbit emerges characterized by the mean $\overline{n}$ of the mean motions $n_i$ of the bodies (see Appendix~\ref{n3}). For 3 or more bodies, this orbit is secularly unstable due to two- and three-body encounters between near-neighbors, but a body may remain in it for a long time, provided that another body does not come close. The significance and the repercussions of these results will be discussed in \S~\ref{dis} below.
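For completeness, the rates in equation~(\ref{dL12}) follow from equations~(\ref{el2}), (\ref{cone}), and~(\ref{conl2}) by elementary elimination:

```latex
% Solve the linear system for the energy rates, then convert to dL_i/dt
% via dL_i/dt = -(2/n_i) dE_i/dt from equation (\ref{el2}):
\begin{align*}
\frac{dE_1}{dt}+\frac{dE_2}{dt}=-{\cal L}\, ,\quad
\frac{1}{n_1}\frac{dE_1}{dt}+\frac{1}{n_2}\frac{dE_2}{dt}=0
\ &\Longrightarrow\
\frac{dE_1}{dt}=-\frac{n_1 {\cal L}}{n_1-n_2}\, ,\quad
\frac{dE_2}{dt}=+\frac{n_2 {\cal L}}{n_1-n_2}\, ,\\
&\Longrightarrow\
\frac{dL_1}{dt}=\frac{2{\cal L}}{n_1-n_2}\, ,\quad
\frac{dL_2}{dt}=-\frac{2{\cal L}}{n_1-n_2}\, .
\end{align*}
```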
\subsection{Free-Energy Variation Approach}\label{free}
Here we formulate the problem studied in \S~\ref{pap} as a variation of the free energy of the system of two bodies with masses $m_i$ ($i=1, 2$) orbiting around a central mass $M\gg m_i$ and stepping out of equilibrium and into a new state while still obeying conditions~(\ref{cond}), (\ref{condt}), and~(\ref{el2}). The two bodies can proceed to such a (generally nonequilibrium) state only if this state is characterized by lower free energy ($\Delta E<0$) and the same total angular momentum ($\Delta L = 0$). The total mechanical energy $E_1+E_2$ plays the role of the free energy \citep{chr95}, thus we have
\begin{equation}
\Delta\left(E_1 + E_2\right) < 0\, ,
\label{e1}
\end{equation}
and
\begin{equation}
\Delta\left(L_1 + L_2\right) = 0\, .
\label{e2}
\end{equation}
Combining these two relations with equations~(\ref{el2}) in the form $\Delta E_i = -(1/2) n_i\Delta L_i$, we find that
\begin{equation}
\left(n_2-n_1\right)\Delta L_2 = \left(n_1-n_2\right)\Delta L_1 > 0\, .
\label{e3}
\end{equation}
For $n_1>n_2$ (implying that the initial orbital radii obey $r_1 < r_2$), we find that $\Delta L_1>0$ and $\Delta L_2<0$, respectively. Thus, in order for the system to begin its search for a new equilibrium state of lower free energy, the inner body $m_1$ will move out and the outer body $m_2$ will move in.
\section{Discussion}\label{dis}
We have used the conservation laws of energy and angular momentum to describe and contrast the dynamical evolution of two interacting mass elements in a gaseous disk and the secular evolution of planets and satellites. Both types of subsystems were assumed to exhibit Keplerian orbital profiles around a dominant central mass and to exchange angular momentum via weak torques. Evolution however takes different paths in these two circumstances and the reason is the (non)conservation of circulation. In perfect-fluid disks (\S~\ref{mri}), circulation is conserved and the mechanical analog of the MRI turns out to be a dynamical instability \citep[as was first described by][]{bal98}; whereas in (extra)solar multi-body subsystems (\S~\ref{analysis}), there is no analogous conservation law and dissipative evolution proceeds secularly via a sequence of quasistatic equilibrium configurations \citep{ost69} or via nonequilibrium states, both of which have progressively lower mechanical energy compared to the preceding state.
Extending the analytical work of \cite{pap11} to more than 2 orbiting bodies, we demonstrate in Appendix~\ref{n3} that tidal dissipation induced by the central mass leads to clustering of many-body systems generally toward the mean $\overline{n}$ of their mean motions $n_i$ ($i=1, 2, \cdots, N$, where $N\geq 4$). On the other hand, $N=2$ or $N=3$ major bodies may try to converge toward a common orbit\footnote{The common orbit with $\overline{L}$ does not stand out in multiple-body systems because a body that may reach it first will soon move out as transfer of angular momentum continues on. Only $N=2$ bodies can approach this orbit synchronously.} characterized by the mean $\overline{L}$ of their angular momenta, except for the third body if it happens to be near the critical orbit with mean motion $\overline{n}$. Although secularly unstable, this critical orbit may host a massive body for a long time, at least comparable to the dissipation time $\tau_{\rm dis}$ that characterizes this part of the evolution of the system ($\tau_{\rm dis}$ is quantified in Appendix~\ref{app1}). A close encounter with another body can clear out the critical orbit, if the convergence of bodies continues unimpeded for a long enough time (Appendix~\ref{n3}). Convergence of bodies may seem surprising to the reader, but it did not come as a surprise to us. In fact, we anticipated such an outcome because we were impressed by observations of the radial distributions of bodies in solar subsystems and exoplanetary systems (\S~\ref{intro}); they all show an unmistakable clustering of several (4-7) massive bodies at intermediate orbital locations around the critical orbit with mean motion $\overline{n}$.
The next obvious question is, where and how does such clustering of bodies stop? After all, the observed massive planets and satellites seem to be currently on very long-lived, if not secularly stable, orbits and no pair appears to be close to merging into the same orbit. So the clustering process must be quelled somehow before the objects begin interacting strongly via close paired encounters. Although we do not have a complete answer yet, we believe that we are well on our way toward understanding the final stages of orbital evolution: The seminal paper of \cite{gol65} provided a substantial part of the answer long ago. \cite{gol65} showed that several ``special cases of commensurable mean motions [of satellites] are not disrupted by tidal forces.'' This means that when some of the more massive satellites of the gaseous giants reach near mean-motion resonances (MMRs), they do not exchange angular momentum efficiently any more, thus they maintain their orbital elements in long-lasting dynamical configurations (see also Appendix~\ref{alast} for gravitational Landau damping of the mean tidal field when massive bodies approach MMRs).
The most massive body must play a crucial role in the above process because it is the one that evolves tidally slower than all the other bodies, so it must be the body that lays out the resonant structure (i.e., the potential minima; see Appendix~\ref{alast}) of the tidal field for the entire subsystem. When other massive bodies reach close to nearby MMRs, their further evolution is impeded because the most massive body does not affect them tidally any longer; and they also refrain from interacting with smaller bodies. In this setting, the tidal field is thus severely damped and the remaining lower-mass objects that are trying slowly to converge will also be hampered, either because they encounter MMRs or they are simply too far away from the resonating massive bodies. In the end, the entire system will appear to be stable (no more substantial imbalances from exchanges of angular momentum) with all of its members lying in or near MMRs and the mean tidal field erased since the major bodies no longer contribute to it. At present, this is what is actually observed in all (exo)planetary and satellite subsystems, although we have not been able to communicate the results of our meta-studies yet (Christodoulou \& Kazanas, in prep.). For this reason, we clarify here what we perceive differently in reference to the volumes of work carried out about MMRs up until now\footnote{Page ~{\tt https://en.wikipedia.org/wiki/Orbital\_resonance}\hfill\break contains a comprehensive, albeit empirical, summary of orbital MMRs along with a listing of hyperlinks to $\sim$100 professional citations.\label{ft2}} \citep{roy54,gol65,wis80,wis86,mur99,mor02,riv10,lis11,fab14,chrsen18}: We believe that multiple-body resonances are {\it not} a local phenomenon; principal MMRs are global in each system and their locations are determined by the most massive object that used to dominate the mean tidal field spread out across the entire (sub)system. 
In such a global layout, it is inappropriate to use the relative deviations of orbital elements from exact nearby MMRs and set arbitrary thresholds for objects to be or not to be in resonance. Though unfortunately, we recognize this to be the current state of affairs in studies of phase angles of local MMRs between near-neighbors; for example, no-one else currently believes that the Earth is in the 1:12 resonance of Jupiter because its orbital period is 4.2 days longer than the exact resonant value of 361.05 d; and its phase angle would have to circulate slowly relative to the phase of Jupiter, so the same pattern would only repeat once every 87 years (see also the section on ``coincidental near MMRs'' in the citation of footnote~\ref{ft2} for the same argument). This of course is the wrong way to think about global resonances in a tidal field that appears nowadays to be severely damped. We defer further discussion of this rather complicated issue to Appendix~\ref{alast}.
The main result of this work has ramifications beyond the particular systems that we study. The orbits of the planets and satellites that we have in mind all have Keplerian radial profiles. The Keplerian profile is just a special case of a power law, a profile with no critical or inflection points, which makes it simple but featureless. But now, the dynamics of multiple bodies evolving by applying torques and exchanging angular momentum has given us a critical point in this profile, the mean $\overline{n}$ of the mean motions $n_i$, or equivalently, the harmonic mean $\overline{P}$ of the orbital periods $P_i$ ($i = 1, 2, \cdots, N$, where we take $N\geq 4$). Given $\overline{P}$, the critical orbital radius can be determined from Kepler's third law. We note however that perhaps not many bodies may be found occupying the critical orbits in their subsystems because all bodies may have {\it a priori} circularized their orbits at or near MMRs (unless of course the critical orbit coincides with an MMR, in which case the chances of finding a body there improve considerably).
Our planetary system and Jupiter's satellite subsystem each contain $N=4$ dominant adjacent orbiting bodies, the gaseous giant planets and the Galilean moons, respectively. For the gaseous giants, we find that
$$\overline{P} = 29.36~{\rm yr} ~~({\rm whereas}~P_{\rm Sa}=29.46~{\rm yr}),$$
so Saturn has settled just wide of the critical orbit as we see it presently. For the Galilean moons, we find that at present
$$\overline{P} = 3.82~{\rm d} ~~({\rm whereas}~P_{\rm Eu}=3.55~{\rm d}),$$
so Europa was trapped into the renowned Laplace resonance and could not expand its orbit farther out. We did not include inner low-mass bodies in these estimates for an obvious reason; their fates were fully determined by weak tidal forces exerted on them by the distant massive bodies, so they can be viewed as passive receivers of tiny amounts of angular momentum having slowly worked their way outward and toward the common goal. The Earth, in particular, may have taken angular momentum from nearby Mars, preventing the outward movement of this tiny planet.
For the Earth, it is interesting to examine where our planet finally settled at the end of the orbital evolution of the gaseous giants: our planet is currently orbiting just wide of the 1:12 principal MMR of Jupiter (as already mentioned, its orbital period is only 4.2 d longer). It is not surprising that the planet could not get rid of a small amount of angular momentum and fall back into the MMR. During secular evolution, it was only gaining tiny amounts of angular momentum working its way outward toward the common goal. Such slightly wider orbits are observed in many exoplanets as well \citep{lis11,fab14}. Those inner ones with orbital periods shorter than $\overline{P}$ may be understood along the same line of reasoning \citep[but see also][]{lit12,bat13}.
In extrasolar systems, K2-138 \citep{chrsen18,lop19} presents a transparent example of a planet on a critical orbit. For the six planets known in this system, we find that
$\overline{P} = 5.385~{\rm d} ~~({\rm whereas}~P_{d}=5.405~{\rm d}),$ so planet $d$ is effectively occupying the critical orbit. All planets are near global MMRs as determined from the orbital period of the largest planet $e$. In order of increasing orbital periods, these are 2:7, 3:7, 2:3, 1:1, 3:2, 5:1, for planets $b$-$g$, respectively. In planets $b$-$f$, all adjacent pairs have local period ratios $P_{i+1}/P_i\simeq$ 3/2 \citep{chrsen18}; and the outermost planet $g$ resides in a higher-order harmonic, i.e., $P_{g}/P_e\simeq (3/2)^4$. The resonant chain is global, though not fully packed. If it were fully packed, then no planet would occupy the critical orbit.
Another example with the critical orbit being occupied is the TRAPPIST-1 system with seven planets in a very compact configuration \citep[$r_{\rm max}=0.062$ AU;][]{del18,gri18}. We find that
$\overline{P} = P_{d} = 4.050~{\rm d},$
so planet $d$ is on the critical orbit. All planets are near global MMRs as determined from the orbital period of the largest planet $g$. In order of increasing orbital periods, these are 1:8, 1:5, 1:3, 1:2, 3:4, 1:1, 3:2, for planets $b$-$h$, respectively. More details on how such systems came to be are included in Appendix~\ref{alast}.
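The same check can be run for the two exoplanet systems; the orbital periods below are approximate published values assumed here for illustration:

```python
# Critical orbits in K2-138 and TRAPPIST-1 via the harmonic-mean period.
# Orbital periods in days; approximate published values assumed here.

def harmonic_mean_period(periods):
    return len(periods) / sum(1.0 / p for p in periods)

# K2-138, planets b..g
k2 = {'b': 2.353, 'c': 3.560, 'd': 5.405, 'e': 8.261, 'f': 12.758, 'g': 41.97}
k2_bar = harmonic_mean_period(k2.values())    # ~5.385 d: planet d on the critical orbit
chain = [k2[o] / k2[i] for i, o in zip('bcde', 'cdef')]  # adjacent ratios, all near 3/2
g_harmonic = k2['g'] / k2['e']                # ~(3/2)**4 for the outermost planet

# TRAPPIST-1, planets b..h
t1 = [1.511, 2.422, 4.050, 6.100, 9.207, 12.354, 18.767]
t1_bar = harmonic_mean_period(t1)             # ~4.05 d: planet d again critical
```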
\section{Introduction}
A crucial prerequisite for the use of clusters of galaxies for cosmological
studies is the knowledge of the scaling relations between cluster mass and an
observable proxy, such as X-ray luminosity, or X-ray gas temperature.
These relations
must be calibrated at varying redshifts, using theoretical modeling and
hydrosimulations, or simply empirical relations. Clusters at low and
intermediate redshifts are bright enough in
X-rays to measure the total gravitating mass by deprojecting their
temperature and density profiles. This is currently difficult for clusters
beyond $z\sim\! 1$ and subject to large statistical errors
\citep[e.g.][]{rosati+04}. In this high redshift regime, independent
mass estimates based on gravitational lensing (both in the weak and the strong
regimes) have long been invoked as the most effective method for calibrating
cluster masses. In particular, mass measurements based on multiple strong
lensing features are highly robust. Although such measurements are possible
only in the central region of the cluster, they are independent of model
assumptions and can help to break the mass-sheet degeneracy of parameter-free
weak-lensing mass-reconstructions.
To obtain a well-defined sample of clusters at high redshift, $z > 0.8$, we
have started the XDCP \citep[XMM-Newton Distant Cluster
Project, ][]{boehringer+05,fassbender+08,mullis+05} based
on archival XMM-Newton X-ray data. The XDCP has been very successful, so far providing
18 clusters at $z>0.8$ and 10 clusters with $z>1$ including five redshift
confirmations from other projects \citep[for selected results see
e.g.][]{fassbender+08,santos+09,rosati+09}.
Here we report the discovery of a distant strong lensing cluster of galaxies
at redshift $z=1.082$ which shows a giant arc and other arc-like features
interpreted as gravitationally lensed images of more distant, background objects.
We study the properties of the lensing cluster and the lensed galaxies on the
basis of available optical and X-ray imaging and spectroscopy.
Throughout this paper we use a standard $\Lambda$CDM cosmology with
parameters $H_0=71$\,km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M}=0.27$, and
$\Omega_{\Lambda}=0.73$, which gives a scale of 8.188\,kpc arcsec$^{-1}$
at redshift 1.082.
All magnitudes in this paper are given in the AB system.
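For reference, the quoted scale of 8.188\,kpc\,arcsec$^{-1}$ follows directly from the adopted cosmology; a minimal numerical sketch (pure Python, simple trapezoidal integration) reproduces it:

```python
# Angular scale at z = 1.082 for a flat LambdaCDM cosmology with
# H0 = 71 km/s/Mpc, Om = 0.27, OL = 0.73 (the values adopted above).
import math

H0, Om, OL = 71.0, 0.27, 0.73
c_kms = 299792.458
z = 1.082

def inv_E(zp):
    return 1.0 / math.sqrt(Om * (1.0 + zp) ** 3 + OL)

# Comoving distance D_C = (c/H0) * int_0^z dz'/E(z'), trapezoidal rule
n = 10000
dz = z / n
integral = sum(0.5 * (inv_E(i * dz) + inv_E((i + 1) * dz)) * dz for i in range(n))
D_C = (c_kms / H0) * integral          # Mpc
D_A = D_C / (1.0 + z)                  # angular diameter distance, Mpc

arcsec_rad = math.pi / (180.0 * 3600.0)
scale_kpc_arcsec = D_A * arcsec_rad * 1000.0   # ~8.188 kpc per arcsec
```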
\begin{figure*}[t]
\begin{minipage}{0.62\hsize}{}
\resizebox{\hsize}{!}{\includegraphics{14430_f1a.ps}}
\end{minipage}
\hfill
\begin{minipage}[]{0.37\hsize}{}
\resizebox{\hsize}{!}{\includegraphics{14430_f1b.ps}}
\caption[]{{\it (left)} UBVR image of the field around XMMU\,J1007\ with X-ray
contours overlaid and tentatively identified lensing features indicated. The
cross indicates the X-ray position and the
square the size of the color image shown right. {\it (right)} Color
composite of the central region of XMMU\,J1007\ with mean (Rz)V(UB) images
in the RGB channels. North is top and East to the left in both
frames.
\label{f:opt_composit}
}
\end{minipage}
\end{figure*}
\section{Observations, analysis and results}
\subsection{XMM-Newton X-ray observations}
XMMU\,J100750.5+125818 (hereafter XMMU\,J1007) was found serendipitously
as an extended X-ray source in our systematic search for distant clusters of
galaxies.
The cluster candidate was detected in a field with a nominal exposure time of
22.2\,ksec at an off-axis angle of $11\farcm7$ (OBSID: 0140550601).
Following \citet{pratt+arnaud03},
time intervals with high background were excluded by applying a two-step flare
cleaning procedure firstly in a hard energy band ($10-14$\,keV) and
subsequently in the $0.3-10$\,keV band, after which 21.4 ksec of
clean exposure time remained for the two MOS cameras and
\(18.0\,\mathrm{ksec}\) for the PN instrument.
Images and exposure maps were created in the $0.35-2.4$\,keV detection band,
which was chosen to maximize the signal-to-noise ratio of the X-ray
emission of massive galaxy clusters at \(z>0.8\) compared to the background
components \citep{scharf02}. Source detection was performed
with the \texttt{SAS} task \texttt{eboxdetect} followed by the maximum
likelihood fitting task \texttt{emldetect} for the determination of source
parameters. XMMU\,J1007\ was found as an extended X-ray source with low surface
brightness at position $\alpha(2000) =10^h 07^m 50\fs5, \delta(2000)=+12\degr
58' 18\farcs1$ with a total of about 200 source photons and a core radius of
\(24 \mathrm{''}\) at a significance level of DET$_{\rm ML} \sim 42$ and an
extent likelihood of EXT$_{\rm ML} \sim 24$\footnote{DET$_{\rm ML}$, EXT$_{\rm
ML}$: detection likelihood and extent likelihood of the source, $L = - \ln P$,
where $P$ is the probability that the detection (the extent) is spurious due to a
Poissonian fluctuation}.
XMMU\,J1007\ is not listed in the 2XMM catalog \citep{watson+09}, since
its detection likelihood after the first step of the detection chain
(\texttt{eboxdetect}) remained below the threshold to perform the
second step (\texttt{emldetect}). It was revealed as a cluster candidate in
this work, however, owing to the use of a different detection band
and by lowering the detection threshold to perform \texttt{emldetect}.
\subsection{Optical imaging with VLT/FORS2 and LBT/LBC}
Neither DSS nor the SDSS revealed a convincing counterpart to the unique
X-ray source XMMU\,J1007\footnote{We regard the NVSS radio source
NVSS J100751+125901 at $10^h07^m51\fs3, +12\degr59'01''$ (J2000.0), marked
in Fig.~\ref{f:opt_composit}, and
confirmed to be point-like by FIRST ($10^h07^m51\fs289, +12\degr59'3\farcs34$)
with an integrated flux of $5.59\pm0.19$\,mJy (positional offset of $48\arcsec$)
as unrelated to the X-ray source}.
The field containing XMMU\,J1007\ was thus observed with FORS2 at the VLT in
January/February 2007 through ESO filters R\_SPECIAL$+$76 and z\_GUNN$+$78 with
total integration times of 1920\,s and 960\,s, respectively.
The combination of filters was chosen to allow an unambiguous redshift
determination up to $z\sim 0.9$ through the identification of a cluster red
sequence (CRS).
All the imaging data of this paper were reduced with an AIP-adaptation
of the GaBoDS-pipeline described in \citet{erben+05}. It comprises
all the pre-processing steps (bias- and flatfield-, and fringe-correction)
as well as super-flat correction, background subtraction, and creation of a
final mosaic image using SWarp and SCAMP \citep{bertin06}.
The photometric calibration of the $R$-band image was achieved through
observations of \citet[][]{stetson00}
standard fields, the photometry of the z-band image was tied to the
SDSS. The non-standard z-band cut-on filter in use at the VLT leads to an
estimated systematic calibration uncertainty of 0.05 mag.
The measured image seeing of the VLT R- and z-band images
is $0\farcs7$ and $0\farcs56$, respectively.
Object catalogues were generated with SExtractor
\citep{bertin+arnouts96} in double image mode.
The magnitudes of catalogued objects were corrected for
Galactic foreground extinction. From the observed drop of the number-flux
relations with respect to a power-law expectation we estimate a
50\% completeness of the catalogues of \(R_{\mathrm{lim}} \sim 25.9\) and
\(z_{\mathrm{lim}} \sim 24.6\).
Following \citet[][and its erratum]{mullis+05} we used fixed apertures with
diameter \(3.5 \times \mathrm{FWHM_{seeing}}\) to determine object colors.
A mere two-band false-color composite of the VLT imaging data proved
sufficient to locate an overdensity of red, early type galaxies at the
position of the extended X-ray emission.
Color- and location-selected possible cluster members form a red
sequence in the color-magnitude diagram of Fig.~\ref{f:cmd}.
The red sequence
color was estimated from the average color of all galaxies within 30\arcsec\ of
the X-ray position (see box in Fig.~\ref{f:cmd}), $R - z = 1.91$, which hints
to a cluster redshift beyond $\sim$0.9.
Further imaging observations were performed with the Large
Binocular Telescope (LBT) equipped with the Large Binocular Camera (LBC) through
Bessel B,V filters and the U$_{\rm spec}$ filter.
Table \ref{lbt_images} lists the details of the LBT imaging observations.
The photometric zeropoints for the LBC images were derived by using
the SDSS photometry of stellar objects in the field.
A multi-color composite of the center of the field including VLT/FORS2- and
LBT/LBC-data is shown in Fig.~\ref{f:opt_composit} with X-ray contours
overlaid. Residual astrometric uncertainties in the XMM-Newton X-ray image
were removed on the basis of identified X-ray point sources in the FORS- and
LBT-images, respectively.
The cluster has its brightest galaxy (BCG) at position
\(\alpha = 10^{\rm h}07^{\rm m}49\fs9\) and
\(\delta = +12\degr58'40''\)
between structures B1 and B2 (see Fig.~\ref{f:opt_composit}), i.e.~away from the
apparent center of the galaxy distribution by about $20''$ and away from the
X-ray position.
The BCG has an apparent magnitude $z_\mathrm{BCG} = 20.51 \pm 0.06$, an
observed color $(R-z)_\mathrm{BCG} = 1.92 \pm 0.08$ and an absolute magnitude
$M_{\rm BCG, z} = -23.8$.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip=]{14430_f2.ps}}
\caption[CMD of XMMU\,J1007]{Color-magnitude diagram of XMMU\,J1007. The size of
the symbols encodes vicinity to the X-ray center of the cluster (within
30\arcsec, and 60\arcsec\ and beyond). Objects within the dotted box were used
to estimate the color of the red sequence (solid horizontal line). Objects
framed with squares are spectroscopically confirmed cluster members. The
shaded region top right indicates $>50$\% incompleteness.}
\label{f:cmd}
\end{figure}
The spatial distribution of galaxies that form the red sequence of
Fig.~\ref{f:cmd} appears
stretched in a band from SSE to NNW with a centroid about $10''$
away from the centroid of the cluster X-ray emission. However, the best-fit
X-ray position (Maximum Likelihood fit of an assumed $\beta$-profile folded
with the PSF of the X-ray telescope) lies well within the galaxy assembly (see
Fig.~\ref{f:opt_composit}).
\subsection{Optical spectroscopy}
Multi-object spectroscopy of XMMU\,J1007\ was performed with VLT/FORS2 using the
MXU option in December 2007 and January 2008. One mask was
prepared targeting 37 candidate objects selected as possible cluster members
based on color and stellarity index. Individual exposures of 1308\,s were combined
to yield a total integration time per object of 2.9\,h. Spectra were obtained with
grism 300I; they cover the wavelength range $6000-11000$\,\AA\ with a scale of
3.2\,\AA/pixel at the chosen $2\times2$\,binning.
We obtained 32 classifiable spectra (among them two late-type stars) and
measured the redshifts of galaxies by fitting double Gaussians with fixed
wavelength ratio and same width to the Ca H\&K lines.
The measured redshifts together with coordinates, brightness, and color of
the 32 objects are listed in Table~\ref{t:cluster_members} and marked on
Fig.~\ref{f:chart_spectra} (appendix).
We found 19 concordant redshifts between $1.075$ and $1.088$ with a weighted mean
of 1.08103 and a median of 1.08207. We assume a cluster redshift of 1.082 in
the following. The redshifts in this interval are consistent with a Gaussian
distribution; a one-sample KS-test yields a probability of rejecting the null
hypothesis of 1.3\%. The distribution of all redshifts is shown in Fig.~\ref{f:zhist}.
Following \citet{danese+80} we calculate a line-of-sight velocity
dispersion of $\sigma_{\rm p} = 572$\,km s$^{-1}$ with a 90\% confidence range between
437\,km s$^{-1}$ and 780\,km s$^{-1}$.
Although the cluster does not appear to be relaxed, we may estimate a mass
under the virial assumption. Using $R_{200} = (\sqrt{3}/10)\,\sigma/H(z)$ and $M_{200} =
(4/3)\pi R_{200}^3 \times 200\rho_c$ \citep{carlberg+97a}, we obtain $R_{200} =
790^{+280}_{-185}$\,kpc, and $M_{200} = 1.8^{+2.8}_{-1.0} \times
10^{14}$\,M$_\odot$\ where the given errors correspond to the 90\% confidence
interval of the velocity dispersion. Using the dark matter halo virial scaling
relation by \citet{evrard+08} instead, one obtains $M_{200} = 1.2^{+1.8}_{-0.7} \times
10^{14}$\,M$_\odot$.
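The $\sigma$-based estimate can be reproduced directly from the relations above; the sketch below assumes the standard critical density $\rho_c(z) = 2.775\times10^{11}\,[H(z)/100]^2$\,M$_\odot$\,Mpc$^{-3}$:

```python
# Virial-type mass from the velocity dispersion, using the relations above.
import math

H0, Om, OL = 71.0, 0.27, 0.73
z = 1.082
sigma = 572.0                                    # km/s (line of sight)

Hz = H0 * math.sqrt(Om * (1.0 + z) ** 3 + OL)    # H(z) in km/s/Mpc
R200 = (math.sqrt(3.0) / 10.0) * sigma / Hz      # Mpc, ~790 kpc

rho_c = 2.775e11 * (Hz / 100.0) ** 2             # critical density at z, Msun/Mpc^3
M200 = (4.0 / 3.0) * math.pi * R200 ** 3 * 200.0 * rho_c   # ~1.8e14 Msun
```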
\subsection{X-ray spectroscopy}
X-ray spectra were extracted from the calibrated photon event lists of all
three EPIC cameras onboard XMM-Newton. An aperture with 60\arcsec\
radius ($\sim$500\,kpc at the cluster distance) was used to extract source and
background photons. The X-ray spectrum of XMMU\,J1007\
contains $\sim$200 net photons in the full XMM-band after background subtraction.
An attempt was made to constrain the plasma temperature with a thin thermal
plasma model \citep[a MEKAL model in XSPEC terms,][]{mewe+85}.
The column density of absorbing matter was fixed to its galactic value,
$N_\mathrm{H} = 3.7 \times 10^{20}$\,cm$^{-2}$, the metal abundance to
$Z = 0.3 Z_\mathrm{\sun}$, and the redshift to $z=1.082$.
Despite fixing these parameters, the X-ray temperature could only be roughly
constrained. The fit shown in Fig.~\ref{f:xray_spectra} converges at a
temperature of $T = 5.7$\,keV ($kT > 2.05$\,keV with 90\%\,confidence)
implying a bolometric X-ray luminosity of
$L_\mathrm{X} = 1.3 \times 10^{44}$\,erg\,s$^{-1}$
($L_{\rm X} > 0.8\times 10^{44}$\,erg\,s$^{-1}$ at 90\% confidence,
$L_\mathrm{X}(\mathrm{0.5-2.0\,keV}) = 4 \times 10^{43}$\,erg\,s$^{-1}$).
Following \citet[][]{pratt+09} we estimate the cluster mass using
the luminosity-mass scaling relation. We use their BCES orthogonal fit to the
Malmquist-bias corrected L-M relation, assume self-similar evolution,
$h(z)^{-7/3}$, and obtain $M_{500} = 1.0\times 10^{14}$\,M$_\odot$\
(assuming our aperture covers $R_{500}$).
If we instead use the temperature-mass relation of \citet{vikhlinin+09}
we find $M_{500}=2.1\times10^{14}$\,M$_\odot$\ with a lower limit of
$M_{500}=4.3\times10^{13}$\,M$_\odot$.
\subsection{Lensing properties}
Our follow-up imaging observations confirmed the existence of lensing arcs and
further lensing features in the image. We used the imaging data in the
five passbands (UBVRz) from the LBT and the VLT to calculate photometric
redshifts of the lensed background objects. Due to the distorted morphology of
the lensing arcs we manually defined apertures matching the shape of each
feature. The fluxes of the objects were then extracted using the same aperture
in each image and converted to AB magnitudes.
\begin{table}[thb]
\caption[Photometric redshifts]{Photometric redshifts of lensing features. All redshifts
were forced to be larger than the cluster redshift.}
\centering
\begin{tabular}{llll} \hline\hline
Image & $V_{\rm AB}$ & $z_{\rm phot}$ & $z$ range (90\%) \\ \hline
A1 & 24.3 & 2.72 & 2.63 - 2.82 \\
A2 & 25.1 & 2.63 & 2.54 - 2.73 \\
B1 & 24.5 & 1.39 & 1.35 - 1.52 \\
B2 & 25.7 & 1.63 & 1.08 - 1.74 \\
B3 & 24.7 & 1.94 & 1.70 - 2.12 \\
C1 & 24.3 & 3.36 & 2.98 - 3.54 \\
C2 & 23.6 & 3.20 & 2.70 - 3.40 \\
\hline
\end{tabular}
\label{zphot}
\end{table}
We used the publicly available {\tt hyperz} code \citep{bolzonella+00}
to compare the spectral energy distributions of the lensing features with the
synthetic galaxy SEDs from \citet{bruzualcharlot93}. The parameters for
the construction of the SED data cube were galaxy class, star forming age,
internal reddening, and redshift. For the lens features we allowed only
redshifts beyond the redshift of the cluster ($z=1.082$) and combinations of age
and redshift consistent with our adopted cosmological parameters.
Table~\ref{zphot} shows the results of the SED fitting for the image
components as indicated in Fig.~\ref{f:opt_composit}. Column (3) gives the best
fitting photometric redshift and column (4) the 90\% confidence limits of
the redshift within the best fitting SED template.
The resulting photometric redshifts confirm our tentative identification of
multiple lensed components.
Components A1/A2 and C1/C2 are likely to be images of the same background
objects at $z\sim 2.7$ and $z\sim 3$, respectively.
The situation is less clear for components B1, B2, and B3, where the
uncertainties in the photometric redshifts are large.
We estimate a lensing mass assuming a circularly symmetric lens (a special
case is the singular isothermal sphere -- SIS).
The projected mass inside a tangential arc then becomes $M(\theta) = \Sigma_{\rm cr} \pi
(D_{d}\theta)^2 \simeq 1.1 \times 10^{14} \left(
\frac{\theta}{30''}\right) \left(
\frac{D}{1\mathrm{Gpc}}\right)$\,M$_\odot$. The effective distance $D$ becomes
$D = \frac{D_{\rm d} D_{\rm ds}}{D_{\rm s}} = 724$\,Mpc for $z_{\rm d} = 1.082$
and $z_{\rm s} = 2.7$.
Difficulties arise from
the faintness of the lensing features and from the poorly determined cluster
center. A trace of the feature A1 implies an Einstein radius of $\theta \sim
8''-9''$ and a corresponding mass of $(2.3\pm0.4)\times10^{13}$\,M$_\odot$\
(a 15\% uncertainty in radius is assumed).
If one uses the distance between A1 and the X-ray center ($21\farcs5$)
instead, the mass inside this ring becomes $5.7\times10^{13}$\,M$_\odot$.
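Numerically, the two masses follow from the scaling relation quoted above, evaluated as written (with $\theta=8\farcs5$ taken here as the midpoint of the traced Einstein radius):

```python
# Evaluate the quoted scaling M ~ 1.1e14 (theta/30'') (D/1 Gpc) Msun.
def m_proj(theta_arcsec, D_mpc):
    return 1.1e14 * (theta_arcsec / 30.0) * (D_mpc / 1000.0)

D_eff = 724.0                      # Mpc, for z_d = 1.082 and z_s = 2.7

m_einstein = m_proj(8.5, D_eff)    # theta ~ 8''-9''  -> ~2.3e13 Msun
m_to_xray = m_proj(21.5, D_eff)    # A1 -- X-ray-centre distance -> ~5.7e13 Msun
```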
\section{Discussion and conclusions}
We have presented results of an initial study of the X-ray selected cluster
XMMU\,J1007. We determine a redshift of $z=1.082$ based on 19 spectroscopically
confirmed members. Its most prominent property is a set of
strong lensing features. It is
an optically rich cluster; the absence of a dominant BCG and the elongated
distribution of member galaxies suggest a structure that is not yet
relaxed. The bolometric X-ray luminosity is $L_\mathrm{X} \approx 1.3
\times 10^{44}$\,erg\,s$^{-1}$.
Estimates of the cluster mass were obtained via strong lensing, X-ray
spectroscopy and the velocity dispersion of member galaxies; the values
obtained at different radii are summarized in Fig.~\ref{f:nfw}.
One obtains overlapping error bars for the mass at $\sim$$R_{200}$ ($\sim$virial
radius) and at $\sim$$R_{500}$ ($\sim$X-ray aperture) due to insufficient data
and uncertainties in the scaling relations.
If one takes $M_{\rm vir} \simeq M_{200} = 1.8\times10^{14}$\,M$_\odot$\ at face
value, our strong lensing result seems to be discrepant with an
NFW profile. All data can be made consistent at
a mass of about $4\times10^{14}$\,M$_\odot$, higher than but not excluded by the
velocity dispersion and the X-ray temperature. A weak-lensing mass is key to
fixing the halo profile at large radii, while a deep X-ray observation is necessary to
determine the X-ray morphology and the gas temperature. Modelling of
the strong lensing system is important to probe the inner density profile.
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics[clip=]{14430_f3.ps}}
\caption[]{Comparison of different mass estimates for XMMU\,J1007\ in units of
$10^{14}$\,M$_\odot$. Labels indicate results based on scaling relations by
\citet{vikhlinin+09,pratt+09,carlberg+97a,evrard+08}. The dashed and solid
lines indicate universal density profiles \citep{nfw97} with concentration
parameters of 3.86 and 3.45 for virial masses 1.8 and $4.6\times
10^{14}$\,M$_\odot$\ at an assumed $R_{\rm vir} = 780$\,kpc, respectively \citep{bullock+01}.
}
\label{f:nfw}
\end{figure}
Interestingly,
in a study of a complete sample of 12 MACS clusters \citet{zitrin+10} find the
observed Einstein radii to be larger and hence the central
density of the clusters higher than predicted by simulations and interpret their
finding as a challenge to cluster formation in a $\Lambda$CDM model. They invoke
the possibility that the formation of clusters started at earlier epochs than
currently assumed, leading to higher central Dark Matter concentrations.
Also, \citet{jee+09}
suggest this scenario as a possible explanation for the discovery of
unexpectedly massive clusters at $z\gtrsim 1$ in the moderate survey volume
probed by the XMM-Newton serendipitous surveys \citep[like
e.g.~XMMU\,J2235-2557 and 2XMM\,J083026+524133,][]{mullis+05,lamer+08,rosati+09}.
However, in a recent study of the strong lensing clusters in the {\sc Mare Nostrum
Universe}, Meneghetti et al.~(2010, subm.~to A\&A) find the
concentration and the X-ray luminosity to be biased high, and an excess of
kinetic energy within the virial radius among the strong lensing
clusters. XMMU\,J1007\ is a highly suited target to confront theory and
observation.
The elongated distribution of the cluster member galaxies
may point to the principal direction of merging or accretion
of XMMU\,J1007\ \citep{dubinski98}.
The merging hypothesis is consistent with the system being a strong lens
\citep[see e.g.~the chain distribution of the brightest member galaxies
in the strong-lensing main component of the Bullet Cluster,][]{bradac+06}.
On the other hand, the chain distribution of the bright member galaxies
of XMMU\,J1007\ may suggest that the infall of these galaxies
happens along filaments with small impact parameters,
so that dynamical friction is particularly efficient in dragging them
to the cluster center and forming a dominant BCG within a short time
\citep{donghia+05}.
In analogy with the formation of fossil galaxy groups
studied by the latter authors, the small impact parameters
of the filaments may lead to an early assembly of the gas in the center,
thus to an early start of its cooling.
The resulting system may exhibit an enhanced X-ray luminosity
with respect to the optical one.
This efficient, early formation may offer an alternative explanation
to merging for the reason why particularly X-ray luminous clusters
with chain distribution of their bright members are detected
at high redshifts.
\begin{acknowledgements}
We thank J.~Wambsganss for providing software code describing the lensing
geometry. We thank S. Schindler and W. Kausch for an early analysis of the
lensing features. HQ thanks the FONDAP Centro de Astrofisica for partial
support. This work was supported by the DFG under grants Schw
536/24-2 and BO 702/16-3, and the German DLR under grant 50 QR 0802.
We thank our referee, Florence Durret, for constructive criticism.
Based on data acquired using the Large Binocular Telescope (LBT). The
LBT is an international collaboration among institutions in the United
States, Italy, and Germany. LBT Corporation partners are: The University
of Arizona on behalf of the Arizona university system; Istituto
Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany,
representing the Max-Planck Society, the Astrophysical Institute
Potsdam, and Heidelberg University; The Ohio State University, and the
Research Corporation, on behalf of The University of Notre Dame,
University of Minnesota, and University of Virginia.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Our System and Model (Gated BERT-UNet)}
\input{source/fig/model}
The proposed oxygen estimation solution operates in two steps: (1) Extracting breathing signals from RF signals and (2) Estimating SpO2 from breathing signals. Please refer to Appendix \ref{sup:br-from-rf} for the details of step one. We focus on step two in this section.
We formulate the oxygen saturation prediction from respiration signals as a sequence-to-sequence regression task. \Figref{fig:model} left shows an example of respiration signals and corresponding blood oxygen saturation. The model takes the breathing signal $x \in \mathbb{R}^{1 \times f_{b} T}$ over a $T$-second interval and predicts the oxygen time series $y \in \mathbb{R}^{1 \times f_{o} T}$ over the same period, where $f_b$ and $f_o$ are the sampling frequencies of respiration and oxygen, respectively. In this study, the default sampling rates for respiration and oxygen are $f_b=10Hz$ and $f_o=1Hz$.
As shown in \Figref{fig:model}(a), our backbone model is a combination of a BERT module~\cite{bert} and UNet~\cite{unet}. The \emph{BERT-UNet} backbone consists of an encoder $E$ and an oxygen predictor $F$. The encoder has convolutional layers and bidirectional attention modules while the decoder is fully convolutional and has skip-links from the encoder. The backbone is simply trained by $L_1$ loss and correlation loss. Please refer to Appendix \ref{model:basemodel} for the details of model architecture and loss functions.
Our full model, \emph{Gated BERT-UNet}, augments BERT-UNet with multiple predictive heads to adapt the model's prediction to different medical indices. This adaptation is important for boosting the model's performance, since the relationship between breathing and oxygen saturation has been shown to depend on many medical indices. For example, gender is an influential factor, as men and women have differences in their oxygen transport systems~\cite{reybrouck1999gender}. Another example is that sleep stage affects a person's resting oxygen levels~\cite{choi2016severity}.
To do the adaptation, our Gated BERT-UNet selects the most suitable head for a person via a gate controlled by the subject's categorical indices.
Specifically, as shown in \Figref{fig:model}(b), it has a gate function $G(v,u): {\mathcal{V}} \times {\mathcal{U}} \to \{1,2,\cdots,N\}$ where $v \in {\mathcal{V}}$ and $u \in {\mathcal{U}}$ are accessible/inaccessible variables, $N$ is the number of gate statuses. We use the term \emph{accessible variable} for variables easily available during inference time (e.g., gender, race) and the term \emph{inaccessible variable} for information that is not available during inference, but available during training, like a person's sleep stages.
The prediction of inaccessible variables is learned concurrently with the main task under full supervision.
The model has $N$ heads $\{F_i\}_{i=1}^N$ that adapt the prediction $\hat{y}_i=F_i(E(x))$ to the gate status. It also has an extra predictor $F_u$ to infer inaccessible variables $u$. During testing time, based on the accessible variables $v$ and estimated inaccessible variables $\hat{u}$, we evaluate the gate status $s = G(v,\hat{u})$. Please see Appendix \ref{sec:fullmodel} for the details of (1) the loss function used to train our full model; (2) the exact construction of the gate function $G(v,u)$.
\section{Extracting Breathing Signals from RF Signals}\label{sup:br-from-rf}
We leverage past work on extracting breathing from the RF signals. Specifically, our system is equipped with a multi-antenna Frequency-Modulated Continuous-Wave (FMCW) radio, which is commonly used in passive health monitoring~\cite{rahman2015dopplesleep, yue2018extracting,fan2020home}. The radio transmits a very low power RF signal and captures its reflections from the environment. We process these reflections using the algorithm in~\cite{yue2018extracting} to infer the subject's breathing signal. Past work shows that breathing signals extracted in this manner are highly accurate. Specifically, their correlation with an FDA-approved breathing belt on the person ranges from $91\%$ to $99\%$, depending on the distance from the radio and the distance between people~\cite{yue2018extracting}.
\section{Backbone Model: BERT-UNet}\label{model:basemodel}
Our backbone model is a combination of a BERT module~\cite{bert} and UNet~\cite{unet}. As shown in \Figref{fig:model}(a), our \emph{BERT-UNet} model consists of an encoder $E(\cdot;\theta_e)$ and an oxygen predictor $F(\cdot;\theta_f)$. The encoder is composed of a fully convolutional network (FCN) followed by a bidirectional-transformer (BERT) module~\cite{bert}. The FCN extracts local features from the raw respiration signals, and the BERT module then captures long-term temporal dependencies based on those features.
The predictor $F$ is composed of several deconvolutional layers, which up-sample the extracted features to the same time resolution of oxygen saturation.
Formally, we have $E: {\mathbb{R}}^{1\times f_{b}T} \to {\mathbb{R}}^{n \times \alpha f_{b}T}$ and $F: {\mathbb{R}}^{n \times \alpha f_{b}T} \to {\mathbb{R}}^{1\times f_{o}T}$ where $n$ is the dimension of the respiration feature and $\alpha$ is the down-sampling factor ($\alpha = 1/240$ in our experiments).
The model is trained with a combination of the $L_1$ loss and the correlation loss given below:
\vspace{-2mm}
\begin{equation}
\resizebox{0.45\hsize}{!}{$
{\mathcal{L}}(\hat{y}, y) = \frac{\|\hat{y} - y \|_1}{f_o T} -
\lambda \frac{\sum_{i} (\hat{y}^i - \mu_{\hat{y}})(y^i - \mu_{y})}{\sqrt{\sum_{i}(\hat{y}^i - \mu_{\hat{y}})^2\sum_{i}(y^i - \mu_{y})^2}}.
$}
\label{eq:l1-loss}
\end{equation}
Here $\hat{y}=F(E(x; \theta_e);\theta_f)$ is the model prediction, $y$ is the ground truth oxygen, $\mu_{y}$ and $\mu_{\hat{y}}$ are the mean values of $y$ and $\hat{y}$, and $\lambda$ is a hyper-parameter to balance the two loss terms. We choose the $L_1$ loss over other regression loss functions, since it is more robust to outliers and empirically has better performance. We also use the correlation loss to help in matching the fluctuations in the predicted oxygen with the fluctuations of the ground truth.
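A minimal NumPy sketch of this loss (the value of $\lambda$ here is illustrative, not a tuned hyperparameter from this work):

```python
# L1 + correlation training loss, as in the equation above.
import numpy as np

def l1_corr_loss(y_hat, y, lam=0.1):
    l1 = np.abs(y_hat - y).mean()                     # mean absolute error
    yh = y_hat - y_hat.mean()
    yc = y - y.mean()
    corr = (yh * yc).sum() / np.sqrt((yh ** 2).sum() * (yc ** 2).sum())
    return l1 - lam * corr                            # reward correlated fluctuations

y = 95.0 + 2.0 * np.sin(np.linspace(0.0, 6.0, 120))   # toy SpO2-like trace
perfect = l1_corr_loss(y, y)                          # L1 = 0, corr = 1 -> -lam
rng = np.random.default_rng(0)
noisy = l1_corr_loss(y + rng.normal(0.0, 0.5, y.size), y)
```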
\paragraph{Architecture Specifications.}\label{sup:implement}
The encoder has nine 1-D convolutional layers (Conv-BatchNorm-RReLU) that shrink the features' temporal dimension by 240 times. It is then followed by several bi-directional multi-head self-attention layers (BERT)~\cite{bert} to aggregate the temporal information at the bottleneck. We use 8 layers, 6 heads with hidden-size of 256, intermediate-size of 512 for self-attention, and the max position embeddings is 2400. The decoder contains 7 layers of 1-D de-convolutional layers (DeConv-Norm-RReLU). We also use a skip connection~\cite{unet} by concatenating the convolutional layers in the encoder to the de-convolutional layers in the predictor. Figure~\ref{fig:vanilla} illustrates the overall network architecture.
\input{source/fig/vanilla}
\section{Full Model: Gated BERT-UNet}\label{sec:fullmodel}
In medical applications, there are useful side variables. Adapting to such variables will likely improve performance and make the results more personalized. In many cases, the relevant variables are binary or categorical. For example, the relevant variables for oxygen saturation include gender, whether the person is a smoker, whether they have asthma, etc. The categorical nature of these variables induces \textbf{discontinuity} in the learned function over the physiological indices. Specifically, consider oxygen saturation $y=g_s(x)$ as a function of respiration $x$ and gender $s$ ($0$ for male and $1$ for female). $g_0(x)$ and $g_1(x)$ can be two different functions since men and women have differences in their oxygen transport systems~\cite{reybrouck1999gender}.
To handle such discontinuity for better leveraging the categorical variables, we propose \emph{Gated BERT-UNet}, a new model that augments BERT-UNet with multiple predictive heads. It selects the most suitable head for a person via a gate controlled by the subject's categorical indices. The model supports both variables available at the time of inference (e.g., gender), as well as dense categorical variables concurrently learned from the input signals (e.g., sleep stages).
\Figref{fig:model}(b) illustrates the model. It has a gate function $G(v,u): {\mathcal{V}} \times {\mathcal{U}} \to \{1,2,\cdots,N\}$ where $v \in {\mathcal{V}}$ and $u \in {\mathcal{U}}$ are accessible/inaccessible variables, $N$ is the number of gate statuses. We use the term \emph{accessible variable} for variables easily available during inference time, e.g., gender, and the term \emph{inaccessible variable} for information that is not available during inference, but available during training, like a person's sleep stages.
Inaccessible variables are typically dense time series (e.g., sleep stages). Their prediction is learned concurrently with the main task under full supervision.
The construction of the gate function $G(v,u)$ is described in the next sub-section.
The model has $N$ heads $\{F_i\}_{i=1}^N$ that adapt the prediction $\hat{y}_i=F_i(E(x))$ to the gate status. It also has an extra predictor $F_u$ to infer inaccessible variables $u$. During testing time, based on the accessible variables $v$ and estimated inaccessible variables $\hat{u}$, we evaluate the gate status $s = G(v,\hat{u})$.
In the case of oxygen prediction, $\hat{y}_i$ and the gate status $s$ are time series. As shown in \Figref{fig:model}(b), the final prediction at each time step is the gated combination of every head's output, i.e.
$ \forall t=1,\dots, f_oT, \hat{y}^t=\sum_{i=1}^{N}\bm{1}[s^t=i]\hat{y}^t_{i}$ .
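As a minimal sketch of the equation above (head outputs and gate statuses are made-up toy values), each head emits a prediction series and the gate status selects, at every time step, which head's output becomes the final prediction:

```python
import numpy as np

def gated_combine(head_preds: np.ndarray, gate_status: np.ndarray) -> np.ndarray:
    """head_preds: (N, T) outputs of N heads; gate_status: (T,) values in {0..N-1}."""
    n_heads = head_preds.shape[0]
    indicator = np.eye(n_heads)[gate_status]     # (T, N) one-hot of 1[s^t = i]
    return np.einsum('tn,nt->t', indicator, head_preds)

heads = np.array([[98.0, 97.0, 96.0],            # head 0 (e.g., male, awake)
                  [93.0, 92.0, 91.0]])           # head 1 (e.g., male, REM)
status = np.array([0, 1, 0])                     # gate status per time step
final = gated_combine(heads, status)             # -> [98., 92., 96.]
```

The one-hot indicator makes the sum in the equation collapse to a single head's value at each time step, so the selection is hard, not a soft mixture.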
We train Gated BERT-UNet (GBU) with the following loss,
\vspace{-1mm}
\begin{equation}
{\mathcal{L}}_{\texttt{GBU}}(\hat{y},\hat{u}, y, u) = {\mathcal{L}}(\hat{y}, y) + \frac{\lambda_u}{f_oT}\sum_{t=1}^{f_oT}{\mathcal{L}}_{\texttt{CE}}(\hat{u}^t, u^t),
\end{equation}
where ${\mathcal{L}}$ is the main loss defined in \Eqref{eq:l1-loss}, ${\mathcal{L}}_{\texttt{CE}}$ is the cross-entropy loss to train the branch for predicting inaccessible variables and $\lambda_u$ is a balancing hyperparameter.
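A compact sketch of this training loss, assuming the main loss ${\mathcal{L}}$ from \Eqref{eq:l1-loss} is a mean absolute error; the shapes and the value of $\lambda_u$ here are illustrative:

```python
import numpy as np

def gbu_loss(y_hat, y, u_logits, u, lambda_u=0.1):
    """Sketch of L_GBU: main regression loss plus a time-averaged cross-entropy
    on the inaccessible-variable branch, balanced by lambda_u."""
    main = np.mean(np.abs(np.asarray(y_hat) - np.asarray(y)))   # assumed L1 main loss
    u_logits = np.asarray(u_logits, dtype=float)
    # log-softmax over classes, then pick the log-probability of the true class u^t
    log_p = u_logits - np.log(np.exp(u_logits).sum(-1, keepdims=True))
    ce = -np.mean(log_p[np.arange(len(u)), u])                  # averaged over time
    return main + lambda_u * ce
```

In the real model the cross-entropy term supervises the branch $F_u$ that predicts the inaccessible variable at every output time step.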
\paragraph{Mapping Variables to Heads.}\label{sec:grad-sim}
The number of heads in a Gated BERT-UNet model puts an upper bound on the number of possible gate states. For example, if the model has 6 heads, the gate can take only 1 of 6 states. Typically, we have many more variable states than gate states. To find a proper mapping $G(v,u)$ from variable state to gate state, we rely on gradient similarity. For example, if we want to check whether male smokers should be in the same group as female smokers, we take a pretrained backbone BERT-UNet and compute its averaged gradient (w.r.t the loss function) over all male smokers and all female smokers in the dataset. Then we check the cosine similarity between the two gradients. If the gradients are similar, which means the two categories move the loss function in the same direction, we can use the same predictor for them. On the other hand, if the gradients are vastly different, it is preferable to separate such categories and assign them to different gate statuses.
\input{source/exp_dataset}
\section{Baselines}\label{sec:baseline}
We compare the following neural network architectures for our backbone model: (a)~\textit{CNN} is a fully convolutional model composed of eight 1-D convolutional layers and seven 1-D deconvolutional layers; (b)~\textit{CNN-RNN} augments the CNN model with a recurrent unit (a one-layer LSTM) in the bottleneck to better capture the long-term temporal relationships in the data; (c)~\textit{BERT-UNet} further makes two improvements on the CNN-RNN model. First, it replaces the recurrent unit with an attention module for temporal modelling. Second, it adds skip links between encoding convolutional layers and decoding deconvolutional layers at the same temporal scale to better capture the signal's local information.
To evaluate our design for incorporating side information, we compare the following models:
(a) \textit{BERT-UNet + VarAug}, which uses BERT-UNet as the backbone and takes accessible variables as extra inputs and inaccessible variables as auxiliary tasks; (b) our \textit{Gated BERT-UNet} model, which gates multiple predictive heads by physiological variables.
\section{Training and Evaluation Protocols}\label{sec:protocol}
\paragraph{Train/Valid/Test Split.} Due to the limited amount of RF data, the RF dataset is entirely held out for testing. In our study, all models are trained on the medical datasets and evaluated on both the medical datasets and the RF dataset. Collectively, the medical datasets have about 48,000 hours of data from 5,765 subjects in total. We randomly split subjects, 70\% for training and validation and 30\% for testing, and fix the splits in all experiments. We train each model on the union of the training sets from the three medical datasets and test it on each test set.
\paragraph{Side Variable Specifications.} For models that incorporate side variables, we use \emph{gender} as the accessible variable and \emph{sleep stages} as the inaccessible variable. In Gated BERT-UNet, we use gradient similarity to map variable states to gate statuses as described in the method section, which results in the following $6$ categories: (male, awake), (male, REM sleep), (male, non-REM sleep), (female, awake), (female, REM sleep), (female, non-REM sleep). The sleep stages themselves are learned from the input since they are an inaccessible variable. In the VarAug baseline, the gender variable is provided as an additional input and the sleep stages are used as an auxiliary task in a multitask model.
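For concreteness, a minimal sketch of this 6-way gate; the six (gender, stage) categories are as listed above, while the numeric ordering of the heads is an arbitrary implementation choice of ours:

```python
# The six gate statuses used in the experiments: gender (accessible variable)
# crossed with a 3-way sleep stage (inaccessible, predicted from breathing).
GENDERS = ('male', 'female')
STAGES = ('awake', 'REM sleep', 'non-REM sleep')

def gate_status(gender: str, stage: str) -> int:
    """Map a (gender, sleep stage) pair to one of the N = 6 predictive heads.
    The head indexing itself is a hypothetical choice, not from the paper."""
    return GENDERS.index(gender) * len(STAGES) + STAGES.index(stage)
```

Since sleep stage is a dense time series, the gate status, and hence the selected head, changes over the course of the night.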
\paragraph{Implementation Details.}
We implement all models in PyTorch. All experiments are carried out on a NVIDIA TITAN Xp GPU with 12 GB memory. The number of parameters for the Gated BERT-UNet model is 26,821,113 and the model size is 107.28MB. In the training process, we use the Adam optimizer with a learning rate of $2 \times 10^{-4}$, and train the model for 500 epochs. Due to the varying input length, we set the batch size for all models to 1 (i.e., one night of respiration signal and the corresponding oxygen time series).
\section{Results for Different Skin Colors}\label{sec:diffskin}
Since pulse oximeters rely on measuring light absorbance through the finger, they are known to be affected by skin color and tend to overestimate blood oxygen saturation in subjects with dark skin~\cite{feiner2007dark,sjoding2020racial}. A large study that looked at tens of thousands of white and black COVID patients found that the ``reliance on pulse oximetry to triage patients and adjust supplemental oxygen levels may place black patients at increased risk for hypoxaemia"~\cite{sjoding2020racial}. In contrast, breathing and RF signals have no intrinsic bias against skin color. \Figref{fig:race-result} shows the distributions of the oximetry-based ground-truth oxygen and the Gated BERT-UNet prediction for different races, for the union of all datasets. The ground-truth measurements from oximetry show a clear discrepancy between black and white subjects. In particular, black subjects have higher average blood oxygen. This is compatible with past findings that pulse oximeters overestimate blood oxygen in dark-skinned subjects~\cite{feiner2007dark}. In contrast, the breathing-based oxygen prediction corrects or reduces this bias, and shows more similar oxygen distributions for the two races.
\input{source/fig/race_result}
\section{Extra Results}\label{sec:extra-results}
\subsection{Example of Same Breathing Rate but Different Oxygen Level}
One might wonder whether our model predicts oxygen merely from the breathing frequency. We show this is not the case: our model understands that the same breathing frequency does not translate to the same oxygen level. \Figref{fig:same-breath} shows two people breathing at the same frequency of 14 BPM, yet the model correctly realizes that one of them has a high and stable oxygen level of 98\% while the other has a relatively low oxygen level of 93\%. The model even follows the dynamic change in their oxygen levels from one second to the next. This is possible because the model analyzes the full details of the breathing signal and its dynamics, which is a dense 1-D input, not just a single variable like the rhythm. To see this, consider again the example in \Figref{fig:same-breath}. While the two people breathe at the same frequency, the person whose breathing is plotted in blue suddenly starts taking deeper breaths. This is an indication that the person is low on oxygen and is trying to increase their oxygen intake. When the person takes such deep breaths, the oxygen level increases. In contrast, the person whose oxygen is plotted in orange is not struggling with low oxygen and their breathing is steady.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.9\columnwidth]{./figures/appendix/same-breath.pdf}
\vspace{-4mm}
\caption{Example of same breathing rate but different oxygen saturation.}
\label{fig:same-breath}
\end{figure}
\subsection{Qualitative Results in Different Datasets}
\label{sec:visual}
We include more visualizations of the breathing signals and the corresponding oxygen saturation predicted by the \textit{Gated BERT-UNet} model for the medical datasets: MESA (\Figref{fig:mesa1} and \Figref{fig:mesa2}) and MrOS (\Figref{fig:mros1} and \Figref{fig:mros2}). We also visualize the model's predictions for unhealthy subjects with various diseases including chronic obstructive pulmonary disease (COPD), asthma, and diabetes. In the plots, the background color indicates the sleep stage: dark grey, light grey and white correspond to non-REM sleep, REM and Awake, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/mesa_eg2-v2-new.pdf}
\caption{\textit{MESA} Example 1.}
\label{fig:mesa1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/mesa_eg1-v2-new.pdf}
\caption{\textit{MESA} Example 2.}
\label{fig:mesa2}
\end{figure}
\paragraph{MESA.}
The example in \Figref{fig:mesa1} shows our model's ability to capture the fluctuations of oxygen saturation. At the same time, the example in \Figref{fig:mesa2} shows that our model accurately detects the region of low oxygen saturation, which highlights its usefulness in monitoring patients.
\paragraph{MrOS.}
From the zoomed-in regions (a) and (b) in \Figref{fig:mros1}, we see that the model exhibits a larger error when the ground-truth SpO2 reading is very low. This is mainly caused by label imbalance in the training set, since subjects usually spend much less time at low oxygen levels (e.g., below 90\%) than at normal levels between 94\% and 100\%. \Figref{fig:mros2} shows another example of the dynamics. As shown in the zoomed range, oxygen fluctuations correlate with one's sleep stage: in REM (colored light gray), the oxygen fluctuates drastically, while in non-REM sleep (colored dark gray), the oxygen is much more stable and fluctuates in a small range. Our model makes accurate predictions since it leverages the sleep stage information.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/mros_eg2-v3-crop-new.pdf}
\caption{\textit{MrOS} Example 1.}
\label{fig:mros1}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/mros_eg1-v2-new.pdf}
\caption{\textit{MrOS} Example 2.}
\label{fig:mros2}
\end{figure}
\newpage
\subsection{Results for Relevant Diseases}
As we mentioned earlier, using breathing as the input to the neural network model allows us to both train and test on large respiration datasets from past sleep studies. Since these datasets contain diverse people with a variety of diseases, they make it possible to check how the model generalizes to unhealthy individuals. In particular, we are interested in diseases that interact with oxygen saturation, including pulmonary diseases such as chronic obstructive pulmonary disease (COPD), chronic bronchitis, asthma, and emphysema, as well as diabetes and coronary heart disease.
Diabetes is a disease in which the patient's blood sugar levels are too high. Research has shown that diabetes is a risk factor for severe nocturnal hypoxemia in obese patients~\cite{lecube2009diabetes}. Further, diabetic patients tend to have 3\% to 10\% lower lung volumes than adults without the disease. \Figref{fig:diabetes2} shows an example of a diabetic patient whose oxygen level keeps oscillating between 85\% and 95\%. From the zoomed-in region, we can see that our model captures the oscillating oxygen dynamics and accurately predicts the oxygen values.
Chronic obstructive pulmonary disease (COPD) refers to a chronic inflammatory lung disease that obstructs airflow from the lungs. Severe COPD can cause hypoxia, an extremely low oxygen level. \Figref{fig:copd1} shows an example of a COPD patient who experiences several oxygen drops during the REM period (indicated by the orange box). As we can see, our model successfully predicts these oxygen-drop events.
Chronic bronchitis refers to long-term inflammation of the bronchi. Chronic bronchitis patients can have shortness of breath, which affects oxygen levels. \Figref{fig:cb2} is an example of a chronic bronchitis patient whose oxygen level keeps oscillating between normal and low during the night. Our model captures the trend well.
Asthma is a condition in which a person's airways narrow, swell, and produce extra mucus. An asthma patient's oxygen levels can be irregular due to the breathing difficulty caused by the disease. As shown in the example in \Figref{fig:asthma1}, the patient has a normal oxygen level most of the time, but the oxygen occasionally drops to low levels. Our model works well at detecting those abnormal oxygen levels from the person's respiration.
Emphysema refers to a lung condition in which the air sacs in the person's lung are damaged. Patients with emphysema usually have breathing issues that affect oxygen saturation. \Figref{fig:emphysema2} shows an example of an emphysema patient. The patient experiences a long period of low oxygen during sleep (as highlighted by the orange box). Our model successfully predicts such unusual oxygen dynamics.
Coronary heart disease develops when the arteries of the heart are too narrow to deliver enough oxygen-rich blood to the heart. A deficiency in the supply of oxygen-rich blood can cause one's oxygen saturation to deviate from normal levels. \Figref{fig:chd2} is an example of a person with coronary heart disease who experiences several severe oxygen drops during sleep. Our model accurately tracks their oxygen level and detects the oxygen reduction events.
\begin{figure}[t]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/disease/mros-visit2-aa5258_diabetes_full-crop_new2.pdf}
\caption{Diabetes Patient Example. The first row shows the full-night oxygen saturation, while the second row zooms into the first row's orange box region.
The black and blue curves are the ground-truth oxygen and our prediction. The background color indicates the subject's sleep stages. The dark grey, light grey and white correspond to non-REM sleep, REM and Awake, respectively.}
\label{fig:diabetes2}
\end{figure}
\newpage
\begin{figure}[t]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/disease/shhs2-205399_copd_full-crop_new2.pdf}
\caption{Chronic Obstructive Pulmonary Disease (COPD) Patient Example. The first row shows the full-night oxygen level, while the second row zooms into the first row's orange box region. The black and blue curves are the ground-truth oxygen and our prediction. The background color indicates the subject's sleep stages. The dark grey, light grey and white correspond to non-REM sleep, REM and Awake, respectively.}
\label{fig:copd1}
\vspace{-4mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/disease/shhs2-201315_bronchitis_full-crop_new2.pdf}
\caption{Chronic Bronchitis Patient Example. The first row shows the full-night oxygen saturation, while the second row is a zoomed-in visualization of the orange box region in the first row.
The black and blue curves are the ground-truth oxygen and our prediction. The background color indicates the subject's sleep stages. The dark grey, light grey and white correspond to non-REM sleep, REM and Awake, respectively.}
\label{fig:cb2}
\vspace{-4mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/disease/shhs2-201544_asthma_full-crop_new2.pdf}
\caption{Asthma Patient Example. The first row shows the full-night oxygen saturation, while the second row is a zoomed-in visualization of the orange box region in the first row.
The black and blue curves are the ground-truth oxygen and our prediction. The background color indicates the subject's sleep stages. The dark grey, light grey and white correspond to non-REM sleep, REM and Awake, respectively.}
\label{fig:asthma1}
\vspace{-4mm}
\end{figure}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/disease/shhs2-200858_emphysema_full-crop_new.2.pdf}
\caption{Emphysema Patient Example. The first row shows the full-night oxygen saturation, while the second row is a zoomed-in visualization of the orange box region in the first row.
The black and blue curves are the ground-truth oxygen and our prediction. The background color indicates the subject's sleep stages. The dark grey, light grey and white correspond to non-REM sleep, REM and Awake, respectively.}
\label{fig:emphysema2}
\vspace{-4mm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.86\columnwidth]{./figures/appendix/disease/shhs2-201607_CHD_full-crop_new2.pdf}
\caption{Coronary Heart Disease Patient Example. The first row shows the full-night oxygen saturation prediction results, where the black curve indicates the ground-truth oxygen saturation and the blue curve indicates our prediction. The second row is a zoomed-in version of the orange box region in the first row. The background color indicates sleep stages. The dark grey, light grey and white correspond to non-REM sleep, REM and Awake, respectively.}
\label{fig:chd2}
\vspace{-4mm}
\end{figure}
\input{source/discuss}
\section{Introduction}
Remote health monitoring and telehealth are increasingly popular because they reduce costs and facilitate access to healthcare, particularly for people in remote locations~\cite{al2019remote}. Further, remote health monitoring can track the long-term physiological state of a patient or an older person who lives alone at home, and enable family and professional caregivers to provide timely help~\cite{celler1995remote,rf-fall,fan2020home, yang2022artificial}. Delivering such services, however, depends on the availability of solutions that continuously measure people's physiological signals at home, with minimal overhead to patients.
Oxygen saturation is an important physiological signal whose at-home monitoring would benefit very old adults and individuals at high risk for low blood oxygen~\cite{moss2005compromised}.
Oxygen saturation refers to the amount of oxygen in the blood, that is, the fraction of oxygen-saturated hemoglobin relative to the total blood hemoglobin. Normal oxygen levels range from 94\% to 100\%. Lower oxygen can be dangerous and, if severe, lead to brain and lung failure~\cite{diaz2002usefulness,lapinsky1999safety}.
Today, measuring oxygen saturation requires the person to wear a pulse oximeter on their finger and actively measure themselves. While pulse oximeters are very helpful, they can be impractical in some at-home monitoring scenarios. In particular, old people in their late 80s and 90s are at high risk for low blood oxygen~\cite{nlm} and should regularly monitor their oxygen. Many of them, however, may suffer from dementia or cognitive impairment that prevents them from measuring themselves. COVID and pneumonia patients recovering at home can suffer from delirium~\cite{han2020}, which can affect their reasoning and ability to measure their oxygen levels. Additionally, blood oxygen tends to drop during sleep, making it particularly important to track oxygen overnight~\cite{palma2008oxygen,gries1996normal}. Yet people cannot actively measure themselves while asleep.
The above use cases motivate us to try to complement pulse oximetry with a new approach that can work passively and continuously, assessing blood oxygen throughout the night without requiring the person to wear a sensor or actively measure themselves.
Prior attempts at estimating blood oxygen passively, without a wearable sensor, rely on cameras~\cite{mathew2021remote,van2016new,van2019data,shao2015noncontact,bal2015non,guazzi2015non}.
For example, \cite{mathew2021remote} proposes a convolutional neural network that analyzes a video of the person's palm to estimate their blood oxygen. While not requiring wearable sensors, it cannot work continuously since the user cannot keep their hand in front of the camera for a long time. It also cannot operate in dark settings and thus cannot monitor oxygen during sleep. To bypass the limitations imposed by cameras, we propose a different sensing modality, \emph{radio-frequency (RF) signals}.
We propose to monitor oxygen saturation by analyzing the radio signals that bounce off a person's body. Recent research has demonstrated the feasibility of monitoring breathing, heart rate, and even sleep stages by transmitting a very low power RF signal and analyzing its reflections off a person's body~\cite{adib2015smart,yue2018extracting}.
The medical literature shows an inherent dependence and dynamic interaction between the breathing signal and oxygen saturation~\cite{hanson1975, na1961, Parkins1998}. Building on these advances, we use RF signals to track a person's breathing signal and train a neural network to infer oxygen from respiration.
Such a design can measure a person's oxygen without any physical contact or wearable devices. Thus, it does not burden the patient or interfere with their sleep.
Using breathing as a mediator has an additional side benefit. It would be very hard to collect a large dataset of radio signals and the corresponding oxygen levels. Luckily, however, there are multiple large medical datasets that contain continuous breathing signals paired with oxygen measurements. This allows us to train a model to infer oxygen from breathing and test it directly on breathing extracted from radio signals.
While designing our model, motivated by personalized medicine, we aim for a model that can adapt to the patient's medical indices (e.g., gender, disease diagnoses). We observe that many medical indices are binary or categorical. To leverage such variables, we propose \emph{Gated BERT-UNet}, a new transformer model that has multiple predictive heads. It selects the most suitable head for each person via a gate controlled by the person's categorical indices.
We evaluate our model on medical and RF datasets. Experiments show that our model's average absolute error in predicting oxygen saturation is 1.3\%, which is significantly lower than the state-of-the-art (SOTA) camera-based models~\cite{mathew2021remote}.
\section{Datasets and Metrics} \label{sec:datasets}
\paragraph{Medical Datasets.}
We leverage three public medical datasets: Sleep Heart Health Study (\textit{SHHS})~\cite{SHHS2}, Multi-Ethnic Study of Atherosclerosis (\textit{MESA})~\cite{MESA}, and Osteoporotic Fractures in Men Study (\textit{MrOS})~\cite{MROS}.
The datasets were collected during sleep studies. For each subject, they include the respiration signals throughout the night along with the corresponding blood oxygen time series.
The breathing signals are collected using a breathing belt around the chest or abdomen, and the oxygen is measured using a pulse oximeter.
The datasets also contain side variables, including sleep stages, which assign the subject at every time instance to one of the following: Awake, Rapid Eye Movement (REM), or non-REM sleep.
We note that the subjects in these studies have an age range between 40 and 95, and some of them suffer from a variety of diseases such as chronic bronchitis, cardiovascular diseases, and diabetes. This allows for a wider range of oxygen variability beyond the typical range of healthy individuals.
\begin{wrapfigure}{r}{0.3\textwidth}
\centering
\vspace{-4mm}
\includegraphics[width=0.3\columnwidth]{./figures/method/RF_Device.pdf}
\vspace{-4mm}
\caption{The radio device to collect RF signals.}
\vspace{-6mm}
\label{fig:radio}
\end{wrapfigure}
\paragraph{RF Datasets.} We collected a dataset of RF signals paired with SpO2 measurements. The data is collected from two sleep labs. In total, there are 400 hours of data from 49 overnight recordings of 32 subjects. The dataset contains subjects of different genders and races. Some subjects are healthy volunteers while others are patients with sleep problems. Thus, the ground-truth oxygen saturation distribution is wider than the normal range. A radio device is installed in the room to collect RF signals, as shown in Fig.~\ref{fig:radio}. The radio signals are synchronized with the SpO2 measurements and processed with the algorithm in~\cite{yue2018extracting} to extract the person's breathing signals.
\paragraph{Metrics.}
Let $\hat{y}$, $y$ denote the predicted and ground truth oxygen saturation. Following the previous work~\cite{mathew2021remote}, we use three standard metrics for evaluation: (1)~Correlation: \resizebox{0.2\hsize}{!}{$\frac{\sum_{t} (y^t-\mu_y)(\hat{y}^t-\mu_{\hat{y}}) }{\sqrt{\sum_{t} (y^t-\mu_y)^2 \sum_t(\hat{y}^t-\mu_{\hat{y}})^2}}$}, where $\mu_y$ and $\mu_{\hat{y}}$ are the averaged oxygen saturations; (2)~MAE: \resizebox{0.15\hsize}{!}{$\frac{1}{T} \sum_{t=1}^T |y^t - \hat{y}^t|$}; and (3)~RMSE: \resizebox{0.15\hsize}{!}{$\sqrt{\frac{1}{T} \sum_{t=1}^T (y^t - \hat{y}^t)^2}$}.
The models take a night of breathing as input and predict the corresponding oxygen levels. During the evaluation, we divide the model's prediction and ground truth oxygen into 240-second segments to compute the metrics. We note that those metrics are sensitive to the segment's length. For a fair comparison, we follow \cite{mathew2021remote} and use 240-second intervals.
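The segment-wise evaluation can be sketched as below, assuming a 1~Hz oxygen series so that a 240-second segment is 240 samples; trailing partial segments are dropped in this sketch:

```python
import numpy as np

def segment_metrics(y, y_hat, seg_len=240):
    """Per-segment correlation / MAE / RMSE, each averaged over segments."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    n = (len(y) // seg_len) * seg_len                  # drop the partial tail
    ys, ps = y[:n].reshape(-1, seg_len), y_hat[:n].reshape(-1, seg_len)
    corr = np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(ys, ps)])
    mae = np.mean(np.abs(ys - ps))
    rmse = np.mean(np.sqrt(np.mean((ys - ps) ** 2, axis=1)))
    return corr, mae, rmse
```

Computing the metrics per segment and then averaging, rather than over the whole night at once, is what makes the numbers sensitive to the choice of segment length.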
\subsection{Datasets} \label{sec:datasets}
\Tabref{tab:dataset} summarizes the statistics of the datasets used in our experiments. SHHS, MESA, and MrOS refer to three public medical datasets developed for sleep studies: the Sleep Heart Health Study~\citep{SHHS2}, the Multi-Ethnic Study of Atherosclerosis~\citep{MESA}, and the Osteoporotic Fractures in Men Study~\citep{MROS}. They are large datasets containing thousands of subjects of different genders, ages, and health conditions. We train our model on the union of these three datasets. We randomly split the data such that 70\% of the subjects are used for training while the remaining 30\% are reserved for testing. We also collected a small dataset using our radio device to validate our system in the real application of contactless oxygen monitoring.
\noindent \textbf{Medical Datasets.}
The medical datasets contain rich physiological indices of the subjects. We mainly use the following in our experiments: 1) respiration signals measured by a breathing belt; 2) oxygenation measured by pulse oximetry; 3) sleep stages, which indicate the subject's stage during sleep. There are five stages: Awake, Rapid Eye Movement (REM), and three different levels of sleep~(N1, N2, and N3).
We note that the subjects in these studies have a wide age range and suffer from a variety of diseases such as chronic bronchitis, cardiovascular diseases, and diabetes. This allows for a wider range of oxygen variability beyond the typical range of healthy individuals.
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\columnwidth]{./figures/method/RF_Device.pdf}
\vspace{-2mm}
\caption{Radio device used to collect RF signals. The monitoring is done in a contactless manner and without imposing any overhead on the patient. }
\vspace{-4mm}
\label{fig:radio}
\end{figure}
\noindent \textbf{RF Dataset.}
We collect a small RF dataset from the sleep labs of a university and a hospital. It contains 49 nights of data from 32 subjects.
\Figref{fig:radio} shows an example data collection setting:
a radio device is installed in the room to collect RF signals, and the subject is asked to wear a pulse oximeter that records oxygen saturation data during sleep. As mentioned in the method section, the radio signals are processed to extract the subject's respiration signals. We then synchronize the respiration signals and oxygenation data to build the RF dataset.
\section{Related Work}
\noindent \textbf{Contactless Health Sensing with Radio Signals} The past decade has seen a rapid growth in research on passive sensing using radio frequency (RF) signals. Early work has demonstrated the possibility of sensing one's breathing and heart rate using radio signals~\citep{adib2015smart}. Building on this work, researchers have found that by carefully analyzing the RF signals that bounce off the human body, they can monitor a variety of health metrics including sleep, respiration signal, heart rate, gait, falls, and even human emotions~\citep{nguyen2016continuous,wang2017tensorbeat,yue2018extracting, wigait,wifall,rf-fall,eq-radio,jiang2018smart}. Our work adds a new method to the general area of contactless sensing of physiological signals from radio waves.
\noindent \textbf{Predicting Oxygen Saturation}
No past work predicts oxygen levels from breathing signals or radio waves. Past work on oxygen prediction focuses on extrapolating or interpolating oxygen measurements. Specifically, several papers use auto-regressive models that take past oxygen readings and extrapolate oxygen level for the next few minutes~\citep{elmoaqet2013predicting,elmoaqet2014novel,elmoaqet2016multi}. Other work uses current oxygen measurements to predict changes in oxygen in the five minutes after adjusting ventilator setting~\citep{ghazal2019using}. Some past work models the photoplethysmogram (PPG) signal, i.e., the raw data used by pulse oximeters. For example, \citep{martin2013stochastic} proposes a stochastic model of PPG that reconstructs missing oxygen readings from observed values. Thus, all past methods estimate future or missing oxygen values from available oxygen measurements. Our work differs from all of this prior work in that we predict oxygen values from other modalities like respiration or radio waves, and enable oxygen sensing in a contactless manner.
\noindent \textbf{Adaptation of Machine Learning Models to Medical Indices}
Prior deep learning models operating on physiological signals typically do not adapt to a person's medical indices. For example, the literature has models that infer sleep stages from respiration~\citep{zhao2017learning}, detect arrhythmia from ECG~\citep{kiyasseh2020clocs,kiyasseh2020clops}, and classify emotion from EEG signals~\citep{murugappan2010classification}. A recent survey~\citep{rim2020deep} collected 147 papers about learning with physiological signals. None of the deep models therein adapts to a person's medical indices.
The deep learning literature includes a few approaches for leveraging auxiliary variables. If the variable is available at inference time, past work typically takes it as an extra input feature~\cite{narayan2017neural,shen2016automatic}. Some variables are only accessible during training. In this case, they are typically used as an extra supervisor to regularize the model via multi-task learning~\cite{liu2019auxiliary,mordan2018revisiting,valada2018deep}.
We analytically and empirically show that GNP is a better design for adapting to categorical variables, and outperforms methods that take variables as input and those that leverage them through multitask learning.
\section{Experiments}
\label{sec:exp}
We present the results of an empirical evaluation on the medical datasets and RF data. For the error metric, we use the average absolute error between the predicted oxygen saturation $\hat{y}$ and the ground truth $y$ at any point in time. The error is first averaged for every night and each subject, i.e., $\frac{1}{T} \sum_{t=1}^T |y^t - \hat{y}^t|$. We then report the mean and standard deviation of the per-night averaged errors over the dataset.
\input{source/exp_dataset}
\subsection{Compared Models}
\label{sec:models}
We would like to evaluate both our backbone \textit{BERT-UNet} architecture, and our \textit{GNP} design for incorporating physiological variables.
To evaluate our backbone design without side variables, we compare our \textit{BERT-UNet} model to prior models for learning from physiological signals. Specifically, we compare the following models: (a) \textit{Linear Regression}: we use six hand-crafted time series statistics proposed by \cite{picard2001toward} as breathing features and run linear regression. (b) \textit{CNN-RNN}: we adapt the CNN-RNN proposed in \cite{zhao2017learning} to our oxygen prediction task. It contains a feature encoder (CNN-RNN) and an oxygen predictor. (c) \textit{BERT-UNet}: our backbone model described in Sec.~\ref{model:basemodel}.
To evaluate our design for incorporating side information, we compare the following models:
(a) Our \textit{Gated Neural Predictor (GNP)} model which uses a multi-head model gated by physiological variables.
(b) \textit{VarAug}: This model uses BERT-UNet as the backbone, takes accessible variables as extra inputs, and uses inaccessible variables as auxiliary tasks. (c) \textit{PCGrad-Adapted}: we adapt PCGrad~\cite{PCGrad} to our task. The original PCGrad was proposed to improve multi-task learning and deal with scenarios where the tasks have divergent gradients. The model shares one encoder across different tasks, and projects the divergent gradients along consistent directions during training. To adapt this design to our task, we augment the BERT-UNet model with PCGrad applied to subgroups with different values of their medical indices. To train the single predictor $F$, in each iteration, we compute losses on the different subgroups. Each subgroup $i$ has an oxygen prediction loss ${\mathcal{L}}_i$. We compute the predictor's gradients $\nabla_{\theta_f} {\mathcal{L}}_i$ for each loss, calibrate them with PCGrad, and then use the calibrated gradients to update the parameters of the predictor.
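For reference, the gradient-calibration rule of PCGrad applied to per-subgroup losses can be sketched on flattened gradients (a minimal numpy illustration of the published projection rule, not the actual training code):

```python
import numpy as np

def pcgrad(grads, seed=0):
    """Combine per-subgroup gradients with the PCGrad projection:
    for each gradient, remove its component along any *conflicting*
    (negative inner product) gradient of another subgroup, then sum.
    grads: list of 1-D arrays (flattened predictor gradients)."""
    rng = np.random.default_rng(seed)
    out = []
    for i in range(len(grads)):
        g = np.array(grads[i], dtype=float)
        others = [j for j in range(len(grads)) if j != i]
        rng.shuffle(others)  # PCGrad visits the other tasks in random order
        for j in others:
            dot = g @ grads[j]
            if dot < 0:  # conflict: project the conflicting part out
                g = g - dot / (grads[j] @ grads[j]) * grads[j]
        out.append(g)
    return np.sum(out, axis=0)
```

Non-conflicting gradients pass through unchanged, so the rule only intervenes when subgroups pull the shared predictor in opposing directions.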
\input{source/fig/visualize_shhs}
\input{source/table/main-result-v5}
\subsection{Evaluation on Medical Datasets}
We empirically evaluate the models in Sec.~\ref{sec:models} on the medical datasets. All models are trained on the union of the training sets from the three medical datasets, and tested on each test set, and on the union of the three test sets. For models that incorporate side variables, we use gender as the accessible variable and sleep stages as the inaccessible variable. In GNP, we use the gradient to pick gate status as described in Sec.~\ref{sec:grad-sim}, which results in the following categories: (male, awake), (male, REM), (male, N1+N2+N3), (female, awake), (female, REM), (female, N1+N2+N3). The sleep stages themselves are learned from the input since they are an inaccessible variable. In the baselines, the gender variable is provided as an additional input and the sleep stages are used as an auxiliary task in a multitask model.
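The resulting gating can be pictured as a simple lookup from the two variables to one of the six prediction heads (a hypothetical sketch; the names and index order are ours, not the authors' implementation):

```python
# Gender is an accessible input; the sleep stage comes from an internal
# predictor at test time, since it is an inaccessible variable.
STAGE_GROUP = {"awake": "awake", "REM": "REM",
               "N1": "sleep", "N2": "sleep", "N3": "sleep"}
GENDERS = ("male", "female")
GROUPS = ("awake", "REM", "sleep")

def gate_index(gender, stage):
    """Map (gender, sleep stage) to one of the 6 prediction heads."""
    return GENDERS.index(gender) * len(GROUPS) + GROUPS.index(STAGE_GROUP[stage])
```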
\textbf{Results.}
The results are shown in \Tabref{tab:res-main}. The upper rows in the table show the results of models that do not leverage side variables. The table shows that the BERT-UNet model has an average prediction error of 1.62\%, and consistently outperforms the linear regression model and the CNN-RNN model on all datasets.
The bottom rows in the table show the results of models leveraging accessible and inaccessible medical variables. VarAug, PCGrad, and our GNP are all built on our BERT-UNet architecture. The objective is to compare their ability to leverage side variables. The table shows that VarAug, PCGrad, and GNP have overall average prediction errors of 1.58\%, 1.69\%, and 1.55\%, respectively. VarAug and GNP perform better than the backbone BERT-UNet, demonstrating the benefit of leveraging side variables. PCGrad performs worse than the backbone BERT-UNet, showing that this design could not utilize the side variables effectively. Our GNP outperforms both baselines on all three datasets, demonstrating that a gated multi-head approach works best for such categorical side variables. More generally, these results show for the first time that respiration signals can be used to infer oxygen saturation. Further, the errors are comparable to the accuracy of consumer pulse oximeters (wearable devices with average errors ranging from 0.4\% to 3.5\%~\citep{lipnick2016accuracy}).
\textbf{Visualization.}
To better understand how the model works, we visualize an example of its prediction.
Figure~\ref{fig:visualize_shhs2} visualizes the predicted oxygen saturation of the GNP model and the VarAug model on a male subject in the SHHS dataset. As the ground-truth oxygen saturation values are integers, we round the predicted oxygen values. The background color indicates different sleep stages. `Light grey' and `white' correspond to `REM' and `Awake', respectively. For clarity, we use one color, `dark grey', for all three non-REM `Sleep' stages, N1, N2, and N3. We observe that GNP consistently outperforms VarAug over the whole night. The small panels focus on different sleep stages. In general, different sleep stages tend to show different behavior, hence the importance of using a gated model. Specifically, oxygen is typically more stable during non-REM `Sleep' than during REM and Awake stages. The figure shows that GNP can track the ups and downs in oxygen and is significantly more accurate than VarAug. This experiment demonstrates that the way we incorporate the sleep stages into the model improves performance across different sleep stages. Please see the supplement for visualizations on the other datasets.
\input{source/fig/fig_breathing_rate_vs_full_breathing}
\subsection{Evaluation on RF Dataset}
As mentioned earlier, the RF dataset is not large enough to train the model on RF signals. Hence, we train our model to infer oxygen from respiration signals. We then test the model directly on RF signals. Specifically, we evaluate the GNP model on the RF dataset. We test the model on both RF-based respiration signals and the corresponding breathing signals measured by a breathing belt. The results are shown in \Tabref{tab:umass}. We observe that the model works well on the
RF dataset with a small prediction error, no matter which breathing signal is used. The prediction error when using the RF-based respiration signals ($1.28\%$) is even slightly lower than when using the respiration signals from the breathing belt ($1.30\%$). This is because the noise in the respiration measured using RF signals is different from the noise in the breathing belt measurements. The noise in the RF-based measurements tends to increase with motion, and hence the model's prediction error is higher for the awake stage. In contrast, the error in the breathing belt depends on the sleep posture and the positioning of the belt on the body. Also, comparing with the errors from the previous section, we find that the model performs better on the RF dataset than on the medical datasets, i.e., $1.30\%$ vs. $1.55\%$. We believe the reason is that the RF dataset is collected from healthy individuals and hence has less complexity and is easier to predict. These results provide the first demonstration of predicting oxygen saturation from radio signals without any body contact.
\input{source/table/umass-result}
\subsection{Results for Different Skin Colors}
\input{source/fig/race_result}
One important issue to keep in mind is that our model learns from ground-truth oxygen saturation measured using pulse oximeters (today oximetry is the only way for continuous noninvasive monitoring of blood oxygen). Hence, to understand the prediction errors and their potential causes, we plot in \Figref{fig:race-result} the distributions of the ground-truth oxygen and the GNP-predicted oxygen for different races, for all datasets. Interestingly, the ground-truth measurements from oximetry show a clear discrepancy between black and white subjects. In particular, they show that black subjects have higher blood oxygen. This is compatible with past findings that pulse oximeters overestimate blood oxygen in dark-skinned subjects~\citep{feiner2007dark}. In contrast, our model predictions show much more similar oxygen distributions for the two races. It is interesting that while the model learned from what seems to be biased ground-truth, the results show that the model is able to correct, to some extent, for this bias. Admittedly, these observations remain speculative since there is no way to measure the exact errors in the ground-truth labels.
The distributions in \Figref{fig:race-result} also show that the model tends to miss the very high and very low oxygen values. This is expected given that only a very small percentage of the ground-truth data falls in the tails. One would expect that the performance on very high and very low values would improve if there is sufficient training data in those ranges.
\vspace{-0.1in}
\subsection{Breathing Signals vs. Breathing Rate}
It is important to note that we use the breathing signals instead of the breathing rates. Breathing signals contain richer information about oxygen saturation than the breathing rate. Figure \ref{fig:breathing_rate} shows several pairs of subjects who have the same breathing rate but quite different oxygen saturation values. It also shows that our model is able to predict the correct oxygen levels and is not confused by the fact that the subjects have the same breathing rates.
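To see why the rate alone is ambiguous, consider two synthetic signals with identical breathing rates but different waveforms (an illustrative sketch with a simple zero-crossing rate estimator; the signals are synthetic, not data from our study):

```python
import numpy as np

def breathing_rate(sig, fs):
    """Breaths per minute estimated from upward zero crossings."""
    s = np.asarray(sig) - np.mean(sig)
    crossings = np.sum((s[:-1] < 0) & (s[1:] >= 0))
    return crossings * 60.0 * fs / len(s)

fs = 10
t = np.arange(0, 60, 1 / fs)                             # one minute at 10 Hz
normal = np.sin(2 * np.pi * 0.25 * t + 0.3)              # 15 breaths/min
shallow = 0.2 * np.sin(2 * np.pi * 0.25 * t + 0.3) ** 3  # same rate, different depth/shape
```

Both signals yield exactly the same rate estimate, yet their depth and waveform, which carry information relevant to oxygen saturation, differ completely.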
\vspace{-0.1in}
\subsection{Analysis of Physiological Variables}\label{sec:exp_diff_var}
We further study the benefits of a gated model for incorporating side information from different variables. We conduct experiments on the SHHS dataset since it has a rich collection of medical indices. For every single variable we choose, we compare the GNP model with VarAug. Recall that the only difference between these two models is in the way they leverage categorical variables. Table \ref{tab:diff_var} shows the comparison regarding Sleep Stage (awake vs. REM vs. Sleep), Gender (male vs. female), Smoking Status (non-smoker vs. smoker), Asthma (having asthma or not) and Aspirin (taking aspirin or not). The results show that our model consistently outperforms VarAug, indicating that GNP is a better solution for dealing with discontinuous physiological variables. This supports our theoretical analysis in Sec.~\ref{sec:theory}. We believe the applicability of this result extends beyond the specific task of oxygen prediction to other tasks that involve adaptation to categorical or binary variables. As a reference, the first row in \tabref{tab:diff_var} shows the error with no variables, i.e., the errors of the backbone BERT-UNet model. Note that the errors in \tabref{tab:diff_var} are larger than BERT-UNet's errors shown in \tabref{tab:res-main}. This is because the models here are trained on the SHHS dataset, while the models in \tabref{tab:res-main} are trained on the three datasets combined.
\vspace{-4mm}
\input{source/table/diff_variables}
\vspace{-1mm}
\subsection{Results for Different Skin Colors}
Since pulse oximeters rely on measuring light absorbance through the finger, they are known to be affected by skin color and tend to overestimate blood oxygen saturation in subjects with dark skin~\cite{feiner2007dark,sjoding2020racial}. This issue also received substantial news coverage recently after a large study that looked at tens of thousands of white and black COVID patients found that the ``reliance on pulse oximetry to triage patients and adjust supplemental oxygen levels may place black patients at increased risk for hypoxaemia"~\cite{sjoding2020racial}.
In contrast, since neither breathing nor RF signals are affected by skin color, a model that estimates oxygen saturation from breathing or RF signals has no intrinsic bias with respect to skin color.
Hence, as it trains, it learns a general mapping that fits all skin colors.
To understand the prediction errors and their relation to skin color, we plot in \Figref{fig:race-result} the distributions of the ground-truth oxygen and the GNP-predicted oxygen for different races, for the union of all datasets. Interestingly, the ground-truth measurements from oximetry show a clear discrepancy between black and white subjects. In particular, they show that black subjects have higher average blood oxygen. This is compatible with past findings that pulse oximeters overestimate blood oxygen in dark-skinned subjects~\cite{feiner2007dark}. In contrast, our model partially corrects for this bias -- the predictions reduce the average oxygen saturation for the black subjects and show much more similar oxygen distributions for the two races.
\input{source/fig/race_result}
\section{Concluding Remarks} \label{sec:conclusion}
This paper introduces the new task of inferring oxygen saturation from radio signals. It develops a new gated transformer architecture to deliver this application and to adapt deep models to auxiliary categorical variables. We note that the work has some limitations. First, the paper focuses on special use cases (e.g., at-home oxygen monitoring in very old adults or during sleep), but is not suitable for other use cases (e.g., measuring oxygen to optimize performance during exercise). Second, the results in the paper provide an initial proof of concept that the shape and dynamics of the breathing signal include sufficient clues to infer a useful estimate of oxygen saturation. However, before this system can be used in clinical care, one needs clinical studies to quantify its performance for different disease conditions.
\section{Related Work}
\paragraph{Monitoring Oxygen Saturation.}
The most accurate measurements of oxygen saturation are invasive and require arterial blood samples. The non-invasive and widely-used method for measuring oxygen saturation~(SpO2) uses a pulse oximeter, a small device worn on the finger.
To enable remote SpO2 measurements, past work has investigated the use of cameras~\cite{van2016new,van2019data,shao2015noncontact,bal2015non,guazzi2015non}. However, those methods have limitations such as susceptibility to noise, sensitivity to motion, and a need for ambient light.
Recently, deep learning has been considered to aid SpO2 monitoring using cameras. \cite{ding2018measuring} tried to monitor SpO2 using smartphones. But their solution requires the fingertip to be pressed against the camera, and hence cannot provide continuous overnight measurements. A more recent work~\cite{mathew2021remote} has estimated SpO2 in a contactless way with regular RGB cameras. Their method first extracts the region of interest from the video of the person's palm, then uses a CNN model to estimate SpO2. While this approach is contactless, it still requires the user to keep their hand in front of the video camera for the duration of the monitoring, which is not practical for continuous or overnight monitoring. Our work differs from all of these prior works in that we predict oxygen values from breathing or radio waves, which allows for continuous oxygen sensing in a contactless and passive manner.
\paragraph{Contactless Health Sensing with Radio Signals.}
The past decade has seen a rapid growth in research on passive sensing using RF signals. Early work has demonstrated the possibility of sensing one's breathing and heart rate using radio signals~\cite{adib2015smart}. Later, researchers have shown that by analyzing the RF signals that bounce off the human body, they can monitor a variety of health metrics including sleep, gait, falls, and even human emotions~\cite{zhao2017learning, wigait,rf-fall,eq-radio,fan2020learning,Li_2022_WACV,yang2022artificial,li2020addressing}. We build on this work to enable SpO2 monitoring with radio waves.
\paragraph{Adaptation of ML Models to Medical Indices.}
Prior deep learning models~\cite{wang2022tokencut,wang2022self,li2022targeted,yuan2019marginalized,yuan2017temporal} do not adapt to a person's medical indices. For example, the literature has models that infer sleep stages from respiration~\cite{zhao2017learning}, detect arrhythmia from ECG~\cite{kiyasseh2020clops}, and classify emotion from EEG signals~\cite{murugappan2010classification}. A recent survey~\cite{rim2020deep} collected 147 papers about learning with physiological signals. None of the deep models therein adapt to a person's medical indices.
The deep learning literature includes a few approaches for leveraging auxiliary variables. If the variable is available at inference time, typically it is taken as an extra input~\cite{narayan2017neural,shen2016automatic}. Variables accessible only during training are typically used as extra supervisors to regularize the model via multi-task learning~\cite{liu2019auxiliary,mordan2018revisiting}.
We propose a gating mechanism to handle categorical variables and show that it performs better.
\section{Experiments and Results}
\label{sec:models}
\textbf{Dataset and Metrics.} We conduct experiments on three medical datasets: SHHS~\cite{SHHS2}, MrOS~\cite{MROS} and MESA~\cite{MESA}, and a self-collected RF dataset. We consider three evaluation metrics: correlation (Corr), mean-absolute error (MAE) and rooted mean-squared error (RMSE). Please refer to Appendix~\ref{sec:datasets} for more details on datasets and metrics.
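The three metrics can be computed as follows (a minimal numpy sketch under our own naming; the exact definitions used in the evaluation are given in the appendix):

```python
import numpy as np

def metrics(y_true, y_pred):
    """Correlation, MAE and RMSE between ground-truth and predicted SpO2."""
    t = np.asarray(y_true, dtype=float)
    p = np.asarray(y_pred, dtype=float)
    corr = np.corrcoef(t, p)[0, 1]       # Pearson correlation
    mae = np.mean(np.abs(t - p))         # mean absolute error
    rmse = np.sqrt(np.mean((t - p) ** 2))  # rooted mean-squared error
    return corr, mae, rmse
```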
\textbf{Baselines.} We compare our model with the following baselines: (a)~\textit{CNN}, a fully convolutional model; (b)~\textit{CNN-RNN}, an augmentation of CNN with RNN units in the bottleneck; (c) \textit{BERT-UNet + VarAug}, which takes BERT-UNet as the backbone and uses medical variables as extra inputs or outputs. Appendix \ref{sec:baseline} explains more details of the baselines.
\textbf{Training and Evaluation Protocols.} Since the RF dataset is too small for training, we train models on breathing signals from the medical datasets, and test them directly on the respiration signals extracted from the RF dataset as well as medical datasets' test sets. More details are in Appendix \ref{sec:protocol}.
\subsection{Results on Medical Datasets}
\input{source/table/medical_results}
\noindent \textbf{Quantitative.}
The results on medical datasets are shown in \Tabref{tab:res-medical}.
Since there is no past work that predicts oxygen from breathing or radio signals, all models in the table refer to variants of our neural network.
The table shows that all variants achieve relatively low prediction errors, with an average MAE that ranges from 1.58 to 1.73 percent and an average RMSE that ranges from 1.70 to 1.78 percent. Such a relatively low RMSE shows that our model rarely makes predictions that deviate largely from the ground truth. All variants also achieve reasonably high correlations, ranging from 0.47 to 0.53. Such a correlation level indicates that our model's predictions capture the dynamics of the ground-truth SpO2, as also shown visually in \Figref{fig:visualize_shhs2}.
These quantitative results highlight that our system can be useful for continuous monitoring of patients' oxygen at home.
The upper rows in the table show the variants that do not leverage side variables. The table shows that the BERT-UNet model consistently outperforms the CNN model and the CNN-RNN model on all datasets, in all metrics. This indicates that BERT-UNet is a preferable architecture for this task.
The bottom rows in the table show the results of models leveraging accessible and inaccessible medical variables.
The table shows that {BERT-UNet + VarAug} and {Gated BERT-UNet} outperform models that do not leverage physiological indices, demonstrating the benefit of leveraging such variables.
In addition, {Gated BERT-UNet} outperforms {VarAug} on all three datasets, demonstrating that a gated multi-head approach works best for such categorical side variables.
\input{source/fig/visualize_shhs}
\noindent \textbf{Qualitative.}
Figure~\ref{fig:visualize_shhs2} visualizes the predicted oxygen saturation of the Gated BERT-UNet model and the VarAug model on a male subject in the SHHS dataset. As the ground-truth oxygen saturation values are integers, we round the predicted oxygen values. The background color indicates different sleep stages. `Dark grey', `light grey' and `white' correspond to the non-REM `Sleep', `REM' and `Awake' stages, respectively. We observe that Gated BERT-UNet consistently outperforms VarAug over the whole night. The small panels focus on different sleep stages. In general, different sleep stages tend to show different behavior, hence the importance of using a gated model. Specifically, oxygen is typically more stable during non-REM `Sleep' than during REM and Awake stages. The figure shows that Gated BERT-UNet can track the ups and downs in oxygen and is significantly more accurate than VarAug. This experiment demonstrates that the way we incorporate the sleep stages into the model improves performance across different sleep stages.
\subsection{Results on RF Dataset}
The results on the RF dataset are shown in \Tabref{tab:umass}. All model variants have low prediction error and high correlation. Among the models that do not leverage physiological variables, {CNN-RNN} and {BERT-UNet} perform better than the vanilla {CNN}, which shows the importance of modeling temporal information. When using physiological variables, the performance of {BERT-UNet+VarAug} is similar to that of {BERT-UNet}, while {Gated BERT-UNet} is better than all other variants. This demonstrates that the gating design leverages auxiliary variables better. Overall, the MAEs, RMSEs and correlations on the RF dataset are comparable to those on the medical datasets. This indicates that our model is directly applicable to respiration signals from RF.
We have also visualized the prediction results in \Figref{fig:visualize_rf}. As shown, our model can accurately track the fluctuation of ground truth SpO2.
\input{source/fig/sidebyside_rf_visrf}
\subsection{Visualization of Breathing-Oxygen Patterns}\label{sec:diffpattern}
\input{source/fig/fig_breathing}
We show several visual examples of different breathing patterns and the corresponding ground-truth and predicted oxygen saturation in Figure~\ref{fig:breathing}. \Figref{fig:breathing}(a) shows a normal breathing pattern, which leads to constant oxygen saturation. In contrast, \Figref{fig:breathing}(b,c) present two different abnormal breathing signals and the resulting fluctuating oxygen predictions. These figures show the diversity of the oxygen and breathing patterns as well as the complexity of their relationship. The model, however, is able to capture this relationship for highly diverse patterns.
\subsection{Comparison with Past Works}\label{sec:cmppast}
We compare our approach with two recent deep-learning camera-based SpO2 monitoring methods. The first method~\cite{ding2018measuring} asks the user to press his/her finger against a smartphone camera, and uses a CNN to estimate SpO2 from the video. The second method~\cite{mathew2021remote} uses a CNN to estimate oxygen from a video of the person's hand. The input of these systems is different from ours (camera vs. RF); so to compare them we follow the setup in~\cite{mathew2021remote}.
In their setting, test samples are 180-240 seconds and vary between normal breathing to no or minimal breathing. Similarly, we divide the RF test dataset into non-overlapping 240-second segments containing both regular breathing and shallow breathing (i.e., apnea or hypopnea) and compute metrics on them.
Table~\ref{tab:compare_w_previous_work} reports the results of the three models. The results of the baselines are taken from~\cite{mathew2021remote}, and the results of our RF-based model are computed as described above.
The results show that our model improves the correlation and reduces the MAE and RMSE in comparison to past work.
\input{source/table/compare_with_previous_work}
More results are in the appendix:
Section \ref{sec:diffskin} analyzes the distributions of predicted and ground-truth oxygen values across different race groups;
Section \ref{sec:extra-results} provides more visualizations of our model's performance on different datasets and on patients with different relevant diseases.
\section{Introduction}
The use of digital images has become increasingly ubiquitous in all types of publications. With the growing importance of digital images comes the development of image tampering techniques. In the past, modifying or concealing the content of an image would require dedicated personnel and tools. Today, however, image tampering is much easier with state-of-the-art image processing software. This trend has affected many aspects of our society, as we see prominent forgery cases occur in journalism and academia \cite{farid2016photo}. Consequently, many detection techniques have been developed for these scenarios (see \cite{ISI:000265093400004}). Only recently, however, has attention been paid to image manipulation in scientific publications \cite{gilbert2009science}. Although it is possible to use existing methods on scientific images directly, we hypothesize that significant adaptations must be made because scientific images usually possess distinctive statistical patterns, formats and resolutions. In this work, we aim at developing a scientific-specific image manipulation detection technique, which we test on a novel scientific image manipulation dataset of western blots and microscopy imagery---no datasets about scientific image manipulation are openly available yet (but see \cite{beck2016shaping}). As most scientific images increasingly come in digital form, the detection of possible manipulations should reach the same level of quality as in other fields that use digital images.
It is undeniable that an increasing number of tampered images are finding their way into scientific publications. Bik, Casadevall and Fang \cite{bik2016prevalence} examined 20,621 biomedical research papers from 1995 to 2014 and found that at least 1.9 percent are subject to deliberate image manipulation. The fact that these suspicious papers went through a careful reviewing process suggests how difficult it is to examine image tampering in scientific research manually. Because of the large quantity of digital images present in submitted manuscripts, it is crucial for publishers to be able to identify image manipulation in an automated fashion.
The scientific research context sets a different tolerance for image manipulation. Many operations, including resizing, contrast adjusting, sharpening, and white balancing, are generally acceptable as part of the figure preparation process. However, some other types of tampering, especially the ones that alter the image content semantically, are strictly prohibited. These manipulations include copy-move (without proper attribution), splicing, removal, and retouching\footnote{\url{https://ori.hhs.gov/education/products/RIandImages/guidelines/list.html}}. Acuna, Brookes and Kording \cite{acuna2018bioscience} developed a method to detect figure element reuse across a paper database. Intra-image copy-move can be detected rather robustly with SIFT features and pattern matching \autocite{huang2008detection}. However, detection of image manipulation that does not involve reuse is significantly more challenging. A comprehensive scientific image manipulation detection pipeline should therefore include general manipulation detection.
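As a deliberately simplified illustration of intra-image copy-move detection, identical blocks can be found by exact matching (real detectors such as the SIFT-based one cited above use robust features to survive post-processing; all names here are ours):

```python
import numpy as np
from collections import defaultdict

def duplicate_blocks(img, b=8):
    """Return pairs of top-left corners of identical b-by-b blocks on a
    non-overlapping grid. Exact matching only; a real detector would
    use robust features so that rescaled or compressed copies still match."""
    seen, matches = defaultdict(list), []
    h, w = img.shape
    for i in range(0, h - b + 1, b):
        for j in range(0, w - b + 1, b):
            key = img[i:i + b, j:j + b].tobytes()
            for prev in seen[key]:
                matches.append((prev, (i, j)))
            seen[key].append((i, j))
    return matches
```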
As scientific papers are reviewed by experts, we reckon that articles containing manipulations that incur in contextual inconsistencies (e.g., brain activation patterns from fMRI in the middle of a microscopy image) will be easily picked out. What humans \emph{cannot} see properly is the noise pattern within an image---and scientists seeking to falsify images exploit this weakness. Therefore, we propose a novel image tampering detection method for scientific images, which is based on uncovering noise inconsistencies. Specifically, our proposed method contains the following features:
\begin{enumerate}
\item It is based on supervised learning, which is capable of learning from existing databases and new instances.
\item It works for images of different resolutions and from different devices.
\item It is not restricted to any specific image format.
\item It is capable of generating good predictions with a small training set.
\item It is flexible and can be fine-tuned for different fields of science.
\end{enumerate}
In section \ref{sec::related_works}, we briefly summarize previous work on digital image forensics. In section \ref{sec::overall_design}, we discuss the design of our proposed method. In section \ref{sec::experiments}, we introduce our scientific image manipulation datasets and present the test results of our method on them. In section \ref{sec::conclusion}, we conclude by discussing limitations and future extension of our method.
\section{Previous Work}\label{sec::related_works}
There is a large amount of previous research on image tampering detection, but very little of it focuses on scientific images. The first class of tampering detection methods aims at detecting a specific type of manipulation, the most common being resizing and resampling \autocite{popescu2005exposing, kirchner2008fast, dalgaard2010role, feng2012normalized, mahdian2008blind}, median filtering \autocite{kirchner2010detection, kang2013robust, cao2010forensic, chen2011median}, contrast enhancement \autocite{stamm2010forensic1, yao2009detect, stamm2008blind, stamm2010forensic2}, blurring \autocite{liu2008image}, and multiple JPEG compression \autocite{bianchi2011detection, bianchi2012image, neelamani2006jpeg, qu2008convolutive}. Many of these manipulations are valid in the scientific research context, and it can be non-trivial to merge results from single detectors in order to build a comprehensive one.
The second class of tampering detection methods aims at general-purpose image tampering detection. Dirik and Memon \cite{dirik2009image} try to catch inconsistencies in the Color Filter Array (CFA) patterns that digital cameras leave within images. However, scientific images are not necessarily taken by digital cameras. Wang, Dong and Tan \cite{wang2014exploring} leverage the characteristics of the DCT coefficients in JPEG images to achieve tampering localization, but the method is confined to a specific format. Mahdian and Saic \cite{mahdian2009using} propose a method that predicts tampered regions based on wavelet transform and noise level estimation. All these methods are unable to learn from data, which limits their ability to generalize to different fields. Another group of methods combines steganalysis tools \autocite{fridrich2012rich, pevny2010steganalysis} with Gaussian Mixture Models (GMM) to identify potentially manipulated regions \autocite{fan2015general, cozzolino2015splicebuster}. These unsupervised-learning-based methods are also unable to learn from existing databases effectively and therefore tend to underperform in practice.
Thanks to the availability of large image datasets, neural-network-based tampering detection methods are likely to yield good performance \autocite{bappy2017exploiting}, especially those based on Convolutional Neural Networks (CNN) \autocite{bayar2016deep, bayar2018constrained, zhou2018learning}. They usually target high-resolution natural images. It is unclear, however, whether they can be adapted to the scientific scenario. For example, it is challenging to train such a network for scientific images exclusively, as doing so usually requires tens of thousands of images as training data, which to the best of our knowledge is not yet available.
\section{Our Proposed Method}\label{sec::overall_design}
Our method is based on a combination of several heterogeneous feature extractors that are later combined to produce single predictions for patches (Figure \ref{fig::method_work_flow}). At first, an input image will go through a variable amount of residual image generators. The type and amount of these generators can be chosen based on the application. Each type of residual image will have its own feature extractor, which is based on our proposed feature extraction scheme with (possibly) different configurations. The features are then fed into a classifier after post-processing.
\begin{figure*}[htpb]
\centering
\includegraphics[width=0.9\linewidth]{fig1.pdf}
\caption{\csentence{Overall design of our proposed method.} The input image goes through several residual generators and feature extractors in parallel. All extracted features will be merged in a postprocessing step and then fed to a classifier.}
\label{fig::method_work_flow}
\end{figure*}
The proposed method works on residual images, which are essentially images after filtering, or the difference between an image and its interpolated version. Residuals are a way to discard content and emphasize the noise pattern within an image, and they are widely used in image manipulation detection practice. However, many previous works use only one type of residual \cite{dirik2009image, cozzolino2015splicebuster, zhou2018learning}. Because each residual may have a different sensitivity to different types of manipulation, using only one not only limits the method's ability to detect a wide variety of manipulations, but also renders the method more vulnerable to adversaries. Therefore, we combine a number of residuals in our method to increase its robustness.
Because our feature extraction method drastically reduces the dimensionality of the image data, which relieves the need for a huge amount of training data, it is possible to use a light-weight classifier as the back end, such as logistic regression or a support vector machine (SVM). As there are many ways to generate residual images, and the feature extraction method comes with a number of parameters to choose, our image manipulation detection method possesses a high degree of flexibility. Unlike the parameters in neural networks, for example, which are rather opaque to human beings, the underlying meanings of the parameters in our feature extraction method are straightforward. Therefore, it is easier to manually adapt our method to different fields.
\subsection{Residual Image Generators}\label{sec::choice_of_filters}
There are numerous ways of generating residual images; we list the following ones because they are useful for a wide range of applications. Note that the capability of our method is significantly influenced by the choice of residuals, and it is possible to design new residual image generators for specific scenarios.
\begin{enumerate}
\item Steganalytic Filters
Steganalysis (techniques used for detecting hidden messages in communications) has been used extensively in image tampering detection practice. This type of analysis aims to expose hidden information planted in images by steganography techniques. Although it is not directly linked to image tampering detection, it has been suggested that the tasks of image forensics and steganalysis are very much alike when the act of data embedding in steganography is treated as image manipulation \cite{qiu2014universal}.
Similar to the rich model strategy proposed in \autocite{fridrich2012rich}, we can apply many different filters and see which one can spot inconsistencies. In our work, we use several filters that provide a relatively comprehensive view of potential inconsistencies (Figure \ref{fig::high_pass_filters}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{fig2.pdf}
\caption{\csentence{High-pass filters selected in our experiment.}}
\label{fig::high_pass_filters}
\end{figure}
The filters selected are high-pass because we want to throw away information about the image content and emphasize noise patterns as much as possible. The residual image in this case is the image after convolution. An example of steganalytic filtering residual is shown in Figure \ref{fig::stega_res_demo}.
\item Error Level Analysis (ELA)
ELA is an analysis technique that targets JPEG compression. The idea behind it is that the error introduced by JPEG compression is nonlinear: a 90-quality JPEG image resaved at quality 90 is equivalent to a one-time save at quality 81; a 90-quality JPEG image resaved at quality 75 is equivalent to a one-time save at quality 67.5 \autocite{krawetz2007picture}; and so on. If part of a JPEG-compressed image is altered with a different JPEG quality factor, then when the image is compressed again, the loss of information in that part will differ from the other regions. To uncover this inconsistency, the ELA residual is computed by intentionally resaving the image in JPEG format at a particular quality (e.g. 90) and then computing the difference between the two images. An example of an ELA residual is shown in Figure \ref{fig::ela_res_demo}.
\item Median Filtering Residual
Median filtering can suppress the noise in an image. When median filtering is applied to a tampered image, the tampered part may possess a different noise pattern and therefore respond differently. The median filtering residual is the difference between the original image and the median-filtered image. An example is shown in Figure \ref{fig::mf_res_demo}.
\item Wavelet Denoising Residual
Wavelet denoising represents an image in the wavelet domain and cancels the noise based on that representation. As in the median filtering case, the tampered region may react differently from the rest of the image and thereby give itself away. It was also suggested by Dirik and Memon \cite{dirik2009image} that wavelet denoising can uncover sensor noise inconsistencies of digital cameras. The wavelet denoising residual is given by the difference between the original image and the denoised image. An example is shown in Figure \ref{fig::wavelet_res_demo}.
\end{enumerate}
It is worth noting that the tampered images in the demonstrations are selected so that the manipulation pattern is visible in the specific residual. In practice, however, this may not always be the case; usually it is necessary to examine multiple residual images before drawing a conclusion.
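As a concrete illustration, the steganalytic and median filtering residuals can be sketched in a few lines of NumPy. The kernel and window size below are illustrative choices, not necessarily the configuration used in our experiments; the ELA residual additionally requires a JPEG codec (e.g. Pillow) and is omitted here.

```python
import numpy as np

def highpass_residual(img, kernel=None):
    """Steganalytic-style residual: filter the image with a zero-sum
    high-pass kernel. The 3x3 second-order kernel below is one common
    choice from the rich-model filter family; our experiments use the
    filter bank shown in Figure 2."""
    if kernel is None:
        kernel = np.array([[-1.,  2., -1.],
                           [ 2., -4.,  2.],
                           [-1.,  2., -1.]])
    h, w = img.shape
    kh, kw = kernel.shape
    padded = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)),
                    mode="edge")
    out = np.zeros((h, w))
    for i in range(kh):            # correlation; the kernel is symmetric,
        for j in range(kw):        # so it equals convolution here
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def median_residual(img, size=3):
    """Median filtering residual: the image minus its median-filtered
    version; tampered regions may leave a different noise pattern."""
    h, w = img.shape
    r = size // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    # stack all size*size shifted views and take the pixelwise median
    views = np.stack([padded[i:i + h, j:j + w]
                      for i in range(size) for j in range(size)])
    return img.astype(float) - np.median(views, axis=0)
```

Both residuals vanish on a constant image and respond only where the local noise structure changes, which is exactly the behaviour a residual generator should have.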
\begin{figure}[htpb]
\centering
\includegraphics[width=0.95\linewidth]{fig3.pdf}
\caption{\csentence{Demonstration of steganalytic residual.}}
\label{fig::stega_res_demo}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=0.95\linewidth]{fig4.pdf}
\caption{\csentence{Demonstration of ELA residual.}}
\label{fig::ela_res_demo}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=0.95\linewidth]{fig5.pdf}
\caption{\csentence{Demonstration of median filtering residual.}}
\label{fig::mf_res_demo}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=0.95\linewidth]{fig6.pdf}
\caption{\csentence{Demonstration of wavelet denoising residual.}}
\label{fig::wavelet_res_demo}
\end{figure}
\subsection{Feature Extraction}\label{sec::feature_extraction}
Our method is patch-based, which means it generates a prediction for each patch in the image. Using patches instead of single pixels to represent an image not only shrinks the scale of computation, but also enriches the amount of statistical information within each smallest unit. In the limit, the patch size can be chosen so that pixel-based and patch-based become almost the same. After deciding on the patch size, the feature extraction step generates a corresponding feature vector for each patch in the image. In this section, we discuss how these feature vectors are computed.
\subsubsection{Patch Reinterpretation}
Residuals reduce the complexity of image data, but they still have the same dimensionality as the original image. To further compress the data for classification, we propose a new feature extraction method for image tampering detection. Intuitively, an image region is considered \emph{tampered} not because it is unique in itself, but mainly because it is different from \emph{the rest of the image}. Therefore, an ideal feature design should contain a sufficient amount of global information. We add global information by reinterpreting an image region using the rest of the image.
\begin{table}[htpb]
\caption{List of symbols used in feature extraction}
\centering{\scriptsize
\begin{tabular}{cm{0.45\linewidth}}
\toprule
Symbol & \multicolumn{1}{c}{Description} \tabularnewline \midrule
$(h, w)$ & size of the image \tabularnewline \midrule
$(m, n)$ & dimension of each patch \tabularnewline \midrule
$(s, t)$ & dimension of the patch grid \tabularnewline \midrule
$l_{ij}$ & the likelihood function of the grid cell on $i$th row and $j$th column \tabularnewline
\bottomrule
\end{tabular}}
\label{table::symbols}
\end{table}
First, an input image of size $(h, w)$ is divided into patches of size $(m, n)$. If the dimensions are not divisible, the image is cropped to the nearest multiple of the patch size in each dimension. Therefore, an image of size $(h, w)$ is divided into a patch matrix of size $(\lfloor h/m \rfloor ,\lfloor w/n \rfloor)$.
Then, the patch matrix will be split into a rectangular patch grid of size $(s, t)$, where each cell contains a certain number of patches. The number of patches in most cells is
\begin{align*}
\left\lfloor \frac{\lfloor h/m \rfloor}{s} \right\rfloor \times \left\lfloor \frac{\lfloor w/n \rfloor}{t} \right\rfloor,
\end{align*}
except for those cells on the edges, which may have fewer patches.
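The sizing rules above amount to integer division, as the following small sketch shows:

```python
def grid_layout(h, w, m, n, s, t):
    """Patch-matrix shape and the patch count of an interior grid cell,
    following the cropping and splitting rules described above."""
    patch_rows, patch_cols = h // m, w // n          # floor(h/m), floor(w/n)
    per_cell = (patch_rows // s) * (patch_cols // t)
    return (patch_rows, patch_cols), per_cell
```

For example, a $600 \times 800$ image with $10 \times 10$ patches and a $7 \times 7$ grid yields a $60 \times 80$ patch matrix with $8 \times 11 = 88$ patches in each interior cell.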
For each cell in the grid, we fit an outlier detector that is capable of telling the likelihood of a new sample being an outlier. Given a patch $\bm{p}$, it can be reinterpreted by a vector $\bm{v}$, which is given by
\begin{gather*}
\bm{v}=(l_{11}(\bm{p}),~l_{12}(\bm{p}),~l_{13}(\bm{p}),~\ldots,~l_{1t}(\bm{p}),\phantom{).}\\
\phantom{v=(}l_{21}(\bm{p}),~l_{22}(\bm{p}),~l_{23}(\bm{p}),~\ldots,~l_{2t}(\bm{p}),\phantom{).}\\
\phantom{v=(}l_{31}(\bm{p}),~l_{32}(\bm{p}),~l_{33}(\bm{p}),~\ldots,~l_{3t}(\bm{p}),\phantom{).}\\
\cdots\\
\phantom{v=(}l_{s1}(\bm{p}),~l_{s2}(\bm{p}),~l_{s3}(\bm{p}),~\ldots,~l_{st}(\bm{p})).
\end{gather*}
An illustration of this reinterpretation method is shown in Figure \ref{fig::feature_ext_illu}, where black blocks represent patches, red blocks represent grid cells and the yellow region represents the tampered region. In this case, $(s, t) = (3, 4)$. Because the tampered region has a different residual pattern, and its contaminated patches are concentrated in one of the cells, the outlier detector of that cell will learn a distinct decision boundary compared to the others. As a result, an authentic patch $\bm{p}_a$ will have a low outlier likelihood in all components except $l_{23}(\bm{p}_a)$; a tampered patch $\bm{p}_t$ will have a high outlier likelihood in all components except $l_{23}(\bm{p}_t)$. This difference in structure allows us to distinguish between authentic and tampered patches. In practice, we use the histogram of $\bm{v}$ (denoted by $\bm{v}_h$), which not only encodes the structure in summary-statistics space, but is also position invariant.
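A minimal sketch of the reinterpretation follows. Our implementation fits a one-class SVM per cell; here a simple per-cell z-score summary stands in as the outlier likelihood, so the example stays dependency-free.

```python
import numpy as np

def fit_cells(patches_by_cell):
    """One summary per grid cell: per-feature (mean, std) over the
    flattened patches assigned to that cell (a stand-in for fitting a
    one-class SVM per cell)."""
    return [(np.mean(c, axis=0), np.std(c, axis=0)) for c in patches_by_cell]

def reinterpret_patch(patch_vec, cell_stats):
    """The vector v: one 'outlier likelihood' per cell, here the mean
    absolute z-score of the patch under that cell's summary."""
    v = []
    for mu, sigma in cell_stats:
        z = (patch_vec - mu) / np.maximum(sigma, 1e-8)
        v.append(float(np.mean(np.abs(z))))
    return np.array(v)
```

A patch drawn from one cell's distribution scores low under that cell's detector and high under the others, which is the structural difference the histogram $\bm{v}_h$ summarizes.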
\begin{figure}[htpb]
\centering
\includegraphics[width=0.95\linewidth]{fig7.pdf}
\caption{\csentence{Patch reinterpretation illustration.} The parameters are described in Table \ref{table::symbols}.}
\label{fig::feature_ext_illu}
\end{figure}
\subsubsection{Feature Design}
Besides $\bm{v}_h$, we include additional components in order to encode more global information in the feature. The final feature of a patch contains the following components:
\begin{enumerate}
\item $\bm{v}_h$: the histogramed patch reinterpretation. After generating all histogramed reinterpretations of an image, we normalize them to $[0, 1]$.
\item Proximity information: how much the patch differs from its neighborhood. We use the Euclidean distance between the histogramed reinterpretation of the patch and those of its surrounding neighbors.
\item Global information: how much the patch differs from the entire image. After computing the histogramed reinterpretations for all patches within an image, we apply $k$-means clustering to them, which generates a set of weights and cluster centroids. The additional global information of a patch is given by the Euclidean distances between its reinterpretation and the cluster centroids, as well as the corresponding weights of the centroids.
\end{enumerate}
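The three components above can be assembled as follows. This is a toy sketch: the minimal Lloyd's $k$-means below stands in for the clustering routine used in our implementation, and a small $k$ is used to keep the example tiny (we use $k=6$ in the experiments).

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means returning (centroids, cluster weights);
    a stand-in for the k-means routine used in our implementation."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    weights = np.bincount(labels, minlength=k) / len(X)
    return C, weights

def patch_feature(vh, neighbor_vhs, all_vhs, k=3):
    """Final feature = normalized histogram + proximity distances to
    neighbors + distances to cluster centroids + centroid weights."""
    prox = [np.linalg.norm(vh - nb) for nb in neighbor_vhs]
    C, w = kmeans(np.asarray(all_vhs), k)
    glob = [np.linalg.norm(vh - c) for c in C]
    return np.concatenate([vh, prox, glob, w])
```

The resulting feature length is the histogram length plus the number of neighbors plus $2k$, so the classifier input stays small regardless of image resolution.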
\section{Experiments}\label{sec::experiments}
Due to the lack of science-specific image manipulation detection databases, we synthesize our own database for the experiments.
\subsection{Datasets}
Our novel scientific image manipulation datasets mainly consist of the following three types of manipulations:
\begin{enumerate}
\item Removal: covering an image region with a single color or with noise. We manually select a rectangular region to be removed from the image. Then we select another rectangular region from which to sample the color or noise that fills the removal region, and compute the mean $\mu$ and standard deviation $\sigma$ of its pixels. We generate four images for each pair of selections according to the configuration given in Table \ref{table::removal_config}.
\begin{table}[htpb]
\caption{Image generation configuration of removals. The mean of the removal region is equal to that of the sample region, but we vary the standard deviation from zero (pure color) to two sample standard deviations to create different visual effects.}
\centering{\scriptsize
\begin{tabular}{cc}
\toprule
\makecell{Removal Region\\ Mean} & \makecell{Removal Region\\ Standard Deviation}\\ \midrule
$\mu$ & 0\\
$\mu$ & $0.5\sigma$\\
$\mu$ & $\sigma$\\
$\mu$ & $2\sigma$\\
\bottomrule
\end{tabular}}
\label{table::removal_config}
\end{table}
\item Splicing: copying content from another image. We randomly choose a small region from the foreground image and paste it at an arbitrary location on the background image. To create noise inconsistency, the region is either recompressed with JPEG or processed with sharpening filters.
\item Retouching: modifying the content of the image. We randomly choose a small region within an image and apply Gaussian blurring to it.
\end{enumerate}
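The removal recipe can be sketched as follows. This is a simplified version: in our pipeline the boxes are selected manually, and clipping or quantization back to the image dtype is omitted here.

```python
import numpy as np

def make_removal(img, removal_box, sample_box, k, seed=0):
    """Synthesize one removal manipulation: fill removal_box with Gaussian
    noise of mean mu and standard deviation k * sigma, where (mu, sigma)
    are measured inside sample_box. k in {0, 0.5, 1, 2} reproduces the
    four rows of the removal configuration table."""
    r0, r1, c0, c1 = sample_box
    sample = img[r0:r1, c0:c1].astype(float)
    mu, sigma = sample.mean(), sample.std()
    out = img.astype(float).copy()
    r0, r1, c0, c1 = removal_box
    rng = np.random.default_rng(seed)
    out[r0:r1, c0:c1] = rng.normal(mu, k * sigma, size=(r1 - r0, c1 - c0))
    return out
```

With $k=0$ the removal region becomes a pure-color patch at the sample mean; larger $k$ values blend the removal into the surrounding noise and make it harder to spot.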
These manipulations are selected because we believe they are the most prevalent in problematic scientific papers.
We build two datasets that contain western blot images and microscopy images, respectively. We choose these two types of images because of their high frequency in the literature and their susceptibility to manipulation. We also create a natural image dataset to compensate for the lack of microscopy images for training; it is used only in the training phase. The details of the datasets are shown in Table \ref{table::sci_img_dataset}, and the meanings of the tampering type abbreviations are given in Table \ref{table::type_abbrev}.
\begin{table}[htpb]
\caption{Tampering type abbreviations}
\centering
{\scriptsize
\begin{tabular}{cc}
\toprule
Abbreviation & Meaning \\ \midrule
R & removal images\\
J & splicing images recompressed by JPEG\\
F & splicing images processed with sharpening filters\\
B & Gaussian blurred images\\
G & genuine images \\ \bottomrule
\end{tabular}
}
\label{table::type_abbrev}
\end{table}
\begin{table}[htpb]
\caption{The specification of the proposed scientific image forensics datasets.}
\centering{\scriptsize
\begin{tabular}{C{0.15\linewidth}m{0.25\linewidth}C{0.2\linewidth}c}
\toprule
Collection & \makecell[c]{Image\\ Source} & Contents\footnotemark & \makecell[c]{Average\\ Resolution}\\ \midrule
western blot & western blot images from the Internet & R(436), G(51) & $137,244$\\ \midrule
microscopy & microscopy images from the Internet & R(180), J(20), F(19), B(20), G(21) & $591,906$\\ \midrule
natural image & natural images from the ``pristine'' collection of IEEE dataset \cite{ieeedataset} & J(40), F(40), B(40), G(40) & $775,328$ \\ \bottomrule
\end{tabular}
}
\label{table::sci_img_dataset}
\end{table}
\footnotetext{format: type(number of images)}
\subsection{Test Configurations}
The sizes of images in the western blot collection are significantly smaller, so we train a separate model for them. For the microscopy model, we add natural images to the training set to compensate for the lack of data. The patches from residual images are transformed into the frequency domain by the Discrete Cosine Transform (DCT) because it yields slightly better performance. Within each model, the parameters of all feature extractors are the same. Detailed configurations of the two models that we trained are shown in Table \ref{table::exp_config}.
We use a one-class SVM outlier detector \cite{scholkopf2001estimating}, provided by scikit-learn \cite{scikit-learn}, which is based on LIBSVM \cite{changlibsvm}. The kernel we use is the radial basis function, whose kernel coefficient ($\gamma$) is given by the \texttt{scale} option, i.e.
\begin{align*}
\frac{1}{\mbox{number of features} \times \mbox{variance of all inputs}}.
\end{align*}
The tolerance of optimization is set to $0.01$; and $\nu$ (the upper bound on the fraction of training errors and the lower bound on the fraction of support vectors) is set to 0.1.
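The \texttt{scale} rule amounts to a one-line computation (this mirrors scikit-learn's \texttt{gamma='scale'} behaviour):

```python
import numpy as np

def svm_gamma_scale(X):
    """RBF kernel coefficient under scikit-learn's gamma='scale' rule:
    1 / (n_features * Var(X)), with the variance taken over all inputs."""
    return 1.0 / (X.shape[1] * X.var())
```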
Note that the choice of parameters can significantly influence the speed of feature extraction. One of the most expensive operations is fitting the SVM, which has a computational complexity of $O(N^3)$, where $N$ is the number of patches in each grid cell. Therefore, it is important to choose appropriate $(m,n)$ and $(s,t)$ pairs. With our Python implementation and the configuration given in Table \ref{table::exp_config}, the extraction speed for western blots is approximately 212.36 sec/megapixel (12.13 sec/image), while the extraction speed for microscopy images is approximately 86.15 sec/megapixel (49.32 sec/image). We also tried ThunderSVM \cite{wenthundersvm18}, a GPU-accelerated SVM implementation. Although it is much faster, its precision is not ideal compared to LIBSVM; therefore, our experiments are conducted with LIBSVM only.
The number of centroids for $k$-means clustering is set to $k=6$, and the clustering algorithm is run 150 times with different initializations in order to obtain the best result. We select this particular value of $k$ because when we apply $k$-means clustering to $\bm{v}_h$, the tampered region usually blends with other clusters unless there are more than 6 centroids. Therefore, we consider it reasonable to represent the major content of an image by its first 6 cluster centroids.
\begin{table}[htpb]
\caption{Test configuration parameters}
\centering
{\scriptsize
\begin{tabular}{ccccc}
\toprule
& \makecell[c]{Patch\\ Dimension} & \makecell[c]{Patch Grid\\ Dimension} & \makecell[c]{\# Training\\ Images} & \makecell[c]{\# Testing\\ Images}\\ \cmidrule{2-5}
Western Blot & $(6, 6)$ & $(5, 5)$ & 352 & 135 \\ \cmidrule{2-5}
\makecell[c]{Microscopy} & $(10, 10)$ & $(7, 7)$ & 251 & 106 \\ \bottomrule
\end{tabular}
\label{table::exp_config}
}
\end{table}
Because the dimensionality of the extracted features is not very high, the outputs of all feature extractors are simply concatenated into a single feature vector and then fed to the classifier. The classifier we use is a simple Multilayer Perceptron neural network. For the western blot model, we use a four-layer network with 200 units per layer; for the microscopy model, we use a similar network with 300 units per layer. Softmax regression is applied to the last layer to obtain the classification results.
\section{Results}
The performance evaluation metrics that we use are patch-level accuracy, AUC score, and F1 score. We compare the performance of our model with two baseline models, which are widely compared against in related papers:
\begin{enumerate}
\item CFA \cite{dirik2009image}: a method that uses nearby pixels to evaluate the Camera Filter Array patterns and then produces the tampering probability based on the prediction error.
\item NOI \cite{mahdian2009using}: a method that finds noise inconsistencies by using high pass wavelet coefficients to model local noise.
\end{enumerate}
For our method, the threshold for the F1 score is 0.5. For the baseline methods, the output map is normalized to $[0, 1]$, and the F1 score is obtained by setting the threshold to 0.5.
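Under this convention, the patch-level F1 computation is simply (a sketch; aggregation across images is omitted):

```python
import numpy as np

def f1_at_threshold(score_map, gt_mask, thr=0.5):
    """Patch-level F1: threshold the (normalized) score map at thr and
    compare against the ground-truth tampering mask."""
    pred = np.asarray(score_map, dtype=float) >= thr
    gt = np.asarray(gt_mask).astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```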
Table \ref{table::gen_acc} shows the accuracies of the three methods on genuine images, where AUC and F1 scores do not apply. Tables \ref{table::aucs} and \ref{table::f1s} show the AUC scores and F1 scores of our method compared to the baselines. The meanings of the abbreviations are given in Table \ref{table::type_abbrev}. The ``overall'' scores are computed across the entire dataset, including genuine images. A visual comparison of the results of each method is shown in Figure \ref{fig::visual_result}.
It can be seen that CFA cannot handle western blot images very well, as it has low accuracy on genuine images. Its performance on the J, F and B tampering types is also mediocre. NOI is better at locating noisy regions in the image, but it fails drastically when encountering manipulations that contain less noise: it consistently treats R$[0]$ and B manipulations as negatives, which yields a false negative region that is not always separable. Its performance on J images is not very satisfactory either. Generally speaking, the performance of our method is more consistent across different types of manipulations, which makes it more reliable in practice.
\begin{figure*}[p]
\includegraphics[width=\linewidth]{fig8.pdf}
\caption{\csentence{Visual comparison of the results.}}
\label{fig::visual_result}
\end{figure*}
\begin{table*}[htpb]
\caption{The AUC scores on datasets}
\centering
{\scriptsize
\begin{tabular}{clccccclccc}
\toprule
& \makecell[c]{Tampering\\ Type} & Ours & CFA & NOI & & & \makecell[c]{Tampering\\ Type} & Ours & CFA & NOI \\ \cmidrule{2-5} \cmidrule{8-11}
\multirow{8}{*}{\rotatebox{90}{Western Blot}} & R$[0]$\footnotemark & \textbf{0.939} & 0.606 & 0.026 & & \multirow{8}{*}{\rotatebox{90}{Microscopy}} & R$[0]$ & \textbf{0.924} & 0.780 & 0.027 \\
& R$[0.5\sigma]$ & 0.861 & 0.866 & \textbf{0.879} & & & R$[0.5\sigma]$ & \textbf{0.903} & 0.925 & 0.887 \\
& R$[\sigma]$ & 0.923 & 0.877 & \textbf{0.968} & & & R$[\sigma]$ & \textbf{0.968} & 0.940 & 0.959 \\
& R$[2\sigma]$ & 0.990 & 0.885 & \textbf{0.992} & & & R$[2\sigma]$ & 0.966 & 0.937 & \textbf{0.978} \\
& & & & & & & J & \textbf{0.994} & 0.639 & 0.618 \\
& & & & & & & F & 0.868 & 0.629 & \textbf{0.913} \\
& & & & & & & B & \textbf{0.805} & 0.334 & 0.104 \\ \cmidrule{2-5} \cmidrule{8-11}
& overall & \textbf{0.927} & 0.813 & 0.696 & & & overall & \textbf{0.925} & 0.864 & 0.695 \\ \bottomrule
\end{tabular}}
\label{table::aucs}
\end{table*}
\footnotetext{format: R[noise standard deviation]}
\begin{table*}[htpb]
\caption{The F1 scores on datasets}
\centering
{\scriptsize
\begin{tabular}{clccccclccc}
\toprule
& \makecell[c]{Tampering\\ Type} & Ours & CFA & NOI & & & \makecell[c]{Tampering\\ Type} & Ours & CFA & NOI \\ \cmidrule{2-5} \cmidrule{8-11}
\multirow{8}{*}{\rotatebox{90}{Western Blot}} & R$[0]$ & \textbf{0.834} & 0.039 & 0.003 & & \multirow{8}{*}{\rotatebox{90}{Microscopy}} & R$[0]$ & \textbf{0.834} & 0.039 & 0.000 \\
& R$[0.5\sigma]$ & \textbf{0.744} & 0.399 & 0.543 & & & R$[0.5\sigma]$ & \textbf{0.745} & 0.398 & 0.560 \\
& R$[\sigma]$ & \textbf{0.867} & 0.553 & 0.712 & & & R$[\sigma]$ & \textbf{0.867} & 0.414 & 0.773 \\
& R$[2\sigma]$ & 0.762 & 0.522 & \textbf{0.880} & & & R$[2\sigma]$ & 0.762 & 0.378 & \textbf{0.896} \\
& & & & & & & J & \textbf{0.966} & 0.038 & 0.045 \\
& & & & & & & F & \textbf{0.623} & 0.139 & 0.360 \\
& & & & & & & B & \textbf{0.476} & 0.016 & 0.001 \\ \cmidrule{2-5} \cmidrule{8-11}
& overall & \textbf{0.770} & 0.300 & 0.424 & & & overall & \textbf{0.738} & 0.329 & 0.455 \\ \bottomrule
\end{tabular}}
\label{table::f1s}
\end{table*}
\begin{table}[htpb]
\caption{The accuracy scores on genuine images}
\centering
{\scriptsize
\begin{tabular}{ccccccc}
\toprule
\multicolumn{3}{c}{Western Blot} & & \multicolumn{3}{c}{Microscopy} \\ \cmidrule{1-3}\cmidrule{5-7}
Ours & CFA & NOI & & Ours & CFA & NOI \\
\textbf{0.988} & 0.513 & 0.838 & & \textbf{0.988} & 0.774 & 0.920 \\ \bottomrule
\end{tabular}}
\label{table::gen_acc}
\end{table}
\section{Conclusion and Discussion}\label{sec::conclusion}
We have proposed a novel image tampering detection method for scientific images, based on uncovering noise inconsistencies. We use residual images to exploit the noise pattern of an image, and we develop a new feature extraction technique that lowers the dimensionality of the problem so that it can be handled by a light-weight classifier. The method is tested on a new scientific image dataset of western blot and microscopy imagery. Compared to two baseline methods popular in the literature, the results suggest that our method detects various types of image manipulation better and more consistently. Thus, our solution promises to address an important part of image tampering in science effectively.
There are also some weaknesses in our study. First, our proposed method is tested on a custom database that contains only a small number of samples, and we include only a few types of manipulations, which is limited compared to the space of all possible image tampering techniques. Nonetheless, the choice of these specific image sources and manipulation types is inspired by existing problematic papers. If our method is capable of detecting these manipulations to some extent, we believe that it can make valuable discoveries once put into practice.
Second, noise-inconsistency-based methods do have certain limitations. For example, not all manipulations will necessarily produce a noise inconsistency, and it is easier to hide the noise inconsistency for someone who knows the underlying mechanism of the automatic detector. This kind of adversarial attack, however, is significantly challenging and unlikely to be mounted by the average scientist. In the future, we want to develop more advanced methods that take both image content and noise pattern into account.
Nevertheless, our proposed method is one of the first to tackle scientific image manipulation directly. Incorporated into screening pipelines for scientific publications (similar to \cite{acuna2018bioscience}), our method would significantly expand the range of manipulations that can be captured at scale. It makes predictions based on many types of residuals, which improves robustness, and it has a set of easily adjustable parameters, which allows it to be adapted to different fields with less effort and a smaller amount of training data.
We would like to continue extending the database with more images from various disciplines to make it standard and comprehensive, and to report test results on the updated version. It is our hope that the datasets we propose can also be useful for the nascent Computational Research Integrity research area. But we also face a major difficulty: there are no openly available datasets of images that actually come from \emph{science} (although see the efforts in \cite{beck2016shaping}). The images that we currently have are collected from the Internet and form a small but significant portion of images with manipulation issues. Unfortunately, problematic scientific images tend to be removed from public access soon after retraction. So far, neither publishers nor authors are willing to share those images, for understandable reasons. Hopefully, once scientific image tampering detection methods prove their efficacy, publishers and funders can start to share and create datasets with proper safeguards to check for potential problems during peer review---similar to how they do it with full-text through the Crossref organization\footnote{\url{https://crossref.org}}.
\section*{Acknowledgment}
Daniel E. Acuna and Ziyue Xiang are funded by the Office of Research Integrity grants \#ORIIR180041 and \#ORIIR19001.
Microlensing event observation is a useful method for exoplanet research. Some wide-orbit bound planets and candidate free-floating planets (FFPs) have been found in recent decades (\citealt{Beaulieu2006, Gould2006, Muraki2011, Sumi2011, Sumi2013, Mroz2018}). A single observation of a microlensing event enables calculation of the lens size (the so-called Einstein radius) from the magnification and event duration, but it remains a challenge to identify the details of the lens properties. A microlensing parallax observation is therefore expected to yield additional keys for analysis of the lens properties. Stereo-vision offers different sightlines towards a microlensing event, so the lens mass and distance can be calculated more effectively.
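For reference, the angular Einstein radius of a point lens of mass $M$ at distance $D_{\mathrm{L}}$, with the source at distance $D_{\mathrm{S}}$, is given by the standard expression
\begin{equation*}
\theta_{\mathrm{E}} = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{\mathrm{S}}-D_{\mathrm{L}}}{D_{\mathrm{S}}\,D_{\mathrm{L}}}},
\end{equation*}
so that the event timescale $t_{\mathrm{E}} = \theta_{\mathrm{E}}/\mu_{\mathrm{rel}}$, where $\mu_{\mathrm{rel}}$ is the lens--source relative proper motion, ties the observed event duration to the lens size (a textbook relation, quoted here for context).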
Parallax observation has long been used in the history of astronomy, based on the Earth's position with respect to the Sun. For exoplanet research, however, the caustic magnification of a microlensing effect by a planetary object is much quicker, so simultaneous parallax observation by separated observers is a more appropriate method. \citet{Poindexter2005} analysed some microlensing events with ground-based parallax observation data and found a candidate Jupiter-mass FFP. \citet{Mogavero2016} suggested the efficiency of ground-based and geosynchronous-satellite parallax for FFP searches, and recently, parallax observation combining ground-based and space-based telescopes has been developing. For example, the space-based telescope \emph{Kepler}{} conducted microlensing parallax observations with ground-based telescopes such as MOA, OGLE, and more than 25 others \citep{Henderson2016a, Henderson2016b, Gould2013, Zhu2017}. The data observed by {\it Spitzer} were applied to analyse OGLE data, and some planetary objects were reported (\citealt{Zhu2015, Novati2015, Street2016}). Moreover, some new telescope missions have been proposed and are expected to become operational within the next 10 years. In particular, \emph{Euclid}{} and \emph{WFIRST}{} are expected to find exoplanets, including FFPs, through microlensing events and parallax observations \citep{Penny2013, Mcdonald2014, Hamolli2014, Zhu2016}.
\begin{figure*}
\centering
\begin{tabular}{cc}
\hspace{-0.1in}\includegraphics[width=3.5in]{GC3Dmodel_full.png} &
\hspace{-0.1in}\includegraphics[width=3.5in]{GC3Dmodel_Euclid.png} \\
(a) Earth motion & (b) \emph{Euclid}{} motion\\
\end{tabular}
\caption{(a) 3D model of the Earth (i.e. LSST{}) in the Galactic Coordinate System (GCS). The longitude and latitude are along the y-axis and z-axis, respectively. The x-axis shows the direction toward the Galactic centre. The small square around the Earth is the zoom-up window for (b). (b) 3D model of the \emph{Euclid}{} orbit in the GCS. The circle around L2 is in the terrestrial sky frame, so that it is always perpendicular to the Sun-Earth-L2 line. We assume \emph{WFIRST}{} shares this orbit in the case of the Halo orbit at L2. }
\label{fig:3d}
\end{figure*}
In this paper, we report a simulation of parallax observations of FFP microlensing events in a 3D model with signal-to-noise considerations, to derive the effective parallax event rate. \citet{Henderson2016b} also considered simultaneous parallax observation of FFP microlensing for the combination of \emph{Kepler}{} and some ground-based surveys. Because of \emph{Kepler}{}'s motion, the separation between these two telescopes varies, and they obtained the result that Earth-mass FFPs could be detected in parallax at the early stage of K2 Campaign 9. They mentioned the importance of observer separation with respect to the lens size. Hence, we chose the target area of $(l, b)=(1^{\circ}, -1.^{\circ}75)$, monitored by two combinations of separated telescopes: \emph{Euclid}{}-\emph{WFIRST}{} and \emph{Euclid}{}-LSST{}. Their separation is not as variable as in \citeauthor{Henderson2016b}'s combination, and the shorter baseline allows more effective targeting of low-mass FFPs due to their smaller Einstein rings. In \S\ref{sec:tele}, the configuration of the three telescopes (\emph{Euclid}{}, \emph{WFIRST}{} and LSST{}) is reviewed. In \S\ref{sec:micro}, we describe the simulation process of FFP microlensing events using Besan\c{c}on{} model data without considering parallax observation. In \S\ref{sec:prlx}, the processes of event initialisation and parallax simulation are explained. The results are shown in \S\ref{sec:res} and discussed in \S\ref{sec:disc}, and conclusions are given in \S\ref{sec:conc}.
\section{Observatory configuration} \label{sec:tele}
We assume \emph{Euclid}{} is the main telescope in our simulation, and \emph{WFIRST}{} and LSST{} will be partners for simultaneous parallax observation of FFP events. The combination of \emph{Euclid}{} and \emph{WFIRST}{} offers parallax observations at similar sensitivities (i.e. space-based $H$-band), whilst the combination of \emph{Euclid}{} and LSST{} offers parallax observations in different photometric bands.
\subsection{Conditions} \label{subsec:tele-kine}
\emph{Euclid}{} is expected to launch in 2022\footnote{As of 25th February 2020, retrieved from the \emph{Euclid}{} mission website \url{http://sci.esa.int/euclid/}} and to orbit around Lagrange Point 2 (L2) with a period of 6 months \citep{ESA2011}. In the terrestrial sky, the angular distance of \emph{Euclid}{} from L2 is no more than 33 deg, and the solar aspect angle (SAA) must be kept within 90$^{\circ}$<SAA<120$^{\circ}$. These constraints limit the possible observation period to $\sim$30 days around the equinoxes.
The Wide Field Infrared Survey Telescope (\emph{WFIRST}{}) will be launched sometime around 2024 \citep{Spergel2015}. A geosynchronous orbit around the Earth at a distance of 40,000 km and a Halo orbit around L2 were considered at the planning stage, and the Halo orbit was selected. In our simulation, both orbits are modelled and compared. In the case of the geosynchronous orbit, the orbital path is inclined 28.5 deg from the celestial equator with a node located at RA=175 deg. The distance and inclination avoid occultations by the Earth when targeting the hot spot. In the case of the Halo orbit at L2, several trajectories have been discussed, and the orbit is expected to be similar to the \emph{Euclid}{} trajectory \citep{Folta2016, Bosanac2018}. We assume it shares the \emph{Euclid}{} orbital period with an orbital radius of $\sim0.75\times10^6$ km \citep{Webster2017}. \emph{WFIRST}{} allows a wider SAA range of 54$^{\circ}$<SAA<126$^{\circ}$, which completely covers the \emph{Euclid}{} observation period. Thus, \emph{Euclid}{} determines the observation period of simultaneous parallax detection for the \emph{Euclid}{}-\emph{WFIRST}{} combination.
The Large Synoptic Survey Telescope (LSST{}) is an 8m-class telescope being built at Cerro Pach\'{o}n in Chile, which will start operations in 2022 \citep{Ivezic2008}. As of today, no high-cadence, continuous microlensing campaign has been scheduled for it. Nonetheless, we selected this telescope to exemplify ground-based surveys. Its sensitivity is sufficient to operate microlensing observations of FFPs down to Earth mass, whereas current ground-based microlensing surveys such as MOA{} and OGLE{} find it relatively difficult to detect such low-mass lens events. Unlike the space-based telescopes described above, the day-night cycle and airmass limit the observation period. Targeting the Galactic centre, we assume LSST{} can observe on average for 7.5 hours per night\footnote{This value is derived from \url{http://www.eso.org/sci/observing/tools/calendar/airmass/html}, for which we assumed La Silla Observatory as a close proxy location. The airmass range 1$\leq$sec(z)$\leq$8 was taken.} during the \emph{Euclid}{} observation period. Moreover, we assume the fraction of weather fine enough for photometric observation is $\sim$65.89\%, taken as an average of the climate data from 1991 to 1999 at La Silla observatory.\footnote{\url{http://www.eso.org/sci/facilities/lasilla/astclim/weather/tablemwr.html}} We expect that, even with the sensitivity of LSST{}, event detections cannot be covered as reliably as with the space-based surveys. Hence, we simulate the \emph{Euclid}{}-LSST{} combination as an indication of the ground-based sensitivity limitation and for comparison with the \emph{Euclid}{}-\emph{WFIRST}{} combination.
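The effective ground-based duty cycle implied by these assumptions can be sketched as follows; the simple product of the night-time fraction and the fine-weather fraction is our illustrative combination rule, not a formula quoted above.

```python
# Sketch of the effective LSST observing fraction: the assumed 7.5 h
# average usable night combined with the 65.89% fine-weather fraction.
# Multiplying the two fractions is a simplifying assumption.
NIGHT_HOURS = 7.5          # average observable night-time per 24 h
WEATHER_FRACTION = 0.6589  # fraction of photometric (fine-weather) time

def lsst_duty_cycle(night_hours=NIGHT_HOURS, weather=WEATHER_FRACTION):
    """Fraction of wall-clock time available for ground-based monitoring."""
    return (night_hours / 24.0) * weather
```

Under these numbers, the ground-based station covers only about a fifth of each day, which is why the space-based surveys dominate the parallax coverage.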
Figure \ref{fig:3d} shows a 3D image of the telescope motion in the Galactic Coordinate System (GCS). We assume that the \emph{WFIRST}{} phase is $90^{\circ}$ ahead of the \emph{Euclid}{} phase in the x-y sky frame centred on L2. The barycentric motion of the Earth and Moon is ignored, since the barycentre lies within the Earth and our Monte-Carlo based simulation (the detail of the sample data selection is explained later) moderates the uncertainty due to the barycentric motion. An appropriate distance between the two telescopes is an important factor in simultaneous parallax observation. We expect the \emph{Euclid}{}-\emph{WFIRST}{} and \emph{Euclid}{}-LSST{} combinations to exhibit different geometrical factors in their parallax detectability.
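The baseline implied by the shared L2 orbit can be sketched as a chord on a circle; the circular-orbit approximation and the numerical radius are assumptions taken from the values quoted above.

```python
import math

# Sketch of the Euclid-WFIRST separation under the shared L2 orbit: the
# chord between two points on a circle of radius ~0.75e6 km separated by
# a given phase difference. The circular-orbit approximation is ours.
R_L2_ORBIT_KM = 0.75e6   # assumed orbital radius around L2 [km]

def chord_separation_km(phase_deg, radius_km=R_L2_ORBIT_KM):
    """Straight-line distance between two co-orbiting telescopes."""
    return 2.0 * radius_km * math.sin(math.radians(phase_deg) / 2.0)
```

At the adopted $90^{\circ}$ phase difference, the baseline is $\sqrt{2}$ times the orbital radius, roughly $1.06\times10^6$ km.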
\begin{table}
\centering
\caption{Survey parameters for three telescopes. The \emph{Euclid}{} sensitivity is taken from Table 2 of \citet{Penny2013}, the \emph{WFIRST}{} sensitivity is from \citet{Spergel2015} and the LSST{} sensitivity is from \citet{Ivezic2008,LSST2009}. We will handle sample source data in the Johnson-Cousins photometric system throughout our simulation, therefore a proxy band (in the row of ``J-C photometry'') is assumed for every telescope filter. }
\label{tab:survs}
\begin{tabular}{@{}lp{0.7in}p{0.7in}p{0.7in}@{}}
\hline
& LSST{} & \emph{Euclid}{} &\emph{WFIRST}{}\\ \hline
Location & ground-based & space-based & space-based \\
Filter & $z$ & NISP $H$ & $W149$ \\
J-C photometry & $I$ & $H$ & $H$ \\ \hline
$u_{\rm max}$ & 3 & 3 & 3\\
$m_{\rm sky}$ [mag/arcsec$^2$] & 19.6 & 21.5 & 21.5 \\
$\theta_{\rm psf}$ [arcsec] & 1.3 & 0.4 & 0.4\\
$m_{\rm zp}$ & 28.2 & 24.9 & 27.6 \\
$t_{\rm exp}$ [sec] & 30 & 54 & 52 \\
\end{tabular}
\end{table}
\subsection{Parameters} \label{subsec:tele-param}
We apply the Near Infrared Spectrometer and Photometer (NISP) $H$-filter of \emph{Euclid}{} and the $W149$-filter of \emph{WFIRST}{} for the space-based surveys in our simulation \citep{ESA2011, Spergel2015}. These filters have very similar transmission curves, so they closely approximate each other. The LSST{} $z$-band filter \citep{Ivezic2008} is approximated by the Johnson $I$-band filter. Since source magnitudes are treated in the Johnson-Cousins photometric system throughout our simulation, each telescope filter is mapped onto such a proxy band. Table \ref{tab:survs} summarises the telescope parameters for FFP surveys.
The formula for microlensing amplitude is defined as
\begin{equation} \label{eq:au}
A(t) = \frac{u(t)^2+2}{u(t)\sqrt{u(t)^2+4}},
\end{equation}
where $A(t)$ is the amplitude of the detected flux and $u(t)$ is the impact parameter in units of Einstein radii. We assume \emph{Euclid}{}, \emph{WFIRST}{} and LSST{} are sensitive enough to start a microlensing observation when the lens approaches the source within a projected distance of 3 Einstein radii. Thus, the maximum impact parameter is set to $u_{\rm max}$=3 for the point-source case, corresponding to a minimum amplitude of $A_{min}$$\sim$1.02. For the finite-source case, however, Eq.(\ref{eq:au}) is no longer sufficient.
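Eq.(\ref{eq:au}) and its inverse can be sketched as follows; the closed-form inversion is standard algebra rather than a formula quoted in the text, and is useful later for converting a threshold amplitude into a threshold impact parameter.

```python
import math

# Sketch of the point-source microlensing magnification A(u) and its
# algebraic inverse. A(3) ~ 1.017 matches the quoted A_min ~ 1.02 at
# u_max = 3.
def magnification(u):
    """Point-source magnification for impact parameter u (Einstein radii)."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def impact_from_amplitude(a):
    """Invert A(u): u = sqrt(2 * (A / sqrt(A^2 - 1) - 1))."""
    return math.sqrt(2.0 * (a / math.sqrt(a * a - 1.0) - 1.0))
```

The round trip `impact_from_amplitude(magnification(3.0))` recovers $u=3$, confirming the inversion.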
Figure \ref{fig:amp} shows the distribution of maximum magnification in the finite-source case for different impact parameters and source angular sizes in units of the Einstein radius ($\rho$), retrieved from our previous paper \citep{Ban2016}. The contour labelled $1.017$ is the amplitude limit that we assume for our parallax simulation (i.e. $A_{min}$$\sim$1.02), and the -0.5$\leq$$\log_{10}\rho$$\leq$1.0 regime shows a strong ``boost'' of the threshold impact parameter required to yield $A_{min}$$\sim$1.02. This implies that some events (i.e. finite sources with -0.5$\leq$$\log_{10}\rho$$\leq$1.0) remain observable even though the minimum approach of the lens is larger than $u_{\rm max}$. In our simulation, this finite-source effect is taken into account. The sky brightness ($m_{\rm sky}$) is fainter and the full width at half-maximum of the point spread function ($\theta_{\rm psf}$) smaller for the space-based surveys, since the ground-based survey suffers atmospheric scattering. The zero-point magnitude ($m_{\rm zp}$) and exposure time ($t_{\rm exp}$) are used to count photons as a signal. $m_{\rm sky}$ and $m_{\rm zp}$ are adjusted to the Johnson-Cousins photometric system.
\begin{figure}
\hspace{-0.1in}
\includegraphics[width=3.8in]{Finite-Amp.png}
\caption{Finite source magnification with impact parameters ($u$) on the x-axis and angular source radius in units of Einstein radius ($\log_{10}\rho$) on the y-axis. The two white contours correspond to two threshold amplitudes with $u_{\rm max}$=1 and $u_{\rm max}$=3 for the point source regime. This figure is reproduced from Fig. 1 in \citet{Ban2016}. }
\label{fig:amp}
\end{figure}
\section{Microlensing simulation} \label{sec:micro}
The parallax simulation is based on our previous simulation of FFP microlensing described in \citet{Ban2016}. In this section, we review that FFP simulation with its signal-to-noise consideration. The goal was to calculate the expected number of FFP observations by \emph{Euclid}{}, \emph{WFIRST}{}, and LSST{} in contrast to ongoing ground-based surveys such as MOA{} and OGLE{}. The FFP event rate derived in our previous paper will be applied to the parallax simulation in this paper.
\subsection{Besan\c{c}on{} galactic model} \label{subsec:besancon}
We used the stellar data from the Besan\c{c}on{} model version 1112, created by \citet{Robin2004, Robin2012a, Robin2012b}. The model comprises the stellar distributions of four populations: thin disc, thick disc, bulge, and spheroid. Each population is modelled with a star formation history and an initial mass function, and the thin-disc kinematics are set according to an age-velocity dispersion relation. For the bulge population, the kinematics are taken from the dynamical model of \citet{Fux1999}, and the bulge density law is described by a triaxial Gaussian bar structure. The reddening effect of the interstellar medium (ISM) is considered using the 3D distribution derived by \citet{Marshall2006}. The model is the same as in our previous simulation, and the details are described in \citet{Ban2016}. Here we only mention the parameters of our target area.
\citet{Penny2013} found a discrepancy in the microlensing optical depth toward the Galactic bulge between the Besan\c{c}on{} model (ver.1106) and observed data in the $I$-band, and applied a correction factor of 1.8 to their results. In our parallax simulation, we simulate the same observational target $(l, b)=(1^{\circ}, -1.^{\circ}75)$ as \citeauthor{Penny2013} and also apply the correction factor of 1.8. This survey target is very close to the ``hot spot'' of ground-based microlensing observations (\citealt{Sumi2013}). Table \ref{tab:catals} shows the parameters of the Besan\c{c}on{} model for our research. The catalogues offer a population of about 15.6 million stars within a 0.25$\times$0.25 deg$^{2}$ region centred on our survey target. To gain a statistically reasonable number of stellar data, we divided the stellar catalogues into four magnitude ranges. Since the stellar luminosity function increases significantly towards fainter magnitudes, invoking a larger solid angle for the simulated brighter stars ensures that they are sampled in the simulated data set without creating a computationally unfeasible number of fainter stars. Figure \ref{fig:bes-pop} visualises the stellar luminosity functions across the four catalogues with a source magnitude bin of 0.1, together with the average threshold amplitudes based on the \emph{Euclid}{} and \emph{WFIRST}{} sensitivities for the $H$-band and the LSST{} sensitivity for the $I$-band. The Besan\c{c}on{} data provide a smooth population across the four catalogues. Our criteria of event detectability require that catalogue D stars are strongly amplified to be detected by the telescopes. The details of the detectability test are described in the next section.
\begin{figure}
\centering
\includegraphics[width=3.5in]{population_At.png} \\
\caption{Graph of stellar population per square degree ($N_*$, labelled on the left y-axis) and average threshold amplitude (<$A_t$>, labelled on the right y-axis) as a function of magnitude. The magnitude bin is 0.1. The black and dark grey lines are the stellar population in common logarithm for the $I$-band and $H$-band, respectively. The black, dark grey and light grey dotted lines are the average threshold amplitude in common logarithm based on the \emph{Euclid}{}, \emph{WFIRST}{}, and LSST{} sensitivity, respectively.}
\label{fig:bes-pop}
\end{figure}
\begin{table}
\hspace{0.3in}
\begin{minipage}{140mm}
\caption{Besan\c{c}on{} catalogue parameters adopted for this work.}
\label{tab:catals}
\begin{tabular}{@{}lp{1.5in}p{1.0in}@{}}
\hline
Main band & $K$-band \\
colour bands & $I-K$, $J-K$, $H-K$ \\ \hline
survey target & $(l, b)=(1^{\circ}, -1.^{\circ}75)$ \\
survey region & $0.25\times0.25$ deg$^2$ \\
distance range [kpc] & 0-15\\ \hline
magnitude range & ctlg.A : K = 0-12 \\
& ctlg.B : \hspace*{5.5mm}12-16 \\
& ctlg.C : \hspace*{5.5mm}16-20 \\
& ctlg.D : \hspace*{5.5mm}20-99 \\ \hline
solid angle [deg$^2$] & ctlg.A : 0.0625 \\
& ctlg.B : $6.8\times 10^{-3}$ \\
& ctlg.C : $2.1\times 10^{-4}$ \\
& ctlg.D : $3.6\times 10^{-5}$ \\ \hline
\end{tabular}
\end{minipage}
\end{table}
\subsection{FFP event detectability} \label{subsec:ffp}
Objects from the four catalogues were used for the source and lens properties, and all combinations of a source and lens were tested. We assumed three FFP mass cases: Jupiter-mass, Neptune-mass, and Earth-mass. Hence, the lens mass was fixed to these three values, whilst the distance and proper motion were taken from the catalogues.
For every source-lens pair, the signal-to-noise ratio was calculated. We defined the detected signal as the number of photons received by a telescope during the exposure time. The noise came from the flux of background unresolved stars, nearby star blending, sky brightness and the photon shot noise from the event itself. Thus, the equation of the signal-to-noise ratio becomes,
\begin{equation} \label{eq:sn}
S/N(t) = \frac{10^{0.2 m_{\rm zp}} \, t^{1/2}_{\rm exp} \, A(t) \, 10^{-0.4m_*}}{\sqrt{10^{-0.4 m_{\rm stars}} + 10^{-0.4 m_{\rm blend}} + \Omega_{\rm psf}10^{-0.4m_{\rm sky}} + A(t) \, 10^{-0.4m_*}}},
\end{equation}
where $m_*$ is the apparent magnitude of the source star, $A(t)$ is the microlensing amplitude factor at time $t$, and $m_{\rm zp}$, $t_{\rm exp}$ and $m_{\rm sky}$ are sensitivity-dependent parameters for every observer shown in Table \ref{tab:survs}. $\Omega_{\rm psf}$=$\pi$$\theta_{\rm psf}^2/4$ is the solid angle of the survey point spread function (PSF), where $\theta_{\rm psf}$ is also in Table \ref{tab:survs}. $m_{\rm stars}$ is the combined magnitude contribution of all unresolved sources within the survey target angle (0.25$\times$0.25 deg$^2$), and $m_{\rm blend}$ is the combined magnitude contribution of nearby bright stars around a given target. To determine $m_{\rm stars}$, we have to find the boundary between the resolved and unresolved regimes of our catalogue. Suppose the baseline magnitude of a given star ($j$) corresponds to a flux $F_j$ and stars fainter than the given star are unresolved; the background noise ($\sqrt{B_{res}}$) can then be calculated from the combined flux of those unresolved stars. Subsequently, another signal-to-noise equation is
\begin{equation} \label{eq:resolvedstar}
\frac{F_j}{\sqrt{B_{res}}} = \frac{10^{0.2 m_{\rm zp}} \, t^{1/2}_{\rm exp} \, 10^{-0.4m_j}}{\sqrt{\textstyle\Omega_{\rm psf}\sum\limits_{m_i > m_j}\frac{10^{-0.4m_i}}{\textstyle \Omega_{\rm cat,i}}}},
\end{equation}
where $\Omega_{\rm cat}$ is the solid angle of the Besan\c{c}on{} data catalogue, and the depth of summation ($\sum_{m_i > m_j}$) is dependent on the given star ($j$). We define resolved stars as those satisfying $F_j/\sqrt{B_{res}}>3$ against the PSF noise contribution from the unresolved stars. Sorting the stars by magnitude throughout the catalogues, we find the boundary star ($j_{lim}$), which is the faintest resolved star satisfying $F_j/\sqrt{B_{res}}>3$. Brighter sources easily satisfy the condition even though the number of fainter stars counted into the noise increases. Once the boundary star ($j_{lim}$) is found, the background noise of the unresolved stars is converted to $m_{\rm stars}$;
\begin{equation} \label{eq:mstars}
B_{res,lim} = \textstyle\Omega_{\rm psf} \sum\limits_{m_i > m_{j_{lim}}}\frac{10^{-0.4m_i}}{\textstyle \Omega_{\rm cat,i}} = 10^{-0.4m_{\rm stars}}.
\end{equation}
Thus, the $m_{\rm stars}$ value is attributed to the Besan\c{c}on{} data distribution and is found for every telescope sensitivity. $m_{\rm blend}$ is also found from the combined flux of stars, but this time, the stars which are brighter than the given target and within PSF range are summed up.
\begin{equation} \label{eq:mblend}
B_{blend,j} = \textstyle\Omega_{\rm psf} \sum\limits_{nearby}\frac{10^{-0.4m_i}}{\textstyle \Omega_{\rm cat,i}} = 10^{-0.4m_{\rm blend}},
\end{equation}
where the summation over ``nearby'' is defined as the brighter stars within the PSF of the target star. Thus, all resolved stars ($m_i \leq m_{j_{lim}}$) within the PSF area of the given star are counted. Every source star in the catalogues has an individual value of $m_{\rm blend}$ for every telescope sensitivity. Once we initialise $m_{\rm stars}$ and list $m_{\rm blend}$, the round-robin pairing of source and lens properties from the Besan\c{c}on{} data is carried out to simulate the detectable events.
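The boundary search of Eqs.(\ref{eq:resolvedstar})-(\ref{eq:mstars}) can be sketched as follows; the catalogue values in the usage note are illustrative, and a single catalogue solid angle is assumed for simplicity.

```python
import math

# Sketch of the resolved/unresolved boundary search: sort stars by
# magnitude and find the faintest star whose flux exceeds snr_min times
# the PSF noise from all fainter (unresolved) stars. Illustrative only;
# real catalogues carry per-star solid angles.
def resolved_boundary(mags, omega_cat, omega_psf, m_zp, t_exp, snr_min=3.0):
    """Return the index (into the sorted list) of the faintest resolved star."""
    order = sorted(mags)                      # bright -> faint
    flux = [10.0 ** (-0.4 * m) for m in order]
    # tails[i]: summed flux surface density of all stars fainter than star i
    tail, tails = 0.0, [0.0] * len(order)
    for i in range(len(order) - 1, -1, -1):
        tails[i] = tail
        tail += flux[i] / omega_cat
    prefactor = 10.0 ** (0.2 * m_zp) * math.sqrt(t_exp)
    j_lim = None
    for i in range(len(order)):
        noise = math.sqrt(omega_psf * tails[i])
        if noise == 0.0 or prefactor * flux[i] / noise > snr_min:
            j_lim = i                         # star i is still resolved
        else:
            break                             # fainter stars are unresolved
    return j_lim
```

With only a handful of well-separated magnitudes every star is resolved, whereas a bright star embedded in a dense swarm of faint neighbours marks the boundary at itself.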
For each event, we assume the event must show $S/N$>50 at peak to be detectable; this limit defines the threshold amplitude ($A_t\geq A_{min}$) of the event and hence the threshold impact parameter ($u_{\rm t}$). We can ignore the blending influence of the lens itself since we assume FFP lenses.
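The detectability test of Eq.(\ref{eq:sn}) can be sketched as follows; the Euclid-like instrument parameters follow Table \ref{tab:survs}, while the source, background and blend magnitudes in the usage note are illustrative values, not catalogue data.

```python
import math

# Sketch of the peak signal-to-noise ratio of Eq. (2) for a microlensed
# source. All magnitudes are in the proxy Johnson-Cousins system.
def snr_peak(a_peak, m_star, m_stars, m_blend, m_sky, theta_psf, m_zp, t_exp):
    """S/N at peak amplification a_peak for a source of magnitude m_star."""
    omega_psf = math.pi * theta_psf ** 2 / 4.0   # PSF solid angle
    signal = (10.0 ** (0.2 * m_zp) * math.sqrt(t_exp)
              * a_peak * 10.0 ** (-0.4 * m_star))
    variance = (10.0 ** (-0.4 * m_stars) + 10.0 ** (-0.4 * m_blend)
                + omega_psf * 10.0 ** (-0.4 * m_sky)
                + a_peak * 10.0 ** (-0.4 * m_star))
    return signal / math.sqrt(variance)
```

With Euclid-like parameters, a 20th-magnitude source amplified tenfold passes the $S/N>50$ cut comfortably, while a barely amplified 24th-magnitude source does not.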
Once the event is confirmed to be detectable by satisfying the above condition ($S/N>50$), the angular Einstein radius $\theta_{\rm E}$ and event timescale for the given source ($j$) and lens ($i$) pair are given by
\begin{equation} \label{eq:theta}
\theta_{{\rm E},ij} = \sqrt{\frac{4G M_i (D_j - D_i)}{c^2 D_j D_i}},
\end{equation}
\begin{equation} \label{eq:time_ij}
t_{ij} = \frac{\textstyle u_{\rm max}\theta_{{\rm E},ij}}{\mu_{ij}},
\end{equation}
where $M_i$ is the lens mass and $G$ and $c$ are the gravitational constant and the speed of light. $D_{\rm i}$ and $D_{\rm j}$ represent the lens and source distances, respectively, and $\mu_{ij}$ is the relative lens-source proper motion.
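Eqs.(\ref{eq:theta})-(\ref{eq:time_ij}) can be sketched as follows; the SI constants are standard values, and the Earth-mass lens at 4 kpc with an 8 kpc source and 5 mas/yr proper motion in the usage note is an illustrative configuration, not a catalogue entry.

```python
import math

# Sketch of the angular Einstein radius and event timescale for a fixed
# lens mass. Constants are standard SI values.
G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8              # speed of light [m s^-1]
KPC_M = 3.0857e19        # metres per kiloparsec
M_EARTH = 5.972e24       # Earth mass [kg]
MAS_RAD = math.radians(1.0 / 3.6e6)  # one milliarcsecond in radians

def einstein_radius_rad(m_lens_kg, d_lens_kpc, d_src_kpc):
    """Angular Einstein radius theta_E (Eq. 5) in radians."""
    dl, ds = d_lens_kpc * KPC_M, d_src_kpc * KPC_M
    return math.sqrt(4.0 * G * m_lens_kg * (ds - dl) / (C * C * ds * dl))

def event_timescale_days(theta_e_rad, mu_mas_per_yr, u_max=3.0):
    """Event timescale t = u_max * theta_E / mu (Eq. 6) in days."""
    mu_rad_per_day = mu_mas_per_yr * MAS_RAD / 365.25
    return u_max * theta_e_rad / mu_rad_per_day
```

For this configuration $\theta_{\rm E}$ is of order a microarcsecond and the $u_{\rm max}=3$ event lasts well under a day, which illustrates why low-mass FFP events demand high-cadence monitoring.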
To find the mean timescale of all detectable events, we define an event occurrence weight ($W_{ij}$), which is a factor of the physical lens size and lens speed.
\begin{equation} \label{eq:wrate}
W_{ij} = u_{{\rm t},ij} D_i^2 \mu_{ij} \theta_{{\rm E},ij}.
\end{equation}
Since we tested all possible source-lens pairs over all four catalogues, the population difference defined by the solid angle ($\Omega_{cat}$) should also be considered:
\begin{equation} \label{eq:pop}
p_{ij} = \sum_s \frac{\textstyle 1}{\textstyle \Omega_{{\rm cat},s}} \sum_j \sum_l \sum_{i,D_i < D_j} \frac{\textstyle P_{FFP,l}}{\textstyle \Omega_{{\rm cat},l}}.
\end{equation}
$P_{FFP,l}$ is the population ratio of FFPs per star for each catalogue. We assume $P_{FFP}$=1 for all FFP mass cases (i.e. one Jupiter-mass, one Neptune-mass, and one Earth-mass planet per star) and for all catalogues. This is discussed further in \S\ref{subsec:initial}. The sums run over all lenses ($i$) drawn from catalogue ($l$) and all sources ($j$) drawn from catalogue ($s$). Thus, the mean timescale ($\langle t \rangle$) is
\begin{equation} \label{eq:time}
\langle t \rangle = \frac{\sum_{i,j} p_{ij}W_{ij}t_{ij}}{\sum_{i,j} p_{ij}W_{ij}}.
\end{equation}
Note that $\langle t \rangle$ becomes the mean ``Einstein'' timescale $\langle t_E \rangle$ when $u_{\rm max}=1$.
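Reading Eq.(\ref{eq:time}) as a weighted average with explicit sums, the computation can be sketched as follows; the $(p, W, t)$ triples in the usage note are toy values standing in for the catalogue draws.

```python
# Sketch of the occurrence-weighted mean timescale over all source-lens
# pairs: sum(p*W*t) / sum(p*W).
def mean_timescale(pairs):
    """pairs: iterable of (p_ij, W_ij, t_ij) tuples."""
    num = sum(p * w * t for p, w, t in pairs)
    den = sum(p * w for p, w, t in pairs)
    return num / den
```

For example, two pairs with weights 1 and 3 and timescales 2 and 4 days yield a weighted mean of 3.5 days, pulled toward the more heavily weighted pair.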
The optical depth for a given source ($\tau_j$) is defined by compiling the possible lenses between the source and the observer. The population difference of the four catalogues is also considered here. Therefore, the optical depth for a given source ($j$) is
\begin{equation} \label{eq:tau_j}
\tau_j=\sum_l \sum_{i,D_i < D_j} \textstyle \pi \theta_{{\rm E},ij}^2\frac{P_{FFP,l}}{\textstyle \Omega_{{\rm cat},l}},
\end{equation}
where the sum runs over all lenses ($i$) drawn from catalogue ($l$). $D$ and $\Omega_{\rm cat}$ are the distance and solid angle from the catalogue, respectively. The final optical depth ($\tau$) is the mean value over all possible sources, weighted by the ``effectivity'' of the microlensing. Here we define the effectivity as the impact-parameter factor ($U_j^{(N)}$) of a given source ($j$), where $N$ is set by how the target quantity scales with the Einstein radius:
\begin{equation} \label{eq:uwgt}
U_j^{(N)} = \frac{\textstyle \sum_l P_{FFP,l}\Omega_{{\rm cat},l}^{-1} \sum_{i,D_i<D_j} \mbox{min}[1,(u_{{\rm t},ij}/u_{\rm max})^N]}{\textstyle \sum_l P_{FFP,l}\Omega_{{\rm cat},l}^{-1} \sum_{i,D_i<D_j} 1}.
\end{equation}
Since $\tau \propto \theta_E^2$, $N=2$ is applied and the final optical depth for all possible sources is expressed as
\begin{equation} \label{eq:tau}
\tau = \left( \sum_s \frac{\textstyle 1}{\textstyle \Omega_{{\rm cat},s}} \sum_j U_j^{(2)}\right)^{-1} u_{\rm max}^2 \sum_s \frac{\textstyle 1}{\textstyle \Omega_{{\rm cat},s}} \sum_j U_j^{(2)} \tau_j,
\end{equation}
where the equation is calculated over all sources ($j$) drawn from catalogue ($s$), and we have already assumed $u_{\rm max}$=3 for our telescopes.
Finally, the source-averaged event rate is given by the optical depth and mean timescale calculated above. The standard formula of the event rate ($\Gamma$) is
\begin{equation} \label{eq:eventrate}
\Gamma = \left[\frac{2}{\pi} \frac{\tau}{\langle t \rangle}\right].
\end{equation}
So far, we have factorised the timescale and optical depth by the maximum impact parameter to reflect the telescope sensitivity in the event rate. However, there is another way to do this: Eq. \ref{eq:eventrate} can be rewritten as
\begin{equation} \label{eq:eventrate2}
\Gamma = u_{\rm max}\left[\frac{2}{\pi} \frac{\tau_1}{\langle t_E \rangle}\right],
\end{equation}
where $\tau_1$ is the optical depth for the $u_{\rm max}$=1 case and $\langle t_E \rangle$ is the mean Einstein timescale. Thus, Eq.(\ref{eq:eventrate2}) expresses the event rate as the $u_{\rm max}$=1 rate scaled by an arbitrary maximum impact parameter $u_{\rm max}$.
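The equivalence of the two forms follows from the scalings $\tau \propto u_{\rm max}^2$ and $\langle t \rangle \propto u_{\rm max}$, and can be checked numerically; the $\tau_1$ and $\langle t_E \rangle$ values in the usage note are arbitrary illustrative numbers.

```python
import math

# Numerical check that Eq. (13) and Eq. (14) agree: tau scales as
# u_max^2 and <t> as u_max, so the two expressions coincide.
def event_rate(tau, t_mean):
    """Gamma = (2/pi) * tau / <t> (Eq. 13)."""
    return (2.0 / math.pi) * tau / t_mean

def event_rate_scaled(tau_1, t_e_mean, u_max):
    """Gamma = u_max * (2/pi) * tau_1 / <t_E> (Eq. 14)."""
    return u_max * (2.0 / math.pi) * tau_1 / t_e_mean
```

With $\tau_1=10^{-6}$, $\langle t_E \rangle=0.1$ and $u_{\rm max}=3$, Eq.(13) evaluated at $\tau=9\times10^{-6}$, $\langle t \rangle=0.3$ matches Eq.(14) exactly.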
Finally, the actual number of events per year ($\tilde{\Gamma}$) is given by $\Gamma$$\times$$N_*$ where $N_*$ is the number of source stars for the observation period counted as
\begin{equation} \label{eq:sourcenum}
N_* = \sum_s \sum_{j,U_j^{(1)} > 0} 1.
\end{equation}
$U_j^{(1)}$ is given by Eq.(\ref{eq:uwgt}) with $N=1$.
\subsection{FFP event rate as applied to the parallax simulation} \label{subsec:ffp-to-prlx}
In \citet{Ban2016}, we simulated a 200 deg$^2$ survey field and mapped the results. The maximum predicted microlensing event rate was recovered in a ``hot spot'' (Table \ref{tab:ffp-ev}), which defines our simulated field. The ground-based (LSST{}) sensitivity implies a higher noise level, originating from the larger PSF and the unresolved background stars, leading to a lower event rate. We take into account the rate-weight of the events and the uncertainty of the parameters when computing every formula above. The farther the source is, the more lenses pass in front of it, so the uncertainty of the optical depth per source grows. The calculated uncertainty of the event rate is less than 0.01\% for every band and FFP mass, so it is omitted from Table \ref{tab:ffp-ev}. However, this uncertainty does not include the complexities of modelling real-world events, which may alter the sensitivity in ways not captured by our model.
In real observations, however, the interference of other objects and events cannot be treated so simply; our method ignores such unexpected interference and assumes only the ubiquitous sources of uncertainty. Moreover, we assumed S/N>50 at the peak, and an event may be too short to have sufficient individual exposures above the threshold signal-to-noise at which it can be identified. Therefore, as the next step of the microlensing event simulation, we create a time-dependent observation model and simulate parallax observations. The time-dependent model yields the probability of simultaneous parallax observation, which, multiplied by the annual number of detectable FFP microlensing events, provides the parallax event rate per year.
In the parallax simulation, the source and lens properties are randomly selected, unlike the round-robin pairing done in \citet{Ban2016}. The given pair is subjected to the signal-to-noise detectability test ($S/N>$50, Eq.\ref{eq:sn}). If it passes the test, the time-dependent observation model is applied to the event, and simultaneous parallax observation is attempted by the expected telescope combinations. The process is repeated until 30,000 events are detected in parallax for every fixed FFP mass (i.e. Jupiter-mass, Neptune-mass, and Earth-mass) to offer a statistically plausible probability of simultaneous parallax. In the next section, we describe the details of the parallax simulation process and some calculations for further analyses of successful parallax observations by \emph{Euclid}{} and either LSST{} or \emph{WFIRST}{}.
\begin{table}
\centering
\hspace{0.3in}
\caption{FFP event rate ($\tilde{\Gamma}_{FFP}$ [events year$^{-1}$ deg$^{-2}$]) at $(l, b)=(1^{\circ}, -1.^{\circ}75)$ retrieved from the data used in \citet{Ban2016}. Note that the event rate is for solo observation by each telescope without considering the operation seasons and time. For LSST{}, the night-time duration (7.5 h) and fine-weather fraction (65.89\%) were applied.}
\label{tab:ffp-ev}
\begin{tabular}{llll}
\hline
Lens mass & \emph{Euclid}{} & \emph{WFIRST}{} & LSST{} \\ \hline
Jupiter & 2045 & 2026 & 377 \\
Neptune & 475 & 470 & 87 \\
Earth & 114 & 114 & 21 \\
\end{tabular}
\end{table}
\section{Simultaneous parallax observation} \label{sec:prlx}
In this section, we describe the structure of the parallax simulation. The time-dependent model offers the light curves for every simulated event, and we use the light curves to determine the parallax detectability. The goal is to derive the parallax event rate and the accuracy of the lens-mass estimation from the differential light curves.
\begin{figure*}
{\centering
\begin{tabular}{cc}
Object positions in 2D & Sample light curve \\
\hspace{-0.2in}\includegraphics[width=4in]{GeoImage2D.png} &
\hspace{-0.1in}\includegraphics[width=3in]{SampleLC.png} \\
\end{tabular}
}
\caption{Schematic images of the simultaneous parallax observation concept. {\it Left panel}: A two-dimensional geometrical arrangement of a source, lens and two observers. The large black dot is the source star, the small black dot is the lens object, and the numbers in boxes represent the two observers. The dashed circle centred on the lens shows the Einstein ring. $\vec{D_S}$ and $\vec{D_L}$ are the lines of sight to the source and lens from each observer, respectively. $\vec{u}$ is the vectorial impact parameter and $r_E$ represents the Einstein radius in physical length. $D_T$ is the observer separation. The dashed curve represents the light horizon of the event when it is detected by Observer 1; an extra time ($\Delta t$) is necessary for Observer 2 to detect the same radiation that Observer 1 detects at this moment. The dashed line from Observer 1 to $\vec{D_{S2}}$ is a supporting line to visualise the parallactic angle $\gamma$, which also varies as a function of time. {\it Right panel}: Sample light curves of a simultaneous parallax observation in units of the amplitude factor (top) and the differential amplitude between the two observers (bottom). We define $t_{0,1}$ and $t_{0,2}$ as the light-curve peaks of each observer and $\Delta t_0$ as the difference in the time of maximum amplification. We will use the absolute value of $\Delta A$ in later calculations. }
\label{fig:prlx-geo}
\end{figure*}
\subsection{Configuration of an event simulation} \label{subsec:initial}
We applied random values for several parameters in our simulation: the source-lens pair selection, the zero-time ($t_{event}=0$) reference, and the minimum impact parameter. The randomness was controlled by the appropriate probability distribution wherever the distribution is not uniform.
\subsubsection{Source-lens pair selection} \label{subsubsec:random1}
First, a source and lens are randomly chosen from the four catalogues. The probability of data selection is controlled by the stellar populations defined by the solid angles. The stellar population ratio is 1:14:455:3200 from catalogue A to D (brighter to fainter); hence, faint sources are chosen more often. A lens is selected in the same way, but it must be at a closer distance than the source. As in \cite{Ban2016}, we assume three fixed planetary lens cases: Jupiter-mass, Neptune-mass, and Earth-mass; the lens mass is replaced with these values, and only the distance and proper motion are taken from the catalogue.
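The population-weighted catalogue draw can be sketched with the standard-library weighted sampler; the mapping of the quoted ratio 1:14:455:3200 directly onto selection weights is the straightforward reading of the text.

```python
import random

# Sketch of the population-weighted catalogue draw: the population ratio
# 1:14:455:3200 (catalogues A-D) is used as selection weights, so faint
# catalogues are picked far more often.
CATALOGUE_WEIGHTS = {"A": 1, "B": 14, "C": 455, "D": 3200}

def draw_catalogue(rng=random):
    """Return one catalogue label, weighted by stellar population."""
    names = list(CATALOGUE_WEIGHTS)
    weights = [CATALOGUE_WEIGHTS[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

Over many draws, roughly 87 per cent of selections land in catalogue D, mirroring the dominance of faint stars in the luminosity function.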
The FFP population is assumed to be 1 FFP per star. \cite{Sumi2011} first estimated that the Jupiter-mass FFP population is about twice the main-sequence (MS) star population. Later, \cite{Mroz2017} recalculated the population, deriving a 95\% confidence upper limit of $\sim$0.25 FFPs per MS star. The population of FFPs of any given mass is still effectively unknown. Hence, for simplicity, we adopt an FFP frequency of one FFP of each mass (Jupiter-mass, Neptune-mass, and Earth-mass) per star; the true number of expected planets can be normalised by the end-user once it is ascertained.
\subsubsection{Zero-time reference} \label{subsubsec:random2}
Second, two angles are randomly chosen for the observers' locations. One is the position angle of the Earth on the orbital plane in heliocentric co-ordinates. The other is the phase position of \emph{Euclid}{} on the orbit around L2 in the terrestrial sky reference frame (i.e. 2D with the L2 origin). We set these positions at $t_{event}=0$ in our time-dependent observation model, when the reference observer on the Earth detects the maximum amplitude of the event. Thus, the ingress ($t_{event}<0$) and egress ($t_{event}>0$) portions of the event are simulated with respect to the $t_{event}=0$ positions. In the case of the \emph{Euclid}{}-\emph{WFIRST}{} combination, we assume that \emph{WFIRST}{} in the Halo orbit shares the \emph{Euclid}{} orbit around L2. The relative motion of the source, lens and telescopes in 3D space already varies in our simulation, and we considered it excessive to randomise the \emph{WFIRST}{} position in addition to the \emph{Euclid}{} position. Under the shared orbit, any phase difference could in principle be handled numerically through its probability distribution; besides, a small phase difference (i.e. a small separation between \emph{Euclid}{} and \emph{WFIRST}{}) offers little advantage for parallax observation. We therefore adopt a 90$^{\circ}$ phase difference in our model. For the \emph{Euclid}{}-LSST{} combination, the Earth's rotation is not modelled in our simulation. Instead, we assume that the observable night lasts 7.5 hours and the fine-weather fraction is $65.89\%$, as mentioned in \S\ref{subsec:tele-kine}. Once the $t_{event}=0$ positions are determined, the solar aspect angle (SAA) is examined. As shown in \S\ref{subsec:tele-kine}, the telescope position must satisfy 90$^{\circ}$<SAA<120$^{\circ}$ for \emph{Euclid}{} and 54$^{\circ}$<SAA<126$^{\circ}$ for \emph{WFIRST}{}. Our time-dependent model allows the SAA to change during the event along with the telescope motion.
However, we require that \emph{Euclid}{} and \emph{WFIRST}{} are within their SAA ranges at $t_{event}=0$. The edge-of-SAA issue is not serious, because the SAA shift during the short FFP event duration is quite small, and we also impose the simultaneous parallax duration limit mentioned later in \S\ref{subsec:prlx-obs}.
\subsubsection{Minimum impact parameter} \label{subsubsec:random3}
Third, the impact parameter ($u_{\oplus}(t)$) at $t_{event}=0$ is randomly chosen, where the subscript ($\oplus$) denotes the reference observer on the Earth. Note that we assumed the reference observer detects the maximum amplitude at $t_{event}=0$, so $u_{\oplus}(t_{event}=0)$ is the minimum impact parameter for the reference observer. The minimum impact parameter must be less than the threshold impact parameter ($u_{\oplus}(t_{event}=0)<u_{t,\oplus}$) found in the process of event detectability discussed in $\S$\ref{subsec:ffp}. The cumulative probability of drawing $u_{\oplus}(t_{event}=0)\leq r$ is proportional to $r^2$ for $0\leq r\leq u_{t,\oplus}$, where $r$ is the distance from the lens centre in units of Einstein radii. Whilst the coordinates of the source refer to the Besan\c{c}on{} data, the coordinates of the lens are calculated after $u_{\oplus}(t_{event}=0)$ is randomly chosen to yield a microlensing event.
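This draw can be realised by inverse-transform sampling: if the cumulative probability grows as $r^2$ (a uniform areal density inside the threshold circle), then $u_{\oplus}=u_{t,\oplus}\sqrt{U}$ with $U$ uniform on $[0,1)$. A minimal Python sketch (function and variable names are illustrative, not from our code):

```python
import random

def draw_min_impact_parameter(u_threshold, rng=random):
    """Draw u(t_event=0): the cumulative probability of a value <= r is
    proportional to r**2 (uniform areal density inside the threshold
    circle), so inverse-transform sampling gives u_t * sqrt(U)."""
    return u_threshold * rng.random() ** 0.5
```

For a density proportional to $r$, the expected draw is $(2/3)\,u_{t,\oplus}$, which provides a quick sanity check of the sampler.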
\subsubsection{Statistics of the simulation length} \label{subsubsec:random1}
According to our convergence test for the Monte-Carlo method, 30,000 detectable parallax events are sufficient to obtain a statistically robust estimate of the parallax event rate from the simulation. Again, we have 9 runs in total: 3 telescope combinations (\emph{Euclid}{}-\emph{WFIRST}{} with the Halo orbit, \emph{Euclid}{}-\emph{WFIRST}{} with the geosynchronous orbit, and \emph{Euclid}{}-LSST{}) times 3 fixed lens-mass cases (Jupiter-mass, Neptune-mass, and Earth-mass). Each run requires 30,000 detectable parallax events. Accordingly, it was necessary to test 4-11 times more events to obtain 30,000 detectable parallax events, given the detectability conditions described below.
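The oversampling factor above translates directly into the number of trials needed per run; a trivial sketch (names are ours, not from the simulation code):

```python
import math

def required_trials(n_target, acceptance_fraction):
    """Number of simulated events needed so that, on average,
    n_target of them pass the detectability cuts."""
    return math.ceil(n_target / acceptance_fraction)
```

With acceptance fractions between roughly 1/11 and 1/4, a run targeting 30,000 detections requires of order $1.2\times10^5$ to $3.3\times10^5$ tested events.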
\subsection{Parallax observation} \label{subsec:prlx-obs}
Figure \ref{fig:prlx-geo} shows the observational setup for a simultaneous parallax observation. The left panel is a schematic 2D image of the arrangement of objects in an event, and the right panel shows sample light curves with their difference (residual light curve). The parallactic angle $\gamma$ helps to identify the lens mass and distance from the light curves. The value of $\gamma$ is determined as
\begin{equation} \label{eq:prlx_angle}
\gamma(t) = \left|\vec{u_1}(t)-\vec{u_2}(t)\right|\theta_E
\end{equation}
where $\vec{u_1}(t)$ and $\vec{u_2}(t)$ are the impact parameters of the two observers as functions of time. In real observations, these impact parameters are recovered from the amplitude factors observed by each telescope. \cite{Gould2013} stated the mass and distance equations in terms of the microlensing parallax ($\pi_E$); their equations are rewritten with our parallactic angle as
\begin{equation} \label{eq:prlx_mass}
M_L =\frac{\theta_E^2 D_T}{\kappa \gamma} \hspace{0.2in} {\rm where} \hspace{0.2in} \kappa\equiv\frac{4G}{c^2\,{\rm AU}}\sim8.1\frac{{\rm mas}}{M_{\odot}},
\end{equation}
and
\begin{equation} \label{eq:prlx_dist}
D_L = \frac{D_S D_T}{\gamma D_S+D_T},
\end{equation}
where the symbols correspond to those in Figure \ref{fig:prlx-geo}. $D_S$ is the standardised distance of the source from the Sun. Note that the impact parameters and the parallactic angle are time-dependent quantities that will be computed from the light curves. The observer separation ($D_T$) causes a time gap ($\Delta t$) in the arriving signal. As described in \S\ref{subsec:initial}, we set the reference observer on the Earth, and the time gap for the space-based observers is carefully considered in our time-dependent model.
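The two relations above can be evaluated numerically. The sketch below is illustrative (not our simulation code): it takes $\kappa\approx8.14$ mas $M_{\odot}^{-1}$, expresses $\theta_E$ and $\gamma$ in mas, $D_T$ in AU, and the source and lens distances in kpc.

```python
import math

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)  # 1 mas in radians
KPC_IN_AU = 648000.0e3 / math.pi                  # 1 kpc in AU (~2.063e8)
KAPPA = 8.14                                      # mas / M_sun, 4G/(c^2 AU)

def lens_mass(theta_E_mas, gamma_mas, D_T_au):
    """M_L = theta_E^2 D_T / (kappa gamma), with D_T in AU."""
    return theta_E_mas ** 2 * D_T_au / (KAPPA * gamma_mas)

def lens_distance_kpc(D_S_kpc, D_T_au, gamma_mas):
    """D_L = D_S D_T / (gamma D_S + D_T); gamma in radians,
    all distances converted to a common unit (here AU)."""
    D_S_au = D_S_kpc * KPC_IN_AU
    gamma_rad = gamma_mas * MAS_TO_RAD
    return D_S_au * D_T_au / (gamma_rad * D_S_au + D_T_au) / KPC_IN_AU
```

A round trip with a canonical Jupiter-mass configuration ($D_S=8$ kpc, $D_L=4$ kpc) recovers the input mass and lens distance, confirming the unit handling.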
To analyse the parallax observation, we define a parallax signal ($S$) from the residual light curve as
\begin{equation} \label{eq:prlx_signal}
S = \Delta A_{max} \times T,
\end{equation}
where $\Delta A_{max}$ is the maximum absolute value of the differential amplitude, and $T$ is the duration, in hours, over which the amplitude seen by both telescopes exceeds $A_t$. Besides, $T$ should be at least 1 hour given the telescopes' cadences of 10-20 minutes. Note that ``cadence'' in microlensing surveys usually refers to the interval between exposures. Thus, we can equivalently state that both telescopes require at least three exposures with amplitude above $A_t$ to identify a simultaneous detection of an event. We also consider the noise level of the differential amplitude, calculated as
\begin{equation} \label{eq:prlx_errfunc}
D = \left(\frac{\Delta A(t)}{\sigma_{\Delta A(t)}}\right)_{max},
\end{equation}
\begin{equation} \label{eq:prlx_err}
\sigma_{A_i(t)} = \sqrt{A_i(t)\times10^{-0.4(m_{*,i}-m_{zp,i})}},
\end{equation}
where $A_i(t)$ is the observed magnification for each telescope, and $m_{*,i}$ and $m_{zp,i}$ are the source magnitude and zero-point magnitude in the corresponding photometric band of each telescope, taken from Table \ref{tab:survs}. We assume $\sigma_{A_i(t)}\geq3\times10^{-4}$ for every moment $t$ and require $D>5$ for a detectable parallax event.
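A sketch of how $S$ and the detectability $D$ might be evaluated from two sampled light curves. This is illustrative only: the quadrature combination of the per-telescope noises into $\sigma_{\Delta A}$ is an assumption of this sketch, and the per-sample duration estimate is a simplification.

```python
import numpy as np

def parallax_signal(t_hr, A1, A2, A_t):
    """S = dA_max * T, where T [hours] is the time during which the
    amplitude seen by BOTH telescopes exceeds the threshold A_t."""
    A1, A2 = np.asarray(A1, float), np.asarray(A2, float)
    dA_max = float(np.max(np.abs(A1 - A2)))
    both = (A1 > A_t) & (A2 > A_t)
    # approximate per-sample duration from the time sampling
    T = float(np.sum(np.gradient(np.asarray(t_hr, float))[both]))
    return dA_max * T, T

def detectability(A1, A2, sigma1, sigma2):
    """D = max |A1-A2| / sigma_dA, with the per-telescope noises
    combined in quadrature (an assumption of this sketch)."""
    dA = np.abs(np.asarray(A1, float) - np.asarray(A2, float))
    sigma = np.sqrt(np.asarray(sigma1, float) ** 2 + np.asarray(sigma2, float) ** 2)
    return float(np.max(dA / sigma))
```

On a toy pair of light curves, the two functions reproduce the hand-computed $S$, $T$, and $D$ values.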
Finally, we find the parallax event rate as
\begin{equation} \label{eq:prlx_er}
\tilde{\Gamma}_{parallax} = P_{parallax}\times\tilde{\Gamma}_{FFP},
\end{equation}
where $\tilde{\Gamma}_{FFP}$ is the event rate from the FFP simulation alone, described in $\S$\ref{subsec:ffp}, and $P_{parallax}$ is the rate-weighted fraction of detectable FFP parallax events among all tested FFP events:
\begin{equation} \label{eq:prlx_prob}
P_{parallax} = \frac{\sum\limits_{ij}W_{ij}}{\sum\limits_{all}W_{all}},
\end{equation}
where $W_{ij}$ is the rate-weight value of a detectable parallax event (see Eq.(\ref{eq:wrate})). Note that the SAA limit constrains detectable events to within two 30-day periods around the equinoxes of each year. $W_{all}$ is the rate weight of all detectable FFP microlensing events through a year which satisfy the S/N$>$50 limit. Hence, the summation of $W_{all}$ contains events outside the SAA range, events observed by either telescope within SAA (i.e. no parallax observation), events observed by both telescopes within SAA but failing our parallax detectability limit, and detectable parallax events. The input catalogues contain enough data to operate the Monte-Carlo method by repeated random selection of a source-lens pair followed by the random selection of positioning angles and the minimum impact parameter.
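In code, the two equations above reduce to a pair of one-line weighted sums (a direct restatement; variable names are ours):

```python
def parallax_probability(w_detected, w_all):
    """P_parallax = sum of rate weights of detectable parallax events
    divided by the sum of rate weights of all tested FFP events
    satisfying S/N > 50 (including out-of-SAA and single-telescope
    events)."""
    return sum(w_detected) / sum(w_all)

def parallax_event_rate(p_parallax, rate_ffp):
    """Gamma_parallax = P_parallax * Gamma_FFP."""
    return p_parallax * rate_ffp
```

The denominator must include every tested event class listed above, otherwise $P_{parallax}$ is overestimated.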
\section{Results} \label{sec:res}
The simulation provides mapped results and the numerical result of the FFP parallax observation probability. Note that these results are given for each lens-mass case (Jupiter-mass, Neptune-mass, and Earth-mass) under every combination (\emph{Euclid}{}-\emph{WFIRST}{} and \emph{Euclid}{}-LSST{}). Moreover, both the Halo orbit at L2, which is now the official choice, and the geosynchronous orbit for \emph{WFIRST}{} are simulated. In \S\ref{subsec:res_pairs} and \S\ref{subsec:res_chara}, we analyse the tendencies of detectable parallax observations with distribution maps of the fraction of detectable events; these indicate the percentage probability of events, binned by the parameters shown on the axes, among the detectable parallax events (i.e. percentages are with respect to the 30,000 detectable events in our simulation). The fraction of detectable events was taken as the rate-weighted probability (see Eq.(\ref{eq:wrate})) and shown as a map. The mapped results of the \emph{Euclid}{}-\emph{WFIRST}{} combination with the Halo orbit at L2 are shown whilst the others are omitted: this is because we confirmed that the mapped results of the \emph{Euclid}{}-\emph{WFIRST}{} combination with the geosynchronous orbit showed similar patterns, since the telescope separation is the only differing condition. The mapped results of the \emph{Euclid}{}-LSST{} combination are shown as differential distribution maps relative to the \emph{Euclid}{}-\emph{WFIRST}{} combination with the Halo orbit at L2 to clarify their differences. The numerical results of the parallax probability for all combinations and cases are described at the end. In \S\ref{subsec:res_er} and \S\ref{subsec:res_mass}, we discuss the event rate of parallax observations and the plausibility of our simulation from the viewpoint of mass estimation from the output light curves.
\begin{figure*}
{\centering
\includegraphics[width=6.5in]{PRLX1818-tH.png} \\
}
\caption{Fraction of detectable events distribution maps for the Einstein timescale and source magnitude in $H$-band, with bins of $\Delta t_E=0.1$ day on a logarithmic scale and $\Delta H=0.1$ per square degree. {\it Top}: The fraction of detectable events distribution from the \emph{Euclid}{}-\emph{WFIRST}{} combination. {\it Bottom}: The difference in the distribution of detected events between the \emph{Euclid}{}-\emph{WFIRST}{} and \emph{Euclid}{}-LSST{} combinations (differential distributions). Note that the original distribution maps are similar to each other in appearance and in the range of the fraction. Blacker regions indicate that relatively more detections are recovered by the \emph{Euclid}{}-LSST{} combination, whiter regions more detections by the \emph{Euclid}{}-\emph{WFIRST}{} combination.}
\label{fig:th}
\end{figure*}
\begin{figure*}
{\centering
\includegraphics[width=6.5in]{PRLX1818-tS.png} \\
}
\caption{As Figure \ref{fig:th}, but showing the fraction of detectable events distribution maps for the Einstein timescale and parallax signal ($S$) combination, with bins of $\Delta t_E=0.1$ day on a logarithmic scale and $\Delta \log_{10}S=0.1$ ($S$ in hours).}
\label{fig:ts}
\end{figure*}
\begin{figure*}
{\centering
\includegraphics[width=6.5in]{PRLX1818-HS-HR.png} \\
}
\caption{Fraction of detectable events distribution maps for the source magnitude in $H$-band and parallax signal ($S$) combination, with bins of $\Delta H=0.1$ per square degree and $\Delta \log_{10}S=0.1$ ($S$ in hours), and the HR diagram of the source stars. {\it Top}: The fraction of detectable events distribution from the \emph{Euclid}{}-\emph{WFIRST}{} combination. {\it Bottom}: The source-star origin in the HR diagram, where the event-rate weight is from Eq.(\ref{eq:wrate}). Note that \emph{Euclid}{} and LSST{} are approximated as observing in $H$-band and $I$-band, respectively.}
\label{fig:hs-hr}
\end{figure*}
\subsection{Distribution of source-lens pairs} \label{subsec:res_pairs}
The population of catalogue stars is almost continuous in magnitude, but the faint-star population (main-sequence (MS) stars) is much larger than that of bright stars (i.e. Red Giant Branch (RGB) stars). Faint sources therefore dominate the 30,000 simulated parallax events. Figure \ref{fig:th} shows the distribution of the fraction of detectable events in Einstein timescale and source magnitude in $H$-band. Because both combinations provided similar distribution maps, we only show the maps of the \emph{Euclid}{}-\emph{WFIRST}{} combination in the top panels of Figure \ref{fig:th}, and the residual fraction between the two combinations (i.e. the fraction recovered in the \emph{Euclid}{}-\emph{WFIRST}{} (EW) combination subtracted from the \emph{Euclid}{}-LSST{} (EL) combination) in the bottom panels to show the distribution difference more effectively.
Events associated with each FFP mass concentrate near a particular Einstein timescale. This timescale corresponds to the mean Einstein timescale for each FFP mass when events are observed by a single telescope \citep{Ban2016}. For example, the canonically assumed Jupiter-mass FFP event has $\theta_E\sim0.03$ mas and $t_E\sim$2 days, and these canonical values are proportional to the square root of the lens mass \citep{Sumi2011}. The mean value from our simulation is shorter than the canonical timescale because our simulation typically produced smaller Einstein radii than the canonical value. As Eq.(\ref{eq:theta}) shows, the combination of source and lens distances ($(D_j-D_i)/D_jD_i$) determines the size of the Einstein radius for a fixed lens mass. A canonical Jupiter-mass event with $t_E\sim$2 days is usually assumed to have $D_j=8$ kpc and $D_i=4$ kpc, which gives $(D_j-D_i)/D_jD_i=0.125$ kpc$^{-1}$. We confirmed that about 97\% of our 30,000 detectable parallax events gave values less than 0.125 kpc$^{-1}$. The same tendency appears for the Neptune-mass and Earth-mass cases. Further discussion of the source and lens distance relation is given below with the lens distance distribution maps.
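The canonical scalings quoted above can be checked in a few lines. This sketch uses $\kappa\approx8.14$ mas $M_{\odot}^{-1}$ and an illustrative relative proper motion of 5.5 mas yr$^{-1}$ chosen to reproduce $t_E\sim2$ days for a Jupiter-mass lens; neither constant is taken from the simulation itself.

```python
import math

KAPPA = 8.14  # mas / M_sun

def einstein_radius_mas(M_msun, D_L_kpc, D_S_kpc):
    """theta_E = sqrt(kappa M pi_rel), with pi_rel = 1/D_L - 1/D_S in mas."""
    return math.sqrt(KAPPA * M_msun * (1.0 / D_L_kpc - 1.0 / D_S_kpc))

def einstein_timescale_days(theta_E_mas, mu_rel_mas_yr):
    """t_E = theta_E / mu_rel, converted from years to days."""
    return theta_E_mas / mu_rel_mas_yr * 365.25

# canonical configuration: D_L = 4 kpc, D_S = 8 kpc
theta_jup = einstein_radius_mas(9.55e-4, 4.0, 8.0)    # ~0.03 mas
theta_nep = einstein_radius_mas(5.15e-5, 4.0, 8.0)    # ~0.007 mas
theta_earth = einstein_radius_mas(3.00e-6, 4.0, 8.0)  # ~0.002 mas
```

The $\sqrt{M}$ scaling of the canonical values follows directly from the first function.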
The lower limit of the source magnitude increases as the FFP mass decreases because a faint source requires a small threshold impact parameter (i.e. a large amplitude) to be detected, and a less-massive lens then cannot pass our parallax detection limit of $T>$1 hour. The diagonal cut at the bottom-left edge of the distribution occurs for the same reason, and the short-timescale effect appears even more strictly. The differential distributions show that the \emph{Euclid}{}-LSST{} combination raises the lower limit of the source magnitude because the ground-based sensitivity of LSST{} requires a smaller threshold impact parameter (i.e. higher amplitude) for an event to be observed. Due to the stricter cut-off from the LSST{} sensitivity and the stellar population, the fraction of detectable events concentrates more on $H\sim20$ sources in the \emph{Euclid}{}-LSST{} combination than in the \emph{Euclid}{}-\emph{WFIRST}{} combination. The differential distributions for Earth-mass lenses specifically show how the limiting timescale differs between the two telescope combinations; the \emph{Euclid}{}-\emph{WFIRST}{} combination can detect shorter-timescale events than the \emph{Euclid}{}-LSST{} combination. The influence of the ground-based sensitivity and our parallax detectability limit of $T>$1 hour causes such a clear boundary between the regimes dominated by the \emph{Euclid}{}-\emph{WFIRST}{} combination and by the \emph{Euclid}{}-LSST{} combination. Consequently, the Jupiter-mass and Neptune-mass lens cases do not clearly show the gap in the timescale limit between the two combinations.
Figure \ref{fig:ts} shows the distribution of the fraction of detectable events in the Einstein timescale and parallax signal ($S$ on a logarithmic scale, Eq.(\ref{eq:prlx_signal})) combination. The fraction distribution shows that the range of $S$ does not vary with the lens mass, despite the lens mass determining the Einstein radius and hence the Einstein timescale. The lower limit of $S$ can be attributed to our duration limit for parallax detectability ($T>$1 hour). On the other hand, the upper limit can vary with the Einstein timescale, but we require enough distinguishability of the parallax light curves compared to the noise level ($D>5$, see Eq.(\ref{eq:prlx_errfunc})). The larger the Einstein radius and the longer the duration, the more difficult it is to identify differences between the two light curves. Consequently, none of the FFP lens-mass cases exceeds $\log_{10}S\sim2.5$. The differential distributions indicate that the \emph{Euclid}{}-LSST{} combination is sensitive to larger parallactic angles than the \emph{Euclid}{}-\emph{WFIRST}{} combination. The higher noise of the ground-based survey requires a larger differential amplitude between the \emph{Euclid}{} and LSST{} light curves to satisfy the noise-level limit ($D>5$) in our simulation. The boundary between the dominant regions of the \emph{Euclid}{}-\emph{WFIRST}{} and \emph{Euclid}{}-LSST{} combinations illustrates the relation $S\propto t_E$.
Figure \ref{fig:hs-hr} shows the relation between $H$-band magnitude and $S$ (top), and the position of the source stars on the $I-H$ colour--magnitude diagram (bottom). The fraction distribution shows that $S$ reaches a maximum at a source magnitude of $H\sim$20.5 and decreases towards both brighter and fainter source regimes. The peak in source magnitude can be explained using Figure \ref{fig:bes-pop}. Sources with $H>$20.5 mag numerically dominate the source population, but such events require strong magnification to become detectable. Sources with $H<$20.5 mag are comparatively rare, and finite source effects decrease the detectability of $S$. Sources near the $H\sim$20.5 mag boundary are common and can easily satisfy the S/N$>$50 criterion without strong amplitude (i.e. without a small impact parameter).
Figure \ref{fig:tm} shows the distribution of the fraction of detectable events in the Einstein timescale and source mass combination. Figure \ref{fig:Nmass} is a supporting plot, showing the stellar mass and luminosity function from the whole Besan\c{c}on{} catalogue. Detectable events involving Jupiter-mass FFPs typically involve less-massive source stars than events involving Earth-mass FFPs. Since MS stars follow a power-law relationship between luminosity and mass, low-mass sources are faint sources which require significant amplitude to satisfy the signal-to-noise detectability limit (S/N$>$50). Earth-mass lens events for these faint sources are likely cut off by our parallax duration limit ($T>$1 hour) due to the small lens size. Hence, the fraction of detectable events involving massive source stars increases for Earth-mass FFPs, whilst the fraction for Jupiter-mass FFPs reflects the source population more directly. According to Figure \ref{fig:Nmass}, the majority of faint stars from catalogues C and D were $\leq1\,M_{\odot}$, a population much larger than that of the brighter stars. The differential distributions show almost the same tendencies as Figure \ref{fig:th}; the \emph{Euclid}{}-\emph{WFIRST}{} combination can reliably detect parallax-induced differences in the light curves from lower-mass lenses than the \emph{Euclid}{}-LSST{} combination, and the Earth-mass lens case shows the timescale limit of the \emph{Euclid}{}-LSST{} combination due to the ground-based sensitivity and the parallax detectability limit of $T>$1 hour.
Figure \ref{fig:td} shows the distribution of the fraction of detectable events in the Einstein timescale and lens distance combination. Figure \ref{fig:Ndist} is a supporting plot, showing the stellar distance and luminosity function from the whole Besan\c{c}on{} catalogue. Note that both source and lens distances were taken from the same Besan\c{c}on{} datasets. For lenses up to 6 kpc from the Sun, the lens properties tend to be taken from the bright-stellar data ($H<9$), whose population is very low. Moreover, disc-disc microlensing dominates the event weights because its small relative proper motion offers longer events than disc-bulge microlensing; thus the fraction distribution extends further into the long-timescale regime. From 6 to 9 kpc, the timescale range spreads, and the most likely distance for a detectable FFP lens is at $\sim$7 kpc for all FFP lens masses. Stars between 6-9 kpc dominate the objects simulated in our Besan\c{c}on{} catalogue and occupy a wide range of $H$-band magnitudes; hence we expect a lens will typically form part of this dense population, but will normally be closer to us than the Galactic Centre (8 kpc).
Sources with $H<$23 tend to be located closer to the lens (both being in the bulge) and tend to exhibit finite source effects. The varied motion of bulge stars means that events occur with different timescales. Stars beyond 9 kpc are also included in our simulation. It is clear from Figure \ref{fig:Ndist} that these events only have fainter sources ($H>$24). Since we considered the background noise due to unresolved stars, and extinction decreases the apparent population, these fainter sources are quite difficult to observe; the likelihood therefore diminishes rapidly beyond 9 kpc. According to Figure \ref{fig:Nmass}, stars with $H>$26 are more massive than stars with $H\sim24$, and their mean mass covers the canonical white dwarf mass. Most of this population is white dwarfs, and a few such events are detectable only with a Jupiter-mass lens (Figure \ref{fig:hs-hr}).
\begin{figure}
{\centering
\vspace{-0.15in}\includegraphics[width=5.4in]{PRLX1818-tM.png} \\
}
\caption{As Figure \ref{fig:th}, but showing the fraction of detectable events distribution maps for the Einstein timescale and source mass combination, with bins of $\Delta t_E=0.1$ day on a logarithmic scale and $\Delta M_S=0.1$ solar-mass.}
\vspace{5in}
\label{fig:tm}
\end{figure}
\begin{figure}
\centering
\vspace{6.5in}
\includegraphics[width=3.4in]{Nmass_H.png}
\vspace{-0.2in}
\caption{The $H$-band luminosity function (grey line, left y-axis) and average mass (black points, right y-axis), as computed by the Besan\c{c}on{} model, in 0.1 mag bins. The error bars represent the maximum-minimum range of stellar masses within each magnitude bin.}
\label{fig:Nmass}
\end{figure}
\begin{figure}
{\centering
\vspace{-0.15in}\includegraphics[width=5.4in]{PRLX1818-tD.png} \\
}
\caption{As Figure \ref{fig:th}, but showing the fraction of detectable events distribution maps for the Einstein timescale and lens distance combination, with bins of $\Delta t_E=0.1$ day on a logarithmic scale and $\Delta D_L=0.1$ kpc.}
\vspace{5in}
\label{fig:td}
\end{figure}
\begin{figure}
\centering
\vspace{6.5in}
\includegraphics[width=3.4in]{Ndist_H.png}
\vspace{-0.2in}
\caption{The $H$-band luminosity function (grey line, left y-axis) and average distance (black points, right y-axis), as computed by the Besan\c{c}on{} model, in 0.1 mag bins. The error bars represent the maximum-minimum range of stellar distances within each magnitude bin.}
\label{fig:Ndist}
\end{figure}
\begin{figure*}
{\centering
\includegraphics[width=7in]{PRLX1818EWl2-tA.png}
}
\caption{Fraction of detectable events and event-parameter distribution maps for the Einstein timescale and maximum differential amplitude of the parallax light curves ($\Delta A_{max}$), with bins of $\Delta t_E=0.1$ day on a logarithmic scale and $\Delta \log_{10}\Delta A_{max}=0.1$. The rows are, from top to bottom, the distributions of the fraction of detectable events, median relative proper motion, median threshold lens radii, and median parallactic angle. These median values are taken based on the median rate-weight in each bin.}
\label{fig:ta}
\end{figure*}
\subsection{Parallax event characteristics} \label{subsec:res_chara}
Figure \ref{fig:ta} shows the binned distribution of several event parameters with differential amplitude and Einstein timescale. The fraction of detectable events distribution (top row) serves as a guide for reading the relative proper motion map (2nd row), the map of the transverse line of the projected source in the lens frame (3rd row), and the parallactic angle map (bottom row). Note that the parallactic angle is time-dependent throughout an event, so we picked the maximum angle of every event (corresponding to the peak of the residual light curve) and took the mean of each bin to show the distribution. These maps are for the \emph{Euclid}{}-\emph{WFIRST}{} combination; the figures for the \emph{Euclid}{}-LSST{} combination show similar distributions.
From the top panels of Figure \ref{fig:ta}, we can identify the most-likely combination of $(\log_{10}\Delta A_{max}, t_E)$ to be at $\sim$(-0.8, 0.6), $\sim$(-0.6, 0.2) and $\sim$(-0.6, 0.03) for Jupiter-mass, Neptune-mass and Earth-mass, respectively. We can interpret the remaining panels of Figure \ref{fig:ta} through this probability distribution.
The distributions of relative proper motion (2nd row) show a gradation along the timescale axis for all FFP lens masses. The short-timescale, high-differential-amplitude regime corresponds to the largest proper motion. This is plausible because large relative proper motion leads to quicker events. Besides, the most likely combination of $(\log_{10}\Delta A_{max}, t_E)$ from the top panels typically yields $\mu_{rel}\sim$7.5 mas yr$^{-1}$ for all FFP lens masses. Since we used stellar data for the lens properties, replacing only their mass by planetary masses, the most likely relative proper motion is similar to the mean proper motion of disc stars in our model. FFPs may have higher velocities as a result of ejection from their host stars and of swing-by acceleration during encounters. In that case, $\mu_{rel}$ becomes higher and yields shorter $t_E$, though we did not model this.
We calculated the mean transverse line, defined as the angular length of the projected source path in the lens frame, and plotted its distribution (3rd row). The Jupiter-mass and Neptune-mass FFP lenses show that a large Einstein radius is correlated with low differential amplitude. If the Einstein radius is large compared to the projected telescope baseline, both telescopes experience similar amplification and the light curves are not distinguishable from each other. This tendency can be seen for Earth-mass FFP events to some extent, but not as strongly as for the massive FFPs, since an Earth-mass FFP lens is small enough that the two telescopes do not yield similar light curves. As with the distributions of relative proper motion, the most likely combination of $(\log_{10}\Delta A_{max}, t_E)$ from the top panels corresponds to a transverse line of $\theta_{trans}\sim8\pm2$ micro-arcsec for Jupiter-mass and Neptune-mass FFPs, and $\sim3\pm1$ micro-arcsec for Earth-mass FFPs. The reason is as above: a massive lens requires a smaller threshold impact parameter to identify the differential light curve, so that the source population converges to an effective transverse line. This condition indicates that a Jupiter-mass lens can probe fainter sources for which an Earth-mass lens event would be suppressed by the finite source effect and cut off. Thus, our criteria for detectable parallax give weight to both massive and less-massive FFPs. For the canonical Einstein radii ($\sim$0.03, $\sim$0.007, and $\sim$0.002 milli-arcsec with $D_S=8$ kpc and $D_L=4$ kpc for Jupiter-mass, Neptune-mass, and Earth-mass lenses, respectively), the transverse line of 8 micro-arcsec is $\sim0.3\theta_E$ for Jupiter-mass and $\sim\theta_E$ for Neptune-mass, whilst 3 micro-arcsec is $\sim1.5\theta_E$ for Earth-mass. Since we set the maximum impact parameter of the three telescopes as $u_{max}=3$ (or the equivalent in the finite-source case), a transverse line of $\geq\theta_E$ with any $u_0>0$ is entirely possible. 
However, the Earth-mass FFP events are most likely cut off by the finite source effect. From this point of view, the Neptune-mass lens is the most suitable size for the telescope separations among the three FFP masses in our model.
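The transverse-line-to-Einstein-radius ratios quoted above follow directly from the canonical Einstein radii; this is only an arithmetic check (we read the Earth-mass transverse line as 3 micro-arcsec, consistent with the $\sim1.5\theta_E$ stated above):

```python
# canonical Einstein radii in mas (D_S = 8 kpc, D_L = 4 kpc)
theta_E_canon = {"Jupiter": 0.030, "Neptune": 0.007, "Earth": 0.002}
# typical transverse lines in mas (8, 8, and 3 micro-arcsec)
theta_trans = {"Jupiter": 0.008, "Neptune": 0.008, "Earth": 0.003}

# ratio of transverse line to canonical Einstein radius
ratios = {name: theta_trans[name] / theta_E_canon[name] for name in theta_E_canon}
# Jupiter ~0.3 theta_E, Neptune ~ theta_E, Earth ~1.5 theta_E
```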
The distributions of the parallactic angle (bottom row) show two regimes: a small-parallactic-angle regime at short timescales and large-parallactic-angle spots scattered across longer timescales. The small-angle regime corresponds to our parallax detectability limit, i.e., that the event is simultaneously observable by both telescopes for at least 1 hour under the proper SAA, day/night timing, and weather. Especially for low-mass lens events, a large proper motion reduces the chance of ``simultaneous'' observation, so a similar line-of-sight between the two observers is preferred to satisfy our detectability limit. The large-parallactic-angle spots are likely an artefact of plotting the mean of each bin. We assume that the maximum differential amplitude corresponds to the maximum parallactic angle and take their maximum values individually. This assumption is plausible since the telescope separation is much smaller than the source and lens distances; however, in the spatial model of our simulation, it does not always hold. The model treats the directions of the lines of sight as vectors, and the parallactic angle is found within the simulation, whilst in real observations it would be derived from the differential light curve. For the Earth-mass FFP lens, the large-parallactic-angle spots seem to concentrate in the high-differential-amplitude regime. A large differential amplitude requires a relatively wide telescope separation or a large difference in the time of maximum amplification. In either case, a very small impact parameter is necessary for an Earth-mass FFP lens to offer $\log_{10}\Delta A_{max}>0.2$ whilst satisfying the detectable duration limit of $T>$1 hour. The most likely combination of $(\log_{10}\Delta A_{max}, t_E)$ from the top panels also has similar parallactic angles for all FFP lens masses: $\gamma\sim$0.36$\pm$0.02 micro-arcsec. 
As for the threshold lens radii, this similarity arises because the requirement on the threshold impact parameter, together with the Einstein radius, concentrates the parallactic angles towards a specific value across all FFP lens masses.
\begin{figure}
\hspace{-0.1in}\includegraphics[width=3.5in]{PRLX1818EWl2-timegap.png}
\caption{Probability distribution of the difference in the time of maximum amplification between light curves detected by the \emph{Euclid}{}-\emph{WFIRST}{} combination. The \emph{Euclid}{}-LSST{} combination provided a very similar distribution. The black, dark grey and light grey curves are 10th-order polynomial fits. Residuals from these fits are shown in the bottom panel. The inset shows a zoom of the bottom-left corner.}
\label{fig:time}
\end{figure}
Figure \ref{fig:time} shows the probability distribution of the difference in the time of maximum amplification. Lower FFP masses generate longer time differences between the two light-curve peaks because the projected telescope separation becomes larger with respect to the threshold lens radii. For the Neptune-mass and Earth-mass FFP lenses, the highest probability is at $\Delta t_0/t_E\sim$0.05 and $\Delta t_0/t_E\sim$0.19, respectively. For the most-likely event timescale, the highest probability corresponds to a time difference of $\sim$33 min for a Neptune-mass lens and $\sim$22 min for an Earth-mass lens. These values are much larger than the maximum time gap due to the observer separation, which amounts to $\leq$6 sec of light travel between the Earth and L2. Besides, they are larger than or close to the cadence of our assumed telescopes (15-20 min). The time gap should therefore be identifiable in real observations.
\subsection{Event rate} \label{subsec:res_er}
The event rate is summarised in Table \ref{tab:eventrate}. The yearly rate covers two 30-day seasons around the equinoxes, as determined by the maximum SAA of \emph{Euclid}{} (\S\ref{subsec:tele-kine}). For the \emph{Euclid}{}-\emph{WFIRST}{} combination, we simulated both the geosynchronous orbit and the Halo orbit at L2 for \emph{WFIRST}{}. The parallax probability is calculated from the weighted rate of the event (Eq.(\ref{eq:wrate})), where the threshold impact parameter ($u_t$) depends on the telescope configurations. Therefore, there may be a difference in parallax probability between the telescopes even though they detect the same events. It is reasonable to adopt the smaller probability of the two telescopes in a combination rather than the optimistic larger value. For the \emph{Euclid}{}-LSST{} combination, the parallax detectability in our model considers the day-night position of LSST{}. Our assumption of 7.5 hours of performance per night corresponds to a zenith angle of $\sim56^{\circ}$. Hence, the clear-night fraction (65.89\%) is applied only to the final result.
For the \emph{Euclid}{}-\emph{WFIRST}{} combination, the Halo orbit at L2 case provided a lower probability than the geosynchronous orbit case. This arises mostly from the observer separation in our simulation. On average, the geosynchronous orbit case provided an observer separation of $\sim$1.6$\times10^6$ km whilst the Halo orbit at L2 case provided $\sim$0.9$\times10^6$ km; the narrower separation reduces the light-curve gap. Moreover, the variation of the relative line-of-sight is greater for the Halo orbit at L2 case than for the geosynchronous orbit case because of its motion apart from the reference observer on the Earth. Whilst the geosynchronous orbit case keeps the parallactic angle based on the SAA, the Halo orbit at L2 is more likely to yield a small parallactic angle. We confirmed that the distributions of sample event properties (i.e. lens distance, lens size, relative proper motion and threshold impact parameter) did not show a clear difference between these cases. Therefore, we can simply say that the variation of phase positions resulted in a smaller parallax event rate for the Halo orbit at L2 case than for the geosynchronous orbit case. In both orbital cases, the parallax event rate derived from the \emph{WFIRST}{} configuration is smaller than that of the \emph{Euclid}{} configuration. This indicates that more events are detected only by \emph{WFIRST}{} in our simulation, which is understandable because the \emph{WFIRST}{} sensitivity yields a fainter zero-point magnitude and admits more faint sources, whose population is relatively large.
\begin{table}
\centering
\caption{FFP parallax event rate $\tilde{\Gamma}_{parallax}$ [events year$^{-1}$ deg$^{-2}$] targeting $(l, b) = (1^{\circ}, -1.^{\circ}75)$. The parallax probability ($P_{parallax}$) is given in parentheses. Note that $P_{parallax}$ is the summation of the weighted rate of detectable parallax events over all events (detected either in parallax or by a single telescope) in a year. The same parallax probability may correspond to different parallax event rates because of the different solo-observation FFP event rate of each telescope (Eq.(\ref{eq:prlx_er})), and the year$^{-1}$ unit of the parallax event rate means ``per yearly co-operation period'' (i.e. two 30-day operations of \emph{Euclid}{}). EW stands for the \emph{Euclid}{}-\emph{WFIRST}{} combination whilst EL stands for the \emph{Euclid}{}-LSST{} combination. Geo and L2 denote the orbital options of \emph{WFIRST}{}.}
\label{tab:eventrate}
\begin{tabular}{ll|lll}
\hline
\multicolumn{2}{c|}{Lens mass} & Jupiter & Neptune & Earth \\ \hline
\multirow{2}{*}{EW-Geo} & \emph{Euclid}{} & 80.3 (3.9\%) & 34.1 (7.2\%) & 11.4 (10.0\%) \\
& \emph{WFIRST}{} & 52.0 (2.6\%) & 19.0 (4.0\%) & 4.8 (4.3\%) \\ \hline
\multirow{2}{*}{EW-L2} & \emph{Euclid}{} & 45.0 (2.2\%) & 20.0 (4.2\%) & 6.7 (5.8\%) \\
& \emph{WFIRST}{} & 30.7 (1.5\%) & 13.3 (2.8\%) & 3.9 (3.4\%) \\ \hline
\multirow{2}{*}{EL} & \emph{Euclid}{} & 34.5 (2.6\%) & 8.9 (2.8\%) & 0.5 (0.7\%) \\
& LSST{} & 47.5 (3.9\%) & 14.1 (5.1\%) & 1.0 (1.5\%) \\ \hline
\end{tabular}
\end{table}
The \emph{Euclid}{}-LSST{} combination yields fewer FFP parallax detections than the \emph{Euclid}{}-\emph{WFIRST}{} combination with the Halo orbit. This event rate is based on our fine-weather assumption of 65.89\% and an average operation of 7.5 hours/night. Compared with the geosynchronous orbit case (whose telescope separation is similar to that of the \emph{Euclid}{}-LSST{} combination), the parallax event rate becomes smaller because the event detectability is led by the less-sensitive LSST{}. The event rate derived from the \emph{Euclid}{} configuration is smaller than that from the LSST{} configuration, unlike the \emph{Euclid}{}-\emph{WFIRST}{} combination. The reason is the same as for the \emph{Euclid}{}-\emph{WFIRST}{} combination: there are more events observed only by \emph{Euclid}{} because of the sensitivity difference in our simulation.
The combination of \emph{WFIRST}{} and LSST{} was skipped in our simulation. The zero-point magnitude difference would yield a sensitivity difference and hence a different detectability, but as long as LSST{} is the less-sensitive ground-based survey, the LSST{} sensitivity determines the detectability, as in the \emph{Euclid}{}-LSST{} combination. Besides, the geosynchronous orbit case would provide an observer separation that is too short to observe parallax effectively. For the Halo orbit case, the wider SAA of \emph{WFIRST}{} would provide a larger event rate than the \emph{Euclid}{}-LSST{} combination, and we can simply multiply the event rate by the SAA coverage ratio to obtain the \emph{WFIRST}{}-LSST{} combination event rate, since our random selection of Earth's positioning angle (hence, L2 position) is equally distributed within the range.
\citet{Zhu2015} and \citet{Zhu2016} considered parallax observations by ground-based and space-based surveys. They tested the combination of OGLE-{\it Spitzer} and of KMTNet-\emph{WFIRST}{}, where the Halo orbit case of \emph{WFIRST}{} was assumed. In both combinations, they highlighted the importance of telescope separation for observing FFPs, as we have discussed in relation to our results. Moreover, LSST{} is expected to have higher sensitivity than other currently operating ground-based telescopes. Although \citet{Zhu2015} suggested reinforcing sensitivity by a ``combination'' of ground-based and space-based telescopes, the individual sensitivity of each telescope is essential to observe FFP events effectively.
\begin{table}
\centering
\caption{The accuracy of mass estimation for the 30,000 simulated events towards $(l, b) = (1^{\circ}, -1.^{\circ}75)$. The percentile likelihood indicates the ratio of events for which the uncertainty ($\epsilon$) in the estimated mass ($M_{FFP}$) successfully covers the given mass ($M_{given}$). EW stands for the \emph{Euclid}{}-\emph{WFIRST}{} combination whilst EL stands for the \emph{Euclid}{}-LSST{} combination. Geo and L2 are the orbital options of \emph{WFIRST}{}.}
\label{tab:uncM}
\begin{tabular}{ll|lll}
\hline
\multicolumn{2}{c|}{$M_{FFP}\pm\epsilon \supset M_{given}$} & Jupiter & Neptune & Earth \\ \hline
\multirow{2}{*}{EW} & Geo & 23.7\% & 27.5\% & 34.6\% \\
& L2 & 20.1\% & 24.7\% & 33.1\% \\
EL & & 18.4\% & 21.0\% & 23.6\% \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\hspace{-0.1in}\includegraphics[width=3.5in]{PRLX1818-mass.png}
\caption{Likelihood of obtaining a lens mass to within a fraction $|\Delta|$ of the true lens mass among our simulated events. The discrepancy was calculated as $|\Delta|$=$|M_{FFP}-M_{input}|$/$M_{input}$. EW stands for the \emph{Euclid}{}-\emph{WFIRST}{} combination whilst EL stands for the \emph{Euclid}{}-LSST{} combination. Geo and L2 are the orbital options of \emph{WFIRST}{}. The Jupiter-mass, Neptune-mass, and Earth-mass FFP distributions are labelled by their initials.}
\label{fig:disMbar}
\end{figure}
\subsection{Accuracy of mass estimation from parallax light curves} \label{subsec:res_mass}
The mass estimation of the lens will theoretically be more precise using parallax data than a single observation. For every detectable event in our simulation, the residual light curve of the parallax observation was generated, and $u_1(t)$ and $u_2(t)$ were derived from it. Instead of taking $u$ at every $t$, we took $u$ at the light curve peak of each observer (see Figure \ref{fig:prlx-geo}, where these are expressed as $\Delta A(t_{0,1})$ and $\Delta A(t_{0,2})$); hence we had $u_1(t_{0,1})$, $u_2(t_{0,1})$, $u_1(t_{0,2})$, and $u_2(t_{0,2})$. The relative proper motion was assumed to be $\mu_{rel}\sim7.5\pm1.5$ mas yr$^{-1}$, which we derived from the most likely values of $(\log_{10}(\Delta A_{max}), t_E)$ (Figure \ref{fig:ta}). Then $\theta_E$ was calculated. The telescope separation for each combination was averaged as $D_T\sim1.6\times10^6$ km for the \emph{Euclid}{}-LSST{} combination and $\sim0.9\times10^6$ km for the \emph{Euclid}{}-\emph{WFIRST}{} combination, and the fluctuation due to orbital motion is included as its uncertainty. Using Eq.(\ref{eq:prlx_angle}) and these parameters from the light curve, $\gamma$ was calculated. Thus, we had $\gamma(t_{0,1})$ and $\gamma(t_{0,2})$, which we inserted into Eq.(\ref{eq:prlx_mass}) to calculate the lens mass, and we averaged the two estimates.
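The chain from light-curve observables to a lens mass can be sketched with the standard space-parallax relations (Refsdal 1966; Gould 1994). The function below is an illustrative assumption, not the exact pipeline of our code: it replaces Eqs.(\ref{eq:prlx_angle}) and (\ref{eq:prlx_mass}) by the textbook relations $\theta_E=\mu_{rel}t_E$, $\pi_E=(\mathrm{AU}/D_T)\sqrt{(\Delta t_0/t_E)^2+\Delta u_0^2}$ and $M=\theta_E/(\kappa\pi_E)$:

```python
import math

KAPPA = 8.144          # mas / M_sun, kappa = 4G / (c^2 AU)
AU_KM = 1.495978707e8  # 1 AU in km

def lens_mass_msun(t_E_days, mu_rel_mas_yr, D_T_km, dt0_days, du0):
    """Sketch of a two-observer mass estimate.

    t_E_days      : Einstein timescale read off the light curve
    mu_rel_mas_yr : assumed relative proper motion (~7.5 mas/yr here)
    D_T_km        : projected telescope separation
    dt0_days      : peak-time difference between the two observers
    du0           : impact-parameter difference between the two observers
    """
    theta_E = mu_rel_mas_yr * t_E_days / 365.25  # Einstein radius in mas
    # microlens parallax from the two-observer light-curve offsets
    pi_E = (AU_KM / D_T_km) * math.hypot(dt0_days / t_E_days, du0)
    return theta_E / (KAPPA * pi_E)              # lens mass in M_sun
```

For instance, with a 1 AU baseline, $t_E=1$ d, $\mu_{rel}=7.5$ mas yr$^{-1}$ and $\Delta u_0=0.5$, the sketch returns a $\sim5\times10^{-3}\,M_\odot$ lens.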
To assess the accuracy of our lens mass estimation, we calculate the uncertainty ($\epsilon$) from the parameter errors of $u$, $\mu_{rel}$, and $D_T$, and the discrepancy of the estimated mass from the input mass ($|\Delta|$=$|M_{FFP}-M_{input}|$/$M_{input}$). If $M_{FFP}\pm\epsilon$ does not cover the input mass, the uncertainty is clearly underestimated. However, even when $M_{FFP}\pm\epsilon$ covers the input mass, a very large $\epsilon$ is not acceptable from the viewpoint of estimation accuracy. Table \ref{tab:uncM} summarises the percentage likelihood among our 30,000 events that $M_{FFP}\pm\epsilon$ covers the input mass. It indicates that our parameter errors are underestimated in the majority of cases. This is because we assumed a fixed value of the relative proper motion (7.5 mas yr$^{-1}$) with an error value derived from the Besan\c{c}on{} data; considering the distribution of detectable parallax events, the error actually varies with the differential maximum amplitude. Figure \ref{fig:disMbar} visualises the percentage likelihood of the discrepancy among the 30,000 events. For example, $|\Delta|$<100\% means the estimated mass was up to twice as large as the input mass. Since the $|\Delta|$<50\% regime contains less than 50\% of events for all FFP masses and telescope combinations, we conclude that our calculation likely overestimates the FFP mass. One cause is that the parallactic angle was underestimated in most cases because we derived it from the differential light curves, not from the spatial components in our model. The other cause is that we theoretically estimated the impact parameter used to derive the parallactic angle from the light curve using Eq.(\ref{eq:au}), assuming a point source. Thus, the parallactic angle variation was wider than what we generalised through our mass estimation process, and our calculation method was biased towards the smallest parallactic angle.
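The two accuracy measures used above can be written compactly; the helper names below are ours, introduced only for illustration:

```python
def discrepancy(m_ffp, m_input):
    """|Delta| = |M_FFP - M_input| / M_input, the fractional discrepancy."""
    return abs(m_ffp - m_input) / m_input

def covers_input(m_ffp, eps, m_input):
    """True when M_FFP +/- eps brackets the input mass (the coverage criterion)."""
    return m_ffp - eps <= m_input <= m_ffp + eps
```

An estimate twice the input mass sits exactly at $|\Delta|=100\%$, which is why the $|\Delta|<100\%$ bin corresponds to "up to twice as large".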
\begin{table}
\centering
\caption{The mean estimated mass and the standard deviation for the representative data of $|\Delta|$<10\%, <50\%, and <100\%. These data can be regarded as following a Gaussian distribution for each discrepancy range. The values are in units of each input mass.}
\label{tab:gaussM}
\hspace*{-0.3in}
\begin{tabular}{|ll|cc|cc|cc|}
\hline
& & \multicolumn{2}{c}{Jupiter} & \multicolumn{2}{c}{Neptune} & \multicolumn{2}{c}{Earth} \\
& & Mean & SD & Mean & SD & Mean & SD \\ \hline
\multirow{3}{*}{EW-Geo} & $|\Delta|$<10\% & 1.00 & 0.06 & 1.00 & 0.06 & 1.00 & 0.06 \\
& $|\Delta|$<50\% & 0.95 & 0.28 & 0.96 & 0.28 & 0.96 & 0.28 \\
& $|\Delta|$<100\% & 1.04 & 0.46 & 1.04 & 0.46 & 1.02 & 0.46 \\ \hline
\multirow{3}{*}{EW-L2} & $|\Delta|$<10\% & 1.00 & 0.06 & 1.00 & 0.06 & 1.00 & 0.06 \\
& $|\Delta|$<50\% & 1.03 & 0.27 & 1.03 & 0.27 & 1.01 & 0.27 \\
& $|\Delta|$<100\% & 1.21 & 0.43 & 1.21 & 0.43 & 1.18 & 0.43 \\ \hline
\multirow{3}{*}{EL} & $|\Delta|$<10\% & 1.00 & 0.06 & 1.00 & 0.06 & 1.00 & 0.06 \\
& $|\Delta|$<50\% & 0.95 & 0.28 & 0.93 & 0.28 & 1.00 & 0.28 \\
& $|\Delta|$<100\% & 1.05 & 0.44 & 1.01 & 0.45 & 1.15 & 0.46 \\ \hline
\end{tabular}
\end{table}
Table \ref{tab:gaussM} shows the Gaussian means and standard deviations derived from the data in each discrepancy range: <10\%, <50\%, and <100\%. As mentioned in the last paragraph, the mean estimated masses tend to be equal to or larger than the input mass because of the underestimation of the parallactic angle. However, this does not hold for the Neptune-mass case in the \emph{Euclid}{}-\emph{WFIRST}{} combination with the geosynchronous orbit and in the \emph{Euclid}{}-LSST{} combination. One possible reason is the interaction between the Einstein radius and the Einstein parallax (i.e. the telescope separation). In \S\ref{subsec:res_chara}, we confirmed that the Neptune-mass lens is the most suitable for our telescope combinations because of the effectiveness of the transverse line for identifying the differential light curve. In other words, the upper limit of the minimum impact parameter satisfying our parallax detectability limit was maximised for the Neptune-mass lens. Thus, Table \ref{tab:gaussM} indicates that the telescope separation of $1.6\times10^6$ km allowed relatively larger $u_0$ and underestimated the Einstein radius, which we derived from the transverse line and the detected duration read from the light curves.
The discrepancy of the estimated FFP lens mass correlates with the accuracy of the Einstein timescale ($t_E$) and Einstein parallax ($\pi_E$) estimated from the light curves. Note that the Einstein parallax was calculated from the reciprocal of the observer separation corresponding to the full lens size. We confirmed that the accuracy of the Einstein timescale estimation was quite acceptable; 74.5\% of events had Einstein timescales reproduced to within 10\% of the input timescale, and almost all of the rest stayed within a 50\% discrepancy. In contrast, the Einstein parallax estimation was as inaccurate as the FFP mass estimation. Hence, we need to improve the Einstein parallax estimation from light curves. In our simulation, the 3D positioning model was used, and the observer separation was measured in vectorial 3D space. However, photometric light curves were drawn with the scalar values of the impact parameters, since we never know the vectorial impact parameters on the sky during real observations: the rotation angle of the transect that the source star makes behind the FFP lens cannot be measured.
We computed the distribution of the angular difference between the two vectorial impact parameters observed by the two telescopes. Here we define $\alpha_u$ as the angle formed between the two vectorial impact parameters $\vec{u_1}$ and $\vec{u_2}$.
The angle $\alpha_u$ is not evenly distributed. The most likely case was the opposite vectorial direction (i.e. $\alpha_u\sim\pi$), but this case accounted for just 11-29\% of detectable parallax events, depending on the observer combination in our simulation, which allowed $\alpha_u$ to range from $-\pi/2$ to $\pi/2$. Compared to the event rate values in Table \ref{tab:eventrate}, we also found that the percentile likelihood of $\alpha_u\sim\pi$ decreased for the higher event rates. One reason for this tendency was that the relative positions of the two telescopes varied more for the \emph{Euclid}{}-\emph{WFIRST}{} combination than for the \emph{Euclid}{}-LSST{} combination. Another reason was that the \emph{Euclid}{}-LSST{} combination required a larger amplitude difference between the two light curves due to the lower sensitivity of the ground-based partner (LSST{}).
Thus, the Einstein parallax estimation using scalar impact parameters was not sufficiently accurate in the most common scenario, and this issue affected our mass estimation accuracy as discussed above. We also computed the distribution of the parallax time gap (i.e. the difference of the light curve peak time between the two observers divided by the Einstein timescale, $\Delta t_0/t_E$) for every $\alpha_u$. The range of possible parallax time gaps showed a convex curve for $-\pi/2\leq\alpha_u<\pi/2$, but the distribution peak depends on the telescope combination and FFP mass. Thus, the parallax time gap was not sufficient evidence to identify the angle between the two impact parameters. Besides, the error in the time of peak magnification ($\epsilon(t_0)$) should be smaller than the parallax time gap; otherwise the fractional error in the Einstein parallax exceeds unity.
\section{Discussion} \label{sec:disc}
For the \emph{Euclid}{}-\emph{WFIRST}{} combination, the phase difference influences the differential light curve more for low-mass FFP lenses. In our model, we assumed a phase difference of 90 degrees between the \emph{Euclid}{} and \emph{WFIRST}{} positions, which corresponds to a $\sim0.9\times10^6$ km separation. If it were the maximum phase difference of 180 degrees, the separation would be $\sim1.2\times10^6$ km. An additional run of the L2 orbit case with a 180-degree phase difference showed an increase of the parallax event rate over the 90-degree case, but not as large as the event rate of the geosynchronous orbit case (where the telescope separation is $\sim1.6\times10^6$ km). Hence, the phase difference between \emph{Euclid}{} and \emph{WFIRST}{} is one of the important factors in effectively yielding parallax detections.
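The quoted separations are roughly consistent with simple chord geometry if both spacecraft are idealised as moving on halo paths of comparable radius about L2 (an assumption for illustration; the real orbits are neither circular nor coplanar):

```python
import math

# Infer an effective halo radius from the 90-degree case (~0.9e6 km),
# using chord length = 2 R sin(phase / 2)
sep_90 = 0.9e6                                     # km
R = sep_90 / (2.0 * math.sin(math.radians(45.0)))  # effective halo radius

# Maximum separation at a 180-degree phase difference is the diameter 2R
sep_180 = 2.0 * R * math.sin(math.radians(90.0))
# sep_180 ~ 1.27e6 km, close to the quoted ~1.2e6 km
```

The small difference from the quoted $\sim1.2\times10^6$ km reflects the idealisation of the orbits.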
We assumed an FFP population of 1 per star, and we can numerically convert our results to other population cases. For instance, there are several estimates of the Jupiter-mass FFP population per MS star: $\sim$1.8 \citep{Sumi2011}, 1.4 \citep{Clanton2016}, and <0.25 \citep{Mroz2017}, although the detailed conditions and assumptions differ. Through the microlensing event simulation ($\S$\ref{sec:micro}), the optical depth is proportional to the lens population. This means that we can define the conversion formula of the FFP event rate for a different FFP population as
\begin{equation} \label{eq:convertion}
\tilde{\Gamma}_{FFP}^{new} = \tilde{\Gamma}_{FFP} \times f_{conv} \hspace{0.2in} {\rm where} \hspace{0.2in} f_{conv} \propto P_{FFP},
\end{equation}
where $\tilde{\Gamma}_{FFP}$ is the actual FFP microlensing event rate per year, whose values used in the simulation are given in Table \ref{tab:ffp-ev}, and $P_{FFP}$ is the population ratio of FFPs per star. According to Figure \ref{fig:hs-hr}, the MS stars are at $H$>16.5, which roughly corresponds to the boundary of catalogues B and C. In our simulation, 99.6\% of the source and lens data are drawn from catalogues C and D due to the population. Therefore, we can approximate the conversion coefficient as $f_{conv}\sim$1.8, $\sim$1.4 and $\sim$0.25 for \citeauthor{Sumi2011}'s, \citeauthor{Clanton2016}'s and \citeauthor{Mroz2017}'s FFP populations, respectively. Since 99.6\% of our source stars are MS stars, we can simply multiply the rates in Table \ref{tab:eventrate} by these factors (the associated probabilities do not change). As a result, \citeauthor{Sumi2011}'s population predicts that the \emph{Euclid}{}-\emph{WFIRST}{} combination observes 55 Jupiter-mass FFPs in parallax over the two 30-day periods per year. \citeauthor{Clanton2016}'s population gives 43 FFPs, and \citeauthor{Mroz2017}'s gives <8 FFPs.
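Eq.(\ref{eq:convertion}) amounts to a straight rescaling of the rates in Table \ref{tab:eventrate}; a minimal sketch for the Jupiter-mass row of the \emph{Euclid}{}-\emph{WFIRST}{} L2 case:

```python
# Jupiter-mass parallax event rate for the Euclid-WFIRST (L2) combination,
# WFIRST configuration, assuming 1 FFP per star (Table value)
rate_per_year_deg2 = 30.7

# Conversion factors f_conv ~ P_FFP for the three published population
# estimates (the Mroz et al. value is an upper limit)
f_conv = {"Sumi2011": 1.8, "Clanton2016": 1.4, "Mroz2017": 0.25}

scaled = {name: rate_per_year_deg2 * f for name, f in f_conv.items()}
# -> ~55, ~43 and <8 events per yearly co-operation period per deg^2
```

Because the probabilities in Table \ref{tab:eventrate} are ratios of weighted rates, only the rates (not the percentages) change under this rescaling.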
We numerically modelled those noise sources that could easily be quantified. In real observations, there are additional effects from planetesimals and asteroids in the asteroid belt, Kuiper belt, and Oort cloud, and flux interference from stellar flares, transits in the source system, and binary-source events. The effect of asteroids is not negligible if their orbits cross the line-of-sight of the event, and similar bodies around the source star contribute flux noise \citep{Trilling2005, Usui2012, Matthews2014, Wong2017, Whidden2019}. Stellar flares produce a sudden magnitude increase, and follow-up observations with high cadence will be required to distinguish a stellar flare from a short microlensing event in the light curve \citep{Balona2016}. A transit in the planetary system requires removing its phase from the light curve to identify the one-time microlensing event \citep{Hidalgo2018}. A binary source makes the light curve more complex than the single-source events we assumed \citep{Kong2011}. Thus, identifying short events from the light curve variation becomes difficult, and the event rate will be lower than derived here.
\section{Conclusion} \label{sec:conc}
We have simulated parallax observations of FFP events in a 3D model, targeted towards $(l, b)=(1^{\circ},-1.^{\circ}75)$. \emph{Euclid}{} was taken as the main telescope and two different partners were applied: \emph{WFIRST}{} and LSST{}. We estimate that for the \emph{Euclid}{}-\emph{WFIRST}{} combination, 3.9 Earth-mass and 30.7 Jupiter-mass FFP microlensing events will be detectable with sufficient sensitivity to determine their parallax angle during the two 30-day periods of \emph{Euclid}{} operation per yearly co-operation period per square degree. From the latest operation plans for \emph{Euclid}{} and \emph{WFIRST}{}, we may expect the chance of simultaneous parallax observation to be $\sim$2.5 years (or 5$\times$30-day operations) with a 0.28 deg$^2$ field-of-view (FoV). This results in 2.7 Earth-mass FFPs and 21.5 Jupiter-mass FFPs observed simultaneously with measurable parallax. On the other hand, the \emph{Euclid}{}-LSST{} combination yielded a lower event rate due to our assumptions about weather and nightly operation: about 0.5 Earth-mass and 34.5 Jupiter-mass FFPs can be found per year per square degree. Unlike \emph{WFIRST}{}, LSST{} is capable of covering the expected \emph{Euclid}{} operation period of $\sim$6 years with a 0.54 deg$^2$ FoV. As a result, 1.8 Earth-mass FFPs and 112 Jupiter-mass FFPs will be observed in simultaneous parallax. As mentioned in the introduction of the telescopes applied to the simulation ($\S$\ref{subsec:tele-kine}), an exoplanet research campaign using microlensing observations is planned for the \emph{Euclid}{} and \emph{WFIRST}{} missions but not for the LSST{} survey. Our result at least shows the potential of ground-based microlensing observations in parallel with upcoming space-based surveys.
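The mission-long totals quoted above follow from multiplying the per-year, per-square-degree rates by the co-operation span and the FoV; small differences against the quoted numbers come from rounding of the per-year rates:

```python
# Euclid-WFIRST: ~2.5 yearly co-operation periods, 0.28 deg^2 FoV
ew_earth   = 3.9  * 2.5 * 0.28   # ~2.7 Earth-mass FFPs
ew_jupiter = 30.7 * 2.5 * 0.28   # ~21.5 Jupiter-mass FFPs

# Euclid-LSST: ~6 years of overlap, 0.54 deg^2 FoV
el_jupiter = 34.5 * 6.0 * 0.54   # ~112 Jupiter-mass FFPs
```

The Earth-mass Euclid-LSST total computed this way (0.5 x 6 x 0.54 = 1.62) is slightly below the quoted 1.8, presumably because the quoted per-year rate of 0.5 is itself rounded.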
The mass estimation from parallax light curves still has accuracy problems. In our calculation, the estimated mass is accurate to within $\Delta$<0.5$M_{FFP}$ for fewer than 20\% of events, for both the \emph{Euclid}{}-\emph{WFIRST}{} combination in the \emph{WFIRST}{} Halo orbit case and the \emph{Euclid}{}-LSST{} combination. The vectorial impact parameters are the main source of uncertainty in the mass estimation in our simulation, and the additional approaches we attempted could not overcome this. Improved methods of estimating the event configuration angle will be required for better mass estimation.
In our simulation, we considered two orbital options for \emph{WFIRST}{}: the geosynchronous orbit and the Halo orbit at L2. Our simulation made clear that the telescope separation in both cases is useful for microlensing parallax observations of Earth-mass to Jupiter-mass FFPs. The \emph{Euclid}{}-\emph{WFIRST}{} combination with the geosynchronous orbit resulted in a slightly larger event rate than with the Halo orbit at L2 because of the difference in the observer separation. However, further research on noise detection and reduction is necessary, especially for low-mass FFPs. Besides, we did not consider the variation in the \emph{Euclid}{} and \emph{WFIRST}{} trajectories. The probability of parallax detection suggested in this paper will serve as a criterion for further research and future observations. Simultaneous parallax observations are expected to advance the study of exoplanets, including FFPs, in the coming decades.
\section{Acknowledgement} \label{sec:ackn}
We would like to thank Dr. Kerins for his expert advice at the start of this research, and Dr. Robin and her team for the Besan\c{c}on{} Galactic model, an important source for the simulation.
\newpage
\bibliographystyle{mnras}
\section{Introduction}
Hundreds of exoplanets have been discovered around a variety of stars, from small M dwarfs out to large G giant stars, to tight orbits around pulsars. The conclusion is clear that planet formation is a robust process in our galaxy. A corollary to this conclusion is the idea that observable evidence for planetary systems must therefore exist from early star formation to various stellar end states. The challenge is to identify which phenomena are uniquely due to planets, and which are considered false positives. When planets are discovered at each stage of stellar evolution, it will be additionally challenging to interpret the impact stellar evolution had on the surfaces, composition, and orbits of such planets. Studying these physical processes tells us what the eventual fate of our own Solar System will be.
A very compelling case can be made for the presence of planetary systems around metal enriched white dwarfs and white dwarfs that possess strong infrared excesses. Many studies with {\em Spitzer} and ground based optical spectroscopy have provided nearly twenty dusty white dwarfs with some sort of abundance estimate for the dusty material, either through detection of the 10\micron\ silicate feature, careful abundance analysis of narrow metal lines in the photosphere of the target white dwarf, or detection of gaseous emission lines \citep{zuckerman03,koester05,reach05,kilic06,gaensicke06,vonhippel07,farihi10,debes11}. More recently, the Wide Field Infrared Survey Explorer (WISE)\citep{debes11b}, has shown that $\sim$1\% of WDs show dusty disks, confirming earlier work with Spitzer. In contrast, nearly 20-25\% of all WDs show measurable metal accretion \citep[e.g.][]{zuckerman03}.
There have been two main conclusions from this excellent body of work: 1) that the material drizzling onto the surfaces of these white dwarfs is primarily similar to the inner terrestrial planets in our solar system, and 2) that planetesimals which are tidally disrupted best fit the observed location and composition of the dust disks \citep{zuckerman07,klein10,jura03}.
\citet{jura03} and \citet{jura07} have elegantly shown that most white dwarfs with infrared excesses require inner disk radii of just a dozen or so {\em white dwarf} radii, or about 0.1 R$_\odot$. The extreme proximity to the white dwarf produces quite unusual conditions for a dust disk, and argues strongly for a tidally disrupted asteroid that has arrived from beyond a few AU,
since anything at smaller distances during post main sequence evolution would be evaporated within the envelope of the evolving giant star. Based on these arguments, it has generally been assumed that a planet several AU away from the WD is responsible for perturbing planetesimals into highly elliptical, tidally disrupting orbits.
A primary uncertainty in linking planetary systems to dusty and polluted white dwarfs is the exact dynamical mechanism that delivers planetesimals so close to the white dwarf. \citet{alcock86}, in a prescient paper, tried to explain the presence of the metal polluted hydrogen WD (type DAZ) G74-7 with infrequent cometary impacts onto a white dwarf surface from Oort cloud analogues. Further work was done looking at the dynamical stability of planetary systems during post-main sequence evolution as a possible driver for material into the inner system \citep{debes02}. This scenario could reproduce the rough number of polluted white dwarfs assuming that as many as 50\% of planetary systems are unstable. It also predicted that some planetary systems may be stable for up to 1~Gyr after a star turns into a WD, in accordance with the range of pollution observed for WDs. Because dusty white dwarfs are observed at relatively late cooling ages when the WD has very low luminosity, processes that are initially efficient at creating dust during post main sequence evolution become negligible once a white dwarf is formed and is cooling \citep{dong10,bonsor10}. Recently, \citet{bonsor11} investigated the frequency with which planetesimals in belts exterior to a giant planet might be perturbed into the inner system. They found in their {\em N}-body simulations that a sufficient number of planetesimals can be perturbed to match the observed evolution of the dust accretion rate onto white dwarfs as a function of cooling age. Two main assumptions however were used in that work: that the planetesimals must survive post main sequence evolution, and that roughly 10\% of perturbed planetesimals would be further deflected (presumably by a second interior planet) into tidally disrupting orbits. Most planetesimals beyond the ice line (Kuiper belt type analogues) should be primarily icy. They may significantly sublimate during post main sequence evolution before they can be perturbed. 
Similarly, many interior planets that could further perturb outer planetesimals into white dwarf-crossing orbits may be engulfed during post main sequence evolution. Both scenarios could be helped by eccentricity pumping of icy bodies further away from the inner system that don't see the mass loss of the central star as adiabatic \citep{veras11}.
While all dusty white dwarfs show evidence for metal accretion onto their surfaces, not all metal enriched white dwarfs show evidence of a dusty disk. A correlation between accretion rate and the presence of dusty disks has been suggested \citep{vonhippel07,farihi09,zuckerman10}, with an accretion rate of $>$10$^{8.5}$ g/s consistent with the range above which dusty white dwarfs occur. The delineation could be due to the average size of the perturbed planetesimal \citep{jura08}. In this scenario, dusty disks only occur for larger than average disruptions, while smaller planetesimals that are disrupted are quickly turned to gas through mutual collisions and sputtering. Evidence for the presence of multiple smaller disruptions causing gaseous disks has been shown with the discovery of circumstellar Ca gas absorption around WD 1124-293 \citep{debesub11}.
In this paper we present a new scenario for perturbing planetesimals into highly eccentric orbits, thus strengthening the link between dusty white dwarfs and planetary systems. We suggest in \S \ref{sec:mmp} that perturbations in eccentricity from interior mean motion resonances (IMMRs) with a giant planet roughly the mass of Jupiter are sufficient to create a steady stream of white dwarf crossing planetesimals. In particular, we hypothesize that the 2:1 resonance is most efficient at driving white dwarf crossers, and that these asteroids are quickly tidally disrupted. We use numerical simulations to follow the dynamics of the Solar System's asteroid belt under the influence of post-main sequence evolution and perturbation by Jupiter in \S \ref{sec:numerical}. Our model differs from the results of \citet{bonsor11} primarily in that we choose planetesimals that should be primarily rocky and survive post main sequence evolution \citep[see][]{jura08}, and we follow the dynamics of the planetesimals from perturbation to entering the tidal disruption radius of the white dwarf. We also simulate the close approach of an asteroid to a white dwarf in \S \ref{sec:numerical} to determine whether asteroids can be tidally disrupted quickly. In \S \ref{sec:results} we confirm that a significant fraction of asteroids are perturbed into close encounters with a white dwarf, and that highly eccentric encounters between a small rubble pile asteroid and a white dwarf are sufficient to tidally disrupt the asteroid. We compare the results of our models to the currently known population of dusty and polluted WDs in \S \ref{sec:comp} including estimating the asteroid belt masses necessary to produce the observed metal pollution in white dwarfs, and discuss our conclusions in \S \ref{sec:disc}.
\section{The Interior Mean Motion Perturbation Model}
\label{sec:mmp}
The Kirkwood gaps of the Solar System's asteroid belt are regions where asteroids are quickly removed due to gravitational interactions with planets. Within IMMRs, an asteroid's eccentricity random walks until it undergoes a close encounter with a planet (or planets) and is ejected, collides with a planet, or collides with the central star \citep{morbi96,gladman97}. This motion is limited to those bodies in an IMMR, whose width, $\delta a_{\rm max}$, can be roughly approximated by the maximum libration width for interior first order resonances in the restricted three body case, expanding the equations of motion for low eccentricities \citep{murraydermot}:
\begin{equation}
\label{eq:lib}
\delta a_{\rm max}=\left[\pm\left(\frac{16}{3}\frac{|C_r|}{n}e\right)^{1/2}\left(1+\frac{1}{27j_2^2e^3}\frac{|C_r|}{n}\right)^{1/2}-\frac{2}{9j_2e}\frac{|C_r|}{n}\right]a,
\end{equation}
where $n$ and $e$ are the mean motion and eccentricity of the planetesimal, and $C_r$ is a constant from the resonant part of the disturbing function. The constant $j_2$ comes from the resonant argument and determines which resonance is being used for a calculation, and $\alpha$ is the ratio of the asteroid's semi-major axis to Jupiter's. The quantity $\frac{C_r}{n}$ is given by:
\begin{equation}
\frac{C_r}{n}=\mu\alpha |f_d(\alpha)|.
\end{equation}
For the 2:1 resonance, $j_2$=-1 and $\alpha f_d(\alpha)$=-0.749964 \citep{murraydermot}. As can be seen in Equation \ref{eq:lib}, the maximum libration
width implicitly depends on the mass ratio $\mu$. As a star evolves and loses mass, the mass ratio $\mu$ increases. As a result, the width of the resonance $\delta a_{\rm max}$ increases and bodies previously exterior to the IMMR become trapped. Just as in Hill stability and the stability
of multi-planet systems \citep{gladman93,chambers96,debes02}, mass loss from the central star increases the perturbative influence of the planet. Figure \ref{fig:f2} demonstrates the growth of $\delta a_{max}$ with a corresponding change in $\mu$ for the 2:1 resonance.
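The widening of the 2:1 resonance under stellar mass loss can be evaluated directly from Equation \ref{eq:lib}; the sketch below uses illustrative values (an asteroid eccentricity of 0.1, a remnant mass of roughly half a solar mass) rather than the parameters of any specific simulation:

```python
import math

def halfwidth_over_a(mu, e, j2=-1, abs_alpha_fd=0.749964):
    """Fractional 2:1 libration half-width |da_max|/a from Eq. (eq:lib),
    with |C_r|/n = mu * |alpha f_d(alpha)|."""
    cr_n = mu * abs_alpha_fd
    sym = math.sqrt(16.0 / 3.0 * cr_n * e) * \
          math.sqrt(1.0 + cr_n / (27.0 * j2**2 * e**3))
    # j2 = -1 for the 2:1, so the shift term below is positive
    return sym - 2.0 / (9.0 * j2 * e) * cr_n

mu_ms = 9.55e-4            # Jupiter / 1 M_sun on the main sequence
mu_wd = mu_ms / 0.53       # mass ratio after the star shrinks to ~0.53 M_sun
w_ms = halfwidth_over_a(mu_ms, e=0.1)
w_wd = halfwidth_over_a(mu_wd, e=0.1)
# w_wd > w_ms: the resonance widens as the mass ratio grows,
# sweeping previously exterior bodies into the IMMR
```

This reproduces the qualitative behaviour shown in Figure \ref{fig:f2}: a roughly twofold increase in $\mu$ visibly broadens the libration region.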
The 2:1 resonance in the Solar System appears to be the most efficient mechanism for scattering asteroids into Sun crossing orbits over Myr timescales.
{\em N}-body simulations of asteroids injected into different resonances with Jupiter show a typical lifetime of a few million years (such as with the 3:1 resonance) to a few tens of millions of years for the 2:1 resonance \citep{gladman97}. The 2:1 resonance is particularly attractive as a source for tidally disrupting asteroids around WDs because the timescale for asteroid removal is long (50\% removal at $>$100 Myr), and a significant percentage ($\sim$6.5\%) of objects in the 2:1 resonance are perturbed into Sun-crossing orbits.
We now build on the tidal disruption scenario as envisioned by \citet{jura03}, which we call the Interior Mean Motion Perturbation (IMMP) model \citep{hoardbook}. Planetary systems with one or more dominant giant planets possess planetesimal belts that are the remains of the core accretion formation of terrestrial planets and giant planet embryos. Over time these belts dynamically evolve through mean motion and secular resonances and lose mass to collisions, ejections, and star-grazing orbits. Many of the resonances are cleared out to $\sim \delta a_{\rm max}$. During post main sequence evolution, smaller planetesimals migrate and evaporate in response to gas drag as the central star evolves off the main sequence, but a significant portion of large (r$>$1-3~km) planetesimals survive primarily unscathed \citep{jura08}. Meanwhile, due to mass loss from the central star, the reservoir of planetesimals that are now vulnerable to eccentricity excursions from mean motion resonance perturbations has grown, either through trapping of medium sized planetesimals in resonance \citep{dong10,bonsor10} or through the growth of $\delta a_{\rm max}$.
Planetesimals that are on initially circular orbits interact gravitationally with a dominant giant planet and become perturbed into a highly elliptical orbit. The perturbed planetesimals eventually obtain such high eccentricity that they are tidally disrupted by the central WD--the streams of debris from the initial disruption are primarily on orbits similar to that of their host planetesimal. Over subsequent passes, more disruptions occur, spreading the material so that collisions increase. Over many orbits these collisions damp down mutual eccentricities and form a coherent disk structure that evolves through a combination of further collisions and Poynting-Robertson drag until the dust becomes optically thick. At this point the disk evolves viscously until it becomes optically thin, or all dust is sputtered into gas from grain-grain collisions and the debris from other asteroids \citep{jura08}.
\section{Numerical Methods}
\label{sec:numerical}
In order to test the above IMMP model, we will demonstrate that planetesimals 1) enter highly eccentric orbits that bring them within the tidal disruption radius, and 2) disrupt within the expected disruption radius. Ideally our simulations would also follow the disruption at later times to see if a dust disk forms that matches observations of known dusty white dwarfs, but this is computationally intensive with the techniques we use in this study and is saved for future work.
To demonstrate that the IMMP model satisfies the first criterion, we dynamically simulated the response of the Solar System's asteroid belt to post main sequence evolution. We chose the Solar System primarily because it should be a decent proxy for a planetary system where only one planet dominates the inner planetesimal region, and should not be subject to any biases associated with an incorrect treatment of Gyr of dynamical evolution between a planet and its planetesimal belt.
Our simulations follow the largest asteroids with well known radii--the lower limit to our population is larger than the minimum post main sequence survival radius ($\approx$10~km) for distances of a few AU. We do not follow smaller asteroids that may be trapped in IMMRs during the late stages of post main sequence evolution. If we assume that WDs have similar mass asteroid belts, our simulations should represent a lower limit to the amount of material tidally disrupted and accreted onto WDs.
We also simulated the tidal disruption of a small asteroid by a white dwarf using the {\tt pkdgrav} code. We modified the code to treat the approach of a small asteroid to within $<$1\rm R$_{\odot}$\ at various close approaches to determine whether criterion 2) was satisfied, drawing on the orbital elements inferred from our {\em N}-body simulations.
\subsection{The Solar System as a Laboratory for Dusty White Dwarfs}
In order to follow the behavior of the 2:1 resonance, we performed numerical simulations of large Solar System asteroids in which mass loss from the central star is included. We ran ten MERCURY simulations using a Bulirsch-Stoer integrator with an adaptive timestep \citep{chambers99} of 710 Solar System asteroids plus Jupiter to determine whether any follow the IMMP model.
The asteroids were chosen to have radii R$>$50~km at all orbital semi-major axes and R$>10$~km with perihelia $>$3~AU based on the latest known asteroid data compiled by E. Bowell\footnote{ftp://ftp.lowell.edu/pub/elgb/astorb.html} and the assumption that the Sun will sublimate asteroids smaller and closer than this during post main sequence evolution \citep{schroeder08,jura08}. The mass of the Sun was slowly removed over 1000~yr using the equation
\begin{equation}
M_\odot(t) = 1.0-0.46 \left[3\left(\frac{t}{t_{\rm stop}}\right)^2-2\left(\frac{t}{t_{\rm stop}}\right)^3\right],
\end{equation}
reaching a mass of 0.54 \rm M$_\odot$\ \citep{schroeder08}. Asteroids were removed, and considered tidally disrupted, if they strayed within 1 solar radius.
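The behavior of this prescription can be sketched as follows (Python; a cubic smoothstep ramp consistent with the stated endpoints of 1.0 and 0.54 \rm M$_\odot$, with zero mass-loss rate at both ends):

```python
import numpy as np

# Smooth mass-loss prescription: a cubic ramp taking the Sun from
# 1.0 to 0.54 Msun over t_stop (1000 yr in the simulations).
def m_sun(t, t_stop=1000.0):
    x = t / t_stop
    return 1.0 - 0.46 * (3.0 * x**2 - 2.0 * x**3)

t = np.linspace(0.0, 1000.0, 101)
m = m_sun(t)
# m starts at 1.0 Msun, decreases monotonically, and ends at 0.54 Msun;
# the derivative of 3x^2 - 2x^3 vanishes at x = 0 and x = 1, so the
# mass loss switches on and off smoothly.
```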
Ten simulations were run for 100~Myr, and four further simulations were performed for 200~Myr. Three separate simulations, using a Bulirsch-Stoer integrator from the hybrid symplectic integration package described in \citet{stark08} with a nominal step size of 1/40th of an orbit of Jupiter and adaptive timesteps, were run for 1~Gyr to test whether the perturbation declines over timescales comparable to the cooling time of most white dwarfs. The 1~Gyr simulations were also used as an independent test to confirm the general behavior seen with the MERCURY code. The timescales we cover correspond to a large fraction of the cooling age for observed dusty and polluted white dwarfs. For example, a $\log{g}$=8 white dwarf has an effective temperature of $\sim$18500~K at a cooling age of 100~Myr, $\sim$15000~K at 200~Myr, and $\sim$8200~K at 1~Gyr. For comparison, all but one of the known dusty WDs have T$_{eff}$ $>$8200~K and 67\% of {\em Spitzer} observed DAZs have T$_{eff}$ $>$8200~K.
\subsection{Modeling the Tidal Disruption of a Small Planetesimal}
The tidal disruption is simulated using a ``rubble pile'' model, based on a collection of 5000 hard, spherical particles. The simulations use the {\em N}-body code {\tt pkdgrav}, originally used for cosmological simulations, to provide a very fast, parallelized tree for computing inter-particle gravity \citep{richardson00}. The rubble pile simulations also take advantage of its collision resolution, in which particle collisions are detected and resolved according to a number of adjustable physical parameters, namely the coefficients of restitution (both tangential and normal). The body is then held together by its own self-gravity with particle collisions preventing a collapse. Previously this code has been used in numerous ``rubble pile'' asteroid models, including collisions \citep{leinhardt00,leinhardt02}, tidal disruptions \citep{richardson98,walsh06}, and also for planet formation simulations \citep{leinhardt05,leinhardt09}. Tidal disruption simulations have proven insensitive to numerous physical parameters, such as the tangential and normal coefficients of restitution, as the rotation imparted by the encounter and the ``depth'' of the encounter with the planet dominate the outcome \citep{richardson98,walsh06}.
The modeled tidal disruption consists of a 5000 particle body having a single close encounter with a point mass of 0.5 \rm M$_\odot$. Such a point mass has a Roche radius of 89 R$_{\rm WD}$, or about 0.9 R$_\odot$ ($\sim$6.3$\times$10$^{5}$~km). We chose orbits with semi-major axes of 4.77 AU, based on the orbital elements of one of the tidally disrupted asteroids in our {\em N}-body simulations. The semi-major axis is larger than one would expect for a Solar System asteroid primarily because asteroid orbits expand in response to the mass loss from the central star. The asteroid was modeled with different eccentricities to produce encounters inside the Roche limit at 80, 75, 70, 65, and 60 R$_{\rm WD}$. The progenitor was not spinning at the initiation of the encounter, and was only simulated for a single encounter. The simulation was well resolved, with a duration of 25,000 timesteps, each timestep representing 10$^{-5}$~yr/2$\pi$, or about 50~s. At the end of each simulation the state of the aggregate was analyzed, and individual orbital elements for each ``clump'' were produced, along with statistics for the number and size distribution of fragments.
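Two order-of-magnitude checks on this setup can be sketched as follows (Python; the classical rigid-body Roche limit is used as a rough proxy, since the exact disruption coefficient depends on spin, shape, and internal strength):

```python
import math

# Rigid-body Roche limit d = (3 M_star / (2 pi rho))^(1/3) for a
# strengthless sphere of density 4 g/cm^3 around a 0.5 Msun point mass.
M_star = 0.5 * 1.989e33          # g
rho = 4.0                        # g/cm^3
d_roche = (3.0 * M_star / (2.0 * math.pi * rho)) ** (1.0 / 3.0)  # cm
R_sun = 6.957e10                 # cm
# d_roche / R_sun ~ 0.7, comparable to the ~0.9 Rsun quoted above.

# Timestep check: 1e-5 yr / (2 pi) expressed in seconds is ~50 s.
dt = 1e-5 * 3.156e7 / (2.0 * math.pi)
```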
\section{Results}
\label{sec:results}
\subsection{The Solar System}
Out of all the IMMP simulations, roughly 2\% of the modeled asteroids strayed within the tidal disruption radius, with the majority disrupting within the first few hundred Myr. The resulting asteroids that were tidally disrupted are plotted as a function of their initial semi-major axis $a$ and eccentricity $e$ in Figure \ref{fig:f3}. Overplotted are calculations of the libration width before and after post-main sequence evolution.
In the 100~Myr simulations, one asteroid impacted approximately every 10~Myr. From 100 to 200~Myr, the average time between impacts increased to 71~Myr, implying a near power law drop in the impact frequency on the white dwarf as time went on. In the 1~Gyr simulations, only one asteroid per simulation was disrupted beyond 200~Myr. In order to compare across simulations, the number of asteroids disrupted as a function of time was normalized by the total number of asteroids simulated to obtain a per asteroid frequency, and binned logarithmically in time. Figure \ref{fig:f4} shows the normalized number of asteroids disrupted as a function of time for all the simulations. Taken as a whole, this curve would represent the average probability at any given time of an asteroid becoming tidally disrupted for the size distribution we use. There is a peak at $\sim$30~Myr, which is consistent with a white dwarf T$_{eff}\sim$24000~K. The hottest dusty white dwarfs are slightly cooler than this value and also represent some of the largest accretors of material, implying dust rich disks close to where the peak of perturbations occur in our simulations. Beyond this peak, the frequency drops roughly as $t^{-0.6}$, but this is uncertain due to the small number statistics of our simulations.
We can compare these results to \citet{malhotra10}, who calculated the diffusion of solar system asteroids (with D$>$30~km) out of the asteroid belt due to perturbations from the giant planets. They used second-order mixed variable symplectic mapping \citep{wisdom91,saha92} to integrate test particles between Mars and Jupiter to within 1~AU and then MERCURY with its hybrid integrator to perform their integrations within 1~AU. For \citet{malhotra10}, 18\% of those asteroids perturbed into the inner Solar System impacted the Sun. For our IMMP simulations, 17\% of the asteroids that were perturbed out of the asteroid belt also strayed to within 1~\rm R$_{\odot}$, our adopted tidal disruption radius. \citet{malhotra10} also found that between 1-4~Gyr, roughly 0.4\% of asteroids in their simulations impacted the Sun, suggesting that even at later times, a significant flux of asteroids can be tidally disrupted. Similarly, the late time dynamical evolution of the asteroid belt could be approximated by a power-law decline in the total number of asteroids close to $t^{-1}$, similar to the decline we see in our own impact frequency, suggesting that the rate of impacts decline with the total mass of the asteroid belt \citep{jura08}.
As noted previously in \citet{bonsor11}, one can determine an accretion rate $\dot{M}$ onto WDs based on the fraction of asteroidal belt mass scattered ($f_{\rm SI}$), disrupted ($f_{\rm TD}$), and eventually accreted ($f_{\rm acc}$), as well as the mean time of disruption $<t_{\rm TD}>$:
\begin{equation}
\label{eq:bonacc}
\dot{M}_{\rm metal} = \frac{f_{\rm acc} f_{\rm TD} f_{\rm SI} M_{\rm belt}}{<t_{TD}>}.
\end{equation}
In reality, the accretion rate onto the white dwarf surface is determined by the accretion of the dusty or gaseous disks that are generated from the asteroid tidal disruption. Determining the precise evolution of a WD dust/gas disk is beyond the scope of this paper, so we assume an accretion rate from the disk averaged over $<t_{TD}>$. We can determine accretion rate vs. time for our simulations if we can calculate a mass disrupted. We took the asteroids tidally disrupted in our simulations, normalized by the total number of asteroids sampled, and calculated their mass using their measured radii, assuming a mean density of 4~g cm$^{-3}$. This gives us the total mass of our asteroids normalized by the mass of the Solar System's asteroid belt ($M_{\rm belt}=3.6\times10^{24}$~g) \citep{krasinsky02}. At each disruption time we determined the average mass accretion rate onto the white dwarf by setting $<t_{TD}>$ equal to the average time between the previous and next disruption, or the end time of the simulation, and assuming that $f_{\rm acc}$ was of order unity. This mostly follows the same procedure as \citet{bonsor11}, with the exception that we implicitly determine the quantity $f_{\rm TD} f_{SI} M_{\rm belt}$ by following individual disruptions, and we assume that $f_{\rm acc}$ is more efficient.
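The bookkeeping of Equation \ref{eq:bonacc} can be sketched as follows (Python; the scattering fraction and mean disruption time below are placeholder assumptions, not fitted values, while the disrupted fraction comes from the comparison with \citet{malhotra10} above):

```python
# Time-averaged accretion rate: Mdot = f_acc * f_TD * f_SI * M_belt / <t_TD>.
M_belt = 3.6e24          # g, Solar System asteroid belt mass (Krasinsky et al.)
f_SI   = 0.12            # fraction of belt mass scattered inward (assumed)
f_TD   = 0.17            # fraction of scattered bodies that are tidally disrupted
f_acc  = 1.0             # accretion efficiency, taken to be of order unity
t_TD   = 1e7 * 3.156e7   # mean time between disruptions: 10 Myr in seconds (assumed)

mdot = f_acc * f_TD * f_SI * M_belt / t_TD   # g/s, order 1e8 for these inputs
```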
In principle the results of our simulations can be scaled to any starting asteroid belt mass by the total mass of asteroids, assuming a similar dynamical architecture of a dominant giant planet with an interior belt of asteroids following a similar size distribution to that of the Solar System's asteroid belt. Figure \ref{fig:f5} shows our results for the accretion evolution of the Solar System compared to observed dusty and metal polluted white dwarfs as collected in \citet{farihi09,farihi10}. For the figure, we have converted the time covered by our simulations into corresponding $T_{eff}$ assuming a hydrogen WD with $\log$~g=8.0. Looking at all disruptions in our simulations, we take a power-law fit to our results and find that the mass accretion rate drops as $t^{-2}$. We can extrapolate this relationship between time and accretion rate to the oldest WDs. This is shown as the solid line in Figure \ref{fig:f5}. The inferred accretion rates depend on our assumptions in the following way:
\begin{equation}
\label{eq:assump}
\dot{M}_{\rm metal} = \dot{M}_{\rm metal,o}\left(\frac{f_{\rm acc}}{1}\right)\left(\frac{M_{\rm belt}}{3.6\times10^{24}~{\rm g}}\right).
\end{equation}
Our results suggest that the required mass in asteroids around other white dwarfs might have been much larger than our own Solar System, by as much as a factor of 10$^{3}$. However, several uncertainties exist in our model. The biggest uncertainty for this model is the exact location of the giant planet and how the timescale for asteroid perturbation out of the 2:1 resonance might scale with planetary architecture. We also have not accounted for how the accretion rate might evolve over the lifetime of a gaseous or dusty disk caused by the tidal disruption of an asteroid. One would expect orders of magnitude higher accretion rates soon after a tidal disruption, with a power-law or exponential decay in accretion rate as mass is lost from the disk of material that forms from a tidal disruption--this effect would be balanced by the fraction of a WD's cooling age over which this high accretion phase might occur. Such modeling would require a more careful analysis of how such disks evolve and the main mechanism for accretion onto the WD. Recently, it has been shown that Poynting-Robertson drag may drive accretion in some cases, but cannot account for all of the accretion observed \citep{rafikov11a,rafikov11b,xu}.
It is also unclear how tidally disrupted streams of material will evolve into dusty/gaseous disks and over what timescales this process occurs. If the relative velocities of collisions are too high close to the WD, dust grains will suffer evaporative collisions and might not be efficient at leaving material close enough to accrete onto the WD. Similarly, if relative velocities are too high, the tidally disrupted streams of material may not be able to collisionally damp into a disk with low enough eccentricity to efficiently accrete \citep{shannon11}. Our sample of asteroids encompasses most of the largest asteroids present in our Solar System, but is missing most asteroids with radii $<$20~km. Therefore it is possible that our implied tidal disruption rates could be higher if we take these objects into account, or if other planetesimal belts have different size distributions, such as what might be expected if a large number of planetesimals are trapped in resonances during post-main sequence evolution \citep{dong10}. These uncertainties affect many of our assumed model values for the efficiency with which tidally disrupted material is accreted onto the WD and the average timescale between tidal disruptions. Further work on these uncertainties through better observations of dusty disks, the modeling of WD disks, and longer simulations of dynamical perturbations with more particles could help to better predict how efficient the IMMP model is at delivering material close to a WD.
\subsection{Tidal Disruption}
\label{sec:tidal}
In all of our tidal disruption simulations, the asteroid was significantly affected by the tidal forces of the central white dwarf, with varying degrees of disruption after the first pass. As the periastron decreased, more violent disruptions occurred, with more mass being generated in smaller bodies.
Figure \ref{fig:f7} shows a snapshot of one asteroid disruption for a close encounter to 60~$R_{\rm WD}$. The asteroid is elongated at the early stages of the disruption and quickly spreads out over 2~AU from head to tail of the train of debris after the first pass. Despite this, the fragments in the disrupted stream share rather similar orbital elements.
This tight stream appears in all the encounters, and the orbital elements of the fragments were tightly centered around the original orbit of the incoming asteroidal body. Further work needs to be done to determine whether these fragments will settle through mutual collisions and further tidal disruptions into a more circularized disk. This suggests a phase of evolution, perhaps lasting many orbital timescales for the fragments, where dust is in an elliptical distribution. In fact, observations of dusty white dwarfs with emission lines from gaseous disks show evidence of ellipticity \citep{gaensicke08}, though not at the level implied by our simulations, where the eccentricity of the debris stream is in excess of 0.99. Over many orbital timescales, the fragments will spread and precess at slightly different rates and start colliding. The exact evolution of the system will depend on the timescale for fragments to be directly perturbed onto the WD surface by the giant planet and the timescale for mutual collisions and a dynamical cooling of the systems. Any dynamical cooling due to collisions could pull fragments out of resonance and allow them to survive longer to collide with other fragments, generating more dust. The timescales and exact evolution of highly eccentric disks, such as we would expect from a tidal disruption, is an open question.
Figure \ref{fig:f8} shows the cumulative number distribution of fragments as a function of mass for close approach distances of 60 to 80 R$_{\rm WD}$. Even after a single pass, several fragments were generated, with an increasing number of smaller fragments as the radius of close approach decreased. In all of our simulations, there was a negligible amount of mass lost, suggesting that the initial disruption of the asteroid will not show up in the WD photosphere until dust and material are accreted through viscous evolution.
\section{Comparison of the IMMP model to the observed WD population}
\label{sec:comp}
We can compare the results of our IMMP model to observed metal polluted and dusty white dwarfs. This is possible through the extensive compilation of {\em Spitzer} observed metal enriched white dwarfs presented in \citet{farihi09} and \citet{farihi10}. We took these populations and divided them to investigate how each were distributed by WD cooling time and the total age of the system.
The top panel of Figure \ref{fig:f1} shows the cooling age for the disk/non-disk populations. There appears to be a bimodal population for non-disk systems that cluster around 10$^8$~yr and 10$^{9}$~yr, hinting at possibly two different mechanisms, or at least one mechanism that has two characteristic timescales after post-main sequence evolution. If we interpret this in light of the IMMP model, these two timescales could represent the peak we observed at 30~Myr, but scaled to a longer characteristic timescale for perturbation, such as for a more widely separated planet. The bimodality could also be a selection effect, since a smaller number of hotter WDs have been observed for metal accretion and small number statistics dominate. There is not such a clear bimodality for the disk systems. The median cooling age of the non-disk systems is 0.9~Gyr while the median cooling age for disk systems is 0.4~Gyr.
We also calculate the total age of each white dwarf, $t_{\rm tot}$=$t_{\rm MS}$+$t_{\rm cool}$. The quantity $t_{\rm MS}$ can be determined first by inferring an initial mass for each WD using an empirical initial-final mass function \citep{williams09}:
\begin{equation}
\label{eq:initial}
M_{\rm initial}=\frac{\left(M_{\rm final}-0.339\right)}{0.129}.
\end{equation}
The main sequence lifetime is then given by $t_{\rm MS}=10\,M_{\rm initial}^{-2.5}$~Gyr, with masses in solar units.
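As a minimal sketch of this age calculation (Python; assumes the \citet{williams09} relation $M_{\rm final}=0.339+0.129\,M_{\rm initial}$ inverted for the progenitor mass, with an illustrative 0.6 \rm M$_\odot$\ WD and 0.9~Gyr cooling age):

```python
# Total age t_tot = t_MS + t_cool for an example white dwarf.
def total_age_gyr(m_wd, t_cool_gyr):
    m_initial = (m_wd - 0.339) / 0.129    # Msun, inverted initial-final mass relation
    t_ms = 10.0 * m_initial ** -2.5       # Gyr, t_MS = 10 M^-2.5
    return t_ms + t_cool_gyr

# A 0.6 Msun WD implies a ~2 Msun progenitor (t_MS ~ 1.7 Gyr),
# giving a total age of ~2.6 Gyr for a 0.9 Gyr cooling age.
age = total_age_gyr(0.6, 0.9)
```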
The bimodality of the non-disk systems now disappears; instead, the distribution shows a peak around 2.3~Gyr, with a high age tail. The disk systems are on average younger, with a peak closer to 1.3~Gyr and fewer systems that are older. This may suggest that the underlying mechanism for perturbing planetesimals is related to the {\em total} age of the planetary system, rather than the time from when the mass of the central star changed by a factor of $>$2, consistent with our IMMP model, which depends not only on perturbations, but the total mass of asteroids available as a reservoir.
The relatively large ages at which disks and pollution appear suggests that the mechanism for delivering small planetesimals also must be efficient over Gyr timescales. Our simulations show that the IMMP model should be effective at delivering asteroids for hundreds of Myr.
Finally, we can use our IMMP results to infer a distribution of asteroid belt masses based on the observed accretion rates for WDs. We assumed that each non-dusty WD system represents an ``average'' accretion rate that can be directly compared to the fit of our simulated mass accretion rates. We then determined a scaling factor between our calculated accretion rate and that observed to infer a total belt mass using Equation \ref{eq:assump}. The histogram of resulting values is seen in Figure \ref{fig:f6}. This may overestimate the masses in a particular system if the time between tidal disruptions is longer than the gaseous lifetime of a disk, as a disk evolves from higher accretion rates to lower ones. The range in asteroid belt masses goes from 4~M$_{SS}$ to 6$\times$10$^{5}$~M$_{SS}$. The median mass is 820~M$_{SS}$. This higher mass may not be unusual given that the median progenitor mass of the non-dusty WDs is 2\rm M$_\odot$, and the median total age of these systems is 1.5~Gyr younger than our own Solar System. The relative youth and greater progenitor mass could correspond to higher belt masses, as could differing dynamical histories. This distribution is also biased at higher WD T$_{eff}$ to the largest accretion rates, due to limited sensitivity in the optical to Ca and Mg absorption features. More unbiased studies may find a lower median belt mass. We can also compare this amount of mass to that in planetesimals assumed for other models used to explain the observed WD accretion rates. \citet{jura08} assumed an asteroid belt mass of 10$^{25}$~g, and in \citet{bonsor11}, the median mass of their planetesimal belts exterior to a giant planet was 10 $M_\oplus$ or on the order of 1600 times our assumed belt mass.
If our results hold and the IMMP model is most efficient at explaining the observations, then it would suggest that more massive stars have significantly more massive asteroid belts, or that our asteroid belt is depleted relative to the typical planetary system.
\section{Discussion}
\label{sec:disc}
Our simulations suggest a novel and robust way for explaining the presence of metal polluted white dwarfs in addition to the mechanism first invoked by \citet{debes02} and complementary to the scenario suggested by \citet{bonsor11}. While the Debes \& Sigurdsson and the Bonsor \& Wyatt mechanisms require multiple planets and a relic icy planetesimal disk to reside within the system, the scenario we present requires only one planet roughly the mass of Jupiter as well as a relic asteroid belt similar to the Solar System's. Furthermore, since we have looked at only larger asteroids (our sample was incomplete for asteroids with R$<$20~km), there should exist a population of $(20/R_{\rm min})^p$ (where $p$ is the power law value for the asteroidal size distribution and $R_{\rm min}$ is the smallest asteroid size that survives post-main sequence evolution) more objects that could be participating in tidal disruptions. The recent WISE mission \citep{wright10}, for example, will be able to better constrain the Solar System's population of asteroids, as well as find new dusty white dwarfs \citep{debes11}. We have also shown through tidal disruption simulations that significant disruption of smaller asteroids proceeds at radii comparable to where dusty white dwarfs are observed, confirming what had been proposed by \citet{jura03} for G~29-38.
Despite the seeming complexity of such a situation, these scenarios are the best explanations for the observations at this time. It is then important to identify which mechanism, if any, dominates and whether there are suitable observational tests that could also potentially discriminate between planetary instability, the IMMP, or exterior mean motion resonances.
There are several important implications to our IMMP simulations which will warrant further and deeper study. These simulations are useful for understanding the frequency of Solar System like asteroid belts as well as their mass, they provide limits to the mass and location of planets around dusty/polluted white dwarfs, and they help to explain the relative frequency of dusty vs. polluted white dwarfs.
Firstly, if this mechanism is widespread and more common than perturbations due to exterior resonances, this directly tests the terrestrial planetesimal population of main sequence stars with planetary systems. This is akin to both a determination of $\eta_{\rm planetesimal}$ as well as a test of how well populated these regions are. Current detections of warm dust in the terrestrial planet forming regions of main sequence stars are rare and represent either recent collisions or high mass asteroidal belts \citep[e.g.][and references therein]{chen06,lisse09,currie11}. Limits on the masses of these types of planetesimal belts will be useful for constraining how dusty main sequence stars are in the terrestrial planet forming regions, a crucial measurement to determine how successful terrestrial planet imaging missions in the future might be \citep{guyon06}.
\acknowledgements
We wish to thank the anonymous referee for greatly enhancing the clarity and quality of this paper. The research and computing needed to generate astorb.dat were conducted by Dr. Edward Bowell and funded principally by NASA grant NAG5-4741, and in part by the Lowell Observatory endowment.
\section*{Abstract}
Data collected in criminal investigations may suffer from:
\begin{enumerate*}[label=(\roman*)]
\item incompleteness, due to the covert nature of criminal organisations;
\item incorrectness, caused by both unintentional data collection errors and intentional deception by criminals;
\item inconsistency, when the same information is collected into law enforcement databases multiple times, or in different formats.
\end{enumerate*}
In this paper we analyse nine real criminal networks of different nature (i.e., Mafia networks, criminal street gangs and terrorist organizations) in order to quantify the impact of incomplete data and to determine which network type is most affected by it. The networks are first pruned following two specific methods: \begin{enumerate*}[label=(\roman*)]
\item random edge removal, simulating the scenario in which the Law Enforcement Agencies (LEAs) fail to intercept some calls, or to spot sporadic meetings among suspects;
\item node removal, which models the scenario in which some suspects cannot be intercepted or investigated.
\end{enumerate*}
Finally we compute spectral (i.e., Adjacency, Laplacian and Normalised Laplacian Spectral Distances) and matrix (i.e., Root Euclidean Distance) distances between the complete and pruned networks, which we compare using statistical analysis.
Our investigation identified two main features: first, the overall understanding of the criminal networks remains high even with incomplete data on criminal interactions (i.e., 10\% removed edges); second, removing even a small fraction of suspects not investigated (i.e., 2\% removed nodes) may lead to significant misinterpretation of the overall network.
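As a minimal illustration of the edge-pruning and spectral-distance computation described above (Python with toy random-graph data, not the criminal networks analysed in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def adjacency_spectral_distance(a1, a2):
    """l2 distance between the sorted adjacency spectra of two graphs."""
    e1 = np.linalg.eigvalsh(a1)
    e2 = np.linalg.eigvalsh(a2)
    return float(np.sqrt(np.sum((e1 - e2) ** 2)))

# Toy undirected network: 50 nodes, ~10% edge density.
n = 50
a = (rng.random((n, n)) < 0.1).astype(float)
a = np.triu(a, 1)
a = a + a.T

# Prune 10% of the existing edges at random (simulating missed interceptions).
edges = np.argwhere(np.triu(a, 1) > 0)
drop = edges[rng.choice(len(edges), size=max(1, len(edges) // 10), replace=False)]
a_pruned = a.copy()
for i, j in drop:
    a_pruned[i, j] = a_pruned[j, i] = 0.0

# Distance between the complete and pruned networks quantifies
# how much structural information the missing edges carried.
d = adjacency_spectral_distance(a, a_pruned)
```

The Laplacian and normalised Laplacian variants follow the same pattern, substituting the corresponding matrices before the eigendecomposition.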
\section*{Introduction}
Criminal organizations are groups operating outside the boundaries of the law, which make illegal profit from providing illicit goods and services in public demand and whose achievements come at the cost of other people, groups or societies~\cite{Finckenauer2005}. Organised crime is referred to by different terms including \textit{gangs}~\cite{thrasher2013gang}, \textit{crews}~\cite{adler1993wheeling}, \textit{firms}~\cite{reuter1983disorganized}, \textit{syndicates}~\cite{reuter1983disorganized}, or \textit{Mafia}~\cite{Morselli2008}. In particular, Gambetta \cite{gambetta1996sicilian} defines Mafia as a ``territorially based criminal organization that attempts to govern territories and markets'' and he identifies the one located in Sicily as the \textit{original Mafia}.
Whatever term is used to describe organised crime, it involves relational traits. For this reason, scholars and practitioners are increasingly adopting a Social Network Analysis (SNA) perspective to explore criminal phenomena~\cite{Campana2016}.
SNA algorithms can produce relevant measurements and parameters describing the role and importance of individuals within criminal organizations, and SNA has been used to identify leaders within a criminal organization~\cite{Johnsen2018IdentifyingCI}
and to construct crime prevention systems~\cite{Calderoni2020}.
Over the last decades, SNA has been employed increasingly by Law Enforcement Agencies (LEAs). This increasing interest from law enforcement is due to SNA's ability to identify mechanisms that are not easily discovered at first glance~\cite{morselliglance}.
SNA relies on real datasets used as sources, which allow researchers to build networks that are then examined~\cite{Duijn2014, Rostami2015, Robinson2018, Villani2019, Ficara2020, Calderoni2020, Cavallaro2020, Cavallaro2021IoT}.
However, complete network data describing the structure and activities of a criminal organization are difficult to obtain.
In a criminal investigation, the individuals subjected to LEA enquiries may attempt to shield sensitive information. Investigators then have to rely on alternative methods and exercise special investigative powers allowing them to gather evidence covertly from sources including phone taps, surveillance, archives, informants, interrogations of witnesses and suspects, and infiltration of criminal groups.
Despite significant advantages, such sources may also have a number of drawbacks.
Also, while some individuals providing information during investigations are reliable, others might attempt to deceive the investigators with the aim of protecting themselves or their associates, or of achieving a specific goal. For instance, if actors are aware of being phone-tapped, they are more likely to avoid discussing self-incriminating matters.
While the transcripts of discussions between unsuspecting actors can be considered more reliable, the information collected from taps must still be verified against other official records related to the case. This is required since conversations among criminals often involve lies or codes concealing the true nature of the message~\cite{campanavarese2013}.
Moreover, if the police miss surveillance targets, central actors may not appear with their actual role in the data, simply because their phones end up not being tapped~\cite{Morselli2008}.
While the police seek to validate the content of phone taps, the offenders may also check whether the information exchanged during conversations is accurate. Longer investigations and surveillance tend to eventually expose subtle lies. On the other hand, datasets may change over time, due to the variable status of suspects or to new information being collected. The problem of actors lying extends to data collected through questionnaires or interviews as well. Information collected from interrogations may not be reliable, with the risk of interviewees downplaying or amplifying their real role, or simply not being representative of the broader group.
Police decisions may even impact the design of an investigation. LEAs normally start with some suspected individuals, and then expand their reach by adding further actors. Not all the individuals linked to the central actors are automatically added, as the investigation of all active criminal groups is not possible due to limited resources. Prosecution services must prioritise the groups on which evidence gathering is easier. Hence, groups operating under the police radar may be absent from the data collected, and this may generate heavily distorted inferences about the network structure~\cite{snaresearch}.
Incompleteness and incorrectness of criminal network data are then inevitable, both because investigators deal with data of varying quality and because SNA currently offers no standard method to account for such degrees of reliability.
LEAs often have to process large amounts of data, most of which is of little value. When large volumes of raw data are collected from multiple sources, the risk of inconsistency is also higher.
The identification of relevant and important information from datasets where this is mixed with irrelevant or unreliable information, is referred to as the {\em signal and noise} problem.
Analytical techniques used in intelligence should then be able to cope with large datasets and to effectively distinguish the signal from noise.
In summary, the data collected in criminal investigations regularly suffers from:
\begin{itemize}
\item {\em Incompleteness}, caused by the covert nature of criminal networks;
\item {\em Incorrectness}, caused by either unintentional errors in data collection or intentional deception by criminals;
\item {\em Inconsistency}, when records of the same actors are collected into LEA databases multiple times and not necessarily in a consistent way. This way, the same actor may show up in a network as different individuals.
\end{itemize}
Criminal networks are very dynamic, as they constantly change over time. New data or even different data collection methods are necessary to cover longer time spans~\cite{SPARROW1991251}.
Another problem specific to SNA applied to criminal networks lies in data processing.
Often, actors are represented by nodes, and their associations or interactions by links. However, there is no standard SNA methodology for transforming the raw data, and the process depends on the subjective judgement of the analyst, who may have to decide whom to include in or exclude from the network when boundaries are ambiguous~\cite{SPARROW1991251}. Also, data conversion is often a labor-intensive and time-consuming process.
An interesting application of SNA is the comparison of networks by finding and quantifying similarities and differences \cite{Squartini2015, Peixoto2018, Newman2018}. Network comparison requires measures of the distance between graphs, which can be computed using multiple metrics. This is a non-trivial task involving a set of features that are often sensitive to the specific application domain. A few literature reviews on the most common graph comparison metrics are available~\cite{Soundarajan2014, Emmert2016, Donnat2018, Tantardini2019}. In~\cite{Cavallaro2021}, such distance measures were exploited to quantify how well artificial yet realistic models can represent real criminal networks.
In this work, we adopt an SNA approach to assess the impact of incomplete data on a criminal network. Our aim is to quantify how much information on the criminal network is required so that the accuracy of investigations is not affected. Specifically, we analyse nine real criminal networks of different nature, resulting from different investigative operations on Mafia networks, criminal street gangs and terrorist organizations.
To quantify the impact of incomplete data and to determine which network suffers mostly from it, we adopt the following strategies:
\begin{enumerate}
\item We pruned the input networks by means of two specific methods, namely {\em random edge removal} and {\em random node removal}, which reflect the most common scenarios of missing data arising in real investigations.
\item We calculated the distance between the original network (taken as the complete reference) and its pruned version.
\end{enumerate}
\section*{Materials and methods}
This section presents basic graph theory definitions and the distance metrics used for comparing two graphs. We also describe the datasets used in our experimental analysis, as well as the protocol followed to run our analysis.
\subsection*{Background}
\subsubsection*{Graph properties}
A {\em network} (or {\em graph}) $G = \langle N, E\rangle$ consists of two finite sets $N$ and $E$~\cite{barabasi2016network}. The set $N=\{1,\dots,n\}$ contains the {\em nodes} (or vertices, actors), and $n$ is the \textit{size} of the network, while the set $E \subseteq N \times N$ contains the {\em edges} (or links, ties) between the nodes.
A network is called {\em undirected} if all its edges are bidirectional. If the edges are defined by ordered pairs of nodes, then the network is called {\em directed}.
If an edge $(i, j)$ with $i,j\in N$ is {\em weighted}, then a positive numerical weight $w_{ij}$ is associated with it; {\em unweighted} edges have their weight set to the default value $w_{ij}=1$.
Given an undirected network $G$, two nodes $i,j \in N$ are {\em connected} if there is a {\em path} from $i$ to $j$: here a path $p$ is defined as a sequence of nodes $i_0, i_1, \ldots, i_k$ such that each pair of consecutive nodes is connected through an edge. The number of edges in a path $p$ starting at node $i$ and ending at node $j$ is called {\em path length}. While there may be several paths from the node $i$ to the node $j$, we are usually interested in the {\em shortest paths} (i.e., those with the least number of edges), whose length defines the {\em distance} $d_{ij}$ between $i$ and $j$. Of course, in undirected networks we have $d_{ij} = d_{ji}$.
A graph $G$ is called {\em connected} if every pair of nodes in $G$ is connected, and {\em disconnected} otherwise. If a network is disconnected, it fragments into a collection of connected subnetworks, each called a {\em component}.
Based on the number of edges $m$, a graph is called {\em dense} if $m$ is of the same order of magnitude as $n^2$, or {\em sparse} if $m$ is of the same order of magnitude as $n$. The {\em density} $\delta$ of an undirected graph is defined as
\begin{eqnarray}
\delta=\frac{2 \lvert E \rvert}{\lvert N \rvert (\lvert N \rvert-1)} = \frac{2m}{n(n-1)},
\end{eqnarray}
that is the total number of edges over the maximum possible number of edges.
The degree $k_i$ of the node $i$ represents the number of adjacent edges, while the degree distribution $p_k$ provides the probability that a randomly selected node in the graph has degree $k$. Given a graph of $n$ nodes, $p_k$ is the normalised histogram given by
\begin{eqnarray}
p_k=\frac{n_k}{n},
\end{eqnarray}
where $n_k$ is the number of nodes of degree $k$.
The degree $k_i$ allows the computation of the {\em clustering coefficient} $C_i$ of a node $i$~\cite{Ficara2021IoT}, which captures the degree to which the neighbors of node $i$ link to each other, given by
\begin{eqnarray}
C_i=\frac{2L_i}{k_i(k_i-1)},
\end{eqnarray}
where $L_i$ represents the number of links between the $k_i$ neighbors of node $i$. The average of $C_i$ over all nodes defines the average clustering coefficient $\langle C_i \rangle$,
measuring the probability that two neighbors of a randomly selected node link to each other.
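These quantities are straightforward to compute. As an illustration, the following Python sketch implements the density, degree-distribution and clustering-coefficient formulas above; the adjacency-dict representation and the small toy graph are our own illustrative choices, not part of the datasets studied in this paper.

```python
from collections import Counter

def density(adj):
    """delta = 2m / (n(n-1)) for an undirected graph given as an
    adjacency dict {node: set(neighbours)}."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2  # each edge counted twice
    return 2 * m / (n * (n - 1))

def degree_distribution(adj):
    """Normalised histogram p_k = n_k / n."""
    n = len(adj)
    counts = Counter(len(nbrs) for nbrs in adj.values())
    return {k: c / n for k, c in counts.items()}

def clustering(adj, i):
    """C_i = 2 L_i / (k_i (k_i - 1)), with L_i the number of links
    among the neighbours of i; C_i = 0 for degree < 2 by convention."""
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0
    L = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2 * L / (k * (k - 1))

# Toy example: a triangle (0, 1, 2) with a pendant node 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

On this toy graph, `density(adj)` gives $2/3$ and the average of `clustering` over all nodes gives $\langle C \rangle = 7/12$.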
Given a pair of graphs, say $G_1$ and $G_2$, we are often interested in defining a measure of similarity (or, equivalently, distance) between them. In what follows we review some methods one can use to compute the distance of two graphs.
\subsubsection*{Spectral distances}
Spectral distances measure the structural similarity between two graphs starting from their spectra. The spectrum of a graph is widely used to characterise its properties and to extract information from its structure.
The most common matrix representations of a graph are the adjacency matrix $A$, the Laplacian matrix $L$ and the normalised Laplacian $\mathcal{L}$.
Given a graph $G$ with $n$ nodes, its adjacency matrix $A$ is an $n \times n$ square matrix denoted by $A =(a_{ij})$, with $1\leq i,j\leq n$, where $a_{ij} = 1$ if there exists an edge joining nodes $i$ and $j$, and $a_{ij} = 0$ otherwise.
For undirected graphs the adjacency matrix is symmetric, i.e., $a_{ij}$=$a_{ji}$.
The degree matrix $D$ is a diagonal matrix defined as
\begin{eqnarray}
{D_{ij}} =
\begin{cases}
k_i & \text{if } i=j \\
0 & \text{otherwise}
\end{cases}
\end{eqnarray}
The adjacency matrix and the degree matrix are used to compute the combinatorial Laplacian matrix $L$, which is an $n \times n$ symmetric matrix defined as
\begin{eqnarray}
\label{eq:lmatrix}
L = D - A.
\end{eqnarray}
The diagonal elements $L_{ii}$ of $L$ are then equal to the degree $k_i$ of the node $i$, while off-diagonal elements $L_{ij}$ are $-1$ if the node $i$ is adjacent to $j$ and 0 otherwise.
A normalised version of the Laplacian matrix, denoted as $\mathcal{L}$, is defined as
\begin{eqnarray}
\label{eq:nlaplacian}
\mathcal{L} = D^{-\frac{1}{2}} LD^{-\frac{1}{2}},
\end{eqnarray}
where the diagonal matrix $D^{-\frac{1}{2}}$ is given by
\begin{eqnarray}
\label{eq:dmatrix}
{D^{-\frac{1}{2}}_{i,i}} =
\begin{cases}
\frac{1}{\sqrt{k_i}} & \text{if } k_i \neq 0\\
0 & \text{otherwise.}
\end{cases}
\end{eqnarray}
The spectrum of a graph consists of the sorted eigenvalues of one of its representation matrices. The sequence of eigenvalues may be ascending or descending depending on the chosen matrix, and the spectra derived from each representation matrix may reveal different properties of the graph. The largest absolute value among the eigenvalues of a graph is called its \textit{spectral radius}.
If $\lambda^A_k$ is the $k^{th}$ eigenvalue of the adjacency matrix $A$, then the spectrum is given by the descending sequence
\begin{eqnarray}
\label{eq:spectrum1}
\lambda^A_1 \geq \lambda^A_2 \geq \dots \geq \lambda^A_n.
\end{eqnarray}
If $\lambda^L_k$ is the $k^{th}$ eigenvalue of the Laplacian matrix $L$, such eigenvalues are considered in ascending order so that
\begin{eqnarray}
\label{eq:spectrum2}
0=\lambda^L_1 \leq \lambda^L_2 \leq \dots \leq \lambda^L_n.
\end{eqnarray}
The second smallest eigenvalue of the Laplacian matrix of a graph is called its \textit{algebraic connectivity}.
Similarly, if we denote the $k^{th}$ eigenvalue of the normalised Laplacian matrix $\mathcal{L}$ as $\lambda^{\mathcal{L}}_k$, then its spectrum is given by
\begin{eqnarray}
\label{eq:spectrum3}
0=\lambda^{\mathcal{L}}_1 \leq \lambda^{\mathcal{L}}_2 \leq \dots \leq \lambda^{\mathcal{L}}_n.
\end{eqnarray}
The {\em spectral distance} between two graphs is the Euclidean distance between their spectra~\cite{Wilson2008}.
Given two graphs $G$ and $G^{\prime}$ of size $n$, with spectra given by the eigenvalue sequences $\lambda_i$ and $\lambda^{\prime}_i$ respectively, their spectral distance, according to the chosen representation matrix, is computed as
\begin{eqnarray}
\label{eq:spectdistance}
{d(G,G')} = \sqrt{\sum_{i=1}^n (\lambda_i - \lambda^{\prime}_i)^2}.
\end{eqnarray}
Based on the chosen representation matrix and consequently its spectrum, the most common spectral distances are the adjacency spectral distance $d_A$, the Laplacian spectral distance $d_L$ and the normalised Laplacian spectral distance $d_{\mathcal{L}}$.
If the two spectra are of different sizes, the smaller spectrum is padded with zeros to match the cardinality of the other; alternatively, only the first $k \ll n$ eigenvalues are compared.
Given the definitions of the spectra of the different matrices, the adjacency spectral distance $d_A$ compares the largest $k$ eigenvalues, while $d_{L}$ and $d_{\mathcal{L}}$ compare the smallest $k$ eigenvalues. This determines the scale at which the graphs are studied: comparing the largest eigenvalues focuses more on global features, while comparing the smallest ones focuses more on local features.
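To make the spectral distances concrete, here is a minimal NumPy sketch for two graphs of the same size; the toy path and triangle used for the check are illustrative choices of ours, not part of the study protocol.

```python
import numpy as np

def laplacian(A):
    """Combinatorial Laplacian L = D - A of a symmetric adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

def adjacency_spectral_distance(A1, A2):
    """d_A: Euclidean distance between the descending adjacency spectra."""
    e1 = np.sort(np.linalg.eigvalsh(A1))[::-1]
    e2 = np.sort(np.linalg.eigvalsh(A2))[::-1]
    return float(np.sqrt(np.sum((e1 - e2) ** 2)))

def laplacian_spectral_distance(A1, A2):
    """d_L: the same comparison on the ascending Laplacian spectra."""
    e1 = np.linalg.eigvalsh(laplacian(A1))  # eigvalsh sorts ascending
    e2 = np.linalg.eigvalsh(laplacian(A2))
    return float(np.sqrt(np.sum((e1 - e2) ** 2)))

# Toy check: the path 0-1-2 against the triangle on the same three nodes.
P = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
T = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
```

Here the Laplacian spectra are $(0,1,3)$ for the path and $(0,3,3)$ for the triangle, so $d_L = 2$.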
\subsubsection*{Matrix distances}
Another class of distances between graphs is that of matrix distances~\cite{Wills2020}. For each graph, a matrix of pairwise distances $d_{ij}$ between nodes is constructed as
\begin{eqnarray}
\label{eq:distmat}
M_{ij}=d_{ij}.
\end{eqnarray}
While the most common choice for $d$ is the shortest path distance, other measures can also be used, such as the effective graph resistance or variations on random-walk distances.
Such matrices provide a signature of the graph characteristics and carry important structural information. Matrices $M$ are then compared using some norm or distance.
Given two graphs $G$ and $G^{\prime}$, with $M$ and $M^{\prime}$ being their respective matrices of pairwise distances, the matrix distance between $G$ and $G^{\prime}$ is defined as
\begin{eqnarray}
\label{eq:genmatdist}
d(G,G^{\prime})= \|M-M^{\prime}\|,
\end{eqnarray}
where $\|\cdot\|$ is a chosen matrix norm. If the matrix used is the adjacency matrix $A$, the resulting distance is called the \textit{edit distance}.
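A minimal sketch of such a matrix distance, assuming the shortest-path distance for $M$ and the Frobenius norm (both common but arbitrary choices of ours), could read:

```python
import numpy as np

def shortest_path_matrix(edges, n):
    """All-pairs shortest-path matrix M_ij = d_ij via Floyd-Warshall.
    Unreachable pairs get the finite value n (> any real path length),
    a common workaround so the norm stays defined for disconnected graphs."""
    M = np.full((n, n), float(n))
    np.fill_diagonal(M, 0.0)
    for i, j in edges:          # unweighted, undirected edges
        M[i, j] = M[j, i] = 1.0
    for k in range(n):          # relax all pairs through node k
        M = np.minimum(M, M[:, [k]] + M[[k], :])
    return M

def matrix_distance(edges1, edges2, n):
    """d(G, G') = ||M - M'||, here with the Frobenius norm."""
    return float(np.linalg.norm(shortest_path_matrix(edges1, n)
                                - shortest_path_matrix(edges2, n)))
```

For example, comparing the path 0-1-2 with the triangle on the same nodes, only the entry $d_{02}$ (and its mirror) changes from 2 to 1, so the distance is $\sqrt{2}$.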
The similarity measure used in this work is \textsc{DeltaCon}~\cite{koutra2013}. It is based on the root Euclidean distance $\mathrm{d_{rootED}}$, also called the \textit{Matusita difference}, between matrices $S$ obtained from the fast belief propagation method of measuring node affinities.
The \textsc{DeltaCon} similarity is defined as
\begin{eqnarray}
\label{eq:deltaconsim}
sim_{DC}(G,G^{\prime})=
\frac{1}{1+\mathrm{d_{rootED}}(G,G^{\prime})},
\end{eqnarray}
where $\mathrm{d_{rootED}}(G,G^{\prime})$ is defined as
\begin{eqnarray}
\label{eq:deltacondist}
\mathrm{d_{rootED}}(G,G') = \sqrt{\sum_{i,j} \left(\sqrt{S_{i,j}} - \sqrt{S^{\prime}_{i,j}}\right)^2}.
\end{eqnarray}
When used instead of the Euclidean distance, $\mathrm{d_{rootED}}(G,G')$ can detect even small changes in the graphs. The fast belief propagation matrix $S$ is defined as
\begin{eqnarray}
\label{eq:deltaconS}
S = [I+ \varepsilon^2 D - \varepsilon A]^{-1},
\end{eqnarray}
where $\varepsilon=1/(1 + \max_i D_{ii})$; since it is assumed that $\varepsilon \ll 1$, $S$ can be rewritten as the matrix power series
\begin{eqnarray}
\label{eq:deltaconS_approx}
S \approx I + \varepsilon A + \varepsilon^2 (A^2-D) + \dots.
\end{eqnarray}
Fast belief propagation is a fast algorithm designed to perceive both the global and the local structure of a graph.
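The \textsc{DeltaCon} computation above can be sketched directly from its definition. Note that this naive version inverts the full matrix, whereas the original algorithm of Koutra et al. uses a fast linear-time approximation; the toy graphs at the bottom are illustrative only.

```python
import numpy as np

def fabp_matrix(A):
    """Fast belief propagation affinities S = [I + eps^2 D - eps A]^(-1),
    with eps = 1/(1 + max_i D_ii)."""
    n = len(A)
    D = np.diag(A.sum(axis=1))
    eps = 1.0 / (1.0 + D.max())
    return np.linalg.inv(np.eye(n) + eps ** 2 * D - eps * A)

def deltacon_similarity(A1, A2):
    """sim_DC = 1 / (1 + d_rootED(G, G')), with d_rootED the Matusita
    (root Euclidean) difference between the two affinity matrices."""
    S1, S2 = fabp_matrix(A1), fabp_matrix(A2)
    d_rooted = float(np.sqrt(np.sum((np.sqrt(S1) - np.sqrt(S2)) ** 2)))
    return 1.0 / (1.0 + d_rooted)

# Identical graphs have similarity exactly 1; pruning an edge lowers it.
P = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path 0-1-2
T = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
```

For these small graphs the entries of $S$ are nonnegative (the matrix being inverted is a diagonally dominant M-matrix), so the element-wise square roots are well defined.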
\subsection*{Criminal networks data sources}
Our analysis focuses on nine real criminal networks of different nature (see Table~\ref{tab:table1}). The first six networks relate to three distinct Mafia operations, while the other three are linked to street gangs and terrorist organizations.
\begin{table}[!ht]
\begin{adjustwidth}{-2.25in}{0in}
\centering
\caption{
{\bf Criminal networks characterization.}}
\begin{tabular}{|c|c|l|l|c|}
\hline
{\bf Investigation} & \multicolumn{3}{|c|}{\bf Network} & {\bf Source} \\ \cline{2-4}
& \textbf{Name} & \textbf{Nodes} & \textbf{Edges} &\\ \thickhline
\makecell{Montagna Operation \\ (Sicilian Mafia) \\ 2003-2007} & \makecell{ MN \\[2pt] PC} & Suspects & \makecell[l]{Physical Surveillance \\ Audio Surveillance}& \cite{Ficara2020,Calderoni2020,Cavallaro2020, Cavallaro2021, Zenodo2020} \\ \hline
\makecell{Infinito Operation \\ (Lombardian 'Ndrangheta) \\ 2007-2009} & SN & Suspects & \makecell[l]{Physical and \\ Audio Surveillance} & \cite{Calderoni2014, Calderoni2014b, Calderoni2015, Calderoni2017, Grassi2019} \\ \hline
\makecell{Oversize Operation \\ (Calabrian 'Ndrangheta) \\ 2000-2009} & \makecell{WR \\ AW \\ JU} & Suspects & \makecell[l]{Audio Surveillance \\ Physical Surveillance \\ Audio Surveillance} & \cite{Berlusconi2016, piccardi2016} \\ \hline
\makecell{Swedish Police Operation \\ (Stockholm Street Gangs) \\ 2000-2009} & SV & Gang members & Physical Surveillance & \cite{Rostami2015, rostami_mondani_2015} \\ \hline
\makecell{Caviar Project \\ (Montreal Drug Traffickers) \\ 1994-1996} & CV & Criminals & Audio Surveillance & \cite{Morselli2008} \\ \hline
\makecell{Abu Sayyaf Group \\ (Philippines Kidnappers) \\ 1991-2011} & PK & Kidnappers & Attacks locations & \cite{Gerdes2014} \\ \hline
\end{tabular}
\label{tab:table1}
\end{adjustwidth}
\end{table}
The Montagna Operation was an investigation concluded in 2007 by the Public Prosecutor’s Office of Messina (Sicily) focused on the Sicilian Mafia groups known as Mistretta and Batanesi clans.
Between 2003 and 2007 these families infiltrated several economic activities including public works in the area, through a cartel of entrepreneurs close to the Sicilian Mafia. The main data source is the pre-trial detention order issued by the Preliminary Investigation Judge of Messina on March 14, 2007.
The order concerned a total of 52 suspects, all charged with the crime of participation in a Mafia clan as well as other crimes such as theft, extortion, or damage followed by arson. From the analysis of this legal document we built two weighted and undirected graphs: the Meeting network (MN) with 101 nodes and 256 edges,
and the Phone Calls (PC) network with 100 nodes and 124 edges (see Table~\ref{tab:mafia}). In both networks, nodes are suspected criminals and edges represent meetings (MN), or recorded phone calls (PC). These original datasets have been already studied in some of our previous works~\cite{Ficara2020,Calderoni2020,Cavallaro2020, Cavallaro2021, Cavallaro2021IoT} and they are available on Zenodo~\cite{Zenodo2020}.
The Infinito Operation was a large law enforcement operation against 'Ndrangheta groups (i.e., groups of the Calabrian Mafia) and Milan cosche (i.e., crime families or clans), concluded by the courts of Milan and Reggio Calabria, Italian cities situated in Northern and Southern Italy, respectively. The investigation, which started in 2003, is still in progress. On July 5, 2010, the Preliminary Investigations Judge of Milan issued a pre-trial detention order for 154 people, with charges ranging from mafia-style association to arms trafficking, extortion and intimidation for the awarding of contracts or electoral preferences.
The dataset was extracted from this judicial act and is available as a $2$-mode matrix on the UCINET~\cite{Borgatti2002} website (Link: \url{https://sites.google.com/site/ucinetsoftware/datasets/covert-networks/ndranghetamafia2}).
The Infinito Operation dataset was investigated by Calderoni and his co-authors in several works~\cite{Calderoni2014, Calderoni2014b, Calderoni2015, Calderoni2017, Grassi2019}.
From the original $2$-mode matrix, we constructed the weighted and undirected graph Summits Network (SN) with 156 nodes and 1619 edges (Table~\ref{tab:mafia}). Nodes are suspected members of the 'Ndrangheta criminal organization. Edges are summits (i.e., meetings whose purpose is to make important decisions and/or affiliations, but also to solve internal problems and to establish roles and powers) taking place between 2007 and 2009. This network describes how many summits in common any two suspects have. Attendance at summits was registered by police authorities through wiretapping and observations during this operation.
The Oversize Operation was an investigation lasting from 2000 to 2006, which targeted more than 50 suspects of the Calabrian 'Ndrangheta involved in international drug trafficking, homicides, and robberies. Between 2007 and 2009, the trial led to sentences ranging from 5 to 22 years of imprisonment for the main suspects. Berlusconi et al.~\cite{Berlusconi2016} studied three unweighted and undirected networks extracted from three judicial documents corresponding to three different stages of the criminal proceedings (Table~\ref{tab:mafia}): wiretap records (WR), arrest warrant (AW), and judgment (JU).
Each of these networks has 182 nodes, corresponding to the individuals involved in illicit activities. The WR network has 247 edges, representing the wiretap conversations transcribed by the police and considered relevant at first glance. The AW network contains 189 edges, which are meetings emerging from the physical surveillance. The JU network has 113 edges, which are wiretap conversations emerging from the trial and several other sources of evidence, including wiretapping and audio surveillance. These datasets are available as three $1$-mode matrices on Figshare~\cite{piccardi2016}.
\begin{table}[!ht]
\begin{adjustwidth}{-2.25in}{0in}
\centering
\caption{
{\bf Mafia networks properties.}}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\textbf{Network} & \textbf{MN} & \textbf{PC} & \textbf{SN} & \textbf{WR} & \textbf{AW} & \textbf{JU} \\ \thickhline
weights & weighted & weighted & weighted & unweighted & unweighted & unweighted\\ \hline
directionality & undirected & undirected & undirected & undirected & undirected & undirected \\ \hline
connectedness & false & false & false & false & false & false \\ \hline
n. of nodes $n$ & 101 & 100 & 156 & 182 & 182 & 182 \\ \hline
n. of isolated nodes $n_i$ & 0 & 0 & 5 & 0 & 36 & 93\\ \hline
n. of edges $m$ & 256 & 124 & 1619 & 247 & 189 & 113\\ \hline
n. of components $\lvert cc\rvert$ & 5 & 5 & 6 & 3 & 38 & 96\\ \hline
max avg. path length $\langle d\rangle$ for $cc$ & 3.309 & 3.378 & 2.361 & 3.999 & 4.426 & 3.722 \\ \hline
max shortest path length $d$ & 7 & 7 & 5 & 8 & 9 & 7 \\ \hline
density $\delta$ & 0.051 & 0.025 & 0.134 & 0.015 & 0.011 & 0.007 \\ \hline
avg. degree $\langle k \rangle$ & 5.07 & 2.48 & 20.76 & 2.71 & 2.08 & 1.24 \\ \hline
max degree $k$ & 24 & 25 & 75 & 32 & 29 & 13\\ \hline
avg. clust. coeff. $\langle C \rangle$ & 0.656 & 0.105 & 0.795 & 0.149 & 0.122 & 0.059\\ \hline
\end{tabular}
\label{tab:mafia}
\end{adjustwidth}
\end{table}
The Stockholm street gangs dataset was extracted from the National Swedish Police Intelligence (NSPI), which collects and registers information from different kinds of intelligence sources to identify gang membership in Sweden. The organization investigated here is a Stockholm-based street gang localised in the southern parts of Stockholm County, consisting of marginalised suburbs of the capital. All gang members are male, with high levels of violence, theft, robbery and drug-related crimes. Rostami and Mondani~\cite{Rostami2015} constructed the Surveillance (SV) network (Table~\ref{tab:others}). It contains data from the General Surveillance Register (GSR), which covers the period 1995–2010 and aims to facilitate access to the personal information revealed in law enforcement activities needed in police operations. SV is a weighted network with 234 nodes representing gang members. Some of them were no longer part of the gang in the period covered by the data and have been included as isolated nodes. The link weight counts the number of occurrences of a given edge. This dataset is available on Figshare~\cite{rostami_mondani_2015}.
Project Caviar~\cite{Morselli2008} was a unique investigation against hashish and cocaine importers operating out of Montreal, Canada. The network was targeted between 1994 and 1996 by a tandem investigation uniting the Montreal Police, the Royal Canadian Mounted Police, and other national and regional law-enforcement agencies from England, Spain, Italy, Brazil, Paraguay, and Colombia. In a 2-year period, 11 importation drug consignments were seized at different moments, and arrests only took place at the end of the investigation. The principal data sources are the transcripts of electronically intercepted telephone conversations between suspects, submitted as evidence during the trials of 22 individuals. Initially, 318 individuals were extracted because of their appearance in the surveillance data. From this pool, 208 individuals were not implicated in the trafficking operations. Most were simply named during the many transcripts of conversations, but never detected. Others who were detected had no clear participatory role within the network (e.g., family members or legitimate entrepreneurs). The final Caviar (CV) network was composed of 110 nodes. The $1$-mode matrix with weighted and directed edges is available on the UCINET~\cite{Borgatti2002} website.
(Link: \url{https://sites.google.com/site/ucinetsoftware/datasets/covert-networks/caviar}).
From this matrix, we extracted an undirected and weighted network with 110 nodes, which are criminals, and 205 edges, which represent the communication exchanges between them (see Table~\ref{tab:others}). Weights represent the level of communication activity.
Philippines Kidnappers data refer to the Abu Sayyaf Group (ASG)~\cite{Gerdes2014}, a violent non-state actor operating in the Southern Philippines. In particular, this dataset is related to the Salast movement founded in 1991 by Aburajak Janjalani, a terrorist native to the Southern Philippines. ASG is active in kidnappings and other kinds of terrorist attacks. The reconstructed $2$-mode matrix is available on UCINET~\cite{Borgatti2002} (Link: \url{https://sites.google.com/site/ucinetsoftware/datasets/covert-networks/philippinekidnappings}).
From the $2$-mode matrix, we constructed a weighted and undirected graph called Philippines Kidnappers (PK) (see Table~\ref{tab:others}). The PK network has 246 nodes and 2571 edges. Nodes are terrorist kidnappers of the ASG. Edges are the terrorist events they have attended. This network describes how many events in common any two kidnappers have.
\begin{table}[!ht]
\centering
\caption{
{\bf Street gangs and terrorist networks properties.}}
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Network} & \textbf{SV} & \textbf{CV} & \textbf{PK} \\ \thickhline
weights & weighted & weighted & weighted \\ \hline
directionality & undirected & undirected & undirected \\ \hline
connectedness & false & true & false \\ \hline
nr. of nodes $n$ & 234 & 110 & 246 \\ \hline
nr. of isolated nodes $n_i$ & 12 & 0 & 16 \\ \hline
nr. of edges $m$ & 315 & 205 & 2571 \\ \hline
nr. of components $\lvert cc \rvert$ & 13 & 1 & 26 \\ \hline
max avg. path length $\langle d\rangle$ for $cc$ & 3.534 & 2.655 & 3.034 \\ \hline
max shortest path length $d$ & 6 & 5 & 9 \\ \hline
density $\delta$ & 0.012 & 0.034 & 0.085 \\ \hline
avg. degree $\langle k \rangle$ & 2.69 & 3.73 & 20.9 \\ \hline
max degree $k$ & 34 & 60 & 78 \\ \hline
avg. clustering coeff. $\langle C \rangle$ & 0.15 & 0.335 & 0.753 \\ \hline
\end{tabular}
\label{tab:others}
\end{table}
Useful information about the Mafia, street gang and terrorist networks is provided in Tables~\ref{tab:mafia} and~\ref{tab:others}, including edge weights and directionality, connectedness, number of nodes (including isolated ones), number of edges, number of connected components, maximum average path length per connected component, maximum shortest path length, density, average degree, maximum degree and average clustering coefficient. The CV network is the only connected one (i.e., $\lvert cc \rvert = 1$); for this reason, in all the considered networks we computed the average path length for each connected component separately and report the maximum value.
\begin{figure}[!h]
\caption{{\bf Degree Distributions.} The degree distribution $p_k$ provides the probability that a randomly selected node in each criminal network has degree $k$. Same colors imply the networks belong to the same police investigation.}
\includegraphics[width=\textwidth]{Fig1.eps}
\label{fig:degdist}
\end{figure}
Fig.~\ref{fig:degdist} shows the degree distribution of each criminal network as a normalised histogram. MN, PC, WR, AW, JU, SV and CV have similar degree distributions, in which most nodes have a relatively small degree $k$ with values around $0$, $1$ or $2$, while a few nodes have a very large degree $k$ and are connected to many other nodes. SN and PK are the only networks with markedly different degree distributions, as most of their nodes have a large degree $k$. In particular, we note that most nodes in PK are densely connected and have degree $k=57$.
\subsection*{Design of Experiments}
In this section we describe the technical details of the experimental design. To understand how much partial knowledge of a criminal network may negatively affect investigations, we implemented several tests.
Since we are trying to understand how large the differences are depending on the type and amount of missing data, we set up the experiments around two main strategies: random edge removal and random node removal. The first case simulates the scenario in which LEAs fail to intercept some calls or to spot sporadic meetings among suspects (e.g., due to delays in obtaining a warrant). By node removal we mean that the selected nodes were removed, jointly with their incident edges, and afterwards reinserted into the networks as isolated nodes. The second case captures the scenario in which some suspects cannot be intercepted. For instance, if a criminal is known to be a boss but there is not enough evidence to place him under investigation, then that criminal appears as an isolated node with no incident edges.
Note that, for a better comparison among the networks, all graphs have been treated as unweighted, because AW and JU are. Furthermore, all the suspects appearing as isolated nodes in the original networks have been excluded. In fact, our input parameter was the edge list of the graph, which does not take into account nodes with no incident edges.
Algorithm~\ref{alg:pseudo} shows the pseudocode of our approach. In order to obtain the subgraphs, we started from the previously described datasets; then, we converted them into graphs (i.e., $G$) and, lastly, we pruned them (i.e., $G^{'}$) according to a fixed fraction $torem=10\%$. We opted for 10\% because the criminal networks considered are small, each having fewer than 250 nodes. Afterwards, we computed the spectral and matrix distances $d(G,G^{'})$ between the original and the pruned graphs. Each removal process was repeated a fixed number of times ($nrep=100$) and the results were averaged. Thus, the averaged distance values $\langle X \rangle$ and their standard deviations $\sigma$ were computed.
\begin{algorithm}
\caption{Pseudocode for computing the distances}
\label{alg:pseudo}
\begin{algorithmic}[1]
\State Parameter configuration: $nrep$, $torem$, and $check$
\State Read the dataset and convert it into a graph $G$
\If{$check = True$}
\State Isolate $torem$ of nodes
\Else
\State Remove $torem$ of random edges
\EndIf
\State Compute $S(G)$
\State Compute the matrices $A(G)$, $L(G)$, $\mathcal{L}(G)$
\For{\texttt{$torem$}}
{\For{$nrep$}
\State Create a pruned graph $G^{'}$ and compute $S^{'}(G^{'})$
\State Compute $\mathrm{d_{rootED}}(G,G^{'})$, $d_{A}(G,G^{'})$, $d_{L}(G,G^{'})$, and $d_{\mathcal{L}}(G,G^{'})$
\EndFor}
\State Compute $\langle X \rangle$, $\sigma$ $\forall$ $d(G,G^{'})$ $\in$ $nrep$
\EndFor
\end{algorithmic}
\end{algorithm}
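As an illustration, the procedure of Algorithm~\ref{alg:pseudo} can be sketched in a few lines of Python. This is not the code used for the experiments: the helper names and the use of the \texttt{networkx} and \texttt{numpy} libraries are our own assumptions, and only the adjacency spectral distance $d_A$ is shown.

```python
import random
import numpy as np
import networkx as nx

def adjacency_spectral_distance(G, H):
    """Euclidean distance between the sorted adjacency spectra of two graphs."""
    ev_g = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    ev_h = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(H)))
    return float(np.linalg.norm(ev_g - ev_h))

def prune_edges(G, torem, rng):
    """Remove a fraction `torem` of randomly chosen edges (missed wiretaps)."""
    H = G.copy()
    n_remove = int(torem * H.number_of_edges())
    H.remove_edges_from(rng.sample(list(H.edges()), n_remove))
    return H

def averaged_distance(G, torem=0.10, nrep=100, seed=0):
    """Average the distance between G and its pruned versions over nrep runs."""
    rng = random.Random(seed)
    d = [adjacency_spectral_distance(G, prune_edges(G, torem, rng))
         for _ in range(nrep)]
    return float(np.mean(d)), float(np.std(d))
```

The same loop applies to node isolation by replacing \texttt{prune\_edges} with a function that removes the incident edges of randomly chosen nodes.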
\section*{Results}
Here we present the results obtained from the network pruning experiments. The distance analysis between the real and the pruned networks is reported starting from the random edge removal approach (Fig.~\ref{fig:edges_dist}), and moving then to the analysis of the networks after node pruning (Fig.~\ref{fig:nodes_dist}). The plots show the distances between the original graphs and their pruned versions up to 10\% of edges ($F_e$) and nodes ($F_n$), respectively.
In both removal processes, $d_A$ displays a saturation effect that makes the results difficult to interpret. Indeed, with a fraction of approximately 2\% of removed elements (i.e., nodes/edges), the growth becomes flatter. Hence, this distance is not effective for highlighting the effects of missing data on criminal networks. Furthermore, from this metric it might seem that the two pruned networks of PK and SN show a greater deviation from their original counterparts, but this is due to the inner structure of this metric, which is highly influenced by the nodes' degree. In fact, the average degree of PK and SN (see Tables~\ref{tab:mafia} and~\ref{tab:others}) is significantly higher (i.e., $\langle k \rangle \simeq 21$) than in the other networks herein studied (i.e., $1 < \langle k \rangle < 4$); moreover, their different topology is also evident from their degree distribution (see Fig.~\ref{fig:degdist}). This is the reason why these networks seem to have a more significant detachment effect than the others; however, they too suffer from the saturation effect mentioned above as they grow. A similar behavior has also been encountered for $d_L$, with the same explanation.
On the other hand, the distance metric which most effectively captures the damage caused by a significant amount of missing data is $d_{\mathcal{L}}$, whose growth is linear. Indeed, the effects of $\langle k \rangle$ are smaller, as this aspect is compressed by the structure of this distance metric. This metric would thus seem to be the most effective one, compared to the other spectral distances, for understanding how much missing data affect the total knowledge of the network. A similar trend was also found for $\mathrm{d_{rootED}}$; however, for a better comparison between node and edge removal processes, we analysed this last metric in more detail by considering its $DeltaCon$ similarity $sim_{DC}$ (Fig.~\ref{fig:sim}).
The figure shows the difference between the original and pruned networks as the fraction of elements removed increases (i.e., $F_e$ for edges and $F_n$ for nodes).
Before pruning starts, we have $sim_{DC}=1$. Afterwards, the drop becomes more evident as the fraction $F$ increases. In addition, as expected, the node removal process affects the networks more significantly. This means that if the lack of data relates to sporadically missed wiretaps, or to just a few random connections between suspects, then the network structure is not as badly misinterpreted as in the case in which one suspect has not been tracked at all. Indeed, pruning the network by 2\% causes $sim_{DC}\geq0.8$ for edge pruning, compared with $sim_{DC}\simeq 0.2$ for node pruning. Therefore, even when a small number of suspects are not included in the investigations, this can lead to a very different network. The exclusion of the suspects may or may not be voluntary: it highly depends on the overall investigation process, starting from the very preliminary analysis, up to the judges' decision to allow warrants, or to exclude data considered irrelevant for the current investigation.
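For completeness, the $DeltaCon$ similarity itself admits a compact sketch (ours, not the metric authors' implementation): it builds the fast-belief-propagation affinity matrices $S=[I+\epsilon^{2}D-\epsilon A]^{-1}$ of the two graphs and compares them through the root Euclidean (Matusita) distance, with $sim_{DC}=1/(1+\mathrm{d_{rootED}})$.

```python
import numpy as np
import networkx as nx

def deltacon_similarity(G, H, eps=0.1):
    """DeltaCon similarity sim_DC in (0, 1]; sim_DC = 1 iff the graphs coincide."""
    nodes = sorted(set(G.nodes()) | set(H.nodes()))

    def affinity(graph):
        # Fast-belief-propagation affinity matrix S = [I + eps^2 D - eps A]^(-1)
        A = nx.to_numpy_array(graph, nodelist=nodes)
        D = np.diag(A.sum(axis=1))
        return np.linalg.inv(np.eye(len(nodes)) + eps**2 * D - eps * A)

    S1, S2 = affinity(G), affinity(H)
    # Root Euclidean (Matusita) distance; np.abs guards against tiny
    # negative entries arising from numerics before the square root.
    d = np.sqrt(np.sum((np.sqrt(np.abs(S1)) - np.sqrt(np.abs(S2))) ** 2))
    return 1.0 / (1.0 + d)
```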
\begin{figure}[!h]
\begin{center}
\caption{{\bf The removal effects of a fraction $F_e$ of edges by showing the distances $d(G, G^{'})$ between the original graphs with their pruned versions.} (A) Adjacency Spectral Distance $d_A$ (B) Laplacian Spectral Distance $d_L$ (C) Normalised Laplacian Spectral Distance $d_{\mathcal{L}}$ (D) Root Euclidean Distance $\mathrm{d_{rootED}}$.}
\includegraphics[width=\textwidth]{Fig2.eps}
\label{fig:edges_dist}
\end{center}
\end{figure}
\begin{figure}[!h]
\caption{{\bf The removal effects of a fraction $F_n$ of nodes by showing the distances $d(G, G^{'})$ between the original graphs with their pruned versions.} (A) Adjacency Spectral Distance $d_A$ (B) Laplacian Spectral Distance $d_L$ (C) Normalised Laplacian Spectral Distance $d_{\mathcal{L}}$ (D) Root Euclidean Distance $d_{rootED}$.}
\includegraphics[width=\textwidth]{Fig3.eps}
\label{fig:nodes_dist}
\end{figure}
\begin{figure}[!h]
\caption{{\bf DeltaCon similarity $sim_{DC}$ computation} (A) Edges removal process by the fraction $F_e$ (B) Nodes removal process by the fraction $F_n$.}
\includegraphics[width=\textwidth]{Fig4.eps}
\label{fig:sim}
\end{figure}
\section*{Discussion}
In this paper we analysed nine datasets of real criminal networks extracted from six police operations to investigate the effects of missing data. More specifically, three of them rely on Mafia operations (i.e., Montagna, Infinito, and Oversize) and the remaining ones refer to other criminal networks such as street gangs, drug trafficking, or terrorist networks (i.e., Stockholm street gangs, Caviar Project, Philippines Kidnappers).
Our study focused on a careful analysis of the datasets, in order to simulate the events where some of the data is missing. In particular, two different scenarios have been considered: \begin{enumerate*}[label=(\roman*)]
\item random edge removal, which simulates the case in which LEAs fail to intercept some calls or to spot sporadic meetings among suspects, and
\item node removal, which captures the hypothesis in which some suspects cannot be intercepted for some reason. For instance, if a criminal is known to be a boss, but there is not enough evidence to investigate them, then the criminal can be represented by an isolated node with no incident edges.
\end{enumerate*}
In order to quantify the difference between the original network and its pruned version, we computed several distance metrics, so as to identify the one which is most sensitive. Hence, we computed the Adjacency, Laplacian and Normalised Laplacian Spectral distances (i.e., $d_A(G,G^{'})$, $d_L(G,G^{'})$, and $d_{\mathcal{L}}(G,G^{'})$, respectively), plus the Root Euclidean Distance (i.e., $\mathrm{d_{rootED}}(G,G^{'})$), because this metric allows one to compute the DeltaCon similarity (i.e., $sim_{DC}$), which can quantify even small differences between two graphs in $[0,1]$. The pruning process involved removing up to 10\% of elements, that is, a fraction $F_e$ of edges or $F_n$ of nodes. This percentage has been chosen because the networks are quite small (fewer than 250 nodes per dataset).
Our analysis suggests that \begin{enumerate*}[label=(\roman*)] \item the spectral metric $d_{\mathcal{L}}(G,G^{'})$ is the best at capturing the expected linear growth of the differences between the incomplete graph and its complete counterpart; \item the node removal process is significantly more damaging than random edge removal. \end{enumerate*} The second finding implies that there is only a negligible error in terms of graph analysis when, for example, some wiretaps are missing: indeed, in terms of the $sim_{DC}$ drop, there is a 30\% difference from the real network for a version with 10\% of the edges pruned. On the other hand, it is crucial to be able to investigate the suspects in time, because excluding them from the investigation could produce a network very different from the real one, with up to an 80\% drop of $sim_{DC}$ on some networks.
A final consideration concerns the impossibility of conducting this type of analysis through Machine Learning, as it is currently practically impossible to obtain a sufficient number of reliable and complete datasets of real criminal networks to appropriately train a Neural Network.
For the future, we plan to extend the analysis by considering weights as well. This will allow us to conduct a comparative analysis of the missing data effects when not only the connections between nodes, but also their frequency, are known. Another interesting aspect to be considered is the behaviour of both criminal and general social networks after pruning. Lastly, using the knowledge gained from the network analysis herein presented, one could try to define an artificial network able to accurately simulate the behavior of real criminal networks.
\nolinenumbers
\bibstyle{plos2015}
\section{Introduction}
\ \ The key object of up-to-date circuit QED is a system comprised of
a qubit coupled to a quantum transmission-line resonator \cite{Koch07}.
Such systems are useful for both studying fundamental quantum phenomena and
for quantum information protocols including control, readout, and memory
\cite{Ashhab10, Wendin16}. A realistic QED system also includes electronics for driving and probing, while a general consideration should in addition include the inevitable dissipative environment and non-zero temperature.
In many cases, the temperature can be assumed equal to zero. However, there
are situations when it is important both to take into account and to monitor
the effective temperature \cite{Giazotto06, Albash17}. One of the reasons is
that it is a variable quantity, which depends on several factors \cite{Wilson10, Forn-Diaz16, Stehlik16}; for example, it significantly varies with
increasing driving power. Different aspects of the thermometry involving
qubits were studied in Refs.~[\onlinecite{Palacios-Laloy09, Fink10, Brunelli11, Higgins13, Ashhab14, Jevtic15, Ahmed18}].
Even though our consideration is quite general and can be applied to other
types of qubit-resonator systems, including semiconductor qubits \cite{Mi17}, for concreteness we concentrate on a transmon-type qubit in a cavity, of which a versatile study was presented in Ref.~[\onlinecite{Bianchetti09}].
These systems were studied from different perspectives, recently including such elaborate phenomena as the Landau-Zener-St\"{u}ckelberg-Majorana interference \cite{Gong16}. The impact of the temperature was studied in
Ref.~[\onlinecite{Fink10}], however the authors were mainly interested in
the resonator temperature. Here we explicitly take into account the non-zero
effective temperature impact on both resonator and qubit. First, we obtain
simplified but transparent analytical expressions for the transmission
coefficient in the semi-classical approximation, which ignores the
qubit-resonator correlations. Such a semiclassical approach is useful, but its validity should be checked \cite{Remizov16}. For this reason, we further develop our calculations by taking into account the qubit-resonator correlators.
Having obtained agreement with previous experiments, such as the ones in
Refs.~[\onlinecite{Bianchetti09, Jin15, Pietikainen17a}], we also consider
another emergent application, for memory devices. Different types of memory
devices, such as memcapacitors and meminductors, were introduced in addition
to memristors \cite{Diventra09, Pershin11}. See also Refs.~[\onlinecite{Peotta14, Guarcello16, Guarcello17}] for different proposals of superconducting memory elements. Quantum versions of memristors, memcapacitors, and meminductors were discussed in Refs.~[\onlinecite{Pfeiffer16, Shevchenko16, Salmilehto16, Li16, Sanz17}]. In particular, in Ref.~[\onlinecite{Shevchenko16}] it was
suggested that a charge qubit can behave as a quantum memcapacitor. We
consider here a transmon qubit in a cavity, instead of a charge qubit, as a possible candidate for the realization of the quantum memcapacitor. For this, we demonstrate that the transmon-resonator system can be described by
the relations defining a memcapacitor.
Overall, the paper is organized as follows. In Sec.~II we consider the
driven qubit-resonator system probed via quadratures of the transmitted
field. This is developed by taking temperature into account in Sec.~III,
where continuous measurements are considered. While we compare our results
with Ref.~[\onlinecite{Bianchetti09}], our approach there (also presented in
Appendix~A) was the semiclassical theory, valid for both dispersive and
resonant cases. Importantly, we verify the results with the more elaborated
calculations, taking into account two-operator qubit-resonator correlators,
of which the details are presented in Appendix~B. Section~IV is devoted to
the case of single-shot pulsed measurements. In Sec.~V, we consider cyclic
dynamics with hysteretic dependencies, needed for emergent memory
applications.
\section{Time-dependence of the quadratures}
The qubit-resonator system we consider is in the circuit-QED realization, as studied in Refs.~[\onlinecite{Koch07, Bianchetti09}]. The qubit is the transmon formed by an effective Josephson junction and the shunt capacitance $C_{\mathrm{B}}$; it is capacitively coupled to the transmission-line resonator via $C_{\mathrm{g}}$, as shown in the inset in Fig.~\ref{Fig:vs_time_1}. The resonator is driven via $C_{\mathrm{in}}$ and the measured value is the transmitted electromagnetic field after $C_{\mathrm{out}}$. In addition, the effective Josephson junction stands for the loop with two junctions controlled by an external magnetic flux $\Phi $; the respective Josephson capacitance and energy are denoted in the scheme by $C_{\mathrm{J}}$ and $E_{\mathrm{J}}$. The qubit characteristic charging energy is $E_{\mathrm{c}}=e^{2}/2C_{\Sigma }$ with $C_{\Sigma }=C_{\mathrm{J}}+C_{\mathrm{B}}+C_{\mathrm{g}}$.
\begin{figure}[t]
\includegraphics[width=8cm]{vs_time_1}
\caption{Time evolution of the quadratures, $Q$ (a) and $I$ (b), for the
parameters of Ref.~[\onlinecite{Bianchetti09}] for the two situations, when
the qubit was initialized in either the ground or excited state, denoted as
\textquotedblleft $\left\vert g\right\rangle $-response\textquotedblright\
and \textquotedblleft $\left\vert e\right\rangle $-response\textquotedblright , respectively. The inset presents the scheme of the
transmon-type qubit coupled to the transmission-line resonator.}
\label{Fig:vs_time_1}
\end{figure}
The driven transmon-resonator system \cite{Koch07, Bianchetti09, Bishop09}
is described by the Jaynes-Cummings Hamiltonian \cite{Schleich}
\begin{eqnarray}
H &=&\hbar \omega _{\mathrm{r}}a^{\dag }a+\hbar \frac{\omega _{\mathrm{q}}}{2}\sigma _{z}+\hbar \mathrm{g}\left( \sigma a^{\dag }+\sigma ^{\dag }a\right)
+ \label{JC} \\
&&+\hbar \xi \left( a^{\dag }e^{-i\omega t}+ae^{i\omega t}\right) . \notag
\end{eqnarray}
Here the transmon is considered in the two-level approximation, described by
the energy distance $\hbar \omega _{\mathrm{q}}$ between the levels and the
Pauli operators $\sigma _{i}$ and $\sigma _{\pm }=\left( \sigma _{x}\pm
i\sigma _{y}\right) /2$, where we rather use the ladder-operator notations $\sigma \equiv \sigma _{-}$ and $\sigma ^{\dag }\equiv \sigma _{+}$; the resonator is described by the resonant frequency $\omega _{\mathrm{r}}$ and the annihilation operator $a$; the transmon-resonator coupling constant $\mathrm{g}$ relates to the bare coupling $\mathrm{g}_{0}$ as $\mathrm{g}=\mathrm{g}_{0}\sqrt{E_{\mathrm{c}}/\left\vert \Delta -E_{\mathrm{c}}\right\vert }$ with $\Delta =\hbar \left( \omega _{\mathrm{q}}-\omega _{\mathrm{r}}\right) $ (this renormalization is due to the virtual transitions
through the upper transmon's states); the probing signal is described by the
amplitude $\xi $ and frequency $\omega $.
The system's dynamics obeys the master equation, which is described in
Appendix A. There, it is demonstrated that the Lindblad equation for the
density matrix can be rewritten\ as an infinite set of equations for the
expectation values. In Refs.~[\onlinecite{Bianchetti09, Shevchenko14}] the
set of equations was reduced to six complex equations for the single
expectation values and the two-operator correlators. Meanwhile, many
quantum-optical phenomena can be described within the semiclassical theory,
assuming all the correlation functions to factorize (e.g., Refs.~[\onlinecite{Mu92, Hauss08, Andre09, Macha14}]). Within this approach, the system's dynamics is described by a set of only three equations, Eqs.~(\ref{Maxwell-Bloch}), which are more suitable for analytic
consideration, as we will see below. This was also analyzed in Ref.~[\onlinecite{Shevchenko14}]; in particular, the robustness of the
semiclassical approximation was demonstrated even in the limit of small
photon number, at small probing amplitude $\xi $.
The observable value can be either transmission signal amplitude or the
quadrature amplitudes. The quadratures of the transmitted field $I$ and $Q$
are related to the cavity field $\left\langle a\right\rangle $ as follows \cite{Bianchetti09, Bishop09}:
\begin{equation}
I=2V_{0}\,\,\text{Re}\left\langle a\right\rangle \text{, \ }Q=2V_{0}\,\,\text{Im}\left\langle a\right\rangle , \label{IQ}
\end{equation}
where $V_{0}$ is a voltage related to the gain of the experimental
amplification chain \cite{Bishop09} and it is defined as \cite{Bianchetti09}
$V_{0}^{2}=Z\hbar \omega _{\mathrm{r}}\varkappa /4$ with $Z$ standing for
the transmission-line impedance. The transmission amplitude $A$ is given
\cite{Bishop09, Macha14} by the absolute value of $\left\langle
a\right\rangle $
\begin{equation}
A=\sqrt{I^{2}+Q^{2}}=2V_{0}\left\vert \left\langle a\right\rangle
\right\vert . \label{A}
\end{equation}
\begin{figure}[t]
\includegraphics[width=8cm]{vs_time_2}
\caption{Time evolution of the quadratures, $Q$ (a) and $I$ (b), for
non-zero temperature $T$. The situation when the qubit was initialized in
the excited state is considered. The parameters are the same as in Fig.~\protect\ref{Fig:vs_time_1}, besides the temperature, so that the solid red
curves for the low temperature repeat the ones from the previous figure.}
\label{Fig:vs_time_2}
\end{figure}
As an illustration of the semiclassical theory, presented in more detail in
Appendix A, consider the experimental realization in Ref.~[\onlinecite{Bianchetti09}]. There, the qubit was initialized in either
ground or excited state and then, by means of either continuous or pulsed
measurements, the quadratures of the transmitted field were probed.
Correspondingly, we make use of Eqs.~(\ref{IQ})~and~(\ref{Maxwell-Bloch}),
which include the resonator relaxation rate $\varkappa $\ and the qubit
decoherence rate $\Gamma _{2}=\Gamma _{\phi }+\Gamma _{1}/2$ with $\Gamma
_{\phi }$ and $\Gamma _{1}$ being the intrinsic qubit pure dephasing and
relaxation rates. We take the following parameters \cite{Bianchetti09}: $\omega _{\mathrm{r}}/2\pi =6.4425$~GHz, $\omega _{\mathrm{q}}/2\pi =4.01$~GHz, $\mathrm{g}_{0}/2\pi =134$~MHz, $\varkappa /2\pi =1.7$~MHz, $\Gamma _{1}/2\pi =0.2$~MHz, $\Gamma _{2}=\Gamma _{1}/2$, $E_{\mathrm{c}}/h=232$~MHz, and $V_{0}=5$~mV, where the latter was chosen as a fitting parameter.
The results for low temperature (i.e. for $k_{\mathrm{B}}T\ll \hbar \omega
_{_{\mathrm{q}}}$) are presented in Fig.~\ref{Fig:vs_time_1}. Note the
agreement with the experimental observations in Ref.~[\onlinecite{Bianchetti09}]; see also the detailed calculations in Appendix~B below. In Ref.~[\onlinecite{Bianchetti09}] it is discussed in detail
that the relaxation of the quadratures is determined, for the ground-state initialization, by the resonator rate $\varkappa $ only, while for the excited-state initialization it is determined by the collaborative evolution
of the qubit-resonator system. For example, one can observe that the
relaxation of the quadratures in Fig.~\ref{Fig:vs_time_1} for the
\textquotedblleft $\left\vert e\right\rangle $-response\textquotedblright\
happens in two stages, during the times $T_{\varkappa }=2\pi /\varkappa
\simeq 0.6\mu $s and $T_{1}=2\pi /\Gamma _{1}\gg T_{\varkappa }$.
\section{Thermometry with continuous measurements}
In the previous section we calculated the low-temperature behaviour of the observable quadratures for the qubit-resonator system and illustrated this
in Fig.~\ref{Fig:vs_time_1}. Having obtained the agreement with the
experimental observations of Ref.~\cite{Bianchetti09}, we can proceed with
posing other problems for the system. Consider now the sensitivity of the system to changes of temperature. How does the behaviour of the observables change? Is this useful for single-qubit thermometry? To respond to such questions, we describe below both the dynamical and the stationary behaviour at non-zero temperature.
In Fig.~\ref{Fig:vs_time_2} we plot the time evolution of the quadratures
for the same parameters as in Fig.~\ref{Fig:vs_time_1} besides the
temperature, which now is considered non-zero. Figure~\ref{Fig:vs_time_2}
demonstrates that both evolution and stationary values (at long times,
independent of initial conditions) are strongly temperature dependent.
To further explore the temperature dependence, we now consider the
steady-state measurements. In equilibrium, the observables are described by
the steady-state values of $\left\langle a\right\rangle $, $\left\langle
\sigma \right\rangle $, and $\left\langle \sigma _{z}\right\rangle $. The
steady-state solution for the weak driving amplitude in the semiclassical
approximation is the following (for details see Appendix A):
\begin{equation}
\left\langle a\right\rangle =-\xi \frac{\delta \omega _{\mathrm{q}}^{\prime }}{\left\langle \sigma _{z}\right\rangle \mathrm{g}^{2}+\delta \omega _{\mathrm{q}}^{\prime }\delta \omega _{\mathrm{r}}^{\prime }}, \label{a}
\end{equation}
where
\begin{eqnarray}
\delta \omega _{\mathrm{r}}^{\prime } &=&\omega _{\mathrm{r}}-\omega -i\frac{\varkappa }{2},\text{ }\delta \omega _{\mathrm{q}}^{\prime }=\omega _{\mathrm{q}}-\omega -i\frac{\Gamma _{2}}{z_{0}}, \\
z_{0} &=&\tanh \left( \frac{\hbar \omega _{_{\mathrm{q}}}}{2k_{\mathrm{B}}T}\right) . \notag
\end{eqnarray}
In equilibrium, the qubit energy-level populations are defined by the
temperature $T$: $\left\langle \sigma _{z}\right\rangle =-z_{0}$ \cite{Jin15}. Importantly, formula~(\ref{a}) bears the information about the qubit
temperature and via formula (\ref{A}) brings this dependence to the
observables.
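As a numerical illustration of how Eq.~(\ref{a}) carries the temperature dependence to the observables, consider the following sketch (our own, not part of the original calculations; the value of the renormalized coupling is an assumption, $\mathrm{g}=\mathrm{g}_{0}\sqrt{E_{\mathrm{c}}/\left\vert \Delta -E_{\mathrm{c}}\right\vert }\approx 2\pi \times 39$~MHz for the Sec.~II parameters):

```python
import numpy as np

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K

def cavity_field(w, T, wr=2*np.pi*6.4425e9, wq=2*np.pi*4.01e9,
                 g=2*np.pi*39e6, kappa=2*np.pi*1.7e6,
                 gamma2=2*np.pi*0.1e6, xi=1.0):
    """Steady-state <a> from Eq. (a), with <sigma_z> = -z0 set by temperature T."""
    z0 = np.tanh(HBAR * wq / (2 * KB * T))
    dwr = wr - w - 1j * kappa / 2    # resonator complex detuning
    dwq = wq - w - 1j * gamma2 / z0  # qubit complex detuning
    sz = -z0                         # thermal-equilibrium qubit population
    return -xi * dwq / (sz * g**2 + dwq * dwr)
```

The quadratures and the amplitude then follow from Eqs.~(\ref{IQ}) and~(\ref{A}) as $2V_{0}$ times the real part, imaginary part, and modulus of the returned value.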
Formula~(\ref{a}) is quite general. To start with, for an isolated resonator
(without qubit) at $\mathrm{g}=0$ this gives
\begin{equation}
\left\vert \left\langle a\right\rangle \right\vert ^{2}=\xi ^{2}\frac{1}{\delta \omega _{\mathrm{r}}^{2}+\varkappa ^{2}/4}, \label{a0}
\end{equation}
which defines the resonator width.
Consider now the \textit{dispersive} limit, where $\Delta /\hbar \equiv \omega _{\mathrm{q}}(\Phi )-\omega _{\mathrm{r}}\gg \mathrm{g},\,\,\delta \omega _{\mathrm{r}}$. Then we have for the transmission amplitude
\begin{equation}
\left\vert \left\langle a\right\rangle \right\vert ^{2}\approx \xi ^{2}\frac{\Delta ^{2}}{\left( \left\langle \sigma _{z}\right\rangle \mathrm{g}^{2}+\Delta \delta \omega _{\mathrm{r}}\right) ^{2}+\Delta ^{2}\varkappa ^{2}/4}. \label{general}
\end{equation}
This, in particular, gives the maxima for the transmission at
\begin{equation}
\delta \omega _{\mathrm{r}}=-\left\langle \sigma _{z}\right\rangle \frac{\mathrm{g}^{2}}{\Delta }\equiv -\left\langle \sigma _{z}\right\rangle \chi .
\label{dispersive}
\end{equation}
Then, for the ground/excited states with $\left\langle \sigma
_{z}\right\rangle =\mp 1$, one obtains the two dispersive shifts for the
maximal transmission, $\delta \omega _{\mathrm{r}}=\pm \chi =\pm \mathrm{g}_{0}^{2}E_{\mathrm{c}}/\Delta (\Delta -E_{\mathrm{c}})$, respectively. In thermal equilibrium, equation~(\ref{dispersive}) for the resonance frequency shift gives $\delta \omega _{\mathrm{r}}(T)=\frac{\mathrm{g}^{2}}{\Delta }\tanh \left( \frac{\hbar \omega _{_{\mathrm{q}}}}{2k_{\mathrm{B}}T}\right) $.
\begin{figure}[t]
\includegraphics[width=8cm]{phase_shift}
\caption{Transmission amplitude and the frequency shift. First, the inset
shows the transmission amplitude $A$ versus the frequency $\protect\omega $
when the qubit is either in the ground state (solid line) or in the excited
state (dashed line). Then, the main panel demonstrates the frequency shift $\protect\delta \protect\omega _{\mathrm{r}}=\protect\omega _{\mathrm{r}}-\protect\omega $, corresponding to the frequency $\protect\omega $ at which the transmission is maximal, plotted as a function of temperature $T$. Here the transmission amplitude is normalized with $A_{0}=4V_{0}\mathrm{g}\protect\xi /\varkappa $. }
\label{Fig:phase_shift}
\end{figure}
Making use of Eqs.~(\ref{A})~and~(\ref{general}) in thermal equilibrium,
when $\left\langle \sigma _{z}\right\rangle =-z_{0}$, in the inset of Fig.~\ref{Fig:phase_shift} we plot the frequency dependence of the transmission
amplitude for the parameters of Ref.~[\onlinecite{Bianchetti09}]. We plot
two curves, where the solid one corresponds to the low-temperature limit ($z_{0}=1$) with the system in the ground state, while the dashed line is
plotted in the high-temperature limit ($z_{0}=0$), when the ground and excited states are equally populated. The maximal frequency shift
is denoted with $\chi $. Note that the low-temperature limit (solid line in
the inset), with $\left\langle \sigma _{z}\right\rangle \sim -1$,
corresponds to the ground state, while the high-temperature limit (dashed
line), with $\left\langle \sigma _{z}\right\rangle \sim 0$, is equivalent to
the absence of the qubit, at $\mathrm{g}=0$.
For varying temperature, the frequency shift is plotted in the main panel of
Fig.~\ref{Fig:phase_shift}, for the parameters of Ref.~[\onlinecite{Bianchetti09}]. We note that a similar dependence can be found in
Fig.~4.2 of Ref.~[\onlinecite{Bianchetti10}]; the difference is that, in the case of Refs.~[\onlinecite{Bianchetti09, Bianchetti10}], a similar change of $\left\langle \sigma _{z}\right\rangle $ from $-1$ to $0$ was due to
varying the driving power. When driven with low power, the qubit stayed in the ground state with $\left\langle \sigma _{z}\right\rangle =-1$, while with increasing power its stationary state tended to equally populated states with $\left\langle \sigma _{z}\right\rangle =0$. To this case of varying the qubit driving we further devote Appendix~C.
The temperature dependence in Fig.~\ref{Fig:phase_shift} becomes apparent at
$T\geq T^{\ast }$, where $T^{\ast }=0.1\hbar \omega _{_{\mathrm{q}}}/k_{\mathrm{B}}$ is the characteristic temperature, which, say, for $\omega _{_{\mathrm{q}}}/2\pi =4$~GHz is quite low, $T^{\ast }=20$~mK. This means that
such measurements may be useful for realizing the \textit{one-qubit
thermometry} for $T\geq T^{\ast }$.
\begin{figure}[t]
\includegraphics[width=8cm]{resonant}
\caption{Transmission $A^{2}$, normalized with its maximal value $A_{\max }$, versus the frequency shift $\protect\delta \protect\omega $ for different
values of temperature $T$.}
\label{Fig:resonant}
\end{figure}
It is important to note that Eq.~(\ref{a}) was obtained without making use
of the dispersive limit, and thus this is applicable to the opposite limit.
Consider in this way $\omega _{_{\mathrm{q}}}(\Phi )=\omega _{_{\mathrm{r}}}$, which is the \textit{resonant} limit, $\Delta =0$. With equal detunings for both qubit and resonator, $\omega _{\mathrm{q}}-\omega =\omega _{\mathrm{r}}-\omega \equiv \delta \omega $, we can use the formula for the photon operator, Eq.~(\ref{a}), which gives
\begin{equation}
\left\vert \left\langle a\right\rangle \right\vert ^{2}\approx \xi ^{2}\frac{\delta \omega ^{2}+\Gamma _{2}^{2}/z_{0}^{2}}{\left( \left\langle \sigma _{z}\right\rangle \mathrm{g}^{2}+\delta \omega ^{2}\right) ^{2}+\delta \omega ^{2}\left( \Gamma _{2}/z_{0}+\varkappa /2\right) ^{2}}.
\label{resonant_limit}
\end{equation}
With this we plot the transmission amplitude as a function of the frequency
detuning in Fig.~\ref{Fig:resonant} for different temperatures. Formula~(\ref{resonant_limit}) describes maxima which, assuming large cooperativity $\mathrm{g}^{2}/\Gamma _{2}\varkappa \gg 1$, are situated at $\delta \omega =0 $ (the high-temperature peak) and at
\begin{equation}
\delta \omega \approx \pm \mathrm{g}\tanh ^{1/2}\left( \frac{\hbar \omega
_{_{\mathrm{q}}}}{2k_{\mathrm{B}}T}\right) . \label{dw}
\end{equation}
The latter formula, in particular, in the low-temperature limit describes
the peaks at $\delta \omega =\pm \mathrm{g}$, which is known as the vacuum
Rabi splitting. Note that recently such vacuum Rabi splitting was also
demonstrated in silicon qubits in Ref.~[\onlinecite{Mi17}]. With increasing qubit temperature, equation~(\ref{dw}) implies a temperature-dependent resonance-frequency shift. We note that this shift is again described by
the factor $\left\langle \sigma _{z}\right\rangle =-z_{0}$. If we assume
here the qubit in the ground state, $\left\langle \sigma _{z}\right\rangle
=-1$, then the increase of the temperature would result in suppressing the
peaks at $\delta \omega =\pm \mathrm{g}$, without their shift, in agreement
with Ref.~[\onlinecite{Fink10}].
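A minimal numerical sketch of Eq.~(\ref{dw}) (our own illustration; the qubit frequency and the coupling value are assumptions borrowed from the Sec.~II parameters):

```python
import numpy as np

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K

def rabi_half_splitting(T, wq=2*np.pi*4.01e9, g=2*np.pi*134e6):
    """Position +delta_omega of the vacuum-Rabi peak at temperature T, Eq. (dw)."""
    return g * np.sqrt(np.tanh(HBAR * wq / (2 * KB * T)))
```

In the low-temperature limit the function returns the bare splitting $\delta \omega =\mathrm{g}$, and it decreases monotonically with temperature.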
\section{Thermometry with pulsed measurements}
Above we have considered the case when the measurement is done in a weak
continuous manner. Then, the resonator probes the averaged qubit state,
defined by $\left\langle \sigma _{z}\right\rangle $, and changing the qubit
state resulted in shifting the position of the resonant transmission.
Alternatively, the measurements can be done with the single-shot readout
\cite{Reed10, Jin15, Jerger16b, Jerger16, Reagor16}. In this case, in each
measurement, the resonator would see the qubit in either the ground or
excited state, with $\left\langle \sigma _{z}\right\rangle $ equal to $-1$
or $1$, respectively \cite{Vijay10}. The probability of finding the qubit in the excited state is $P_{+}$, and in the ground state $P_{-}=1-P_{+}$. Then, the
weighted (averaged over many measurements) transmission amplitude can be
calculated as follows
\begin{equation}
A=P_{-}A_{-}+P_{+}A_{+},
\end{equation}
where $A_{\pm }$ describe the transmission amplitudes calculated for $\left\langle \sigma _{z}\right\rangle =\pm 1$, respectively, as given by
Eq.~(\ref{general}).
We may now consider two cases: a qubit driven resonantly, and one where the excitation happens due to the temperature. In the former case, when a qubit
is driven with frequency $\omega _{\mathrm{d}}=\omega _{\mathrm{q}}$ and
amplitude $\hbar \Omega $, the excited qubit state is populated with the
probability
\begin{eqnarray}
P_{+}(\Omega ) &=&\frac{1}{2}\left[ 1+\overline{\Omega }^{-2}\right] ^{-1},
\label{resonant} \\
\overline{\Omega } &=&\frac{1}{2}\Omega \sqrt{T_{1}T_{2}}. \notag
\end{eqnarray}
This is obtained from the full formula for a qubit excited near the resonant
frequency \cite{Shevchenko14}
\begin{equation}
P_{+}=\frac{1}{2}\frac{\omega _{\mathrm{q}}^{2}J_{1}^{2}\left( \frac{\Omega
}{\omega _{\mathrm{d}}}\right) }{\omega _{\mathrm{q}}^{2}J_{1}^{2}\left(
\frac{\Omega }{\omega _{\mathrm{d}}}\right) +\frac{T_{2}}{T_{1}}\left(
\omega _{\mathrm{q}}-\omega _{\mathrm{d}}\right) ^{2}+\frac{1}{T_{1}T_{2}}},
\label{non-resonant}
\end{equation}
where we then take $\omega _{\mathrm{d}}=\omega _{\mathrm{q}}$ and $J_{1}(x)\approx x/2$.
In thermal equilibrium the upper-level occupation probability is defined by
the Maxwell-Boltzmann distribution, $\left\langle \sigma _{z}\right\rangle
=-z_{0}$ \cite{Jin15}, so that $P_{+}=\frac{1}{2}\left[ 1+\left\langle \sigma _{z}\right\rangle \right] $, or
\begin{equation}
P_{+}(T)=\frac{1}{2}\left[ 1-\tanh \left( \frac{\hbar \omega _{\mathrm{q}}}{
2k_{\mathrm{B}}T}\right) \right] . \label{T-excited}
\end{equation}
With Eqs.~(\ref{resonant}) and~(\ref{T-excited}) we calculate the
transmission amplitude when the qubit is either resonantly driven
(Fig.~\ref{Fig:two-peaks}) or in thermal equilibrium
(Fig.~\ref{Fig:thermometer}), respectively. For the former case we plot the
frequency dependence of the transmission amplitude in
Fig.~\ref{Fig:two-peaks}. A similar dependence would appear for varying
temperature; in Fig.~\ref{Fig:thermometer} we rather present the
transmission amplitude versus temperature for a fixed frequency $\omega
=\omega _{\mathrm{r}}+\chi =\omega _{\mathrm{r}}-\left\vert \chi \right\vert
$, where the excited-state peak appears. For the calculations we took here
the parameters close to the ones of Ref.~[\onlinecite{Jin15}]:~$\omega
_{\mathrm{r}}/2\pi =10.976$~GHz, $\omega _{\mathrm{q}}/2\pi =4.97$~GHz,
$\chi /2\pi =-4$~MHz, and we have chosen $\varkappa /2\pi =1$~MHz. Again, as
above, we observe a strong dependence on temperature. Advantages of probing
the qubit state in a similar manner were discussed in
Ref.~[\onlinecite{Reed10}]. There, it was proposed to probe a
\textit{driven} qubit state, while our proposal here relates to the
\textit{thermal}-equilibrium measurement and consists in providing a
sensitive tool for thermometry. Indeed, a similar temperature dependence was
recently observed by Jin et al. in Ref.~[\onlinecite{Jin15}]. In that work
the authors studied the excited-state occupation probability in a transmon
with variable temperature. For comparison with that publication, in the
inset of Fig.~\ref{Fig:thermometer} we also present the low-temperature
region, with a linear scale.
\begin{figure}[t]
\includegraphics[width=8cm]{two-peaks}
\caption{Transmission amplitude for monitoring the state of a driven qubit.
The frequency $\protect\omega $ is in the range from $\protect\omega
_{\mathrm{r}}-3\left\vert \protect\chi \right\vert $ to $\protect\omega
_{\mathrm{r}}+3\left\vert \protect\chi \right\vert $. The peak corresponding
to the ground state is at $\protect\omega _{\mathrm{r}}+\left\vert
\protect\chi \right\vert =2\protect\pi \cdot 10.98$~GHz. The peak appearing
for non-zero occupation of the excited state is at $\protect\omega
=\protect\omega _{\mathrm{r}}-\left\vert \protect\chi \right\vert
=2\protect\pi \cdot 10.972$~GHz. The height of the latter is defined by the
normalized driving amplitude $\overline{\Omega }$.}
\label{Fig:two-peaks}
\end{figure}
\begin{figure}[t]
\includegraphics[width=8cm]{thermometer}
\caption{Temperature dependence of the transmission amplitude~$A$ at the
frequency corresponding to the excited-state peak, which is $\protect\omega
=\protect\omega _{\mathrm{r}}-\left\vert \protect\chi \right\vert $ in the
previous figure. The inset demonstrates the low-temperature region.}
\label{Fig:thermometer}
\end{figure}
\section{Memcapacitance}
Having established the agreement of the theory with the experiments, we now
wish to explore other applications. In this section we consider
possibilities for memory devices, such as memcapacitors.
In general, a memory device with the input $u(t)$ and the output $y(t)$, by
definition, is described by the following relations \cite{Diventra09}
\begin{eqnarray}
y(t) &=&g(\mathbf{x},u,t)u(t), \label{y(t)} \\
\mathbf{\dot{x}} &=&\mathbf{f}(\mathbf{x},u,t). \label{x_dot}
\end{eqnarray}
Here $g$ is the response function, while the vector function $\mathbf{f}$
defines the evolution of the internal variables, denoted as a vector
$\mathbf{x}$. Depending on the choice of the circuit variables $u$ and $y$,
the relations (\ref{y(t)}-\ref{x_dot}) describe memristive, meminductive, or
memcapacitive systems. Relevant for our consideration is the particular case
of the voltage-controlled memcapacitive systems, defined by the relations
\begin{eqnarray}
q(t) &=&C_{\mathrm{M}}(\mathbf{x},V,t)V(t), \label{q(t)} \\
\mathbf{\dot{x}} &=&\mathbf{f}(\mathbf{x},V,t). \label{x_dot_2}
\end{eqnarray}
Here the response function $C_{\mathrm{M}}$ is called the memcapacitance.
Relations (\ref{y(t)}-\ref{x_dot}) and their particular case,
Eqs.~(\ref{q(t)}-\ref{x_dot_2}), were related to diverse systems, as
described e.g. in the review article~[\onlinecite{Pershin11}]. It was shown
that the reinterpretation of known phenomena in terms of these relations
makes them useful for memory devices. However, until recently their quantum
analogues remained unexplored. Then, some similarities and distinctions from
classical systems were analyzed in Refs.~[\onlinecite{Pfeiffer16,
Shevchenko16, Salmilehto16, Li16, Sanz17}]. In particular, it was argued
that in the case of quantum systems, the circuit input and output variables
$u$ and $y$ should be interpreted as quantum-mechanically averaged values or
in the ensemble interpretation \cite{Shevchenko16, Salmilehto16}. Detailed
analysis of diverse systems \cite{Pfeiffer16, Shevchenko16, Salmilehto16,
Li16, Sanz17} demonstrated that, being described by relations
(\ref{y(t)}-\ref{x_dot}), quantum systems can be considered as quantum
memristors, meminductors, and memcapacitors. These indeed displayed the
pinched-hysteresis loops for periodic input, while the frequency dependence
may significantly differ from that of the related classical devices. This
distinction is due to the probabilistic character of measurements in quantum
mechanics. Note that the \textquotedblleft pinched-hysteretic
loop\textquotedblright\ dependence is arguably the most important property
of memristors, meminductors, and memcapacitors.\cite{Diventra09, Pershin11}
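To make the defining relations (\ref{q(t)}-\ref{x_dot_2}) concrete, the following Python sketch integrates a toy \textit{classical} voltage-controlled memcapacitive system. The response and evolution functions chosen here are purely illustrative assumptions, not a model of the transmon discussed below.

```python
import numpy as np

def simulate_memcapacitor(C0=1.0, beta=0.5, gamma=5.0, omega=1.0,
                          t_max=20.0, dt=1e-3):
    # Toy realization of Eqs. (q(t))-(x_dot_2):
    #   q(t)  = C_M(x) V(t),  with C_M(x) = C0 (1 + beta x),
    #   dx/dt = gamma (V^2 - x)  (the internal variable lags behind V^2).
    # All parameter values are illustrative.
    t = np.arange(0.0, t_max, dt)
    V = np.sin(omega * t)
    x = np.zeros_like(t)
    for i in range(1, len(t)):          # forward-Euler step for dx/dt
        x[i] = x[i - 1] + dt * gamma * (V[i - 1] ** 2 - x[i - 1])
    q = C0 * (1.0 + beta * x) * V
    return t, V, q
```

Since $q=0$ whenever $V=0$, the $q$--$V$ curve is pinched at the origin, while the lag of $x$ behind $V^{2}$ opens a loop of finite area.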
It is thus our goal in this section to demonstrate how the evolution
equations for a qubit-resonator system can be written in the form of the
memcapacitor relations, Eqs.~(\ref{q(t)}-\ref{x_dot_2}). This would allow
us to identify the related input and output variables, the internal-state
variables, and the response and evolution functions. As further evidence, we
will demonstrate one particular example, in which the pinched-hysteresis
loop appears for resonant driving.
The transmon treated as a memcapacitor is depicted in
Fig.~\ref{Fig:memcapacitor}(a). As an input of such a memcapacitor we
assume the resonator antinode voltage $V$ (how a transmon is coupled to a
transmission-line resonator was shown in Fig.~\ref{Fig:vs_time_1}), while
the output is the charge $q$ on the external plate of the gate capacitor
$C_{\mathrm{g}}$. One should differentiate between the externally applied
voltage, $V_{\mathrm{g}}=V_{A}\sin \omega t$, and the quantized antinode
voltage,
\begin{equation}
V=\left\langle \widehat{V}\right\rangle =V_{\mathrm{rms}}\left\langle
ae^{-i\omega t}+a^{\dag }e^{i\omega t}\right\rangle =2V_{\mathrm{rms}}
\text{Re}\left\langle ae^{-i\omega t}\right\rangle , \label{V}
\end{equation}
where $V_{\mathrm{rms}}=\sqrt{\hbar \omega _{\mathrm{r}}/2C_{\mathrm{r}}}$
is the root-mean-square voltage of the resonator, defined by its resonant
frequency $\omega _{\mathrm{r}}$ and capacitance
$C_{\mathrm{r}}$.\cite{Koch07} This makes the difference from a charge
qubit coupled directly to a gate, such as in
Ref.~[\onlinecite{Shevchenko16}]. According to Eq.~(\ref{V}), the voltage
is related to the measurable values, the resonator output field quadratures,
Eq.~(\ref{IQ}). The charge $q$ is related to the voltage $V$ and the island
charge $2e\left\langle n\right\rangle $ ($\left\langle n\right\rangle $ is
the average Cooper-pair number) as follows \cite{Shevchenko16}:
\begin{equation}
q=C_{\mathrm{geom}}V+\frac{C_{\mathrm{g}}}{C_{\Sigma }}2e\left\langle
n\right\rangle \equiv C_{\mathrm{M}}V, \label{q}
\end{equation}
where we formally introduced the memcapacitance $C_{\mathrm{M}}$ as a
proportionality coefficient between the input $V$ and the output $q$. Given
the leading role of the shunt capacitance, here we have
$C_{\Sigma }=C_{J}+C_{\mathrm{g}}+C_{B}\sim C_{B}$ and
$C_{\mathrm{geom}}=C_{\mathrm{g}}(C_{J}+C_{B})/C_{\Sigma }\approx
C_{\mathrm{g}}$. The number operator $n$ is defined by the qubit Pauli
matrix $\sigma _{y}$: $n=\frac{1}{4}\sqrt{\hbar \omega _{\mathrm{q}}/E_{
\mathrm{c}}}\sigma _{y}$. This allows us to rewrite Eq.~(\ref{q}) as
\begin{equation}
\widetilde{q}=\text{Re\negthinspace }\left\langle a\right\rangle \cos \omega
t-\text{Im\negthinspace }\left\langle a\right\rangle \sin \omega t+\lambda
\left\langle \sigma _{y}\right\rangle , \label{q_}
\end{equation}
where $\widetilde{q}=q/2C_{\mathrm{g}}V_{\mathrm{rms}}$ and $\lambda
=(e/4C_{\Sigma }V_{\mathrm{rms}})\sqrt{\hbar \omega _{\mathrm{q}}/E_{
\mathrm{c}}}$. We note that in related experiments, not only the
quadratures $I$ and $Q$ (which define Re$\left\langle a\right\rangle $ and
Im$\left\langle a\right\rangle $), but also the qubit state, defined by the
values $\left\langle \sigma _{z}\right\rangle $ and $\left\langle \sigma
_{y}\right\rangle $, can be reliably probed, see
Refs.~[\onlinecite{Filipp09, Bianchetti09, McClure16, Gong16, Jerger17}].
Importantly, the memcapacitor's dynamics, i.e.~$q(t)$, is defined by the
rich dynamics of both the resonator and the qubit, via $\left\langle
a\right\rangle $ and $\left\langle \sigma _{y}\right\rangle $, respectively.
\begin{figure}[t]
\includegraphics[width=8cm]{memcapacitor}
\includegraphics[width=8cm]{hysteresis}
\caption{A transmon qubit can be regarded as a memcapacitor. (a) The scheme
of a transmon-type qubit is shown to be equivalent to a memcapacitor
$C_{\mathrm{M}}$, the symbol of which is shown to the right. (b) The
pinched-hysteretic curve in the voltage-charge plane as a fingerprint of
memcapacitive behaviour.}
\label{Fig:memcapacitor}
\end{figure}
Importantly, here we have written the transmon-resonator equations in the
form of the first memcapacitor relation, Eq.~(\ref{q(t)}). We can see that
the role of the internal variable is played by the qubit charge
$\left\langle n\right\rangle $. In its turn, the qubit state is defined by
the Lindblad equation, which now takes the place of the second memcapacitor
relation, Eq.~(\ref{x_dot_2}). Such a formulation demonstrates that our
qubit-resonator system can be interpreted as a quantum memcapacitor, which
is schematically displayed in Fig.~\ref{Fig:memcapacitor}(a).
The above formulas allow us to plot the charge--versus--voltage diagram. For
this we assume now that the qubit is driven by the field with amplitude
$\Omega $ and frequency $\omega _{\mathrm{d}}$, which is resonant, $\omega
_{\mathrm{d}}=\omega _{\mathrm{q}}(\Phi )$. This induces Rabi oscillations
in the qubit with the frequency $\Omega $. By numerically solving
Eqs.~(5a-5d) in Ref.~[\onlinecite{Bianchetti09}], we plot the $q$ versus
$V$ diagram in Fig.~\ref{Fig:memcapacitor}(b). To obtain the
pinched-hysteresis-type loop, we take the driving amplitude $\Omega
=2\omega _{\mathrm{r}}$, which corresponds to the strong-driving regime.
Other system parameters are the same as used above (the ones of
Ref.~[\onlinecite{Bianchetti09}]) and $\lambda =0.2$. In addition we have
taken Re$\left\langle a\right\rangle =0$ and Im$\left\langle a\right\rangle
=1$. Here we note that $\left\langle a\right\rangle $ is a slow function of
time, as demonstrated in Fig.~\ref{Fig:vs_time_1}. (The characteristic time
for this is $2\pi /\varkappa $, which is $\gg 2\pi /\Omega $.) Moreover, the
diagram is defined not only by $\lambda $ (which is a constant), but also by
$\left\langle a\right\rangle $ (which can be adjusted, for example, by
choosing a moment of time in Fig.~\ref{Fig:vs_time_1}); so, for a different
value of $\lambda $ another value of $\left\langle a\right\rangle $ can be
taken. We note that the shaded area in Fig.~\ref{Fig:memcapacitor}(b)
equals the energy consumed by the memcapacitor, $\int
VIdt$.\cite{Pershin11}
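The origin of the pinched shape can also be seen in a simplified picture. The following Python sketch is an idealization: instead of solving the Bloch-type equations of Ref.~[\onlinecite{Bianchetti09}], it assumes undamped Rabi oscillations $\left\langle \sigma _{y}\right\rangle =\sin \Omega t$ with $\Omega =2\omega $, together with Re$\left\langle a\right\rangle =0$, Im$\left\langle a\right\rangle =1$, and $\lambda =0.2$, and traces the normalized curve of Eq.~(\ref{q_}):

```python
import numpy as np

lam = 0.2        # lambda, as in the text
omega = 1.0      # normalized drive frequency
t = np.linspace(0.0, 2 * np.pi / omega, 2001)

V_norm = np.sin(omega * t)           # V / (2 V_rms), Eq. (V), Re<a>=0, Im<a>=1
sigma_y = np.sin(2.0 * omega * t)    # idealized Rabi oscillation, Omega = 2*omega
q_tilde = -np.sin(omega * t) + lam * sigma_y   # Eq. (q_)

# The parametric curve (V_norm, q_tilde) passes through the origin twice per
# drive period, i.e. the loop is pinched at V = 0, while for a given V the two
# branches (rising and falling V) give different q_tilde.
```

Writing $\sin 2\omega t=2\sin \omega t\cos \omega t$ gives $\widetilde{q}=V_{\mathrm{norm}}(2\lambda \cos \omega t-1)$, which makes explicit both the pinching at $V=0$ and the two distinct branches.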
Note that we made use of the two-level approximation for the transmon and,
on the other hand, considered the strong-driving regime, where $\Omega
=2\omega _{\mathrm{r}}$. This was needed for demonstrating the
pinched-hysteresis loop by illustrative means. While the strong-driving
regime was demonstrated in many types of qubits, in transmons this is
complicated by the weak anharmonicity, which may result in transitions to
the upper levels (cf. Refs.~[\onlinecite{Gong16, Lu17, Dai17}] though). In
this way, one would have to confirm the calculations with more elaborate
ones, by taking into account the higher levels (as e.g. in
Refs.~[\onlinecite{Peterer15, Pietikainen17a, Pietikainen17b}], see also
our discussion below, in Appendix~C), and clarify the relations needed for
the hysteretic-type loops. Alternatively, one may think of the readily
observed Rabi oscillations in the megahertz domain and combine these with
the oscillations related to another resonator. At such a low frequency the
resonator may be considered as classical, similarly to the calculations in
Ref.~[\onlinecite{Shevchenko16}].
\section{Conclusion}
We have considered the qubit-resonator system, focusing on the situation
with a transmon-type qubit in a transmission-line resonator. The most
straightforward approach is the semiclassical theory, in which all the
correlators are assumed to factorize; it has the advantage of yielding
transparent analytical equations and formulas. We demonstrated that with
this approach we can describe relevant experiments~\cite{Bianchetti09,
Jin15, Pietikainen17a}. On the other hand, the validity of the semiclassical
theory was checked with the approach taking into account the two-operator
qubit-photon correlators, the so-called semi-quantum approach. Furthermore,
we included temperature in the consideration and studied its impact on the
measurable quadratures of the transmitted field. Due to the qubit-resonator
entanglement, the resonator transmission bears information about the
temperature experienced by the qubit. Consideration of this application,
thermometry, was followed by another one, a memory device known as a
memcapacitor. As a proof of concept, we demonstrated the pinched hysteretic
loop in the charge-voltage plane, the fingerprint of memcapacitance. In the
case with qubits, this loop is related to the Rabi-type oscillations. We
believe that such quantum memcapacitors, along with quantum meminductors
and memristors, will add new functionality to the toolbox of their
classical counterparts.
\begin{acknowledgments}
We are grateful to A.~Fedorov for stimulating discussions and critical
comments, to S.~Ashhab for critically reading the manuscript and for the
comments, and to Y.~V.~Pershin and E.~Il'ichev for fruitful discussions.
S.N.S. acknowledges the hospitality of School of Mathematics and Physics of
the University of Queensland, where part of this work was done; D.S.K.
acknowledges the hospitality of the Leibniz Institute of Photonic Technology.
This work was partly supported by the State Fund for Fundamental Research of
Ukraine (project \#~F66/95-2016) and DAAD bi-nationally supervised doctoral
degree program (grant \#~57299293).
\end{acknowledgments}